[SHR-U] WiFi-related memory leak
Timo Juhani Lindfors
timo.lindfors at iki.fi
Wed Feb 24 20:20:06 CET 2010
Denis Shulyaka <shulyaka at gmail.com> writes:
> Here it is. Without swap and midori this time:
> ftp://shulyaka.org.ru/pub/memlog2.tar.bz2
Hmm. The memory usage of frameworkd actually decreased during the test:
$ grep frameworkd ps*
ps.17:57:32:root 1219 46.6 16.4 32368 19876 ? Ss 17:31 12:15 python /usr/bin/frameworkd
ps.18:02:34:root 1219 39.1 16.4 32368 19876 ? Ss 17:31 12:16 python /usr/bin/frameworkd
ps.18:07:36:root 1219 34.0 16.2 32368 19628 ? Ss 17:31 12:21 python /usr/bin/frameworkd
ps.18:12:38:root 1219 29.9 16.1 32368 19500 ? Ss 17:31 12:22 python /usr/bin/frameworkd
ps.18:17:40:root 1219 26.6 15.5 32368 18780 ? Ss 17:31 12:22 python /usr/bin/frameworkd
ps.18:22:42:root 1219 24.0 15.4 32368 18676 ? Ss 17:31 12:23 python /usr/bin/frameworkd
ps.18:27:44:root 1219 21.9 14.8 32368 17964 ? Ss 17:31 12:23 python /usr/bin/frameworkd
ps.18:32:46:root 1219 20.1 14.5 32368 17592 ? Ss 17:31 12:23 python /usr/bin/frameworkd
ps.18:37:50:root 1219 18.6 14.3 32368 17376 ? Ss 17:31 12:23 python /usr/bin/frameworkd
ps.18:42:54:root 1219 17.2 14.0 32368 17044 ? Ss 17:31 12:23 python /usr/bin/frameworkd
ps.18:48:00:root 1219 16.1 13.7 32368 16668 ? Ss 17:31 12:23 python /usr/bin/frameworkd
ps.18:53:06:root 1219 15.1 13.5 32368 16384 ? Ds 17:31 12:24 python /usr/bin/frameworkd
ps.18:58:33:root 1219 14.1 13.0 32368 15812 ? Ss 17:31 12:25 python /usr/bin/frameworkd
ps.19:22:18:root 1219 13.5 13.5 32368 16356 ? Rs 17:31 15:02 python /usr/bin/frameworkd
ps.19:27:22:root 1219 17.2 13.5 32368 16356 ? Rs 17:31 20:00 python /usr/bin/frameworkd
ps.19:32:24:root 1219 20.6 13.5 32368 16356 ? Rs 17:31 24:58 python /usr/bin/frameworkd
ps.19:37:26:root 1219 23.6 13.5 32368 16356 ? Rs 17:31 29:52 python /usr/bin/frameworkd
ps.19:42:30:root 1219 26.5 13.5 32368 16356 ? Rs 17:31 34:51 python /usr/bin/frameworkd
ps.19:47:47:root 1219 29.2 13.5 32368 16356 ? Rs 17:31 39:53 python /usr/bin/frameworkd
ps.19:53:12:root 1219 31.0 13.5 32368 16356 ? Rs 17:31 44:01 python /usr/bin/frameworkd
How can this happen? ;)
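For the record, if you only want the RSS trend out of those snapshots (assuming the same ps.HH:MM:SS files as above, where RSS ends up as the sixth whitespace-separated field once grep prepends the file name), something like

$ grep frameworkd ps* | awk '{ print $1, $6 }'

prints one "ps.HH:MM:SS:root <RSS in kB>" line per snapshot, which makes the decrease (and the later plateau at 16356) easy to see.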
The same goes for Xorg:
$ grep "_ /usr/bin/Xorg" ps*
ps.17:57:32:root 1203 3.9 5.4 10780 6592 tty1 S<s+ 17:31 1:03 \_ /usr/bin/Xorg :0 -pn -nocursor -dpi 280 vt1
ps.18:02:34:root 1203 3.4 5.4 10780 6556 tty1 S<s+ 17:31 1:04 \_ /usr/bin/Xorg :0 -pn -nocursor -dpi 280 vt1
ps.18:07:36:root 1203 3.0 4.9 10780 5948 tty1 S<s+ 17:31 1:06 \_ /usr/bin/Xorg :0 -pn -nocursor -dpi 280 vt1
ps.18:12:38:root 1203 2.7 4.8 10780 5896 tty1 S<s+ 17:31 1:07 \_ /usr/bin/Xorg :0 -pn -nocursor -dpi 280 vt1
ps.18:17:40:root 1203 2.4 4.4 10780 5436 tty1 S<s+ 17:31 1:09 \_ /usr/bin/Xorg :0 -pn -nocursor -dpi 280 vt1
ps.18:22:42:root 1203 2.2 4.4 10780 5432 tty1 S<s+ 17:31 1:10 \_ /usr/bin/Xorg :0 -pn -nocursor -dpi 280 vt1
ps.18:27:44:root 1203 2.0 4.4 10780 5352 tty1 S<s+ 17:31 1:11 \_ /usr/bin/Xorg :0 -pn -nocursor -dpi 280 vt1
ps.18:32:46:root 1203 1.9 4.2 10780 5168 tty1 S<s+ 17:31 1:12 \_ /usr/bin/Xorg :0 -pn -nocursor -dpi 280 vt1
ps.18:37:50:root 1203 1.8 4.3 10780 5224 tty1 S<s+ 17:31 1:14 \_ /usr/bin/Xorg :0 -pn -nocursor -dpi 280 vt1
ps.18:42:54:root 1203 1.7 4.1 10780 5004 tty1 S<s+ 17:31 1:14 \_ /usr/bin/Xorg :0 -pn -nocursor -dpi 280 vt1
ps.18:48:00:root 1203 1.6 3.8 10780 4712 tty1 S<s+ 17:31 1:15 \_ /usr/bin/Xorg :0 -pn -nocursor -dpi 280 vt1
ps.18:53:06:root 1203 1.5 3.6 10780 4388 tty1 S<s+ 17:31 1:16 \_ /usr/bin/Xorg :0 -pn -nocursor -dpi 280 vt1
ps.18:58:33:root 1203 1.5 3.5 10780 4284 tty1 S<s+ 17:31 1:20 \_ /usr/bin/Xorg :0 -pn -nocursor -dpi 280 vt1
ps.19:22:18:root 1203 2.5 3.4 9916 4212 tty1 R<s+ 17:31 2:52 \_ /usr/bin/Xorg :0 -pn -nocursor -dpi 280 vt1
and hal:
$ grep " 1123 " ps*
ps.17:57:32:sshd 1123 0.3 1.8 4512 2200 ? Ss 17:30 0:05 /usr/sbin/hald
ps.18:02:34:sshd 1123 0.3 1.8 4512 2200 ? Ss 17:30 0:06 /usr/sbin/hald
ps.18:07:36:sshd 1123 0.3 1.6 4512 2056 ? Ss 17:30 0:06 /usr/sbin/hald
ps.18:12:38:sshd 1123 0.2 1.6 4512 2056 ? Ss 17:30 0:07 /usr/sbin/hald
ps.18:17:40:sshd 1123 0.2 1.6 4512 1964 ? Ss 17:30 0:07 /usr/sbin/hald
ps.18:22:42:sshd 1123 0.2 1.6 4512 1960 ? Ss 17:30 0:08 /usr/sbin/hald
ps.18:27:44:sshd 1123 0.2 1.5 4512 1920 ? Ss 17:30 0:08 /usr/sbin/hald
ps.18:32:46:sshd 1123 0.2 1.5 4512 1828 ? Ss 17:30 0:09 /usr/sbin/hald
ps.18:37:50:sshd 1123 0.2 1.4 4512 1716 ? Ss 17:30 0:09 /usr/sbin/hald
ps.18:42:54:sshd 1123 0.2 1.3 4512 1692 ? Ss 17:30 0:10 /usr/sbin/hald
ps.18:48:00:sshd 1123 0.2 1.2 4516 1464 ? Ds 17:30 0:11 /usr/sbin/hald
ps.18:53:06:sshd 1123 0.2 1.1 4512 1352 ? Rs 17:30 0:12 /usr/sbin/hald
ps.18:58:33:sshd 1123 0.3 0.9 4512 1176 ? Ss 17:30 0:16 /usr/sbin/hald
(Why is it running as the sshd user, btw?!)
> The most interesting line in slabtop was:
> OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
> 12278 12278 100% 4.00K 12278 1 49112K size-4096
>
> It kept counting up the whole time the test was running. Don't know what it means though.
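If I read the slab allocator right, size-4096 is the generic kmalloc cache for allocations of up to 4 kB, so if its object count keeps climbing while wifi is up, that would point at a kernel-side leak (quite possibly in the wifi driver) rather than at any userspace process. A rough way to log just that cache over time, assuming /proc/slabinfo is readable on your kernel and that a 5-minute interval matches your ps snapshots, would be something like

$ while true; do date; grep '^size-4096 ' /proc/slabinfo; sleep 300; done >> slabinfo.log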
I have never used slabtop before. Here, when wifi is connected, it displays:
Active / Total Objects (% used) : 22154 / 38108 (58.1%)
Active / Total Slabs (% used) : 1356 / 1356 (100.0%)
Active / Total Caches (% used) : 68 / 117 (58.1%)
Active / Total Size (% used) : 3416.10K / 5276.42K (64.7%)
Minimum / Average / Maximum Object : 0.01K / 0.14K / 4096.00K
OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
8496 1743 20% 0.05K 118 72 472K buffer_head
5520 1830 33% 0.12K 184 30 736K dentry
3864 3824 98% 0.04K 46 84 184K sysfs_dir_cache
3390 3337 98% 0.03K 30 113 120K size-32
2944 2570 87% 0.08K 64 46 256K vm_area_struct
2484 976 39% 0.41K 276 9 1104K ext3_inode_cache
1547 1136 73% 0.28K 119 13 476K radix_tree_node
1356 815 60% 0.01K 4 339 16K anon_vma
1298 1117 86% 0.06K 22 59 88K size-64
1140 900 78% 0.12K 38 30 152K filp
480 457 95% 0.09K 12 40 48K size-96
430 425 98% 0.38K 43 10 172K shmem_inode_cache
384 274 71% 0.31K 32 12 128K proc_inode_cache
360 240 66% 0.09K 9 40 36K cred_jar
330 310 93% 0.12K 11 30 44K size-128
320 282 88% 0.50K 40 8 160K size-512
254 2 0% 0.01K 1 254 4K revoke_table
234 209 89% 0.14K 9 26 36K idr_layer_cache
203 1 0% 0.02K 1 203 4K ip_fib_alias
203 5 2% 0.02K 1 203 4K tcp_bind_bucket
203 2 0% 0.02K 1 203 4K fasync_cache
180 177 98% 0.25K 12 15 48K size-256
169 141 83% 0.29K 13 13 52K inode_cache
140 80 57% 0.19K 7 20 28K skbuff_head_cache
120 116 96% 0.09K 3 40 12K kmem_cache
120 98 81% 1.00K 30 4 120K size-1024
118 118 100% 0.06K 2 59 8K pid
113 95 84% 0.03K 1 113 4K fs_cache
113 2 1% 0.03K 1 113 4K dma_desc
113 5 4% 0.03K 1 113 4K fib6_nodes
101 13 12% 0.04K 1 101 4K ip_fib_hash
100 93 93% 0.19K 5 20 20K size-192
92 25 27% 0.04K 1 92 4K inotify_watch_cache
88 80 90% 0.34K 8 11 32K sock_inode_cache
84 74 88% 2.84K 42 2 336K task_struct
84 3 3% 0.04K 1 84 4K blkdev_ioc
81 81 100% 0.44K 9 9 36K signal_cache
78 78 100% 1.28K 26 3 104K sighand_cache
76 76 100% 2.00K 38 2 152K size-2048
70 70 100% 0.38K 7 10 28K mm_struct
63 4 6% 0.06K 1 63 4K journal_head
60 60 100% 0.19K 3 20 12K files_cache
60 52 86% 0.38K 6 10 24K UNIX
59 1 1% 0.06K 1 59 4K inet_peer_cache
59 2 3% 0.06K 1 59 4K uid_cache
38 38 100% 4.00K 38 1 152K size-4096
30 24 80% 0.12K 1 30 4K mnt_cache
30 2 6% 0.12K 1 30 4K bio-0
30 4 13% 0.12K 1 30 4K arp_cache
27 2 7% 0.14K 1 27 4K sigqueue
24 4 16% 0.16K 1 24 4K ndisc_cache
18 8 44% 0.21K 1 18 4K blkdev_requests
18 18 100% 1.20K 6 3 24K blkdev_queue
15 8 53% 0.25K 1 15 4K ip_dst_cache
15 7 46% 0.25K 1 15 4K ip6_dst_cache
14 6 42% 1.06K 2 7 16K TCP
10 5 50% 0.38K 1 10 4K skbuff_fclone_cache
9 4 44% 0.41K 1 9 4K bdev_cache
9 1 11% 0.41K 1 9 4K ext2_inode_cache
8 1 12% 0.47K 1 8 4K UDP
8 2 25% 0.47K 1 8 4K RAW
6 4 66% 0.62K 1 6 4K RAWv6
5 5 100% 4.00K 5 1 20K names_cache
3 3 100% 8.00K 3 1 24K size-8192
3 3 100% 16.00K 3 1 48K size-16384
3 2 66% 1.19K 1 3 4K TCPv6
2 2 100% 32.00K 2 1 64K size-32768
2 2 100% 3.00K 1 2 8K biovec-256
0 0 0% 64.00K 0 1 0K size-65536
0 0 0% 128.00K 0 1 0K size-131072
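Next time it might also be worth grabbing slabtop output at the same points where the ps.* snapshots are taken, so we can diff the caches between the wifi-up and wifi-down states. A minimal sketch, assuming a procps slabtop that supports -o (print one snapshot and exit):

$ slabtop -o > slabtop.$(date +%H:%M:%S)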