1
Where'd all my memory go? Joshua Miller SCALE 12x – 22 FEB 2014
2
The Incomplete Story
Computers have memory, which they use to run applications.
3
Cruel Reality: swap, caches, buffers, shared, virtual, resident, more...
4
Topics
Memory basics; paging, swapping, caches, buffers; overcommit; filesystem cache; kernel caches and buffers; shared memory
5
top is awesome

top - 15:57:33 up 131 days, 8:02, 3 users, load average: 0.00, 0.00, 0.00
Tasks: 129 total, 1 running, 128 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.2%us, 0.3%sy, 0.3%ni, 99.0%id, 0.2%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: k total, k used, k free, k buffers
Swap: k total, k used, k free, k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
8131 root m 50m 3748 S :51.97 chef-client
8153 root m 19m 7840 S :35.48 sssd_be
8154 root m 15m 14m S :08.03 sssd_nss
7767 root S :39 munin-asyncd
7511 root m S :06.29 munin-node
3379 root m S :20.28 snmpd
7026 root m S :00.02 sshd
6
top is awesome
Physical memory used and free; swap used and free
7
top is awesome
Percentage of RES/total memory; per-process breakdown of virtual, resident, and shared memory
8
top is awesome
Kernel buffers and caches (no association with swap, despite being on the same row)
9
/proc/meminfo

[jmiller@meminfo]$ cat /proc/meminfo
MemTotal: kB
MemFree: kB
Buffers: kB
Cached: kB
SwapCached: kB
...
10
/proc/meminfo
Many useful values, which we'll refer to throughout the presentation.
11
Overcommit

top - 14:57:44 up 137 days, 7:02, 6 users, load average: 0.03, 0.02, 0.00
Tasks: 141 total, 1 running, 140 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.0%us, 0.2%sy, 0.0%ni, 99.8%id, 0.0%wa, 0.0%hi, 0.0%si, %st
Mem: k total, k used, k free, k buffers
Swap: k total, k used, k free, k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
22385 jmiller g S :00.00 bloat
12
Overcommit
4G of physical memory and no swap, so how can “bloat” have 18.6g virtual?
13
Overcommit
Virtual memory is not “physical memory plus swap”. A process can request huge amounts of memory, but it isn't mapped to “real memory” until actually referenced.
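Overcommit can be demonstrated directly. A minimal sketch (Python on Linux; it reads the VmSize/VmRSS counters from /proc/self/status, so it assumes a Linux environment): reserving a large anonymous mapping grows virtual size, but resident size barely moves because no page is ever touched.

```python
import mmap
import re

def vm_kb(field):
    # Read a Vm* counter (in kB) from /proc/self/status (Linux-specific).
    with open("/proc/self/status") as f:
        return int(re.search(rf"^{field}:\s+(\d+) kB", f.read(), re.M).group(1))

size_before, rss_before = vm_kb("VmSize"), vm_kb("VmRSS")

# Reserve 1 GiB of private anonymous memory. No page is ever touched,
# so only address space is consumed -- no physical memory.
reservation = mmap.mmap(-1, 1024 ** 3,
                        flags=mmap.MAP_PRIVATE | mmap.MAP_ANONYMOUS)

size_delta = vm_kb("VmSize") - size_before  # grows by ~1 GiB (in kB)
rss_delta = vm_kb("VmRSS") - rss_before     # stays near zero
print(size_delta, rss_delta)
```

Touching the pages (writing to the buffer) is what would turn virtual reservations into resident memory.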
14
Linux filesystem caching
Free memory is used to cache filesystem contents. Over time, systems can appear to be out of memory because all of the free memory is used for cache.
15
top is awesome
About 25% of this system's memory is from the page cache
16
Linux filesystem caching
Additions to and removals from the cache are transparent to applications. Tunable through vm.swappiness. Can be dropped: echo 1 > /proc/sys/vm/drop_caches. Under memory pressure, memory is freed automatically* (*usually).
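The cache's growth can be watched from /proc/meminfo. A rough sketch (Python, Linux-only; the exact delta depends on whatever else the system is doing, so treat the printed number as indicative, not exact):

```python
import os
import tempfile

def meminfo_kb(field):
    # Return a /proc/meminfo field in kB.
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith(field + ":"):
                return int(line.split()[1])
    raise KeyError(field)

cached_before = meminfo_kb("Cached")

# Write 64 MiB and read it back; the kernel keeps the pages cached.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\0" * (64 * 1024 * 1024))
    path = f.name
with open(path, "rb") as f:
    f.read()

cached_after = meminfo_kb("Cached")
print(cached_after - cached_before)  # usually positive while the file stays cached
os.unlink(path)
```

Unlinking the file releases its cached pages, which is one reason "used" memory can drop without any process exiting.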
17
Where'd my memory go?

top - 16:40:53 up 137 days, 8:45, 5 users, load average: 0.88, 0.82, 0.46
Tasks: 138 total, 1 running, 137 sleeping, 0 stopped, 0 zombie
Cpu0 : 0.0%us, 0.0%sy, 0.0%ni, 100.0%id, 0.0%wa, 0.0%hi, 0.0%si, %st
Cpu1 : 0.0%us, 0.0%sy, 0.0%ni, 100.0%id, 0.0%wa, 0.0%hi, 0.0%si, %st
Mem: k total, k used, k free, k buffers
Swap: k total, k used, k free, k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
28285 root m 17m 6128 S :39.42 sssd_be
7767 root S :37 munin-asyncd
7511 root m S :56.68 munin-node
3379 root m S :31.44 snmpd
18
Where'd my memory go?
1.5G used
19
Where'd my memory go?
... 1.5G used - 106MB RSS
20
Where'd my memory go?
... 1.5G used - 106MB RSS - 345MB cache - 25MB buffer
21
Where'd my memory go?
... 1.5G used - 106MB RSS - 345MB cache - 25MB buffer = ~1GB mystery
What is consuming a GB of memory?
22
kernel slab cache The kernel uses free memory for its own caches.
Some include: dentries (the directory entry cache), inodes, buffers
23
kernel slab cache

[jmiller@mem-mystery ~]$ slabtop -o -s c
Active / Total Objects (% used) : / (99.7%)
Active / Total Slabs (% used) : / (100.0%)
Active / Total Caches (% used) : 104 / 187 (55.6%)
Active / Total Size (% used) : K / K (99.9%)
Minimum / Average / Maximum Object : 0.02K / 0.34K / K

OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
% K K nfs_inode_cache
% K K dentry
% K K size-64
% K K size-32
% K K kmem_cache
% K K inode_cache
% K K vm_area_struct
% K K radix_tree_node
24
kernel slab cache
1057MB of kernel slab cache
25
Where'd my memory go?
... 1.5G used - 106MB RSS - 345MB cache - 25MB buffer = ~1GB mystery
What is consuming a GB of memory?
26
Where'd my memory go?
... 1.5G used - 106MB RSS - 345MB cache - 25MB buffer = ~1GB mystery
Answer: kernel slab cache → 1057MB
27
kernel slab cache
Additions to and removals from the cache are transparent to applications. Tunable through /proc/sys/vm/vfs_cache_pressure. Under memory pressure, memory is freed automatically* (*usually).
28
kernel slab cache network buffers example
~]$ slabtop -s c -o
Active / Total Objects (% used) : / (99.4%)
Active / Total Slabs (% used) : / (100.0%)
Active / Total Caches (% used) : 106 / 188 (56.4%)
Active / Total Size (% used) : K / K (99.8%)
Minimum / Average / Maximum Object : 0.02K / 0.55K / K

OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
% K K size-1024
% K K skbuff_head_cache
% K K size-64
29
kernel slab cache: network buffers example
~1.5G used, this time for in-use network buffers (SO_RCVBUF)
30
Unreclaimable slab

~]$ grep -A 2 ^Slab /proc/meminfo
Slab: kB
SReclaimable: kB
SUnreclaim: kB
31
Unreclaimable slab
Some slab objects can't be reclaimed, and memory pressure won't automatically free the resources
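These three fields can be pulled apart programmatically. A small sketch (Python, Linux-only) that splits slab into its reclaimable and unreclaimable parts:

```python
def meminfo_kb(field):
    # Return a /proc/meminfo field in kB.
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith(field + ":"):
                return int(line.split()[1])
    raise KeyError(field)

slab = meminfo_kb("Slab")
reclaimable = meminfo_kb("SReclaimable")
unreclaimable = meminfo_kb("SUnreclaim")

# SReclaimable (dentries, inodes, ...) can be freed under memory pressure;
# SUnreclaim cannot, so it behaves like plain used memory.
print(f"Slab={slab} kB  SReclaimable={reclaimable} kB  SUnreclaim={unreclaimable} kB")
```

When sizing a host, it is SUnreclaim, not the whole Slab figure, that should be treated as spoken-for.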
32
Nitpick Accounting Now we can account for all memory utilization:
~]$ ./memory_explain.sh
"free" buffers (MB) : 277
"free" caches (MB) : 4650
"slabtop" memory (MB) :
"ps" resident process memory (MB) :
"free" used memory (MB) : 5291
buffers+caches+slab+rss (MB) :
difference (MB) :
33
Nitpick Accounting
But sometimes we're using more memory than we're using?!
34
And a cache complication...
top - 12:37:01 up 66 days, 23:38, 3 users, load average: 0.08, 0.02, 0.01
Tasks: 188 total, 1 running, 187 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.3%us, 0.6%sy, 0.0%ni, 98.9%id, 0.1%wa, 0.0%hi, 0.1%si, 0.0%st
Mem: k total, k used, k free, k buffers
Swap: k total, k used, k free, k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2189 postgres m 2.8g 2.8g S :09.20 postgres
35
And a cache complication...
~7G used
36
And a cache complication...
~7G used, ~6G cached
37
And a cache complication...
~7G used, ~6G cached, so how can postgres have 2.8G resident?
38
Shared memory Pages that multiple processes can access
Resident, shared, and in the page cache. Not subject to cache flush. Created via shmget() or mmap().
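The mmap() flavor fits in a few lines. A sketch (Python on Linux, where an anonymous mmap defaults to MAP_SHARED | MAP_ANONYMOUS): a forked child writes into pages the parent can read, because both map the same physical memory.

```python
import mmap
import os

# Anonymous shared mapping: forked children see the same physical pages.
shared = mmap.mmap(-1, 4096)

pid = os.fork()
if pid == 0:
    shared[:5] = b"hello"  # child writes into the shared pages
    os._exit(0)
os.waitpid(pid, 0)

data = bytes(shared[:5])  # parent reads the child's write
print(data)  # b'hello'
```

With a plain MAP_PRIVATE mapping the child's write would be invisible to the parent; sharing is what puts these pages in the page cache.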
39
Shared memory shmget() example
40
Shared memory shmget()
top - 21:08:20 up 147 days, 13:12, 9 users, load average: 0.03, 0.04, 0.00
Tasks: 150 total, 1 running, 149 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.3%us, 1.5%sy, 0.4%ni, 96.7%id, 1.2%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: k total, k used, k free, k buffers
Swap: k total, k used, k free, k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
20599 jmiller m 881m 881m S :06.52 share
41
Shared memory shmget()
Shared memory is in the page cache!
42
Shared memory shmget()
top - 21:21:29 up 147 days, 13:25, 9 users, load average: 0.34, 0.18, 0.06
Tasks: 151 total, 1 running, 150 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.0%us, 0.6%sy, 0.4%ni, 98.9%id, 0.2%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: k total, k used, k free, k buffers
Swap: k total, k used, k free, k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
22058 jmiller m 881m 881m S :05.00 share
22059 jmiller m 881m 881m S :03.35 share
22060 jmiller m 881m 881m S :03.40 share

3x processes, but same resource utilization - about 1GB
43
Shared memory shmget()
From /proc/meminfo:
Mapped: kB
Shmem: kB
44
Shared memory mmap() example
45
Shared memory mmap()

top - 21:46:04 up 147 days, 13:50, 10 users, load average: 0.24, 0.21, 0.11
Tasks: 152 total, 1 running, 151 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.3%us, 1.6%sy, 0.2%ni, 94.9%id, 3.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: k total, k used, k free, k buffers
Swap: k total, k used, k free, k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
24569 jmiller m 1.3g 1.3g S :03.04 mapped

From /proc/meminfo:
Mapped: 1380664 kB
Shmem: kB
46
Shared memory mmap()

top - 21:48:06 up 147 days, 13:52, 10 users, load average: 0.21, 0.18, 0.10
Tasks: 154 total, 1 running, 153 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.2%us, 0.7%sy, 0.2%ni, 98.8%id, 0.0%wa, 0.0%hi, 0.2%si, 0.0%st
Mem: k total, k used, k free, k buffers
Swap: k total, k used, k free, k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
24592 jmiller m 1.3g 1.3g S :01.26 mapped
24586 jmiller m 1.3g 1.3g S :01.28 mapped
24599 jmiller m 1.3g 1.3g S :01.29 mapped

From /proc/meminfo:
Mapped: 1380664 kB
Shmem: kB
47
Shared memory mmap()
Not counted as shared, but mapped
From /proc/meminfo:
Mapped: 1380664 kB
Shmem: kB
48
Shared memory mmap()
105%! Shared pages count toward every process that maps them, so per-process %MEM can sum past 100.
From /proc/meminfo:
Mapped: 1380664 kB
Shmem: kB
49
A subtle difference between shmget() and mmap()...
50
Locked shared memory

Memory from shmget() must be explicitly released by a shmctl(..., IPC_RMID, ...) call. Process termination doesn't free the memory. Not the case for mmap().
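The System V calls aren't in the Python standard library, but the lifetime rule can be sketched through ctypes against libc. The constants below are the Linux values (an assumption; check <sys/ipc.h> and <sys/shm.h> elsewhere):

```python
import ctypes

libc = ctypes.CDLL(None, use_errno=True)

IPC_PRIVATE = 0     # Linux value
IPC_CREAT = 0o1000  # Linux value
IPC_RMID = 0        # Linux value

# Create a 1 MiB System V segment. It lives in the kernel until removed:
# process exit (even a crash) does NOT free it, unlike an mmap() mapping.
shmid = libc.shmget(IPC_PRIVATE, 1024 * 1024, IPC_CREAT | 0o600)

# Without this shmctl() call the segment would linger, visible in `ipcs -m`.
ret = libc.shmctl(shmid, IPC_RMID, None)
print(shmid, ret)
```

Comment out the shmctl() line, run it a few times, and `ipcs -m` will show the leaked segments accumulating.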
51
Locked shared memory shmget()
top - 11:36:35 up 151 days, 3:41, 3 users, load average: 0.09, 0.10, 0.03
Tasks: 129 total, 1 running, 128 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.0%us, 0.4%sy, 0.4%ni, 99.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: k total, k used, k free, k buffers
Swap: k total, k used, k free, k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
24376 root m 60m 3724 S :35.84 chef-client
24399 root m 15m 14m S :03.22 sssd_nss
7767 root S :38 munin-asyncd

~900M of cache
52
Locked shared memory shmget()
'echo 3 > /proc/sys/vm/drop_caches' had no impact on the cached value, so it's not filesystem caching
53
Locked shared memory shmget()
Processes are consuming way less than ~900M
54
Locked shared memory shmget()
From /proc/meminfo:
Mapped: kB
Shmem: kB
55
Locked shared memory shmget()
Un-attached shared memory segment(s)
56
Locked shared memory shmget()
Observable through 'ipcs -a'
57
Accounting for shared memory is difficult
top reports memory that can be shared - but might not be. ps doesn't account for shared memory. pmap splits mapped vs shared, and reports allocated vs used. mmap'd files are shared until modified, at which point they're private.
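One practical way through this ambiguity is /proc/<pid>/smaps, whose Pss field divides each shared page evenly among the processes mapping it. A sketch (Python, Linux-only):

```python
def smaps_totals(pid="self"):
    # Sum Rss, Pss, and shared pages (all in kB) across a process's mappings.
    totals = {"Rss": 0, "Pss": 0, "Shared_Clean": 0, "Shared_Dirty": 0}
    with open(f"/proc/{pid}/smaps") as f:
        for line in f:
            key = line.split(":")[0]
            if key in totals:
                totals[key] += int(line.split()[1])
    return totals

t = smaps_totals()
# Summing Pss across all processes never double-counts shared pages,
# which summing Rss (the number ps and top report) does.
print(t)
```

Summed over every process, Pss gives an honest total where summed Rss can exceed physical memory.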
58
Linux filesystem cache
What's inside? Do you need it? Detritus? /etc/motd? Important app data?
59
Linux filesystem cache
We know shared memory is in the page cache, which we can largely understand through /proc.

From /proc/meminfo:
Cached: kB
...
Mapped: kB
Shmem: kB
60
Linux filesystem cache
But what about the rest of what's in the cache?
61
Linux filesystem cache
Bad news: We can't just ask “What's in the cache?” Good news: We can ask “Is this file in the cache?”
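That per-file question is what mincore(2) answers, and it is how tools like linux-fincore work. A sketch using ctypes (Linux-only; it maps the file and asks the kernel which pages are resident):

```python
import ctypes
import mmap
import os

libc = ctypes.CDLL(None, use_errno=True)
PAGE = os.sysconf("SC_PAGE_SIZE")

def resident_pages(path):
    # Return (pages in page cache, total pages) for a file, via mincore(2).
    size = os.path.getsize(path)
    npages = (size + PAGE - 1) // PAGE
    with open(path, "r+b") as f:
        m = mmap.mmap(f.fileno(), size)     # shared, writable mapping
        buf = ctypes.c_char.from_buffer(m)  # borrow the mapping's base address
        vec = (ctypes.c_ubyte * npages)()   # one byte per page; bit 0 = resident
        rc = libc.mincore(ctypes.c_void_p(ctypes.addressof(buf)),
                          ctypes.c_size_t(size), vec)
        if rc != 0:
            raise OSError(ctypes.get_errno(), "mincore failed")
        resident = sum(v & 1 for v in vec)
        del buf                             # release the exported buffer
        m.close()
    return resident, npages
```

A freshly written file normally reports fully resident; after dropping the caches, the same call reports far fewer pages.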
62
linux-ftools https://code.google.com/p/linux-ftools/
~]$ linux-fincore /tmp/big
filename size cached_pages cached_size cached_perc
/tmp/big 4,194, ---
total cached size: 0
63
linux-ftools
Zero % cached
64
linux-ftools
~]$ dd if=/tmp/big of=/dev/null bs=1k count=50
Read ~5%
65
linux-ftools
~]$ linux-fincore /tmp/big
/tmp/big 4,194, ,
total cached size: 245,760
~5% cached
66
system tap – cache hits https://sourceware
~]$ sudo stap /tmp/cachehit.stap
Cache Reads (KB)  Disk Reads (KB)  Miss Rate  Hit Rate
67
system tap – cache hits
Track reads against VFS, reads against disk, then infer cache hits
68
system tap – cache hits
But – have to account for LVM, device mapper, remote disk devices (NFS, iSCSI), ...
69
Easy mode - drop_caches
echo 1 | sudo tee /proc/sys/vm/drop_caches
Frees clean cache pages immediately. Frequently accessed files should be re-cached quickly. There is a performance impact while the caches repopulate.
70
Filesystem cache contents
No easy way to see the full contents of the cache. mincore() works, but you have to check every file. Hard: system tap / dtrace inference. Easy: drop_caches and observe the impact.
71
Memory: The Big Picture Virtual memory Swap Physical memory
72
Physical Memory
73
Physical Memory Free
74
Physical Memory Used Free
75
Physical Memory Used Private application memory Free
76
Physical Memory Used Kernel caches (SLAB) Private application memory
Free
77
Physical Memory Used Kernel caches (SLAB) Buffer cache (block IO)
Private application memory Free
78
Physical Memory Used Kernel caches (SLAB) Buffer cache (block IO)
Private application memory Page cache Free
79
Physical Memory Used Kernel caches (SLAB) Buffer cache (block IO)
Private application memory Page cache Filesystem cache Free
80
Physical Memory Used Kernel caches (SLAB) Buffer cache (block IO)
Private application memory Page cache Shared memory Filesystem cache Free
82
Thanks! Send feedback to me: joshuamiller01 on gmail