Here is a short explanation of the values in /proc/meminfo.
MemTotal – Total amount of physical RAM, in kilobytes.
MemFree – The amount of physical RAM, in kilobytes, left unused by the system.
Buffers – The amount of physical RAM, in kilobytes, used for file buffers.
Cached – The amount of physical RAM, in kilobytes, used as cache memory.
Any time you read from a file on disk, that data is read into memory and goes into the page cache. If you then do a second read of the same area of the file, the data is served directly from memory and no disk access is required.
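The fields above can be read programmatically by parsing /proc/meminfo, which is plain "Key: value kB" text. A minimal Python sketch, run here against a sample snippet (made-up values) so it works anywhere rather than reading the live file:

```python
# Parse /proc/meminfo-style "Key: value kB" lines into a dict of kilobytes.
# The sample text stands in for open("/proc/meminfo").read().
sample = """\
MemTotal:        2048000 kB
MemFree:          363776 kB
Buffers:           73088 kB
Cached:           277444 kB
"""

def parse_meminfo(text):
    info = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        info[key] = int(rest.split()[0])  # values are reported in kB
    return info

mem = parse_meminfo(sample)
# Buffers and page cache are reclaimable, so a rough "available" estimate is:
available_estimate = mem["MemFree"] + mem["Buffers"] + mem["Cached"]
print(available_estimate)  # 714308
```

This is why "free memory" on a busy Linux box often looks alarmingly low: much of the RAM counted under Buffers and Cached can be handed back to applications on demand.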
Mapped – The total amount of memory, in kilobytes, which has been used to map devices, files, or libraries using the mmap() system call.
The first time the mmap area is accessed, the page will be brought in from the disk and mapped into memory. The kernel keeps the page mapped into memory and bets that you will soon access it again. It might be assumed that mmap memory is not “cached” because it is in active use and that “cached” means “completely unused right now”. However, Linux does not define it that way. The Linux definition of “cached” is closer to “this is a copy of data from the disk that we have here to save you time”. It implies nothing about how the page is actually being used. This is why we have both “Cached:” and “Mapped:” in meminfo. All “Mapped:” memory is “Cached:”, but not all “Cached:” memory is “Mapped:”.
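To make the Mapped/Cached relationship concrete, here is a small sketch using Python's standard mmap module. It maps a temporary file and reads from it twice; the first access faults the page in from disk, after which it sits in the page cache (counted under both "Cached:" and "Mapped:") and the second access is served from memory. The file path is created on the fly and is not meaningful:

```python
import mmap
import os
import tempfile

# Write a small file, then map it read-only with mmap.
fd, path = tempfile.mkstemp()
try:
    os.write(fd, b"hello from the page cache")
    with mmap.mmap(fd, 0, access=mmap.ACCESS_READ) as m:
        first_read = m[:5]   # first access: page fault brings data in from disk
        second_read = m[:5]  # second access: served straight from memory
    print(first_read)  # b'hello'
finally:
    os.close(fd)
    os.remove(path)
```

While the mapping is alive, the file's pages contribute to "Mapped:"; after the mapping is closed, the pages may linger in the page cache and count only toward "Cached:".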
SwapTotal – The total amount of swap available, in kilobytes.
SwapFree – The total amount of swap free, in kilobytes.
When the kernel cannot get memory from any of the other sources we’ve described so far, it starts to swap. During this process it takes user application data and writes it to a special place (or places) on the disk. The kernel will often choose to swap out rarely used memory pages in favor of the current needs of the running applications. For this reason, even a system with vast amounts of RAM (even when properly tuned) can swap. There are many pages of memory which hold user application data but are rarely used, and all of these are targets for being swapped out in favor of other uses for the RAM.
But, if the mere presence of used swap is not evidence of a system which has too little RAM for its workload, what is? As you can see, swap is most efficiently used for data which will not be accessed for a long time. If data in swap is being constantly accessed, then it is failing to be used effectively. We can monitor the amount of data going in and out of swap with the vmstat command.
The following will produce output every 5 seconds:
~$ vmstat 5
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
0 0 0 363776 73088 277444 0 0 7 9 381 764 4 7 89 0
0 0 0 360156 73096 277468 0 0 0 48 535 820 2 7 91 0
0 0 0 355592 73104 277744 0 0 0 14 1065 1805 6 9 85 0
1 0 0 352920 73120 277852 0 0 0 54 405 654 3 3 94 0
0 0 0 351988 73128 277884 0 0 0 8 756 1653 2 7 91 0
1 0 0 350732 73136 277884 0 0 0 50 297 395 1 2 97 0
1 0 0 349736 73204 281728 0 0 728 10 1640 2844 7 16 74 3
0 0 0 351704 73280 282712 0 0 182 157 598 1284 3 6 91 1
0 0 0 349132 73288 282744 0 0 0 32 587 1335 6 8 87 0
0 0 0 351176 73304 282744 0 0 0 146 612 1033 2 6 92 0
3 0 0 343408 73600 289316 0 0 1366 110 984 1787 8 13 67 12
1 0 0 339160 73760 293396 0 0 835 104 988 2728 9 18 67 5
The columns we are most interested in are “si” and “so” which are abbreviations for “swap in” and “swap out”. You can interpret them this way:
- A small “swap in” and “swap out” is normal, and indicates that there is little need for the application data currently in swap, and that any new memory needs are being met by means other than swapping application data.
- A large “swap in” with a small “swap out” is usually indicative that a swapped out application is now starting to run again, and needs to get its data back from the disk.
- A large “swap out” with a small “swap in” usually indicates that an application is in need of some kind of RAM (it could be any of the caches, or application data), and is swapping out old application data in order to get that RAM.
- A large “swap out” with a large “swap in” is generally the condition you want to avoid. It means that the system is “thrashing”: it needs new RAM as fast as it can swap out application data. This often means that the application needing the RAM has forced out all of the truly old data and has started to force actively used data out to swap. That actively used data is immediately read back in from swap, causing both “swap in” and “swap out” to be elevated, and roughly equal.
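The four cases above can be captured in a small helper. Note that the threshold separating “small” from “large” is an arbitrary value chosen here for illustration, not a kernel-defined constant:

```python
def classify_swap_activity(si, so, threshold=100):
    """Interpret vmstat's si/so columns (swap in / swap out rates).

    The threshold is an illustrative assumption, not a kernel constant;
    what counts as "large" depends on the workload and disk speed.
    """
    large_in, large_out = si >= threshold, so >= threshold
    if large_in and large_out:
        return "thrashing: active data is being forced out and read back"
    if large_in:
        return "resuming: a swapped-out application is reading its data back"
    if large_out:
        return "pressure: old application data is being pushed to swap"
    return "normal: little demand on swap"

print(classify_swap_activity(0, 0))
print(classify_swap_activity(500, 480))
```

Feeding this the si/so columns from successive vmstat samples gives a quick first-pass diagnosis, though sustained trends matter more than any single sample.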
SwapCached – The amount of swap, in kilobytes, used as cache memory.
The swap cache is very similar in concept to the page cache. A page of user application data written to disk is very similar to a page of file data on the disk. Any time a page is read in from swap (“si” in vmstat), it is placed in the swap cache. Just as with the page cache, the kernel is betting that we might need to swap this page out again. If that need arises, it can detect that there is already a copy on the disk and simply throw the page in memory away immediately. This saves the cost of re-writing the page to the disk.
The swap cache is only useful for pages that are read from swap and never written to. If we write to the page, the copy on the disk is no longer in sync with the copy in memory, and we have to write the page out to swap again, just as we did the first time. However, the savings from the writes we do avoid are great, so even if only a small portion of the swap cache is ever reused unmodified, the system will perform better.
Dirty – The total amount of memory, in kilobytes, waiting to be written back to the disk.
Another operation that occurs when we start to run out of memory is the writing of dirty data to disk. Dirty data is page cache to which a write has occurred. Before we can free such page cache, we must first update the original copy on disk with the data from the write. As free memory dips below the min_free_kbytes value, the system attempts to free page cache; it is very common to find dirty pages during this process, and the kernel initiates a write-back whenever it finds one. You can see this happening when “Dirty:” decreases at the same time as “bo” (blocks written out) from vmstat goes up.
The kernel may request that many pages be written to the disk in parallel. This speeds disk operations up by batching them together, or spanning them across several disks. When the kernel is actively trying to update on-disk data for a page, it will increment the meminfo “Writeback:” entry for that page.
The “sync” command will force all dirty data to be written out and “Dirty:” to drop to a very low value momentarily.
Active – The total amount of buffer or page cache memory, in kilobytes, that is in active use. This is memory that has been recently used and is usually not reclaimed for other purposes.
Inactive – The total amount of buffer or page cache memory, in kilobytes, that is free and available. This is memory that has not been recently used and can be reclaimed for other purposes.
HighTotal and HighFree – The total and free amount of memory, in kilobytes, that is not directly mapped into kernel space. The HighTotal value can vary based on the type of kernel used.
LowTotal and LowFree – The total and free amount of memory, in kilobytes, that is directly mapped into kernel space. The LowTotal value can vary based on the type of kernel used.
Writeback – The total amount of memory, in kilobytes, actively being written back to the disk.
Slab – The total amount of memory, in kilobytes, used by the kernel to cache data structures for its own use.
Committed_AS – The total amount of memory, in kilobytes, estimated to be needed to complete the workload. This value represents the worst-case scenario, and it also includes swap.
PageTables – The total amount of memory, in kilobytes, dedicated to the lowest page table level.
VmallocTotal – The total amount, in kilobytes, of allocated virtual address space.
VmallocUsed – The total amount, in kilobytes, of used virtual address space.
VmallocChunk – The largest contiguous block, in kilobytes, of available virtual address space.
HugePages_Total – The total number of hugepages for the system. The number is derived by dividing the memory set aside for hugepages (specified, in megabytes, in /proc/sys/vm/hugetlb_pool) by Hugepagesize. This statistic only appears on the x86, Itanium, and AMD64 architectures.
HugePages_Free – The total number of hugepages available for the system. This statistic only appears on the x86, Itanium, and AMD64 architectures.
Hugepagesize – The size of each hugepage unit, in kilobytes. By default, the value is 4096 KB on uniprocessor kernels for 32-bit architectures. For SMP, hugemem kernels, and AMD64, the default is 2048 KB. For Itanium architectures, the default is 262144 KB. This statistic only appears on the x86, Itanium, and AMD64 architectures.
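The derivation described under HugePages_Total is simple arithmetic and can be checked directly. The pool size below is an illustrative sample value; the 2048 KB hugepage size is the SMP/AMD64 default quoted above:

```python
# HugePages_Total = (memory set aside for hugepages) / Hugepagesize.
# Sample: a 512 MB hugetlb pool with 2048 kB hugepages.
pool_mb = 512          # illustrative value for /proc/sys/vm/hugetlb_pool
hugepagesize_kb = 2048 # SMP / AMD64 default Hugepagesize

hugepages_total = pool_mb * 1024 // hugepagesize_kb
print(hugepages_total)  # 256
```

The same pool on Itanium, with its 262144 KB hugepages, would yield only 2 hugepages, which is why the pool size must be chosen with the architecture's Hugepagesize in mind.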