Memory Resource Controller

NOTE: The Memory Resource Controller has generically been referred to as the
      memory controller in this document. Do not confuse the memory controller
      used here with the memory controller that is used in hardware.

(For editors)
In this document:
      When we mention a cgroup (cgroupfs's directory) with the memory controller,
      we call it "memory cgroup". In git logs and source code, you will see that
      patch titles and function names tend to use "memcg"; in this document, we
      avoid using that abbreviation.

Benefits and Purpose of the memory controller

The memory controller isolates the memory behaviour of a group of tasks
from the rest of the system. The article on LWN [12] mentions some probable
uses of the memory controller. The memory controller can be used to

a. Isolate an application or a group of applications.
   Memory-hungry applications can be isolated and limited to a smaller
   amount of memory.
b. Create a cgroup with a limited amount of memory; this can be used
   as a good alternative to booting with mem=XXXX.
c. Virtualization solutions can control the amount of memory they want
   to assign to a virtual machine instance.
d. A CD/DVD burner could control the amount of memory used by the
   rest of the system to ensure that burning does not fail due to lack
   of available memory.
e. There are several other use cases; find one, or use the controller just
   for fun (to learn and hack on the VM subsystem).

Current Status: linux-2.6.34-mmotm (development version of April 2010)

Features:
 - accounting of anonymous pages, file caches, swap caches usage and limiting them.
 - private LRU and reclaim routine. (The system's global LRU and the private LRU
   work independently from each other.)
 - optionally, memory+swap usage can be accounted and limited.
 - hierarchical accounting
 - soft limit
 - moving (recharging) accounting when a task is moved is selectable.
 - usage threshold notifier
 - oom-killer disable knob and oom-notifier
 - Root cgroup has no limit controls.

Kernel memory and hugepages are not under control yet. We just manage
pages on the LRU. To add more controls, we have to take care of performance.

Brief summary of control files.

 tasks                            # attach a task (thread) and show list of threads
 cgroup.procs                     # show list of processes
 cgroup.event_control             # an interface for event_fd()
 memory.usage_in_bytes            # show current res_counter usage for memory
                                    (See 5.5 for details)
 memory.memsw.usage_in_bytes      # show current res_counter usage for memory+Swap
                                    (See 5.5 for details)
 memory.limit_in_bytes            # set/show limit of memory usage
 memory.memsw.limit_in_bytes      # set/show limit of memory+Swap usage
 memory.failcnt                   # show the number of times memory usage hit limits
 memory.memsw.failcnt             # show the number of times memory+Swap usage hit limits
 memory.max_usage_in_bytes        # show max memory usage recorded
 memory.memsw.max_usage_in_bytes  # show max memory+Swap usage recorded
 memory.soft_limit_in_bytes       # set/show soft limit of memory usage
 memory.stat                      # show various statistics
 memory.use_hierarchy             # set/show hierarchical account enabled
 memory.force_empty               # trigger forced move charge to parent
 memory.swappiness                # set/show swappiness parameter of vmscan
                                    (See sysctl's vm.swappiness)
 memory.move_charge_at_immigrate  # set/show controls of moving charges
 memory.oom_control               # set/show oom controls.
 memory.numa_stat                 # show the number of memory usage per numa node

1. History

The memory controller has a long history. A request for comments for the memory
controller was posted by Balbir Singh [1]. At the time the RFC was posted
there were several implementations for memory control. The goal of the
RFC was to build consensus and agreement for the minimal features required
for memory control. The first RSS controller was posted by Balbir Singh [2]
in Feb 2007. Pavel Emelianov [3][4][5] has since posted three versions of the
RSS controller. At OLS, at the resource management BoF, everyone suggested
that we handle both page cache and RSS together. Another request was raised
to allow user space handling of OOM. The current memory controller is
at version 6; it combines both mapped (RSS) and unmapped Page
Cache Control [11].

2. Memory Control

Memory is a unique resource in the sense that it is present in a limited
amount. If a task requires a lot of CPU processing, the task can spread
its processing over a period of hours, days, months or years, but with
memory, the same physical memory needs to be reused to accomplish the task.

The memory controller implementation has been divided into phases. These
are:

1. Memory controller
2. mlock(2) controller
3. Kernel user memory accounting and slab control
4. user mappings length controller

The memory controller is the first controller developed.

2.1. Design

The core of the design is a counter called the res_counter. The res_counter
tracks the current memory usage and limit of the group of processes associated
with the controller. Each cgroup has a memory controller specific data
structure (mem_cgroup) associated with it.

2.2. Accounting

              +--------------------+
              |    mem_cgroup      |
              |    (res_counter)   |
              +--------------------+
               /        ^        \
              /         |         \
  +---------------+     |     +---------------+
  |  mm_struct    |     |.... |  mm_struct    |
  |               |     |     |               |
  +---------------+     |     +---------------+
                        |
                        +---------------+
                                        |
  +---------------+         +-----------+---+
  |  page         +-------->|  page_cgroup  |
  |               |         |               |
  +---------------+         +---------------+

        (Figure 1: Hierarchy of Accounting)

Figure 1 shows the important aspects of the controller

1. Accounting happens per cgroup
2. Each mm_struct knows about which cgroup it belongs to
3. Each page has a pointer to the page_cgroup, which in turn knows the
   cgroup it belongs to

The accounting is done as follows: mem_cgroup_charge() is invoked to set up
the necessary data structures and check if the cgroup that is being charged
is over its limit. If it is, then reclaim is invoked on the cgroup.
More details can be found in the reclaim section of this document.
If everything goes well, a page meta-data structure called page_cgroup is
updated. page_cgroup has its own LRU on the cgroup.
(*) The page_cgroup structure is allocated at boot/memory-hotplug time.

2.2.1 Accounting details

All mapped anon pages (RSS) and cache pages (Page Cache) are accounted.
Some pages which are never reclaimable and will not be on the global LRU
are not accounted. We just account pages under usual VM management.

RSS pages are accounted at page_fault unless they've already been accounted
for earlier. A file page will be accounted for as Page Cache when it's
inserted into inode (radix-tree). While it's mapped into the page tables of
processes, duplicate accounting is carefully avoided.

An RSS page is unaccounted when it's fully unmapped. A PageCache page is
unaccounted when it's removed from radix-tree. Even if RSS pages are fully
unmapped (by kswapd), they may exist as SwapCache in the system until they
are really freed. Such SwapCaches are also accounted.
A swapped-in page is not accounted until it's mapped.

Note: The kernel does swapin-readahead and reads multiple swaps at once.
This means swapped-in pages may contain pages for tasks other than the task
causing the page fault. So, we avoid accounting at swap-in I/O.

At page migration, accounting information is kept.

Note: we just account pages-on-LRU because our purpose is to control the amount
of used pages; not-on-LRU pages tend to be out-of-control from the VM's view.

2.3 Shared Page Accounting

Shared pages are accounted on the basis of the first touch approach. The
cgroup that first touches a page is accounted for the page. The principle
behind this approach is that a cgroup that aggressively uses a shared
page will eventually get charged for it (once it is uncharged from
the cgroup that brought it in -- this will happen on memory pressure).

Exception: If CONFIG_CGROUP_MEM_RES_CTLR_SWAP is not used.
When you do swapoff and force swapped-out pages of shmem (tmpfs) back
into memory, charges for those pages are accounted against the
caller of swapoff rather than the users of shmem.

2.4 Swap Extension (CONFIG_CGROUP_MEM_RES_CTLR_SWAP)

The Swap Extension allows you to record charges for swap. A swapped-in page is
charged back to the original page allocator if possible.

When swap is accounted, the following files are added.
 - memory.memsw.usage_in_bytes.
 - memory.memsw.limit_in_bytes.

memsw means memory+swap. Usage of memory+swap is limited by
memsw.limit_in_bytes.

Example: Assume a system with 4G of swap. A task which allocates 6G of memory
(by mistake) under a 2G memory limitation will use all swap.
In this case, setting memsw.limit_in_bytes=3G will prevent bad use of swap.
By using the memsw limit, you can avoid a system OOM which can be caused by
swap shortage.

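For instance, using the cgroup created in section 3 (the path is just an
illustration following the earlier examples), the two limits could be set
like this:

# echo 2G > /sys/fs/cgroup/memory/0/memory.limit_in_bytes
# echo 3G > /sys/fs/cgroup/memory/0/memory.memsw.limit_in_bytes

With this setup, the cgroup's pages in memory are capped at 2G and its
memory+swap usage at 3G, so once the 2G memory limit is fully used, at most
1G of additional swap can be consumed before memsw reclaim or OOM kicks in.
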
* why 'memory+swap' rather than swap.
The global LRU (kswapd) can swap out arbitrary pages. Swapping out means
moving the account from memory to swap... there is no change in the usage of
memory+swap. In other words, when we want to limit the usage of swap without
affecting the global LRU, a memory+swap limit is better than just limiting swap
from an OS point of view.

* What happens when a cgroup hits memory.memsw.limit_in_bytes
When a cgroup hits memory.memsw.limit_in_bytes, it's useless to do swap-out
in this cgroup. Then, swap-out will not be done by the cgroup routine and file
caches are dropped. But as mentioned above, the global LRU can still swap out
memory from it for the sanity of the system's memory management state. You
can't forbid it by cgroup.

2.5 Reclaim

Each cgroup maintains a per-cgroup LRU which has the same structure as the
global VM. When a cgroup goes over its limit, we first try
to reclaim memory from the cgroup so as to make space for the new
pages that the cgroup has touched. If the reclaim is unsuccessful,
an OOM routine is invoked to select and kill the bulkiest task in the
cgroup. (See 10. OOM Control below.)

The reclaim algorithm has not been modified for cgroups, except that
pages that are selected for reclaiming come from the per-cgroup LRU
list.

NOTE: Reclaim does not work for the root cgroup, since we cannot set any
limits on the root cgroup.

Note2: When panic_on_oom is set to "2", the whole system will panic.

When the oom event notifier is registered, the event will be delivered.
(See the oom_control section.)

2.6 Locking

lock_page_cgroup()/unlock_page_cgroup() should not be called under
mapping->tree_lock.

Other lock order is the following:
  PG_locked.
  mm->page_table_lock
  zone->lru_lock
  lock_page_cgroup.
In many cases, just lock_page_cgroup() is called.
The per-zone-per-cgroup LRU (the cgroup's private LRU) is guarded only by
zone->lru_lock; it has no lock of its own.

3. User Interface

0. Configuration

a. Enable CONFIG_CGROUPS
b. Enable CONFIG_RESOURCE_COUNTERS
c. Enable CONFIG_CGROUP_MEM_RES_CTLR
d. Enable CONFIG_CGROUP_MEM_RES_CTLR_SWAP (to use swap extension)

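For reference, a minimal kernel .config fragment covering the options above
(the last line is only needed if you want the swap extension) could look like:

CONFIG_CGROUPS=y
CONFIG_RESOURCE_COUNTERS=y
CONFIG_CGROUP_MEM_RES_CTLR=y
CONFIG_CGROUP_MEM_RES_CTLR_SWAP=y
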
1. Prepare the cgroups (see cgroups.txt, Why are cgroups needed?)
# mount -t tmpfs none /sys/fs/cgroup
# mkdir /sys/fs/cgroup/memory
# mount -t cgroup none /sys/fs/cgroup/memory -o memory

2. Make the new group and move bash into it
# mkdir /sys/fs/cgroup/memory/0
# echo $$ > /sys/fs/cgroup/memory/0/tasks

Since now we're in the 0 cgroup, we can alter the memory limit:
# echo 4M > /sys/fs/cgroup/memory/0/memory.limit_in_bytes

NOTE: We can use a suffix (k, K, m, M, g or G) to indicate values in kilo,
mega or gigabytes. (Here, Kilo, Mega, Giga are Kibibytes, Mebibytes, Gibibytes.)

NOTE: We can write "-1" to reset *.limit_in_bytes (unlimited).
NOTE: We cannot set limits on the root cgroup any more.

# cat /sys/fs/cgroup/memory/0/memory.limit_in_bytes
4194304

We can check the usage:
# cat /sys/fs/cgroup/memory/0/memory.usage_in_bytes
1216512

A successful write to this file does not guarantee that the limit was set
to the value written into the file. This can be due to a
number of factors, such as rounding up to page boundaries or the total
availability of memory on the system. The user is required to re-read
this file after a write to see the value actually committed by the kernel.

# echo 1 > memory.limit_in_bytes
# cat memory.limit_in_bytes
4096

The memory.failcnt field gives the number of times that the cgroup limit was
exceeded.

The memory.stat file gives accounting information. Now, the number of
caches, RSS and Active pages/Inactive pages are shown.

4. Testing

For testing features and implementation, see memcg_test.txt.

Performance tests are also important. To see the pure memory controller's
overhead, testing on tmpfs will give you good numbers showing the small
overheads. Example: do a kernel make on tmpfs.

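A rough sketch of such a run (the mount point, kernel tree location and -j
value are only illustrative, and the cgroup is the one created in section 3):

# mount -t tmpfs none /mnt/tmpfs
# cp -a /usr/src/linux /mnt/tmpfs/
# echo $$ > /sys/fs/cgroup/memory/0/tasks
# cd /mnt/tmpfs/linux
# make -j4

Comparing the build time with and without the shell attached to a memory
cgroup gives a rough estimate of the controller's overhead.
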
Page-fault scalability is also important. When measuring a parallel
page-fault test, a multi-process test may be better than a multi-thread
test because the latter has noise from shared objects/status.

But the above two test extreme situations.
Trying usual tests under the memory controller is always helpful.

4.1 Troubleshooting

Sometimes a user might find that the application under a cgroup is
terminated by the OOM killer. There are several causes for this:

1. The cgroup limit is too low (just too low to do anything useful)
2. The user is using anonymous memory and swap is turned off or too low

A sync followed by echo 1 > /proc/sys/vm/drop_caches will help get rid of
some of the pages cached in the cgroup (page cache pages).

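That is:

# sync
# echo 1 > /proc/sys/vm/drop_caches

(Note that this drops clean page cache globally, not just for the cgroup.)
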
To understand what happens, disabling the OOM killer (see 10. OOM Control
below) and observing what happens will be helpful.

4.2 Task migration

When a task migrates from one cgroup to another, its charge is not
carried forward by default. The pages allocated from the original cgroup still
remain charged to it; the charge is dropped when the page is freed or
reclaimed.

You can move charges of a task along with task migration.
See 8. "Move charges at task migration"

4.3 Removing a cgroup

A cgroup can be removed by rmdir, but as discussed in sections 4.1 and 4.2, a
cgroup might have some charge associated with it, even though all
tasks have migrated away from it. (This is because we charge against pages,
not against tasks.)

Such charges are freed or moved to their parent. When they are moved, both RSS
and CACHES are moved to the parent.
rmdir() may return -EBUSY if freeing/moving fails. See 5.1 also.

Charges recorded in swap information are not updated at removal of a cgroup.
The recorded information is discarded and a cgroup which uses swap (swapcache)
will be charged as a new owner of it.

5. Misc. interfaces.

5.1 force_empty
  The memory.force_empty interface is provided to make a cgroup's memory usage
  empty. You can use this interface only when the cgroup has no tasks.
  When anything is written to this file, e.g.

  # echo 0 > memory.force_empty

  almost all pages tracked by this memory cgroup will be unmapped and freed.
  Some pages cannot be freed because they are locked or in use. Such pages are
  moved to the parent and this cgroup will be empty. This may return -EBUSY if
  the VM is too busy to free/move all pages immediately.

  The typical use case for this interface is calling it before rmdir().
  Because rmdir() moves all pages to the parent, some out-of-use page caches
  can be moved to the parent. If you want to avoid that, force_empty will be
  useful.

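  For example, to clean up and remove the cgroup created in section 3 (paths
  are illustrative; the shell must first be moved back to the root cgroup so
  that "0" has no tasks):

  # echo $$ > /sys/fs/cgroup/memory/tasks
  # echo 0 > /sys/fs/cgroup/memory/0/memory.force_empty
  # rmdir /sys/fs/cgroup/memory/0
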
5.2 stat file

The memory.stat file includes the following statistics:

# per-memory cgroup local status
cache           - # of bytes of page cache memory.
rss             - # of bytes of anonymous and swap cache memory.
mapped_file     - # of bytes of mapped file (includes tmpfs/shmem)
pgpgin          - # of pages paged in (equivalent to # of charging events).
pgpgout         - # of pages paged out (equivalent to # of uncharging events).
swap            - # of bytes of swap usage
inactive_anon   - # of bytes of anonymous and swap cache memory on inactive
                  LRU list.
active_anon     - # of bytes of anonymous and swap cache memory on active
                  LRU list.
inactive_file   - # of bytes of file-backed memory on inactive LRU list.
active_file     - # of bytes of file-backed memory on active LRU list.
unevictable     - # of bytes of memory that cannot be reclaimed (mlocked etc).

# status considering hierarchy (see memory.use_hierarchy settings)

hierarchical_memory_limit - # of bytes of memory limit with regard to the
                  hierarchy under which the memory cgroup is
hierarchical_memsw_limit  - # of bytes of memory+swap limit with regard to the
                  hierarchy under which the memory cgroup is.

total_cache           - sum of all children's "cache"
total_rss             - sum of all children's "rss"
total_mapped_file     - sum of all children's "mapped_file"
total_pgpgin          - sum of all children's "pgpgin"
total_pgpgout         - sum of all children's "pgpgout"
total_swap            - sum of all children's "swap"
total_inactive_anon   - sum of all children's "inactive_anon"
total_active_anon     - sum of all children's "active_anon"
total_inactive_file   - sum of all children's "inactive_file"
total_active_file     - sum of all children's "active_file"
total_unevictable     - sum of all children's "unevictable"

# The following additional stats are dependent on CONFIG_DEBUG_VM.

inactive_ratio        - VM internal parameter. (see mm/page_alloc.c)
recent_rotated_anon   - VM internal parameter. (see mm/vmscan.c)
recent_rotated_file   - VM internal parameter. (see mm/vmscan.c)
recent_scanned_anon   - VM internal parameter. (see mm/vmscan.c)
recent_scanned_file   - VM internal parameter. (see mm/vmscan.c)

Memo:
        recent_rotated means recent frequency of LRU rotation.
        recent_scanned means recent # of scans to the LRU.
        These are shown for debugging purposes; please see the code for their
        precise meanings.

Note:
        Only anonymous and swap cache memory is listed as part of the 'rss'
        stat. This should not be confused with the true 'resident set size' or
        the amount of physical memory used by the cgroup.
        'rss + mapped_file' will give you the resident set size of the cgroup.
        (Note: file and shmem may be shared among other cgroups. In that case,
        mapped_file is accounted only when the memory cgroup is the owner of
        the page cache.)

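For example, a quick way to compute that resident set size from the shell
(a sketch; run it inside the cgroup's directory):

# awk '$1 == "rss" || $1 == "mapped_file" { sum += $2 } END { print sum }' memory.stat
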
5.3 swappiness

Similar to /proc/sys/vm/swappiness, but only affecting a hierarchy of groups.

The swappiness of the following cgroups can't be changed:
- the root cgroup (it uses /proc/sys/vm/swappiness).
- a cgroup which uses hierarchy and has other cgroup(s) below it.
- a cgroup which uses hierarchy and is not the root of the hierarchy.

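For any other memory cgroup, the value can be read and written like the sysctl
(range 0-100; the value shown below is only an example):

# cat memory.swappiness
60
# echo 10 > memory.swappiness
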
5.4 failcnt

A memory cgroup provides memory.failcnt and memory.memsw.failcnt files.
This failcnt (== failure count) shows the number of times that a usage counter
hit its limit. When a memory cgroup hits a limit, failcnt increases and
memory under it will be reclaimed.

You can reset failcnt by writing 0 to the failcnt file.
# echo 0 > .../memory.failcnt

5.5 usage_in_bytes

For efficiency, as with other kernel components, the memory cgroup uses some
optimization to avoid unnecessary cacheline false sharing. usage_in_bytes is
affected by the method and doesn't show the 'exact' value of memory (and swap)
usage; it's a fuzz value for efficient access. (Of course, when necessary, it's
synchronized.)
If you want to know the more exact memory usage, you should use the
RSS+CACHE(+SWAP) value in memory.stat (see 5.2).

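For example, a sketch of computing that more exact value from the shell:

# awk '$1 == "rss" || $1 == "cache" || $1 == "swap" { sum += $2 } END { print sum }' memory.stat

(The "swap" line only exists when the swap extension is enabled.)
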
5.6 numa_stat

This is similar to numa_maps but operates on a per-memcg basis. This is
useful for providing visibility into the numa locality information within
a memcg since the pages are allowed to be allocated from any physical
node. One of the use cases is evaluating application performance by
combining this information with the application's cpu allocation.

We export "total", "file", "anon" and "unevictable" pages per-node for
each memcg. The output format of memory.numa_stat is:

total=<total pages> N0=<node 0 pages> N1=<node 1 pages> ...
file=<total file pages> N0=<node 0 pages> N1=<node 1 pages> ...
anon=<total anon pages> N0=<node 0 pages> N1=<node 1 pages> ...
unevictable=<total unevictable pages> N0=<node 0 pages> N1=<node 1 pages> ...

And we have total = file + anon + unevictable.

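Reading the file is enough to get the per-node breakdown; for example, to see
only the anonymous pages per node (the path follows the earlier examples and
is only illustrative):

# grep '^anon=' /sys/fs/cgroup/memory/0/memory.numa_stat
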
6. Hierarchy support

The memory controller supports a deep hierarchy and hierarchical accounting.
The hierarchy is created by creating the appropriate cgroups in the
cgroup filesystem. Consider, for example, the following cgroup filesystem
hierarchy

               root
             /  |   \
            /   |    \
           a    b     c
                      | \
                      |  \
                      d   e

In the diagram above, with hierarchical accounting enabled, all memory
usage of e is accounted to its ancestors up until the root (i.e., c and root)
that have memory.use_hierarchy enabled. If one of the ancestors goes over its
limit, the reclaim algorithm reclaims from the tasks in the ancestor and the
children of the ancestor.

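For example, the hierarchy in the diagram could be built like this (assuming
the memory cgroup is mounted at /sys/fs/cgroup/memory as in section 3, and
use_hierarchy is enabled as described in 6.1 below, before any children exist):

# cd /sys/fs/cgroup/memory
# echo 1 > memory.use_hierarchy
# mkdir a b c
# mkdir c/d c/e

With this setup, any memory charged to c/e also counts toward c and toward
the root.
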
6.1 Enabling hierarchical accounting and reclaim

A memory cgroup by default disables the hierarchy feature. Support
can be enabled by writing 1 to the memory.use_hierarchy file of the root cgroup

# echo 1 > memory.use_hierarchy

The feature can be disabled by

# echo 0 > memory.use_hierarchy

NOTE1: Enabling/disabling will fail if either the cgroup already has other
       cgroups created below it, or if the parent cgroup has use_hierarchy
       enabled.

NOTE2: When panic_on_oom is set to "2", the whole system will panic in
       case of an OOM event in any cgroup.

7. Soft limits

Soft limits allow for greater sharing of memory. The idea behind soft limits
is to allow control groups to use as much of the memory as needed, provided

a. There is no memory contention
b. They do not exceed their hard limit

When the system detects memory contention or low memory, control groups
are pushed back to their soft limits. If the soft limit of each control
group is very high, they are pushed back as much as possible to make
sure that one control group does not starve the others of memory.

Please note that soft limits are a best-effort feature; they come with
no guarantees, but they do their best to make sure that when memory is
heavily contended for, memory is allocated based on the soft limit
hints/setup. Currently soft limit based reclaim is set up such that
it gets invoked from balance_pgdat (kswapd).

7.1 Interface

Soft limits can be set up by using the following commands (in this example we
assume a soft limit of 256 MiB)

# echo 256M > memory.soft_limit_in_bytes

If we want to change this to 1G, we can at any time use

# echo 1G > memory.soft_limit_in_bytes

NOTE1: Soft limits take effect over a long period of time, since they involve
       reclaiming memory for balancing between memory cgroups
NOTE2: It is recommended to always set the soft limit below the hard limit,
       otherwise the hard limit will take precedence.

8. Move charges at task migration

Users can move charges associated with a task along with task migration, that
is, uncharge the task's pages from the old cgroup and charge them to the new
cgroup.
This feature is not supported in !CONFIG_MMU environments because of lack of
page tables.

8.1 Interface

This feature is disabled by default. It can be enabled (and disabled again) by
writing to memory.move_charge_at_immigrate of the destination cgroup.

If you want to enable it:

# echo (some positive value) > memory.move_charge_at_immigrate

Note: Each bit of move_charge_at_immigrate has its own meaning about what type
      of charges should be moved. See 8.2 for details.
Note: Charges are moved only when you move mm->owner, in other words, a leader
      of a thread group.
Note: If we cannot find enough space for the task in the destination cgroup, we
      try to make space by reclaiming memory. Task migration may fail if we
      cannot make enough space.
Note: It can take several seconds if you move many charges.

And if you want to disable it again:

# echo 0 > memory.move_charge_at_immigrate

8.2 Type of charges which can be moved

Each bit of move_charge_at_immigrate has its own meaning about what type of
charges should be moved. But in any case, it must be noted that an account of
a page or a swap can be moved only when it is charged to the task's current
(old) memory cgroup.

  bit | what type of charges would be moved ?
 -----+------------------------------------------------------------------------
   0  | A charge of an anonymous page (or swap of it) used by the target task.
      | Those pages and swaps must be used only by the target task. You must
      | enable Swap Extension (see 2.4) to enable move of swap charges.
 -----+------------------------------------------------------------------------
   1  | A charge of file pages (normal file, tmpfs file (e.g. ipc shared memory)
      | and swaps of tmpfs file) mmapped by the target task. Unlike the case of
      | anonymous pages, file pages (and swaps) in the range mmapped by the task
      | will be moved even if the task hasn't done page fault, i.e. they might
      | not be the task's "RSS", but other task's "RSS" that maps the same file.
      | And the mapcount of the page is ignored (the page can be moved even if
      | page_mapcount(page) > 1). You must enable Swap Extension (see 2.4) to
      | enable move of swap charges.

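For example, to move both types of charges when a task migrates into the
destination cgroup, set bits 0 and 1 (value 3); to move only anonymous pages,
set bit 0 (value 1):

# echo 3 > memory.move_charge_at_immigrate
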
8.3 TODO

- Implement madvise(2) to let users decide which vmas should be moved or not.
- All moving-charge operations are done under cgroup_mutex. It's not good
  behavior to hold the mutex too long, so we may need some trick.

9. Memory thresholds

The memory cgroup implements memory thresholds using the cgroups notification
API (see cgroups.txt). It allows you to register multiple memory and memsw
thresholds and get notifications when a threshold is crossed.

To register a threshold, an application needs to:
- create an eventfd using eventfd(2);
- open memory.usage_in_bytes or memory.memsw.usage_in_bytes;
- write a string like "<event_fd> <fd of memory.usage_in_bytes> <threshold>" to
  cgroup.event_control.

The application will be notified through the eventfd when memory usage crosses
the threshold in either direction.

This is applicable to both root and non-root cgroups.

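Since an eventfd cannot be created from a plain shell, a small helper program
is needed to follow the steps above. Assuming the example listener shipped with
the kernel source (Documentation/cgroups/cgroup_event_listener.c) has been
built, a registration for a 4M threshold might look roughly like:

# ./cgroup_event_listener /sys/fs/cgroup/memory/0/memory.usage_in_bytes 4M

(The helper's location, name and argument syntax are stated here as an
assumption; check the program's usage output for your kernel version.)
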
10. OOM Control

The memory.oom_control file is for OOM notification and other controls.

The memory cgroup implements an OOM notifier using the cgroup notification
API (see cgroups.txt). It allows you to register multiple OOM notification
deliveries and get a notification when an OOM happens.

To register a notifier, an application needs to:
 - create an eventfd using eventfd(2)
 - open the memory.oom_control file
 - write a string like "<event_fd> <fd of memory.oom_control>" to
   cgroup.event_control

The application will be notified through the eventfd when an OOM happens.
OOM notification doesn't work for the root cgroup.

You can disable the OOM-killer by writing "1" to the memory.oom_control file,
as:

# echo 1 > memory.oom_control

This operation is only allowed for the top cgroup of a sub-hierarchy.
If the OOM-killer is disabled, tasks under the cgroup will hang/sleep
in the memory cgroup's OOM-waitqueue when they request accountable memory.

For running them, you have to relax the memory cgroup's OOM status by
        * enlarging the limit or reducing usage.
To reduce usage,
        * kill some tasks.
        * move some tasks to another group with account migration.
        * remove some files (on tmpfs?)

Then, stopped tasks will work again.

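For example, if tasks are stuck in the OOM-waitqueue because the limit is too
tight, simply enlarging the limit (the value is only illustrative) lets them
continue:

# echo 100M > memory.limit_in_bytes
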
At reading, the current status of OOM is shown.
        oom_kill_disable 0 or 1 (if 1, the oom-killer is disabled)
        under_oom        0 or 1 (if 1, the memory cgroup is under OOM, tasks
                                 may be stopped.)

11. TODO

1. Add support for accounting huge pages (as a separate controller)
2. Make the per-cgroup scanner reclaim not-shared pages first
3. Teach the controller to account for shared pages
4. Start reclamation in the background when the limit is
   not yet hit but the usage is getting closer

Summary

Overall, the memory controller has been a stable controller and has been
commented on and discussed quite extensively in the community.

References

1. Singh, Balbir. RFC: Memory Controller, http://lwn.net/Articles/206697/
2. Singh, Balbir. Memory Controller (RSS Control),
   http://lwn.net/Articles/222762/
3. Emelianov, Pavel. Resource controllers based on process cgroups,
   http://lkml.org/lkml/2007/3/6/198
4. Emelianov, Pavel. RSS controller based on process cgroups (v2),
   http://lkml.org/lkml/2007/4/9/78
5. Emelianov, Pavel. RSS controller based on process cgroups (v3),
   http://lkml.org/lkml/2007/5/30/244
6. Menage, Paul. Control Groups v10, http://lwn.net/Articles/236032/
7. Vaidyanathan, Srinivasan. Control Groups: Pagecache accounting and control
   subsystem (v3), http://lwn.net/Articles/235534/
8. Singh, Balbir. RSS controller v2 test results (lmbench),
   http://lkml.org/lkml/2007/5/17/232
9. Singh, Balbir. RSS controller v2 AIM9 results,
   http://lkml.org/lkml/2007/5/18/1
10. Singh, Balbir. Memory controller v6 test results,
    http://lkml.org/lkml/2007/8/19/36
11. Singh, Balbir. Memory controller introduction (v6),
    http://lkml.org/lkml/2007/8/17/69
12. Corbet, Jonathan. Controlling memory use in cgroups,
    http://lwn.net/Articles/243795/