hugetlb: clean up and update huge pages documentation

Attempt to clarify huge page administration and usage, and update the
documentation to mention the balancing of huge pages across nodes when
allocating and freeing.

Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Nishanth Aravamudan <nacc@us.ibm.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Adam Litke <agl@us.ibm.com>
Cc: Andy Whitcroft <apw@canonical.com>
Cc: Eric Whitney <eric.whitney@hp.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
commit 41a25e7e67 (parent 685f345708)
Lee Schermerhorn, 2009-09-21 17:01:24 -07:00; committed by Linus Torvalds


@@ -40,12 +40,14 @@ where:
 HugePages_Total is the size of the pool of huge pages.
 HugePages_Free  is the number of huge pages in the pool that are not yet
                 allocated.
-HugePages_Rsvd  is short for "reserved," and is the number of hugepages
-                for which a commitment to allocate from the pool has been made, but no
-                allocation has yet been made.  It's vaguely analogous to overcommit.
+HugePages_Rsvd  is short for "reserved," and is the number of huge pages for
+                which a commitment to allocate from the pool has been made,
+                but no allocation has yet been made.  Reserved huge pages
+                guarantee that an application will be able to allocate a
+                huge page from the pool of huge pages at fault time.
 HugePages_Surp  is short for "surplus," and is the number of huge pages in
-                the pool above the value in /proc/sys/vm/nr_hugepages.  The maximum
-                number of surplus hugepages is controlled by
+                the pool above the value in /proc/sys/vm/nr_hugepages.  The
+                maximum number of surplus huge pages is controlled by
                 /proc/sys/vm/nr_overcommit_hugepages.
 
 /proc/filesystems should also show a filesystem of type "hugetlbfs" configured
@@ -67,27 +69,66 @@ use either the mmap system call or shared memory system calls to start using
 the huge pages.  It is required that the system administrator preallocate
 enough memory for huge page purposes.
 
-Use the following command to dynamically allocate/deallocate hugepages:
+The administrator can preallocate huge pages on the kernel boot command line by
+specifying the "hugepages=N" parameter, where 'N' = the number of huge pages
+requested.  This is the most reliable method for preallocating huge pages, as
+memory has not yet become fragmented.
+
+Some platforms support multiple huge page sizes.  To preallocate huge pages
+of a specific size, one must precede the huge pages boot command parameters
+with a huge page size selection parameter "hugepagesz=<size>".  <size> must
+be specified in bytes with an optional scale suffix [kKmMgG].  The default huge
+page size may be selected with the "default_hugepagesz=<size>" boot parameter.
+
+/proc/sys/vm/nr_hugepages indicates the current number of configured [default
+size] hugetlb pages in the kernel.  The superuser can dynamically request more
+(or free some pre-configured) huge pages.
+
+Use the following command to dynamically allocate/deallocate default-sized
+huge pages:
 
 	echo 20 > /proc/sys/vm/nr_hugepages
 
-This command will try to configure 20 hugepages in the system.  The success
-or failure of allocation depends on the amount of physically contiguous
-memory that is preset in system at this time.  System administrators may want
-to put this command in one of the local rc init files.  This will enable the
-kernel to request huge pages early in the boot process (when the possibility
-of getting physical contiguous pages is still very high).  In either
-case, administrators will want to verify the number of hugepages actually
-allocated by checking the sysctl or meminfo.
+This command will try to configure 20 default-sized huge pages in the system.
+On a NUMA platform, the kernel will attempt to distribute the huge page pool
+over all on-line nodes.  These huge pages, allocated when nr_hugepages
+is increased, are called "persistent huge pages".
+
+The success or failure of huge page allocation depends on the amount of
+physically contiguous memory that is present in the system at the time of the
+allocation attempt.  If the kernel is unable to allocate huge pages from
+some nodes in a NUMA system, it will attempt to make up the difference by
+allocating extra pages on other nodes with sufficient available contiguous
+memory, if any.
+
+System administrators may want to put this command in one of the local rc init
+files.  This will enable the kernel to request huge pages early in the boot
+process, when the possibility of getting physically contiguous pages is still
+very high.  Administrators can verify the number of huge pages actually
+allocated by checking the sysctl or meminfo.  To check the per-node
+distribution of huge pages in a NUMA system, use:
+
+	cat /sys/devices/system/node/node*/meminfo | fgrep Huge
 
-/proc/sys/vm/nr_overcommit_hugepages indicates how large the pool of
+/proc/sys/vm/nr_overcommit_hugepages specifies how large the pool of
 huge pages can grow, if more huge pages than /proc/sys/vm/nr_hugepages are
-requested by applications.  echo'ing any non-zero value into this file
-indicates that the hugetlb subsystem is allowed to try to obtain
-hugepages from the buddy allocator, if the normal pool is exhausted.  As
+requested by applications.  Writing any non-zero value into this file
+indicates that the hugetlb subsystem is allowed to try to obtain "surplus"
+huge pages from the buddy allocator, when the normal pool is exhausted.  As
 these surplus huge pages go out of use, they are freed back to the buddy
 allocator.
+
+When increasing the huge page pool size via nr_hugepages, any surplus
+pages will first be promoted to persistent huge pages.  Then, additional
+huge pages will be allocated, if necessary and if possible, to fulfill
+the new huge page pool size.
+
+The administrator may shrink the pool of preallocated huge pages for
+the default huge page size by setting the nr_hugepages sysctl to a
+smaller value.  The kernel will attempt to balance the freeing of huge pages
+across all on-line nodes.  Any free huge pages on the selected nodes will
+be freed back to the buddy allocator.
 
 Caveat: Shrinking the pool via nr_hugepages such that it becomes less
 than the number of huge pages in use will convert the balance to surplus
 huge pages even if it would exceed the overcommit value.  As long as
@@ -97,9 +138,9 @@ sufficiently, or the surplus huge pages go out of use and are freed.
 
 With support for multiple huge page pools at run-time available, much of
 the huge page userspace interface has been duplicated in sysfs.  The above
-information applies to the default hugepage size (which will be
-controlled by the proc interfaces for backwards compatibility).  The root
-hugepage control directory is
+information applies to the default huge page size, which will be
+controlled by the /proc interfaces for backwards compatibility.  The root
+huge page control directory in sysfs is:
 
 	/sys/kernel/mm/hugepages
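The accounting rule this patch documents for growing the pool (surplus pages are promoted to persistent pages first; only the remainder is newly allocated from the buddy allocator) can be sketched with plain shell arithmetic. This is only an illustration of the rule, not kernel code; the variable names and starting values are invented for the example:

```shell
# Suppose the pool holds 20 persistent pages plus 5 surplus pages in use,
# and the administrator raises nr_hugepages to 30.  Surplus pages are
# promoted to persistent pages first; only the remainder must be newly
# allocated from the buddy allocator.
persistent=20
surplus=5
new_nr=30

needed=$(( new_nr - persistent ))       # pages the pool must gain: 10
promoted=$needed                        # promote at most all surplus pages
[ "$surplus" -lt "$needed" ] && promoted=$surplus
newly_allocated=$(( needed - promoted ))

echo "promoted=$promoted newly_allocated=$newly_allocated"
# promoted=5 newly_allocated=5
```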