NUMA balancing: optimize page placement for memory tiering system

With the advent of various new memory types, some machines will have
multiple types of memory, e.g.  DRAM and PMEM (persistent memory).  The
memory subsystem of such machines can be called a memory tiering
system, because the performance of the different types of memory
usually differs.

In such a system, because the memory access pattern changes over time,
some pages in the slow memory may become globally hot.  So in this
patch, the NUMA balancing mechanism is enhanced to dynamically optimize
the page placement among the different memory types according to page
hotness.

In a typical memory tiering system, there are CPUs, fast memory, and
slow memory in each physical NUMA node.  The CPUs and the fast memory
will be put in one logical node (called the fast memory node), while the
slow memory will be put in another (fake) logical node (called the slow
memory node).  For example, node 0 may hold a socket's CPUs and DRAM,
while a CPU-less node 2 holds that socket's PMEM.  That is, the fast
memory is regarded as local while the slow memory is regarded as
remote.  So it's possible for recently accessed pages in the slow
memory node to be promoted to the fast memory node via the existing
NUMA balancing mechanism.

The original NUMA balancing mechanism stops migrating pages if the
free memory of the target node falls below the high watermark.  This
is a reasonable policy if there's only one memory type.  But it makes
the original NUMA balancing mechanism almost useless for optimizing
page placement among different memory types.  Details are as follows.

It is common for the working-set size of the workload to be larger
than the size of the fast memory nodes; otherwise it would be
unnecessary to use the slow memory at all.  So there are almost never
enough free pages in the fast memory nodes, which means the globally
hot pages in the slow memory node cannot be promoted to the fast
memory node.  To solve this issue, we have two choices:

a. Ignore the free-pages watermark check when promoting hot pages
   from the slow memory node to the fast memory node.  This will
   create some memory pressure in the fast memory node and thus
   trigger memory reclaim, so that the cold pages in the fast memory
   node will be demoted to the slow memory node.

b. Define a new watermark, called wmark_promo, which is higher than
   wmark_high, and have kswapd reclaim pages until free pages reach
   that watermark.  The scenario is as follows: when we want to promote
   hot pages from slow memory to fast memory, but the fast memory's free
   pages would drop below the high watermark with such a promotion, we
   wake up kswapd with the wmark_promo watermark in order to demote cold
   pages and free up some space.  So the next time we want to promote
   hot pages, we might have a chance of doing so (see the sketch after
   this list).
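
Below is a minimal user-space sketch of the choice "b" flow, not the
kernel implementation; the node structure, watermark values, and helper
names are illustrative assumptions.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical per-node state, for illustration only. */
struct node {
        long free_pages;
        long wmark_high;        /* normal allocation watermark */
        long wmark_promo;       /* new, higher promotion watermark */
};

/* Promotion is allowed only if it keeps free pages above wmark_high. */
static bool promotion_fits(const struct node *fast, long nr_pages)
{
        return fast->free_pages - nr_pages >= fast->wmark_high;
}

/* Stand-in for kswapd demoting cold pages until wmark_promo is reached. */
static void kswapd_demote_until_promo(struct node *fast)
{
        while (fast->free_pages < fast->wmark_promo)
                fast->free_pages++;     /* one cold page demoted to slow memory */
}

int main(void)
{
        struct node fast = { .free_pages = 100, .wmark_high = 96, .wmark_promo = 112 };

        if (!promotion_fits(&fast, 8)) {
                /* Defer this promotion, but make room for the next one. */
                kswapd_demote_until_promo(&fast);
                printf("promotion deferred; free pages now %ld\n", fast.free_pages);
        }
        if (promotion_fits(&fast, 8))
                printf("a later promotion of 8 pages can proceed\n");
        return 0;
}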

The choice "a" may create high memory pressure in the fast memory node.
If the memory pressure of the workload is high, the memory pressure
may become so high that the memory allocation latency of the workload
is influenced, e.g.  the direct reclaiming may be triggered.

The choice "b" works much better at this aspect.  If the memory
pressure of the workload is high, the hot pages promotion will stop
earlier because its allocation watermark is higher than that of the
normal memory allocation.  So in this patch, choice "b" is implemented.
A new zone watermark (WMARK_PROMO) is added.  Which is larger than the
high watermark and can be controlled via watermark_scale_factor.
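
As a minimal sketch of the resulting watermark ladder (user-space code,
not the kernel's): the step "tmp" follows the watermark_scale_factor
rule (managed_pages * watermark_scale_factor / 10000); the zone size and
WMARK_MIN values are assumptions, and the additional lower bound the
kernel derives from min_free_kbytes is omitted here.

#include <stdio.h>

int main(void)
{
        unsigned long managed_pages = 4UL << 20;        /* a 16 GB zone of 4 KB pages */
        unsigned long watermark_scale_factor = 10;      /* the sysctl's default value */
        unsigned long tmp = managed_pages * watermark_scale_factor / 10000;

        unsigned long min = 16384;              /* assumed WMARK_MIN */
        unsigned long low = min + tmp;          /* WMARK_LOW */
        unsigned long high = low + tmp;         /* WMARK_HIGH */
        unsigned long promo = high + tmp;       /* WMARK_PROMO, added by this patch */

        printf("min=%lu low=%lu high=%lu promo=%lu (pages)\n",
               min, low, high, promo);
        return 0;
}

Raising watermark_scale_factor therefore widens every step, including
the new gap in which kswapd keeps reclaiming to make room for promotion.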

In addition to the original page placement optimization among sockets,
the NUMA balancing mechanism is extended to optimize page placement
according to hot/cold among different memory types.  So the sysctl
user space interface (numa_balancing) is extended in a backward
compatible way as follows, so that users can enable/disable each piece
of functionality individually.

The sysctl is converted from a Boolean value to a bit field; the value
to set can be the result of ORing the following flags (e.g.  writing 3
enables both the socket and the memory tiering optimizations):

- 0: NUMA_BALANCING_DISABLED
- 1: NUMA_BALANCING_NORMAL
- 2: NUMA_BALANCING_MEMORY_TIERING
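
For example (the defines below are exactly those added by this patch to
include/linux/sched/sysctl.h; the small program around them is only an
illustration of the bit semantics):

#include <stdio.h>

#define NUMA_BALANCING_DISABLED         0x0
#define NUMA_BALANCING_NORMAL           0x1
#define NUMA_BALANCING_MEMORY_TIERING   0x2

int main(void)
{
        /* Writing 3 to /proc/sys/kernel/numa_balancing enables both modes. */
        int mode = NUMA_BALANCING_NORMAL | NUMA_BALANCING_MEMORY_TIERING;

        printf("numa_balancing = %d\n", mode);  /* prints 3 */
        if (mode & NUMA_BALANCING_MEMORY_TIERING)
                printf("hot pages in slow memory can be promoted\n");
        return 0;
}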

We have tested the patch with the pmbench memory accessing benchmark,
with an 80:20 read/write ratio and a Gauss access address distribution,
on a 2-socket Intel server with Optane DC Persistent Memory.  The test
results show that the pmbench score improves by up to 95.9%.

Thanks to Andrew Morton for helping to fix the documentation format
error.

Link: https://lkml.kernel.org/r/20220221084529.1052339-3-ying.huang@intel.com
Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Tested-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Reviewed-by: Yang Shi <shy828301@gmail.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Wei Xu <weixugc@google.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: zhongjiang-ali <zhongjiang-ali@linux.alibaba.com>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Feng Tang <feng.tang@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
---
 8 files changed, 70 insertions(+), 18 deletions(-)

--- a/Documentation/admin-guide/sysctl/kernel.rst
+++ b/Documentation/admin-guide/sysctl/kernel.rst
@@ -595,16 +595,23 @@ Documentation/admin-guide/kernel-parameters.rst).
 numa_balancing
 ==============
 
-Enables/disables automatic page fault based NUMA memory
-balancing. Memory is moved automatically to nodes
-that access it often.
-
-Enables/disables automatic NUMA memory balancing. On NUMA machines, there
-is a performance penalty if remote memory is accessed by a CPU. When this
-feature is enabled the kernel samples what task thread is accessing memory
-by periodically unmapping pages and later trapping a page fault. At the
-time of the page fault, it is determined if the data being accessed should
-be migrated to a local memory node.
+Enables/disables and configures automatic page fault based NUMA memory
+balancing. Memory is moved automatically to nodes that access it often.
+The value to set can be the result of ORing the following:
+
+= =================================
+0 NUMA_BALANCING_DISABLED
+1 NUMA_BALANCING_NORMAL
+2 NUMA_BALANCING_MEMORY_TIERING
+= =================================
+
+Or NUMA_BALANCING_NORMAL to optimize page placement among different
+NUMA nodes to reduce remote accessing. On NUMA machines, there is a
+performance penalty if remote memory is accessed by a CPU. When this
+feature is enabled the kernel samples what task thread is accessing
+memory by periodically unmapping pages and later trapping a page
+fault. At the time of the page fault, it is determined if the data
+being accessed should be migrated to a local memory node.
 
 The unmapping of pages and trapping faults incur additional overhead that
 ideally is offset by improved memory locality but there is no universal
@@ -615,6 +622,10 @@ faults may be controlled by the `numa_balancing_scan_period_min_ms,
 numa_balancing_scan_delay_ms, numa_balancing_scan_period_max_ms,
 numa_balancing_scan_size_mb`_, and numa_balancing_settle_count sysctls.
 
+Or NUMA_BALANCING_MEMORY_TIERING to optimize page placement among
+different types of memory (represented as different NUMA nodes) to
+place the hot pages in the fast memory. This is implemented based on
+unmapping and page fault too.
+
 numa_balancing_scan_period_min_ms, numa_balancing_scan_delay_ms, numa_balancing_scan_period_max_ms, numa_balancing_scan_size_mb
 ===============================================================================================================================

--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -353,6 +353,7 @@ enum zone_watermarks {
 	WMARK_MIN,
 	WMARK_LOW,
 	WMARK_HIGH,
+	WMARK_PROMO,
 	NR_WMARK
 };

--- a/include/linux/sched/sysctl.h
+++ b/include/linux/sched/sysctl.h
@@ -23,6 +23,16 @@ enum sched_tunable_scaling {
 	SCHED_TUNABLESCALING_END,
 };
 
+#define NUMA_BALANCING_DISABLED		0x0
+#define NUMA_BALANCING_NORMAL		0x1
+#define NUMA_BALANCING_MEMORY_TIERING	0x2
+
+#ifdef CONFIG_NUMA_BALANCING
+extern int sysctl_numa_balancing_mode;
+#else
+#define sysctl_numa_balancing_mode	0
+#endif
+
 /*
  * control realtime throttling:
  *

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4279,7 +4279,9 @@ DEFINE_STATIC_KEY_FALSE(sched_numa_balancing);
 
 #ifdef CONFIG_NUMA_BALANCING
 
-void set_numabalancing_state(bool enabled)
+int sysctl_numa_balancing_mode;
+
+static void __set_numabalancing_state(bool enabled)
 {
 	if (enabled)
 		static_branch_enable(&sched_numa_balancing);
@@ -4287,13 +4289,22 @@ void set_numabalancing_state(bool enabled)
 		static_branch_disable(&sched_numa_balancing);
 }
 
+void set_numabalancing_state(bool enabled)
+{
+	if (enabled)
+		sysctl_numa_balancing_mode = NUMA_BALANCING_NORMAL;
+	else
+		sysctl_numa_balancing_mode = NUMA_BALANCING_DISABLED;
+	__set_numabalancing_state(enabled);
+}
+
 #ifdef CONFIG_PROC_SYSCTL
 int sysctl_numa_balancing(struct ctl_table *table, int write,
 			  void *buffer, size_t *lenp, loff_t *ppos)
 {
 	struct ctl_table t;
 	int err;
-	int state = static_branch_likely(&sched_numa_balancing);
+	int state = sysctl_numa_balancing_mode;
 
 	if (write && !capable(CAP_SYS_ADMIN))
 		return -EPERM;
@@ -4303,8 +4314,10 @@ int sysctl_numa_balancing(struct ctl_table *table, int write,
 	err = proc_dointvec_minmax(&t, write, buffer, lenp, ppos);
 	if (err < 0)
 		return err;
-	if (write)
-		set_numabalancing_state(state);
+	if (write) {
+		sysctl_numa_balancing_mode = state;
+		__set_numabalancing_state(state);
+	}
 	return err;
 }
 #endif

--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -1696,7 +1696,7 @@ static struct ctl_table kern_table[] = {
 		.mode		= 0644,
 		.proc_handler	= sysctl_numa_balancing,
 		.extra1		= SYSCTL_ZERO,
-		.extra2		= SYSCTL_ONE,
+		.extra2		= SYSCTL_FOUR,
 	},
 #endif /* CONFIG_NUMA_BALANCING */
 	{

--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -51,6 +51,7 @@
 #include <linux/oom.h>
 #include <linux/memory.h>
 #include <linux/random.h>
+#include <linux/sched/sysctl.h>
 
 #include <asm/tlbflush.h>
 
@@ -2031,16 +2032,27 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
 {
 	int page_lru;
 	int nr_pages = thp_nr_pages(page);
+	int order = compound_order(page);
 
-	VM_BUG_ON_PAGE(compound_order(page) && !PageTransHuge(page), page);
+	VM_BUG_ON_PAGE(order && !PageTransHuge(page), page);
 
 	/* Do not migrate THP mapped by multiple processes */
 	if (PageTransHuge(page) && total_mapcount(page) > 1)
 		return 0;
 
 	/* Avoid migrating to a node that is nearly full */
-	if (!migrate_balanced_pgdat(pgdat, nr_pages))
+	if (!migrate_balanced_pgdat(pgdat, nr_pages)) {
+		int z;
+
+		if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING))
+			return 0;
+		for (z = pgdat->nr_zones - 1; z >= 0; z--) {
+			if (populated_zone(pgdat->node_zones + z))
+				break;
+		}
+		wakeup_kswapd(pgdat->node_zones + z, 0, order, ZONE_MOVABLE);
 		return 0;
+	}
 
 	if (isolate_lru_page(page))
 		return 0;

--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8441,7 +8441,8 @@ static void __setup_per_zone_wmarks(void)
 
 		zone->watermark_boost = 0;
 		zone->_watermark[WMARK_LOW]  = min_wmark_pages(zone) + tmp;
-		zone->_watermark[WMARK_HIGH] = min_wmark_pages(zone) + tmp * 2;
+		zone->_watermark[WMARK_HIGH] = low_wmark_pages(zone) + tmp;
+		zone->_watermark[WMARK_PROMO] = high_wmark_pages(zone) + tmp;
 
 		spin_unlock_irqrestore(&zone->lock, flags);
 	}

--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -56,6 +56,7 @@
 #include <linux/swapops.h>
 #include <linux/balloon_compaction.h>
+#include <linux/sched/sysctl.h>
 
 #include "internal.h"
 
@@ -3895,7 +3896,10 @@ static bool pgdat_balanced(pg_data_t *pgdat, int order, int highest_zoneidx)
 		if (!managed_zone(zone))
 			continue;
 
-		mark = high_wmark_pages(zone);
+		if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING)
+			mark = wmark_pages(zone, WMARK_PROMO);
+		else
+			mark = high_wmark_pages(zone);
 		if (zone_watermark_ok_safe(zone, order, mark, highest_zoneidx))
 			return true;
 	}