Subject: Re: [PATCH -V13 2/3] NUMA balancing: optimize page placement for memory tiering system
On Mon, Feb 21, 2022 at 04:45:28PM +0800, Huang Ying wrote:
> With the advent of various new memory types, some machines will have
> multiple types of memory, e.g. DRAM and PMEM (persistent memory).
> The memory subsystem of these machines can be called a memory
> tiering system, because the performance of the different types of
> memory is usually different.
>
> In such a system, because of changes in the memory access pattern
> etc., some pages in the slow memory may become hot globally. So in
> this patch, the NUMA balancing mechanism is enhanced to optimize the
> page placement among the different memory types according to
> hot/cold status dynamically.
>
> In a typical memory tiering system, there are CPUs, fast memory and
> slow memory in each physical NUMA node. The CPUs and the fast memory
> will be put in one logical node (called fast memory node), while the
> slow memory will be put in another (faked) logical node (called slow
> memory node). That is, the fast memory is regarded as local while the
> slow memory is regarded as remote. So it's possible for the recently
> accessed pages in the slow memory node to be promoted to the fast
> memory node via the existing NUMA balancing mechanism.
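[Illustration, not from the patch text: with this topology the
fast/slow distinction falls out of whether a logical node has CPUs
attached. A minimal sketch, assuming the node_is_toptier() naming
used elsewhere in the tiering work:]

	/*
	 * Sketch: a node is "fast" (toptier) when it has CPUs
	 * attached; a CPU-less node holds the slow memory tier.
	 */
	static inline bool node_is_toptier(int node)
	{
		return node_state(node, N_CPU);
	}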
>
> The original NUMA balancing mechanism stops migrating pages when the
> free memory of the target node falls below the high watermark. This
> is a reasonable policy if there's only one memory type. But it makes
> the original NUMA balancing mechanism almost useless for optimizing
> page placement among different memory types. Details are as
> follows.
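[As an inline illustration of the existing behavior described above,
a simplified sketch of the watermark check, loosely following the
shape of migrate_balanced_pgdat() in mm/migrate.c:]

	/*
	 * Simplified sketch: refuse to migrate nr_migrate_pages to a
	 * node unless some zone there would still have free pages
	 * above the high watermark afterwards.
	 */
	static bool migrate_balanced_pgdat(struct pglist_data *pgdat,
					   unsigned long nr_migrate_pages)
	{
		int z;

		for (z = pgdat->nr_zones - 1; z >= 0; z--) {
			struct zone *zone = pgdat->node_zones + z;

			if (!managed_zone(zone))
				continue;

			/* Enough free pages above WMARK_HIGH? */
			if (zone_watermark_ok(zone, 0,
					      high_wmark_pages(zone) +
					      nr_migrate_pages,
					      ZONE_MOVABLE, 0))
				return true;
		}
		return false;
	}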
>
> It's the common case that the working-set size of the workload is
> larger than the size of the fast memory nodes. Otherwise, it would
> be unnecessary to use the slow memory at all. So, there are almost
> never enough free pages in the fast memory nodes, and the globally
> hot pages in the slow memory node cannot be promoted to the fast
> memory node. To solve this issue, we have two choices:
>
> a. Ignore the free pages watermark check when promoting hot pages
> from the slow memory node to the fast memory node. This will
> create some memory pressure in the fast memory node, and thus
> trigger memory reclaim, so that the cold pages in the fast memory
> node will be demoted to the slow memory node.
>
> b. Make kswapd of the fast memory node reclaim pages until the free
> pages are a little above the high watermark (at a new watermark
> named the promo watermark). Then, if the free pages of the fast
> memory node drop to the high watermark and some hot pages need to
> be promoted, kswapd of the fast memory node will be woken up to
> demote more cold pages in the fast memory node to the slow memory
> node. This frees some extra space in the fast memory node, so the
> hot pages in the slow memory node can be promoted to the fast
> memory node.
>
> The choice "a" may create high memory pressure in the fast memory
> node. If the memory pressure of the workload is high, the memory
> pressure may become so high that the memory allocation latency of the
> workload is influenced, e.g. the direct reclaiming may be triggered.
>
> The choice "b" works much better at this aspect. If the memory
> pressure of the workload is high, the hot pages promotion will stop
> earlier because its allocation watermark is higher than that of the
> normal memory allocation. So in this patch, choice "b" is
> implemented. A new zone watermark (WMARK_PROMO) is added. Which is
> larger than the high watermark and can be controlled via
> watermark_scale_factor.
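[A sketch of the mechanism, with the placement and helper usage
assumed rather than quoted from the patch: WMARK_PROMO slots in above
WMARK_HIGH, and the promotion path proceeds only while the fast node
has headroom above it, waking kswapd to demote cold pages otherwise:]

	/* New watermark above WMARK_HIGH (as described above). */
	enum zone_watermarks {
		WMARK_MIN,
		WMARK_LOW,
		WMARK_HIGH,
		WMARK_PROMO,
		NR_WMARK
	};

	/*
	 * Hypothetical headroom check in the promotion path: promote
	 * only while free pages stay above the promo watermark.
	 */
	static bool promotion_has_headroom(struct zone *zone,
					   unsigned long nr_pages)
	{
		return zone_watermark_ok(zone, 0,
					 wmark_pages(zone, WMARK_PROMO) +
					 nr_pages,
					 ZONE_MOVABLE, 0);
	}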
>
> In addition to the original page placement optimization among
> sockets, the NUMA balancing mechanism is extended to optimize page
> placement according to hot/cold status among different memory types.
> So the sysctl user space interface (numa_balancing) is extended in a
> backward compatible way as follows, so that users can enable/disable
> these functionalities individually.
>
> The sysctl is converted from a Boolean value to a bit field. The
> definition of the flags is,
>
> - 0: NUMA_BALANCING_DISABLED
> - 1: NUMA_BALANCING_NORMAL
> - 2: NUMA_BALANCING_MEMORY_TIERING
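[The bit definitions spelled out as C macros, following the names and
values listed above; a sketch of what the patch presumably defines:]

	#define NUMA_BALANCING_DISABLED		0x0
	#define NUMA_BALANCING_NORMAL		0x1
	#define NUMA_BALANCING_MEMORY_TIERING	0x2

So writing 3 (NORMAL | MEMORY_TIERING) to
/proc/sys/kernel/numa_balancing enables both the original cross-socket
balancing and the tiering promotion, while 1 keeps the pre-existing
behavior.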
>
> We have tested the patch with the pmbench memory accessing benchmark
> with an 80:20 read/write ratio and a Gaussian access address
> distribution on a 2-socket Intel server with Optane DC Persistent
> Memory Modules. The test results show that the pmbench score can
> improve by up to 95.9%.
>
> Thanks to Andrew Morton for helping to fix the documentation format
> error.
>
> Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
> Tested-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Michal Hocko <mhocko@suse.com>
> Cc: Rik van Riel <riel@surriel.com>
> Cc: Mel Gorman <mgorman@techsingularity.net>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Dave Hansen <dave.hansen@linux.intel.com>
> Cc: Yang Shi <shy828301@gmail.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: Wei Xu <weixugc@google.com>
> Cc: Oscar Salvador <osalvador@suse.de>
> Cc: Shakeel Butt <shakeelb@google.com>
> Cc: zhongjiang-ali <zhongjiang-ali@linux.alibaba.com>
> Cc: Randy Dunlap <rdunlap@infradead.org>
> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Cc: linux-kernel@vger.kernel.org
> Cc: linux-mm@kvack.org

Looks good to me,

Acked-by: Johannes Weiner <hannes@cmpxchg.org>
