Subject: Re: [PATCH v8 00/12] mm/demotion: Memory tiers and demotion

Matthew Wilcox <willy@infradead.org> writes:

> On Mon, Jul 04, 2022 at 12:36:00PM +0530, Aneesh Kumar K.V wrote:
>> * The current tier initialization code always initializes
>> each memory-only NUMA node into a lower tier. But a memory-only
>> NUMA node may have a high performance memory device (e.g. a DRAM
>> device attached via CXL.mem or a DRAM-backed memory-only node on
>> a virtual machine) and should be put into a higher tier.
>>
>> * The current tier hierarchy always puts CPU nodes into the top
>> tier. But on a system with HBM (e.g. GPU memory) devices, these
>> memory-only HBM NUMA nodes should be in the top tier, and DRAM nodes
>> with CPUs are better placed into the next lower tier.
>
> These things that you identify as problems seem perfectly sensible to me.
> Memory which is attached to this CPU has the lowest latency and should
> be preferred over more remote memory, no matter its bandwidth.

It is a problem because HBM NUMA node memory is generally also used by
some kind of device/accelerator (e.g. a GPU). Users would typically
prefer to reserve HBM for the accelerator rather than have it filled
with random pages demoted from CPU nodes, as accelerators see orders
of magnitude better performance when accessing local HBM versus
remote memory.
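
To make that concrete, here is a rough sketch of the kind of CPU-based
classification the cover letter describes. This is illustrative only,
not the actual mm/ code; the tier constants and helper name are made
up:

/*
 * Illustrative sketch only -- not the actual kernel code. The tier
 * constants and default_tier_for_node() are hypothetical.
 */
#include <linux/nodemask.h>

#define MEMTIER_TOP	0	/* HBM / GPU memory */
#define MEMTIER_DRAM	1	/* CPU-attached DRAM */
#define MEMTIER_SLOW	2	/* PMEM and other slow memory */

static int default_tier_for_node(int nid)
{
	/*
	 * The heuristic under discussion: any node without CPUs is
	 * treated as slow memory. A memory-only node backed by fast
	 * memory (CXL-attached DRAM, HBM) therefore lands below CPU
	 * DRAM and can never be placed above it by default.
	 */
	if (node_state(nid, N_CPU))
		return MEMTIER_DRAM;
	return MEMTIER_SLOW;
}

With the per-node overrides this series proposes, such a default could
at least be corrected after the fact, e.g. by moving an HBM node above
the CPU DRAM tier.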
