Subject: Re: [PATCH v2 0/5] mm: demotion: Introduce new node state N_DEMOTION_TARGETS
On Mon, 25 Apr 2022, Aneesh Kumar K V wrote:

>On 4/25/22 11:40 AM, ying.huang@intel.com wrote:
>>On Mon, 2022-04-25 at 09:20 +0530, Aneesh Kumar K.V wrote:
>>>"ying.huang@intel.com" <ying.huang@intel.com> writes:
>>>
>>>>Hi, All,
>>>>
>>>>On Fri, 2022-04-22 at 16:30 +0530, Jagdish Gediya wrote:
>>>>
>>>>[snip]
>>>>
>>>>>I think it is necessary to either have per-node demotion target
>>>>>configuration or the user space interface supported by this patch
>>>>>series. As we don't have a clear consensus on what the user interface
>>>>>should look like, we can defer the per-node demotion target setting
>>>>>interface to the future, until the real need arises.
>>>>>
>>>>>The current patch series sets N_DEMOTION_TARGETS from the dax kmem
>>>>>driver; it is possible that some memory node desired as a demotion
>>>>>target is not detected in the system from the dax-device kmem probe path.
>>>>>
>>>>>It is also possible that some of the dax-devices are not preferred as
>>>>>demotion targets, e.g. HBM; for such devices, the node shouldn't be set
>>>>>in N_DEMOTION_TARGETS. In the future, support should be added to
>>>>>distinguish such dax-devices in the kernel and not mark them as
>>>>>N_DEMOTION_TARGETS, but for now this user space interface will be useful
>>>>>to avoid such devices as demotion targets.
>>>>>
>>>>>We can add a read-only interface to view per-node demotion targets
>>>>>at /sys/devices/system/node/nodeX/demotion_targets, remove the
>>>>>duplicated /sys/kernel/mm/numa/demotion_target interface, and instead
>>>>>make /sys/devices/system/node/demotion_targets writable.
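
A rough userspace sketch of the proposed layout (none of these paths is a
merged kernel ABI yet; the per-node read-only file and the writable global
file are only as proposed above) might look like:

#!/usr/bin/env python3
# Sketch only: exercises the *proposed* sysfs layout described above.
import glob
import os

GLOBAL = "/sys/devices/system/node/demotion_targets"

def read(path):
    with open(path) as f:
        return f.read().strip()

# Read-only per-node view of each node's demotion targets.
for node_dir in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
    target_file = os.path.join(node_dir, "demotion_targets")
    if os.path.exists(target_file):
        print(os.path.basename(node_dir), "->", read(target_file) or "none")

# Writable global list of nodes allowed to act as demotion targets.
if os.path.exists(GLOBAL):
    print("allowed demotion targets:", read(GLOBAL))
    # To override what the dax kmem driver auto-detected (needs root), e.g.:
    # with open(GLOBAL, "w") as f:
    #     f.write("1")
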
>>>>>
>>>>>Huang, Wei, Yang,
>>>>>What do you suggest?
>>>>
>>>>We cannot remove a kernel ABI in practice. So we need to get it right
>>>>the first time. Let's try to collect some information for the kernel
>>>>ABI definition.
>>>>
>>>>The below is just a starting point, please add your requirements.
>>>>
>>>>1. Jagdish has some machines with DRAM-only NUMA nodes, but they don't
>>>>want to use those as demotion targets. But I don't think this is an
>>>>issue in practice for now, because demote-in-reclaim is disabled by
>>>>default.
>>>
>>>It is not just that demotion can be disabled. We should be able to
>>>use demotion on a system where we can find DRAM-only NUMA nodes. That
>>>cannot be achieved by /sys/kernel/mm/numa/demotion_enabled. It needs
>>>something similar to N_DEMOTION_TARGETS.
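
For what it's worth, demotion_enabled is a single global boolean. A minimal
sketch, assuming a kernel that exposes it at the usual sysfs location, shows
that it only toggles demote-in-reclaim and carries no per-node target
information:

#!/usr/bin/env python3
# demotion_enabled is one global switch: it turns demote-in-reclaim on or
# off, but says nothing about *which* nodes may serve as demotion targets.
PATH = "/sys/kernel/mm/numa/demotion_enabled"

with open(PATH) as f:
    print("demote in reclaim:", f.read().strip())

# Enabling it (needs root) still leaves target selection to the kernel:
# with open(PATH, "w") as f:
#     f.write("true")
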
>>>
>>
>>Can you show the NUMA information of your machines with DRAM-only nodes
>>and PMEM nodes? We can try to find the proper demotion order for the
>>system. If you cannot show it, we can defer N_DEMOTION_TARGETS until
>>such a machine is available.
>
>
>Sure, I will find one such config. As you might have noticed, this is
>very easy to hit in a virtualization setup, because the hypervisor can
>assign memory to a guest VM from a NUMA node that doesn't have any CPUs
>assigned to the same guest. This depends on the configs of the other
>guest VM instances running on the system. So any virtualization config
>that has persistent memory attached can easily end up like this.

And as hw becomes available, things like CXL will also start to show
"interesting" setups: a mix of volatile and/or pmem nodes with different
access costs, so CPU+DRAM, DRAM (?), volatile CXL mem, CXL pmem,
non-CXL pmem.

imo, by default, slower memory should be a demotion candidate regardless
of type or socket layout (layout can be a last consideration, so that
this is somewhat mitigated). And afaict this is along the lines of what
Jagdish's first example refers to in patch 1/5.

>
>>>>2. For machines with PMEM installed in only 1 of 2 sockets, for example,
>>>>
>>>>Nodes 0 & 2 are CPU + DRAM nodes, and node 1 is a slow
>>>>memory node near node 0:
>>>>
>>>>available: 3 nodes (0-2)
>>>>node 0 cpus: 0 1
>>>>node 0 size: n MB
>>>>node 0 free: n MB
>>>>node 1 cpus:
>>>>node 1 size: n MB
>>>>node 1 free: n MB
>>>>node 2 cpus: 2 3
>>>>node 2 size: n MB
>>>>node 2 free: n MB
>>>>node distances:
>>>>node   0   1   2
>>>>  0:  10  40  20
>>>>  1:  40  10  80
>>>>  2:  20  80  10
>>>>
>>>>We have 2 choices,
>>>>
>>>>a)
>>>>node    demotion targets
>>>>   0    1
>>>>   2    1
>>>
>>>This is achieved by
>>>
>>>[PATCH v2 1/5] mm: demotion: Set demotion list differently

Yes, I think it makes sense to do 2a.
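
Concretely, a toy illustration (plain Python, not kernel code) of choice
(a) on the topology above, assuming the rule is simply "each CPU node
demotes to its nearest memory-only node by distance":

# Distance matrix from the numactl output quoted above.
distance = {
    0: {0: 10, 1: 40, 2: 20},
    1: {0: 40, 1: 10, 2: 80},
    2: {0: 20, 1: 80, 2: 10},
}
cpu_nodes = {0, 2}    # nodes with CPUs + DRAM
slow_nodes = {1}      # memory-only (e.g. PMEM) nodes

for node in sorted(cpu_nodes):
    target = min(slow_nodes, key=lambda t: distance[node][t])
    print(f"node {node} -> demotion target {target}")
# node 0 -> demotion target 1
# node 2 -> demotion target 1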

Thanks,
Davidlohr
