From: Wei Xu <weixugc@google.com>
Date: Thu, 12 May 2022
Subject: Re: RFC: Memory Tiering Kernel Interfaces
On Wed, May 11, 2022 at 8:14 PM ying.huang@intel.com
<ying.huang@intel.com> wrote:
>
> On Wed, 2022-05-11 at 19:39 -0700, Wei Xu wrote:
> > On Wed, May 11, 2022 at 6:42 PM ying.huang@intel.com
> > <ying.huang@intel.com> wrote:
> > >
> > > On Wed, 2022-05-11 at 10:07 -0700, Wei Xu wrote:
> > > > On Wed, May 11, 2022 at 12:49 AM ying.huang@intel.com
> > > > <ying.huang@intel.com> wrote:
> > > > >
> > > > > On Tue, 2022-05-10 at 22:30 -0700, Wei Xu wrote:
> > > > > > On Tue, May 10, 2022 at 4:38 AM Aneesh Kumar K.V
> > > > > > <aneesh.kumar@linux.ibm.com> wrote:
> > > > > > >
> > > > > > > Alistair Popple <apopple@nvidia.com> writes:
> > > > > > >
> > > > > > > > Wei Xu <weixugc@google.com> writes:
> > > > > > > >
> > > > > > > > > On Thu, May 5, 2022 at 5:19 PM Alistair Popple <apopple@nvidia.com> wrote:
> > > > > > > > > >
> > > > > > > > > > Wei Xu <weixugc@google.com> writes:
> > > > > > > > > >
> > > > > > > > > > [...]
> > > > > > > > > >
> > > > > > > > > > > > >
> > > > > > > > > > > > >
> > > > > > > > > > > > > Tiering Hierarchy Initialization
> > > > > > > > > > > > > `=============================='
> > > > > > > > > > > > >
> > > > > > > > > > > > > By default, all memory nodes are in the top tier (N_TOPTIER_MEMORY).
> > > > > > > > > > > > >
> > > > > > > > > > > > > A device driver can remove its memory nodes from the top tier, e.g.
> > > > > > > > > > > > > a dax driver can remove PMEM nodes from the top tier.
> > > > > > > > > > > >
> > > > > > > > > > > > With the topology built by firmware we should not need this.
> > > > > > > > > >
> > > > > > > > > > I agree that in an ideal world the hierarchy should be built by firmware based
> > > > > > > > > > on something like the HMAT. But I also think being able to override this will be
> > > > > > > > > > useful in getting there. Therefore a way of overriding the generated hierarchy
> > > > > > > > > > would be good, either via sysfs or a kernel boot parameter if we don't want to
> > > > > > > > > > commit to a particular user interface now.
> > > > > > > > > >
> > > > > > > > > > However I'm less sure letting device drivers override this is a good idea. How,
> > > > > > > > > > for example, would a GPU driver make sure its node is in the top tier? By moving
> > > > > > > > > > every node that the driver does not know about out of N_TOPTIER_MEMORY? That
> > > > > > > > > > could get messy if, say, there were two drivers, both of which wanted their node to
> > > > > > > > > > be in the top tier.
> > > > > > > > >
> > > > > > > > > The suggestion is to allow a device driver to opt its memory
> > > > > > > > > devices out of the top tier, not the other way around.
> > > > > > > >
> > > > > > > > So how would demotion work in the case of accelerators then? In that
> > > > > > > > case we would want GPU memory to demote to DRAM, but that won't happen
> > > > > > > > if both DRAM and GPU memory are in N_TOPTIER_MEMORY and it seems the
> > > > > > > > only override available with this proposal would move GPU memory into a
> > > > > > > > lower tier, which is the opposite of what's needed there.
> > > > > > >
> > > > > > > How about we do 3 tiers for now? dax kmem devices can be registered to
> > > > > > > tier 3. By default, all NUMA nodes can be registered at tier 2, and HBM or
> > > > > > > GPU memory can be enabled to register at tier 1.
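
For concreteness, the mapping proposed above could be sketched as below;
the enum and its constants are illustrative only, not an existing kernel
API:

  /* Illustrative tier assignments for the three-tier proposal above;
   * these names are hypothetical and do not exist in the kernel.
   */
  enum memory_tier {
          MEMORY_TIER_HBM_GPU = 1,        /* fastest: HBM or GPU memory */
          MEMORY_TIER_DRAM    = 2,        /* default for all NUMA nodes */
          MEMORY_TIER_PMEM    = 3,        /* dax kmem devices, e.g. PMEM */
  };
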
> > > > > >
> > > > > > This makes sense. I will send an updated RFC based on the discussions so far.
> > > > >
> > > > > Are these tier numbers fixed? If so, it appears strange that the
> > > > > smallest tier number is 0 on some machines, but 1 on some other
> > > > > machines.
> > > >
> > > > When the kernel is configured to allow 3 tiers, we can always show all
> > > > 3 tiers. It is just that some tiers (e.g. tier 0) may be empty on
> > > > some machines.
> > >
> > > I still think that it's better to have no empty tiers among the memory
> > > tiers auto-generated by the kernel. Yes, the tier numbers will not be
> > > absolutely stable, but they only change during system bootup in
> > > practice, so it's not a big issue IMHO.
> >
> > It should not be hard to hide empty tiers (e.g. tier-0) if we prefer.
> > But even if tier-0 is empty, we should still keep this tier in the
> > kernel and not move DRAM nodes into this tier. One reason is that an
> > HBM node might be hot-added into tier-0 at a later time.
> >
>
> Yes. The in-kernel representation and the user space interface could be
> different.
>
> I have been thinking of something like below. We always make the main
> memory (DRAM here, CPU-local) tier 0. Then the slower memory will be
> positive, tier 1, 2, 3, ..., and the faster memory will be negative,
> tier -1, -2, -3, .... Then, a GPU driver can register its memory as tier
> -1. And the tier numbers could be more stable. But I'm not sure whether
> users will be happy with negative tier numbers.
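
To make the signed scheme concrete, a minimal sketch; the macro names,
values, and the demotion rule here are illustrative only, not existing
kernel code:

  /* Tier 0 is CPU-local DRAM; slower memory is positive, faster
   * memory negative, as proposed above. Illustrative values only.
   */
  #define MEMTIER_GPU_HBM  (-1)           /* faster than DRAM */
  #define MEMTIER_DRAM       0            /* main memory, CPU-local */
  #define MEMTIER_PMEM       1            /* slower than DRAM */

  /* With this ordering, demotion can simply walk toward larger tier
   * numbers, e.g. GPU (-1) -> DRAM (0) -> PMEM (1).
   */
  static inline int next_demotion_tier(int tier)
  {
          return tier + 1;
  }
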
>
> > > And, I still think it's better to make only N-1 tiers writable (or
> > > even readable) out of N tiers in total. Consider when "tier0" is
> > > written: how do we deal with nodes that were in "tier0" before the
> > > write but not after it? One possible way is to put them into "tierN".
> > > And while a user is customizing the tiers, the union of the N tiers
> > > may not be complete.
> >
> > The sysfs interfaces that I have in mind now are:
> >
> > * /sys/devices/system/memtier/memtierN/nodelist (N=0, 1, 2)
> >
> > This is read-only to list the memory nodes for a specific tier.
> >
> > * /sys/devices/system/node/nodeN/memtier (N=0, 1, ...)
> >
> > This is a read-write interface. When it is written, the kernel moves
> > the node into the user-specified tier. No other nodes are affected.
> >
> > This interface should be able to avoid the above issue.
>
> Yes. This works too.

FYI, I have just sent out an updated RFC with the above sysfs interfaces.
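
For concreteness, a minimal userspace sketch of these interfaces; the
paths follow the proposal above, while the node and tier numbers are
illustrative example values:

  /* Move node 2 into tier 1, then read back tier 1's node list.
   * Node and tier numbers are example values; error handling is minimal.
   */
  #include <stdio.h>

  int main(void)
  {
          char buf[256];
          FILE *f;

          f = fopen("/sys/devices/system/node/node2/memtier", "w");
          if (!f) {
                  perror("node2/memtier");
                  return 1;
          }
          fputs("1\n", f);
          fclose(f);

          f = fopen("/sys/devices/system/memtier/memtier1/nodelist", "r");
          if (!f) {
                  perror("memtier1/nodelist");
                  return 1;
          }
          if (fgets(buf, sizeof(buf), f))
                  printf("memtier1 nodes: %s", buf);
          fclose(f);
          return 0;
  }
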

> Best Regards,
> Huang, Ying
>
> > > > BTW, userspace should not assume a specific meaning of a
> > > > particular tier ID because it can change depending on the number of
> > > > tiers that the kernel is configured with. For example, userspace
> > > > should not assume that tier-2 always means PMEM nodes. In a system
> > > > with 4 tiers, PMEM nodes may be in tier-3, not tier-2.
> > >
> > > Yes. This sounds good.
> > >
> > > Best Regards,
> > > Huang, Ying
> > >
>
>
