Subject: Re: [PATCH v5 1/9] mm/demotion: Add support for explicit memory tiers

On Thu, 9 Jun 2022 16:41:04 -0400
Johannes Weiner <hannes@cmpxchg.org> wrote:

> On Thu, Jun 09, 2022 at 03:22:43PM +0100, Jonathan Cameron wrote:
> > I think the discussion hinged on it making sense to be able to change
> > the rank of a tier rather than create a new tier and move things one by one.
> > Example was wanting to change the rank of a tier that was created
> > either by core code or a subsystem.
> >
> > E.g. If GPU driver creates a tier, assumption is all similar GPUs will
> > default to the same tier (if hot plugged later for example) as the
> > driver subsystem will keep a reference to the created tier.
> > Hence if a user wants to change the order of that relative to
> > other tiers, the option of creating a new tier and moving the
> > devices would then require us to have infrastructure to tell the GPU
> > driver to now use the new tier for additional devices.
>
> That's an interesting point, thanks for explaining.
>
> But that could still happen when two drivers report the same tier and
> one of them is wrong, right? You'd still need to separate out by hand
> to adjust rank, as well as handle hotplug events. Driver collisions
> are probable with coarse categories like gpu, dram, pmem.

There will always be cases that need hand tweaking. I'd also envision
some driver subsystems being clever enough to manage several tiers and
use the information available to them to assign devices appropriately. This
is definitely true for CXL 2.0+ devices, where we can have radically
different device types under the same driver (volatile, persistent,
direct connect, behind switches, etc.). There will be some interesting
choices to make on groupings in big systems, as we don't want too many
tiers unless we naturally demote multiple levels in one go.

>
> Would it make more sense to have the platform/devicetree/driver
> provide more fine-grained distance values similar to NUMA distances,
> and have a driver-scope tunable to override/correct? And then have the
> distance value function as the unique tier ID and rank in one.

Absolutely a good thing to provide that information, but it's black
magic. There are too many conflicting metrics (latency vs bandwidth etc.),
even before getting into a more complex system model like Jerome Glisse proposed
a few years back. https://lore.kernel.org/all/20190118174512.GA3060@redhat.com/
CXL 2.0 got this more right than anything else I've seen, as it provides
a discoverable topology along with details like the latency to cross between
particular switch ports. Actually using that data (other than by throwing
it at userspace controls for HPC apps etc.) is going to take some figuring out.
Even the question of what and how we expose this info to userspace is not
obvious.

The 'right' decision is also use-case specific, so what you'd do for
particular memory characteristics on a GPU is not the same as what
you'd do for the same characteristics on a memory-only device.
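
Just to illustrate how arbitrary a single number gets, something like the
following completely made-up helper is about the level we'd be working at
(the function, weights and units are invented for this mail, not anything
in this series or in the CXL spec):

/*
 * Illustrative only - not from this patch set.  Folding conflicting
 * metrics into one scalar needs arbitrary weights; a latency-sensitive
 * workload would want very different weights from a bandwidth-bound one.
 */
static unsigned int made_up_mem_distance(unsigned int read_lat_ns,
					 unsigned int bw_gbps)
{
	unsigned int bw_term = bw_gbps ? 1024 / bw_gbps : 1024;

	/* Arbitrary weighting: mostly latency, plus a bandwidth penalty. */
	return read_lat_ns * 4 + bw_term;
}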

>
> That would allow device class reassignments, too, and it would work
> with driver collisions where simple "tier stickiness" would
> not. (Although collisions would be less likely to begin with given a
> broader range of possible distance values.)

I think we definitely need the option to move individual nodes around as
well (in this case nodes map to individual devices if characteristics vary
between them), but I think that's somewhat orthogonal to getting a good
first guess.

>
> Going further, it could be useful to separate the business of hardware
> properties (and configuring quirks) from the business of configuring
> MM policies that should be applied to the resulting tier hierarchy.
> They're somewhat orthogonal tuning tasks, and one of them might become
> obsolete before the other (if the quality of distance values provided
> by drivers improves before the quality of MM heuristics ;). Separating
> them might help clarify the interface for both designers and users.
>
> E.g. a memdev class scope with a driver-wide distance value, and a
> memdev scope for per-device values that default to "inherit driver
> value". The memtier subtree would then have an r/o structure, but
> allow tuning per-tier interleaving ratio[1], demotion rules etc.

Ok, that makes sense. I'm not sure if that ends up as an implementation
detail, or affects the userspace interface of this particular element.

I'm not sure completely read-only is flexible enough (though mostly RO is fine),
as we keep sketching out cases where any attempt to do things automatically
does the wrong thing and where we need to add an extra tier to get
everything to work. Short of having a lot of tiers, I'm not sure how
we could have the default work well. Maybe a lot of "tiers" is fine,
though perhaps we'd need to rename them if we go that way, as they then
don't really match the current concept of a tier.
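
To make that split concrete, a rough sketch might look something like the
following (all of these paths and names are invented for this mail, not
what this series exposes today):

  /sys/.../memdev_class/gpu/distance           (rw, driver-wide default)
  /sys/.../memdev/gpu0/distance                (rw, defaults to inherit from class)
  /sys/.../memtier/memtier1/nodelist           (ro, derived from distances)
  /sys/.../memtier/memtier1/interleave_weight  (rw, MM policy)
  /sys/.../memtier/memtier1/demotion_targets   (rw, MM policy)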

Imagine a system with a subtle difference between memories, such as
a 10% latency increase at the same bandwidth. Getting an advantage from
demoting to such a tier will require really stable usage and long
run times. Whilst you could design a demotion scheme that takes that
into account, I think we are a long way from that today.
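
Back of the envelope with made-up numbers: if the slow tier adds ~10ns to a
~100ns access, each hot access kept in the fast tier saves ~10ns, while a
single 4KiB page migration costs on the order of a few microseconds once
the copy, the fault handling and the TLB shootdown are counted. That's
hundreds of correctly predicted accesses per migrated page just to break
even, which only very stable, long-running workloads will ever see.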

Jonathan


>
> [1] https://lore.kernel.org/linux-mm/20220607171949.85796-1-hannes@cmpxchg.org/#t
