Subject: Re: RFC: Memory Tiering Kernel Interfaces
On Thu, May 5, 2022 at 7:24 AM Dave Hansen <dave.hansen@intel.com> wrote:
>
> On 5/4/22 23:35, Wei Xu wrote:
> > On Wed, May 4, 2022 at 10:02 AM Dave Hansen <dave.hansen@intel.com> wrote:
> >> That means a lot of page table and EPT walks to map those linear
> >> addresses back to physical. That adds to the inefficiency.
> >
> > That's true if the tracking is purely based on physical pages. For
> > hot page tracking from PEBS, we can consider tracking in
> > virtual/linear addresses. We don't need to maintain the history for
> > all linear page addresses nor for an indefinite amount of time. After
> > all, we just need to identify pages accessed frequently recently and
> > promote them.
>
> Except that you don't want to promote on *every* access. That might
> lead to too much churn.

Certainly. We should use PMU events to help build a page heatmap in
software and select the hottest pages to promote accordingly.

> You're also assuming that all accesses to a physical page are via a
> single linear address, which ignores shared memory mapped at different
> linear addresses. Our (maybe wrong) assumption has been that shared
> memory is important enough to manage that it can't be ignored.

Shared memory is important. Special handling will be needed to support
such pages well with linear-address-based hot page tracking.
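
As a strawman for such special handling (helper name hypothetical), a
userspace prototype could fold linear-address samples back onto the
physical page via /proc/<pid>/pagemap, so accesses through different
mappings of a shared page accumulate in a single heatmap entry:

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

#define PM_PFN_MASK	((1ULL << 55) - 1)	/* pagemap bits 0-54 hold the PFN */
#define PM_PRESENT	(1ULL << 63)

/*
 * Translate one linear address in a process to its PFN, or 0 on failure.
 * Note: on recent kernels the PFN field reads as 0 without CAP_SYS_ADMIN,
 * so a prototype like this needs privilege.
 */
static uint64_t vaddr_to_pfn(pid_t pid, uint64_t vaddr)
{
	char path[64];
	uint64_t entry = 0;
	long pagesize = sysconf(_SC_PAGESIZE);
	int fd;

	snprintf(path, sizeof(path), "/proc/%d/pagemap", (int)pid);
	fd = open(path, O_RDONLY);
	if (fd < 0)
		return 0;
	if (pread(fd, &entry, sizeof(entry),
		  (off_t)(vaddr / pagesize) * sizeof(entry)) != sizeof(entry))
		entry = 0;
	close(fd);
	return (entry & PM_PRESENT) ? (entry & PM_PFN_MASK) : 0;
}

In-kernel, the equivalent translation is a page table walk (or rmap
lookup), which is exactly the cost you raised earlier; the hope is that
doing it only for sampled addresses keeps it cheap enough.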

> >> In the end, you get big PEBS buffers with lots of irrelevant data that
> >> needs significant post-processing to make sense of it.
> >
> > I am curious what the "lots of irrelevant data" would be if PEBS data
> > is filtered on data sources (e.g., DRAM vs. PMEM) by hardware. If we
> > need different policies for pages from the same data source, then I
> > agree that the software has to do a lot of filtering work.
>
> Perhaps "irrelevant" was a bad term to use. I meant that you can't just
> take the PEBS data and act directly on it. It has to be post-processed
> and you will see things in there like lots of adjacent accesses to a
> page. Those additional accesses can be interesting but at some point
> you have all the weight you need to promote the page and the _rest_ are
> irrelevant.

That's right. The software has to do the post-processing work to build
the page heatmap with what the existing hardware can provide.
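
As an illustration of that post-processing (hypothetical structures, a
userspace-style sketch), a single pass over the raw sample buffer can
collapse repeated hits on the same page and cap the per-page weight, so
once a page has all the weight promotion needs, the remaining duplicate
samples are skipped:

#include <stddef.h>
#include <stdint.h>

#define PAGE_SHIFT	12
#define WEIGHT_CAP	8	/* enough weight to justify promotion; the rest is noise */

struct sample { uint64_t vaddr; };			/* linear address from one record */
struct page_weight { uint64_t page; uint32_t weight; };	/* capped per-page tally */

/* Aggregate n raw samples into capped per-page weights; returns entries used. */
static size_t aggregate(const struct sample *buf, size_t n,
			struct page_weight *out, size_t out_max)
{
	size_t used = 0;

	for (size_t i = 0; i < n; i++) {
		uint64_t page = buf[i].vaddr >> PAGE_SHIFT;
		size_t j;

		for (j = 0; j < used; j++)	/* linear scan is fine for a sketch */
			if (out[j].page == page)
				break;
		if (j == used) {
			if (used == out_max)
				continue;	/* table full: drop the sample */
			out[used].page = page;
			out[used].weight = 0;
			used++;
		}
		if (out[j].weight < WEIGHT_CAP)
			out[j].weight++;	/* saturate: extra hits add nothing */
	}
	return used;
}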

> >> The folks at Intel who tried this really struggled to take this mess
> >> and turn it into a successful hot-page tracking scheme.
> >>
> >> Maybe someone else will find a better way to do it, but we tried and
> >> gave up.
> >
> > It might be challenging to use PEBS as the only and universal hot-page
> > tracking hardware mechanism. For example, there are challenges in
> > using PEBS to sample KVM guest accesses from the host.
>
> Yep, agreed. This aspect of the hardware is very painful at the moment.
>
> > On the other hand, PEBS with hardware-based data source filtering can
> > be a useful mechanism to improve hot page tracking in conjunction
> > with other techniques.
>
> Rather than "can", I'd say: "might". Backing up to what I said originally:
>
> > So, in practice, these events (PEBS) weren't very useful
> > for driving memory tiering.
>
> By "driving" I really meant solely driving. Like, can PEBS be used as
> the one and only mechanism? We couldn't make it work. But, the
> hardware _is_ sitting there mostly unused. It might be great to augment
> what is there, and nobody should be discouraged from looking at it again.

I think we are on the same page.
