Subject: Re: [PATCH v3] mm: add thp_utilization metrics to debugfs
On Wed, 17 Aug 2022 17:01:12 -0700 <alexlzhu@fb.com> wrote:

> THPs have historically been enabled on a per application basis due to
> performance increase or decrease depending on how the particular
> application uses physical memory. When THPs are heavily utilized,
> application performance improves due to fewer TLB cache misses.
> It has long been suspected that performance regressions when THP
> is enabled happen due to heavily underutilized anonymous THPs.
>
> Previously there was no way to track how much of a THP is
> actually being used. With this change, we seek to gain visibility
> into the utilization of THPs in order to make more intelligent
> decisions regarding paging.
>
> This change introduces a tool that scans through all of physical
> memory for anonymous THPs and groups them into buckets based
> on utilization. It also includes an interface under
> /sys/kernel/debug/thp_utilization.
>
> Utilization of a THP is defined as the percentage of nonzero
> pages in the THP. A worker thread will periodically scan through
> all of physical memory for anonymous THPs, obtain the utilization
> of each, group them into buckets based on utilization, and report
> the results through /sys/kernel/debug/thp_utilization.
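
For the record, here's roughly what I take "percentage of nonzero
pages" to mean, as a sketch (names illustrative, not necessarily the
patch's):

static int thp_number_utilized_pages(struct page *thp)
{
	int i, utilized = 0;

	/*
	 * Walk the base-page subpages of the huge page and count
	 * those containing at least one nonzero byte.
	 */
	for (i = 0; i < HPAGE_PMD_NR; i++) {
		void *kaddr = kmap_local_page(thp + i);

		if (memchr_inv(kaddr, 0, PAGE_SIZE))
			utilized++;
		kunmap_local(kaddr);
	}
	/* utilization% = utilized * 100 / HPAGE_PMD_NR */
	return utilized;
}

Spelling the definition out like this in the changelog would save
reviewers from having to reverse-engineer it from the code.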

I'd like to see sample debugfs output right here in the changelog, for
reviewers to review. In some detail.

And I'd like to see the code commented! Especially
thp_utilization_workfn(), thp_util_scan() and thp_scan_next_zone().
What are their roles and responsibilities? How long do they take?
By what means do they scan?

I mean, scanning all of physical memory is a huge task. How do we
avoid chewing vast amounts of CPU? What is the chosen approach and
what are the tradeoffs? Why is it done within a kernel thread at all,
rather than putting the load into the context of the reader of the
stats (which is more appropriate)? Etcetera. There are many traps,
tradeoffs and hidden design decisions here. Please unhide them.
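
To illustrate the latter: the whole scan could simply be performed in
the read path of the debugfs file, so that only a reader of the stats
pays the cost. A sketch, with made-up helper names:

#include <linux/debugfs.h>
#include <linux/init.h>
#include <linux/seq_file.h>

#define NR_BUCKETS	10	/* illustrative */

/* Hypothetical: scan all of memory, fill in per-bucket THP counts. */
void thp_util_scan_all(int buckets[NR_BUCKETS]);

static int thp_utilization_show(struct seq_file *m, void *v)
{
	int buckets[NR_BUCKETS] = { 0 };
	int i;

	/* All of the scanning cost is paid here, by the reader. */
	thp_util_scan_all(buckets);

	for (i = 0; i < NR_BUCKETS; i++)
		seq_printf(m, "%d-%d%%: %d\n",
			   i * (100 / NR_BUCKETS),
			   (i + 1) * (100 / NR_BUCKETS), buckets[i]);
	return 0;
}
DEFINE_SHOW_ATTRIBUTE(thp_utilization);

static int __init thp_utilization_init(void)
{
	debugfs_create_file("thp_utilization", 0444, NULL, NULL,
			    &thp_utilization_fops);
	return 0;
}
late_initcall(thp_utilization_init);

The obvious tradeoff is read latency: if a full scan per read() is
too slow then that is a reason for the background thread, but it
should be stated in the changelog.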

This comment, which is rather a core part of these tradeoffs:

+/*
+ * The number of addresses to scan through on each periodic
+ * run of the scanner that generates /sys/kernel/debug/thp_utilization.
+ */
+#define THP_UTIL_SCAN_SIZE 256

isn't very helpful. "number of addresses"? Does it mean we scan 256
bytes at a time? 256 pages? 256 hugepages? Something else?

How can any constant make sense when different architectures have
different [huge]page sizes? Should it be scaled by pagesize? And if
we're going to do that, we should scale it by CPU speed at the same time.
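
For example, scaling the per-pass chunk by page size might look like
this (names and sizes illustrative):

#include <linux/sizes.h>	/* SZ_1M */

/*
 * Scan the same amount of *memory* per pass on every architecture,
 * rather than a fixed number of pages.
 */
#define THP_UTIL_SCAN_BYTES	SZ_1M
#define THP_UTIL_SCAN_SIZE	(THP_UTIL_SCAN_BYTES / PAGE_SIZE)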

Or bypass all of that and simply scan for a certain amount of *time*,
rather than scan a certain amount of memory. After all, chunking up
the scan time is what we're trying to achieve by chunking up the scan
amount. Why not chunk up the scan time directly?
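
ie, something like this (everything here is illustrative):

#include <linux/jiffies.h>
#include <linux/sched.h>
#include <linux/workqueue.h>

/*
 * Hypothetical: scan one small chunk of memory; returns false when
 * the current pass over physical memory is complete.
 */
bool thp_scan_next_chunk(void);

#define THP_UTIL_SCAN_BUDGET	(HZ / 100)	/* ~10ms of scanning per wakeup */

static void thp_utilization_workfn(struct work_struct *work);
static DECLARE_DELAYED_WORK(thp_utilization_work, thp_utilization_workfn);

static void thp_utilization_workfn(struct work_struct *work)
{
	unsigned long deadline = jiffies + THP_UTIL_SCAN_BUDGET;

	/*
	 * Bound the *time* spent per wakeup, not the number of
	 * addresses visited.
	 */
	while (time_before(jiffies, deadline)) {
		if (!thp_scan_next_chunk())
			break;			/* pass complete */
		cond_resched();
	}
	schedule_delayed_work(&thp_utilization_work, HZ);
}

That bounds the per-wakeup overhead on any architecture and at any
CPU speed, and THP_UTIL_SCAN_SIZE goes away altogether.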

See where I'm going? I see many hidden assumptions, design decisions
and tradeoffs here. Can we please attempt to spell them out and review
them.

Anyway. There were many review comments on previous versions. It
would have been better had those reviewers been cc'ed on this version.
I'll go into hiding and see what people think.
