Subject: Re: [patch 0/9] mm: thrash detection-based file cache sizing v6
On Mon, Nov 25, 2013 at 04:57:29PM -0800, Andrew Morton wrote:
> On Sun, 24 Nov 2013 18:38:19 -0500 Johannes Weiner <hannes@cmpxchg.org> wrote:
>
> > This series solves the problem by maintaining a history of pages
> > evicted from the inactive list, enabling the VM to detect frequently
> > used pages regardless of inactive list size and facilitate working set
> > transitions.
>
> It's a very readable patchset - thanks for taking the time to do that.

Thanks.
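
For anyone skimming the thread, the detection boils down to roughly the
following - a simplified, standalone sketch rather than the patch code
itself; in the series the counter lives per-zone and the shadow entries
are packed into radix tree exceptional entries:

/* Sketch of the refault distance idea: every eviction from the
 * inactive list ticks a counter, the shadow entry left behind in the
 * page cache slot remembers the tick, and on refault the difference
 * tells us how much bigger the inactive list would have needed to be
 * to keep the page in memory. */
#include <stdbool.h>

static unsigned long inactive_age;	/* per-zone in the series */

/* On eviction: this value is stored as the shadow entry. */
static unsigned long remember_eviction(void)
{
	return ++inactive_age;
}

/* On refault: a distance within the size of the active list means the
 * page was used more frequently than the coldest active pages and
 * should go straight to the active list. */
static bool refault_was_workingset(unsigned long shadow,
				   unsigned long nr_active_file)
{
	unsigned long refault_distance = inactive_age - shadow;

	return refault_distance <= nr_active_file;
}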

> > 31 files changed, 1253 insertions(+), 401 deletions(-)
>
> It's also a *ton* of stuff. More code complexity, larger kernel data
> structures. All to address a quite narrow class of workloads on a
> relatively small window of machine sizes. How on earth do we decide
> whether it's worth doing?

The fileserver-type workload is not that unusual and not really
restricted to certain machine sizes.

But more importantly, these are reasonable workloads for which our
cache management fails completely, and we have no alternative solution
to offer. What do we tell the people running these loads?

> Also, what's the memcg angle? This is presently a global thing - do
> you think we're likely to want to make it per-memcg in the future?

Yes, it seemed easier to get the global case working first, but the
whole thing is designed with memcg in mind. We can encode the unique
cgroup ID in the shadow entries as well and make the inactive_age per
lruvec instead of per-zone.
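
The memcg-aware packing would look something like this - purely
illustrative, the bit widths and helper names are placeholders, and the
real entry also has to reserve the radix tree exceptional-entry bits:

/* Pack the owning memcg's ID next to the eviction snapshot, so a
 * refault can be charged against the right lruvec when unpacked. */
#define MEMCG_ID_BITS	16
#define MEMCG_ID_MASK	((1UL << MEMCG_ID_BITS) - 1)

static unsigned long pack_shadow(unsigned short memcg_id,
				 unsigned long eviction)
{
	return (eviction << MEMCG_ID_BITS) | memcg_id;
}

static void unpack_shadow(unsigned long shadow,
			  unsigned short *memcg_id,
			  unsigned long *eviction)
{
	*memcg_id = shadow & MEMCG_ID_MASK;
	*eviction = shadow >> MEMCG_ID_BITS;
}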

If space gets tight in the shadow entry (e.g. on 32 bit), then instead
of counting every single eviction we can group evictions into bigger
generations - the more memory there is, the less accurate the refault
distance has to be anyway - and get away with fewer bits for the
eviction timestamp.
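
Something along these lines - again just a sketch of the idea, with a
hypothetical bucket_order parameter rather than anything from the
series:

/* Trade eviction-counter precision for bits: bucket the counter into
 * generations of 2^bucket_order evictions before packing it into the
 * (narrow) shadow entry. */
static unsigned long eviction_to_shadow(unsigned long eviction,
					unsigned int bucket_order)
{
	return eviction >> bucket_order;
}

/* The reconstructed distance is only accurate to within one bucket,
 * which is fine when memory - and thus the distance cutoff - is big. */
static unsigned long shadow_to_distance(unsigned long shadow,
					unsigned long current_age,
					unsigned int bucket_order)
{
	return ((current_age >> bucket_order) - shadow) << bucket_order;
}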

