From: Linus Torvalds
Date: 2014-02-24
Subject: Re: [PATCH] mm: per-thread vma caching
On Mon, Feb 24, 2014 at 5:16 PM, Davidlohr Bueso <davidlohr@hp.com> wrote:
>
> If we add the two missing bits to the shifting and use PAGE_SHIFT (x86
> at least) we get just as good results as with 10. So we would probably
> prefer hashing based on the page number and not some offset within the
> page.

So just

int idx = (addr >> PAGE_SHIFT) & 3;

works fine?

That makes me think it all just wants to be maximally spread out to
approximate some NRU (not-recently-used) replacement when adding an entry.
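
Just to illustrate the spread (a trivial userspace sketch, assuming
x86's PAGE_SHIFT of 12; this is not anything from the patch):
consecutive pages rotate cleanly through all four slots, which is
exactly the spread-out behavior you want for sequential faults:

#include <stdio.h>

#define PAGE_SHIFT 12	/* assumption: x86 */

int main(void)
{
	unsigned long addr, base = 0x400000;
	int i;

	/* Four consecutive pages land in four different slots. */
	for (i = 0; i < 4; i++) {
		addr = base + ((unsigned long)i << PAGE_SHIFT);
		printf("addr %#lx -> idx %lu\n", addr,
		       (addr >> PAGE_SHIFT) & 3);
	}
	return 0;
}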

Also, as far as I can tell, "vmacache_update()" should then become
just a simple unconditional

int idx = (addr >> PAGE_SHIFT) & 3;
current->vmacache[idx] = newvma;

because your original code did

+ if (curr->vmacache[idx] != newvma)
+ curr->vmacache[idx] = newvma;

and that doesn't seem to make sense, since if "newvma" was already in
the cache, then we would have found it when looking up, and we
wouldn't be here updating it after doing the rb-walk? And with the
per-mm cache removed, all that should remain is that simple version,
no? You don't even need the "check the vmacache sequence number and
clear if bogus" step, because the rule should be that you have always
done a "vmacache_find()" first, which should have done that already.

Anyway, can you send the final cleaned-up and simplified (and
re-tested) version? There are enough changes discussed here that I
don't want to track the end result mentally.

Linus

