Subject: Re: [PATCH] mm: per-thread vma caching
On Fri, 2014-02-21 at 13:18 -0800, Linus Torvalds wrote:
> On Fri, Feb 21, 2014 at 12:53 PM, Davidlohr Bueso <> wrote:
> >
> > I think you are right. I just reran some of the tests and things are
> > pretty much the same, so we could get rid of it.
>
> Ok, I'd prefer the simpler model of just a single per-thread hashed
> lookup, and then we could perhaps try something more complex if there
> are particular loads that really matter. I suspect there is more
> upside to playing with the hashing of the per-thread cache (making it
> three bits, whatever) than with some global thing.
>
> >> Also, the hash you use for the vmacache index is *particularly* odd.
> >>
> >> int idx = (addr >> 10) & 3;
> >>
> >> you're using the top two bits of the address *within* the page.
> >> There's a lot of places that round addresses down to pages, and in
> >> general it just looks really odd to use an offset within a page as an
> >> index, since in some patterns (linear accesses, whatever), the page
> >> faults will always be to the beginning of the page, so index 0 ends up
> >> being special.
> >
> > Ah, this comes from tediously looking at access patterns. I actually
> > printed pages of them. I agree that it is weird, and I'm by no means
> > against changing it. However, the results are just too good, especially
> > for ebizzy, so I decided to keep it, at least for now. I am open to
> > alternatives.
>
> Hmm. Numbers talk, bullshit walks. So if you have the numbers that say
> this is actually a good model..

If we add the two missing bits to the shift and use PAGE_SHIFT (at
least on x86) we get just as good results as with 10. So we would
probably prefer hashing based on the page number and not some offset
within the page.

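For reference, the page-number based variant would look something like
this (just a sketch; VMACACHE_SIZE here stands in for the 4-entry cache
in the patch, and PAGE_SHIFT is 12 on x86):

	/* Hash on the virtual page number rather than on bits inside
	 * the page, so page-aligned faults don't all land in index 0. */
	int idx = (addr >> PAGE_SHIFT) & (VMACACHE_SIZE - 1);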
