Subject: Re: [PATCH 0/3] TLB flush multiple pages per IPI v5
On 25.6.2015 20:36, Linus Torvalds wrote:
>
> On Jun 25, 2015 04:48, "Ingo Molnar" <mingo@kernel.org> wrote:
>>
>> - 1x, 2x, 3x, 4x means up to 4 adjacent 4K vmalloc()-ed pages are accessed, the
>> first byte in each
>
> So that test is a bit unfair. From previous timing of Intel TLB fills, I can
> tell you that Intel is particularly good at doing adjacent entries.
>
> That's independent of the fact that page tables have very good locality (if they
> are the radix tree type - the hashed page tables that ppc uses are shit). So
> when filling adjacent entries, you take the cache misses for the page tables
> only once, but even aside from that, Intel tends to do particularly well at the
> "next page" TLB fill case.

AFAIK that's because they also cache partial translations, so if the first 3
levels are the same (as they mostly are for the "next page" scenario) it will
only have to look at the last level of page tables. AMD does that too.
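
To illustrate, here's a quick user-space sketch (nothing kernel-specific, it
just decodes the standard x86-64 4-level split: 9 bits per level plus a 12-bit
page offset). For adjacent 4K pages only the last-level index changes, so the
upper three lookups, and any cached partial translation covering them, are
reused on every fill:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Print the four x86-64 page-table indices (9 bits each) of a
 * virtual address; the low 12 bits are the page offset. */
static void print_indices(uint64_t va)
{
	printf("va 0x%016" PRIx64 ": pml4=%3u pdpt=%3u pd=%3u pt=%3u\n",
	       va,
	       (unsigned)((va >> 39) & 0x1ff),
	       (unsigned)((va >> 30) & 0x1ff),
	       (unsigned)((va >> 21) & 0x1ff),
	       (unsigned)((va >> 12) & 0x1ff));
}

int main(void)
{
	uint64_t base = 0x00007f0012345000ULL;	/* arbitrary example */
	int i;

	/* Four adjacent 4K pages: only the pt index changes, so the
	 * pml4/pdpt/pd entries are shared across all four walks. */
	for (i = 0; i < 4; i++)
		print_indices(base + (uint64_t)i * 4096);
	return 0;
}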

> Now, I think that's a reasonably common case, and I'm not saying that it's
> unfair to compare for that reason, but it does highlight the good case for TLB
> walking.
>
> So I would suggest you highlight the bad case too: use invlpg to invalidate
> *one* TLB entry, and then walk four non-adjacent entries. And compare *that* to
> the full TLB flush.
>
> Now, I happen to still believe in the full flush, but let's not pick benchmarks
> that might not show the advantages of the finer granularity.
>
> Linus
>
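
FWIW, a rough sketch of that invlpg-per-page vs. full-flush comparison, for
anyone who wants to try it (ring-0 only, since invlpg and CR3 writes are
privileged; the helper names below are made up for illustration, and a real
measurement would also want interrupts disabled and warm-up runs thrown away):

/* "pages" should be mappings spread across different PMD/PUD ranges
 * so their walks share no upper-level entries or partial translations. */

static inline void flush_one(const void *va)
{
	asm volatile("invlpg (%0)" : : "r" (va) : "memory");
}

static inline void flush_all_nonglobal(void)
{
	unsigned long cr3;

	/* Writing CR3 back to itself flushes all non-global entries. */
	asm volatile("mov %%cr3, %0\n\tmov %0, %%cr3"
		     : "=r" (cr3) : : "memory");
}

static inline u64 tsc_read(void)
{
	u32 lo, hi;

	/* lfence keeps rdtsc from being reordered with the loads. */
	asm volatile("lfence; rdtsc" : "=a" (lo), "=d" (hi));
	return ((u64)hi << 32) | lo;
}

static u64 time_refill(volatile char **pages, int n, bool full_flush)
{
	u64 t0, t1;
	int i;

	if (full_flush)
		flush_all_nonglobal();
	else
		for (i = 0; i < n; i++)
			flush_one((const void *)pages[i]);

	t0 = tsc_read();
	for (i = 0; i < n; i++)
		(void)*pages[i];	/* refill one TLB entry per page */
	t1 = tsc_read();

	return t1 - t0;
}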


