Date: Thu, 6 Nov 2014
Subject: Re: [RFC PATCH 1/2] zap_pte_range: update addr when forcing flush after TLB batching faiure
From: Linus Torvalds
On Thu, Nov 6, 2014 at 10:38 AM, Catalin Marinas
<catalin.marinas@arm.com> wrote:
> On Thu, Nov 06, 2014 at 05:53:58PM +0000, Linus Torvalds wrote:
>
> Sorry, I wasn't clear enough about the "increments" part. I agreed with
> not using end = start + PMD_SIZE/PAGE_SIZE from your previous email
> already.

Ahh, I misunderstood. You're really just after the granularity of tlb flushes.

That's fine. That makes sense. In fact, how about adding "granularity"
to the mmu_gather structure, and then doing:

- in __tlb_reset_range(), setting it to ~0ul

- add "granularity" to __tlb_adjust_range(), and make it do something like

	if (!tlb->fullmm) {
		tlb->granularity = min(tlb->granularity, granularity);
		tlb->start = min(tlb->start, address);
		tlb->end = max(tlb->end, address+1);
	}

and then the TLB flush logic would basically do

	address = tlb->start;
	do {
		flush(address);
		if (address + tlb->granularity < address)
			break;
		address = address + tlb->granularity;
	} while (address < tlb->end);

or something like that.
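
Roughly, pulling those pieces together as a stand-alone mock-up (the struct
layout, the helper names and the printf standing in for the real flush are
all just illustrative, not the actual mmu_gather code):

#include <stdio.h>
#include <stdbool.h>

struct mmu_gather {
	bool fullmm;
	unsigned long start, end;
	unsigned long granularity;	/* smallest mapping size seen in this batch */
};

/* what __tlb_reset_range() would do */
static void tlb_reset_range(struct mmu_gather *tlb)
{
	tlb->start = ~0UL;
	tlb->end = 0;
	tlb->granularity = ~0UL;	/* nothing gathered yet */
}

/* what __tlb_adjust_range() would do, open-coding min/max */
static void tlb_adjust_range(struct mmu_gather *tlb,
			     unsigned long address, unsigned long granularity)
{
	if (tlb->fullmm)
		return;
	if (granularity < tlb->granularity)
		tlb->granularity = granularity;
	if (address < tlb->start)
		tlb->start = address;
	if (address + 1 > tlb->end)
		tlb->end = address + 1;
}

/* the flush loop above, with printf standing in for flush(address) */
static void tlb_flush_range(const struct mmu_gather *tlb)
{
	unsigned long address = tlb->start;

	do {
		printf("flush %#lx\n", address);
		if (address + tlb->granularity < address)	/* overflow guard */
			break;
		address += tlb->granularity;
	} while (address < tlb->end);
}

int main(void)
{
	struct mmu_gather tlb = { .fullmm = false };

	tlb_reset_range(&tlb);
	/* pretend we unmapped two adjacent 4k pages */
	tlb_adjust_range(&tlb, 0x400000, 4096);
	tlb_adjust_range(&tlb, 0x401000, 4096);
	tlb_flush_range(&tlb);	/* prints two flushes, one per page */
	return 0;
}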

Now, if you unmap mixed ranges of large-pages and regular pages, you'd
still have that granularity of one page, but quite frankly, if you do
that, you probably deserve it. The common case is almost certainly
going to be just "unmap large pages" or "unmap normal pages".
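
(To put a number on it: with 4k pages and 2M huge pages, a single small
page mixed into an otherwise huge-page unmap drags tlb->granularity down
to 4k, so the loop above ends up doing on the order of 2M/4k = 512 flushes
over that range instead of one.)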

And if it turns out that I'm completely wrong, and mixed granularities
are common, maybe there could be some hack in the "tlb->granularity"
calculations that just forces a TLB flush when the granularity
changes.
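
Something like this, bolted onto the mock-up above (again purely
illustrative, reusing the hypothetical tlb_flush_range() and
tlb_reset_range() helpers):

/*
 * Variant of tlb_adjust_range(): if the mapping size changes mid-batch,
 * flush what has been gathered so far and start a fresh batch, so each
 * batch stays single-granularity.
 */
static void tlb_adjust_range_flush_on_change(struct mmu_gather *tlb,
					     unsigned long address,
					     unsigned long granularity)
{
	if (tlb->fullmm)
		return;

	if (tlb->granularity != ~0UL && tlb->granularity != granularity) {
		tlb_flush_range(tlb);
		tlb_reset_range(tlb);
	}

	tlb->granularity = granularity;
	if (address < tlb->start)
		tlb->start = address;
	if (address + 1 > tlb->end)
		tlb->end = address + 1;
}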

Hmm?

Linus

