Subject: Re: [PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush

On Mon, May 13, 2019 at 10:36:06AM +0200, Peter Zijlstra wrote:
> On Thu, May 09, 2019 at 09:21:35PM +0000, Nadav Amit wrote:
> > It may be possible to avoid false-positive nesting indications (when the
> > flushes do not overlap) by creating a new struct mmu_gather_pending, with
> > something like:
> >
> > struct mmu_gather_pending {
> > 	u64 start;
> > 	u64 end;
> > 	struct mmu_gather_pending *next;
> > };
> >
> > tlb_finish_mmu() would then iterate over the mm->mmu_gather_pending
> > (pointing to the linked list) and find whether there is any overlap. This
> > would still require synchronization (acquiring a lock when allocating and
> > deallocating or something fancier).
>
> We have an interval_tree for this, and yes, that's how far I got :/
>
> The other thing I was thinking of is trying to detect overlap through
> the page-tables themselves, but we have a distinct lack of storage
> there.
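
FWIW, with the existing interval tree helpers that would look something
like the below (untested sketch; the mmu_gather_pending wrapper and
whatever lock ends up protecting the root are made up, but
interval_tree_insert()/interval_tree_iter_first() are the real helpers
from include/linux/interval_tree.h):

#include <linux/interval_tree.h>

/* One node per in-flight gather; would hang off mm_struct somewhere. */
struct mmu_gather_pending {
	struct interval_tree_node node;	/* start/last of the flush range */
};

/* Publish a pending flush of [start, end); caller holds the lock. */
static void pending_flush_add(struct rb_root_cached *root,
			      struct mmu_gather_pending *p,
			      unsigned long start, unsigned long end)
{
	p->node.start = start;
	p->node.last = end - 1;		/* tree endpoints are inclusive */
	interval_tree_insert(&p->node, root);
}

/*
 * Does any other pending flush overlap [start, end)?  Check before
 * inserting our own range, or we find ourselves.
 */
static bool pending_flush_overlaps(struct rb_root_cached *root,
				   unsigned long start, unsigned long end)
{
	return interval_tree_iter_first(root, start, end - 1) != NULL;
}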

As for detecting overlap through the page-tables themselves: we might
just use some state in the pmd; there are still the two _pt_pad_[12]
fields in struct page to 'use'. So we could come up with some TLB
generation scheme that would detect conflicts.
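
Completely untested sketch of what I mean below; mm->tlb_gen is an
invented field (x86 already keeps something similar in
mm->context.tlb_gen), and page->pt_tlb_gen stands in for one of the
_pt_pad_[12] words of the page-table page's struct page:

#include <linux/atomic.h>
#include <linux/mm_types.h>

/* Every gather that wants a ranged flush draws a fresh generation. */
static inline u64 tlb_new_gen(struct mm_struct *mm)
{
	return atomic64_inc_return(&mm->tlb_gen);
}

/* Stamp each page-table page the gather walks with its generation. */
static inline void tlb_stamp_table(struct page *pt_page, u64 gen)
{
	WRITE_ONCE(pt_page->pt_tlb_gen, gen);
}

/*
 * At tlb_finish_mmu() time: if our stamp was overwritten, a concurrent
 * gather walked the same tables and the ranges may overlap, so fall
 * back to a full flush.
 */
static inline bool tlb_gen_conflict(struct page *pt_page, u64 gen)
{
	return READ_ONCE(pt_page->pt_tlb_gen) != gen;
}

tlb_gather_mmu() would draw the generation, the unmap path would stamp
every table it touches, and only if all the stamps are still ours at
tlb_finish_mmu() would we trust the ranged flush.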
