Date: Wed, 19 Sep 2018
From: Thomas Gleixner
Subject: Re: [PATCH 1/8] x86/mm/cpa: Use flush_tlb_all()
On Wed, 19 Sep 2018, Peter Zijlstra wrote:
> On Wed, Sep 19, 2018 at 10:50:17AM +0200, Peter Zijlstra wrote:
> > Instead of open-coding it..
> >
> > Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> > ---
> > arch/x86/mm/pageattr.c | 12 +-----------
> > 1 file changed, 1 insertion(+), 11 deletions(-)
> >
> > --- a/arch/x86/mm/pageattr.c
> > +++ b/arch/x86/mm/pageattr.c
> > @@ -285,16 +285,6 @@ static void cpa_flush_all(unsigned long
> > on_each_cpu(__cpa_flush_all, (void *) cache, 1);
> > }
> >
> > -static void __cpa_flush_range(void *arg)
> > -{
> > - /*
> > - * We could optimize that further and do individual per page
> > - * tlb invalidates for a low number of pages. Caveat: we must
> > - * flush the high aliases on 64bit as well.
> > - */
> > - __flush_tlb_all();
> > -}
>
> Hmm,.. so in patch #4 I do switch to flush_tlb_kernel_range(). What are
> those high aliases that comment talks about?

We have two mappings for the kernel: the 'real' kernel text mapping and the
direct-mapping alias. For most of these operations we have to make sure that
the page table entries are identical in both mappings.

The comments in that code probably need some care.

Thanks,

tglx
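
As an illustration of the aliasing issue described above: on x86_64 the
kernel image is reachable both through the high kernel text mapping
(starting at __START_KERNEL_map) and through the direct-mapping alias in
the PAGE_OFFSET region. The sketch below shows why a ranged flush would
have to visit both aliases rather than just one; the helper name
flush_kernel_text_and_alias() is hypothetical and not part of pageattr.c.

    #include <linux/mm.h>
    #include <asm/page.h>
    #include <asm/tlbflush.h>

    /*
     * Hypothetical helper, for illustration only: flush the TLB for a
     * range in the kernel text ("high") mapping *and* for the direct-map
     * alias of the same physical pages.  Flushing only one of the two
     * virtual ranges can leave stale TLB entries for the other alias.
     */
    static void flush_kernel_text_and_alias(unsigned long start,
                                            unsigned long end)
    {
            /* The range as seen through the kernel text mapping. */
            flush_tlb_kernel_range(start, end);

            /* The same physical pages, seen through the direct mapping. */
            flush_tlb_kernel_range((unsigned long)__va(__pa_symbol(start)),
                                   (unsigned long)__va(__pa_symbol(end)));
    }

Using flush_tlb_all(), as the patch above does, sidesteps the question of
which aliases need to be covered, at the cost of a full flush.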
