Subject: Re: [BUG?] X86 arch_tlbbatch_flush() seems to be lacking mm_tlb_flush_nested() integration
On Fri, Oct 14, 2022 at 8:51 PM Nadav Amit <nadav.amit@gmail.com> wrote:
>
> Unless I am missing something, flush_tlb_batched_pending() would be
> called and do the flushing at this point, no?

Ahh, yes.

That seems to be doing the right thing, although looking a bit more at
it, I think it might be improved.

At least in the zap_pte_range() case, instead of doing a synchronous
TLB flush if there are pending batched flushes, it might be better if
flush_tlb_batched_pending() would set the "need_flush_all" bit in the
mmu_gather structure.

That would possibly avoid that extra TLB flush entirely - since
*normally* zap_page_range() will cause a TLB flush anyway.
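
Roughly something like this, as a minimal sketch (assuming
flush_tlb_batched_pending() grew an mmu_gather argument;
pending_batched_flush() below is just a stand-in for the real check of
mm->tlb_flush_batched, and the plumbing to the callers is omitted):

	/* stand-in for the actual batched-pending check */
	static bool pending_batched_flush(struct mm_struct *mm);

	void flush_tlb_batched_pending(struct mm_struct *mm,
				       struct mmu_gather *tlb)
	{
		if (pending_batched_flush(mm)) {
			/*
			 * Rather than calling flush_tlb_mm(mm)
			 * synchronously, mark the gather: on x86,
			 * tlb_flush() then does a full-range flush of
			 * this mm when the gather is finished in
			 * tlb_finish_mmu().
			 */
			tlb->need_flush_all = 1;

			/*
			 * The batched-pending state could only be
			 * cleared once the gather has actually
			 * flushed, so that part would need care.
			 */
		}
	}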

Maybe it doesn't matter.

Linus
