Date: 2012-12-17
Subject: flush_tlb_page() avoidance in balancenuma pull

I think we need to be careful, going forward, with these
flush_tlb_page() removals in the initial commits.

On cpus such as sparc64, each set_pte_at() call (including the ones made
indirectly via pte_clear()) queues up a per-cpu flush entry.

The various flush_tlb_*() calls then don't flush just a single entry;
they run the whole queue of pending TLB flushes.
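
For illustration, a rough sketch of how such a batching scheme works (the
names here are made up for the example; the real sparc64 code lives in
arch/sparc/mm/tlb.c, so don't take the details literally):

#include <linux/mm.h>
#include <linux/percpu.h>

#define FLUSH_BATCH_SIZE	64

/* One pending-flush queue per cpu. */
struct flush_batch {
	struct mm_struct	*mm;
	unsigned long		nr;
	unsigned long		vaddrs[FLUSH_BATCH_SIZE];
};
static DEFINE_PER_CPU(struct flush_batch, flush_batch);

/* flush_tlb_*() side: run everything queued so far on this cpu. */
static void run_pending_flushes(struct flush_batch *fb)
{
	unsigned long i;

	for (i = 0; i < fb->nr; i++)
		arch_flush_one_entry(fb->mm, fb->vaddrs[i]); /* hw hook, made up */
	fb->nr = 0;
}

/* set_pte_at() side: don't flush now, just remember the address. */
static void queue_pte_flush(struct mm_struct *mm, unsigned long vaddr)
{
	struct flush_batch *fb = &get_cpu_var(flush_batch);

	fb->mm = mm;
	fb->vaddrs[fb->nr++] = vaddr;
	if (fb->nr == FLUSH_BATCH_SIZE)
		run_pending_flushes(fb);	/* queue full, drain early */
	put_cpu_var(flush_batch);
}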

Therefore every single set_pte_at() call must have a subsequent
flush_tlb_*() call, otherwise we'll return to userspace with stale
entries still sitting in the per-cpu TLB flush queues.
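
In other words, any path that rewrites a user PTE on such an
architecture has to look something like this (update_one_pte() is just a
made-up example caller):

static void update_one_pte(struct vm_area_struct *vma, unsigned long addr,
			   pte_t *ptep, pte_t newpte)
{
	set_pte_at(vma->vm_mm, addr, ptep, newpte); /* may only queue a flush */
	flush_tlb_page(vma, addr);		    /* actually drains the queue */
}

Dropping the flush_tlb_page() here is exactly the kind of change that
looks harmless on x86 but leaves the batch unprocessed on a batching
architecture.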

It seems we turn out to be OK here, because __set_pte_at() on sparc64,
as currently implemented, does not queue up a TLB flush in the per-cpu
queue if the old PTE did not have the valid bit set.
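
For reference, the relevant part of sparc64's __set_pte_at() looks
roughly like this (quoting from memory, so treat the exact details as
approximate):

static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
				pte_t *ptep, pte_t pte, int fullmm)
{
	pte_t orig = *ptep;

	*ptep = pte;

	/* Only a previously valid PTE can be in the TLB, so only then
	 * do we need to queue a flush; init_mm is handled separately
	 * via flush_tlb_kernel_range().
	 */
	if (likely(mm != &init_mm) && (pte_val(orig) & _PAGE_VALID))
		tlb_batch_add(mm, addr, ptep, orig, fullmm);
}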

And the default pte_accessible() returns true unconditionally.
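
I.e. the generic fallback is just a constant, something like:

#ifndef pte_accessible
# define pte_accessible(pte)		((void)(pte), 1)
#endif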

In fact, this shows that we can probably implement pte_accessible() to
mirror the test done in __set_pte_at(), and thus we'd get the
optimization on sparc64 too.
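
Something along these lines should do it (completely untested sketch):

/* Mirror the _PAGE_VALID test from __set_pte_at(): a PTE that was never
 * valid was never loaded into the TLB and never queued a flush entry.
 */
static inline unsigned long pte_accessible(pte_t a)
{
	return pte_val(a) & _PAGE_VALID;
}
#define pte_accessible pte_accessible	/* override the generic fallback */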

But we really need to be careful about this; these kinds of bugs are
hard to track down.

Thanks.

