From: Nadav Amit <>
Subject: Re: [PATCH v2] x86/mm/tlb: Remove flush_tlb_info from the stack
Date: Thu, 25 Apr 2019 19:42:06 +0000
> On Apr 25, 2019, at 12:29 PM, Ingo Molnar <mingo@kernel.org> wrote:
>
>
> * Nadav Amit <namit@vmware.com> wrote:
>
>> Move flush_tlb_info variables off the stack. This allows to align
>> flush_tlb_info to cache-line and avoid potentially unnecessary cache
>> line movements. It also allows to have a fixed virtual-to-physical
>> translation of the variables, which reduces TLB misses.
>>
>> Use per-CPU struct for flush_tlb_mm_range() and
>> flush_tlb_kernel_range(). Add debug assertions to ensure there are
>> no nested TLB flushes that might overwrite the per-CPU data. For
>> arch_tlbbatch_flush() use a const struct.
>>
>> Results when running a microbenchmarks that performs 10^6 MADV_DONTEED
>> operations and touching a page, in which 3 additional threads run a
>> busy-wait loop (5 runs, PTI and retpolines are turned off):
>>
>> 			base		off-stack
>> 			----		---------
>> avg (usec/op)		1.629		1.570 (-3%)
>> stddev			0.014		0.009
>>
>> Cc: Peter Zijlstra <peterz@infradead.org>
>> Cc: Andy Lutomirski <luto@kernel.org>
>> Cc: Dave Hansen <dave.hansen@intel.com>
>> Cc: Borislav Petkov <bp@alien8.de>
>> Cc: Thomas Gleixner <tglx@linutronix.de>
>> Signed-off-by: Nadav Amit <namit@vmware.com>
>>
>> ---
>>
>> v1->v2:
>>   - Initialize all flush_tlb_info fields [Andy]
>> ---
>>  arch/x86/mm/tlb.c | 100 ++++++++++++++++++++++++++++++++++------------
>>  1 file changed, 74 insertions(+), 26 deletions(-)
>>
>> diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
>> index 487b8474c01c..aac191eb2b90 100644
>> --- a/arch/x86/mm/tlb.c
>> +++ b/arch/x86/mm/tlb.c
>> @@ -634,7 +634,7 @@ static void flush_tlb_func_common(const struct flush_tlb_info *f,
>>  	this_cpu_write(cpu_tlbstate.ctxs[loaded_mm_asid].tlb_gen, mm_tlb_gen);
>>  }
>>
>> -static void flush_tlb_func_local(void *info, enum tlb_flush_reason reason)
>> +static void flush_tlb_func_local(const void *info, enum tlb_flush_reason reason)
>>  {
>>  	const struct flush_tlb_info *f = info;
>>
>> @@ -722,43 +722,81 @@ void native_flush_tlb_others(const struct cpumask *cpumask,
>>   */
>>  unsigned long tlb_single_page_flush_ceiling __read_mostly = 33;
>>
>> +static DEFINE_PER_CPU_SHARED_ALIGNED(struct flush_tlb_info, flush_tlb_info);
>> +
>> +#ifdef CONFIG_DEBUG_VM
>> +static DEFINE_PER_CPU(unsigned int, flush_tlb_info_idx);
>> +#endif
>> +
>> +static inline struct flush_tlb_info *get_flush_tlb_info(struct mm_struct *mm,
>> +			unsigned long start, unsigned long end,
>> +			unsigned int stride_shift, bool freed_tables,
>> +			u64 new_tlb_gen)
>> +{
>> +	struct flush_tlb_info *info = this_cpu_ptr(&flush_tlb_info);
>> +
>> +#ifdef CONFIG_DEBUG_VM
>> +	/*
>> +	 * Ensure that the following code is non-reentrant and flush_tlb_info
>> +	 * is not overwritten. This means no TLB flushing is initiated by
>> +	 * interrupt handlers and machine-check exception handlers.
>> +	 */
>> +	BUG_ON(this_cpu_inc_return(flush_tlb_info_idx) != 1);
>> +#endif
>
> isn't this effectively VM_BUG_ON()?
Not exactly. When CONFIG_DEBUG_VM is off we get:

	#define VM_BUG_ON(cond) BUILD_BUG_ON_INVALID(cond)

This would cause the build to fail, since flush_tlb_info_idx is not defined when CONFIG_DEBUG_VM is off.
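To make that concrete, here is a standalone user-space sketch (simplified from the include/linux/mmdebug.h definitions, so not the kernel's exact macros) showing that even the !CONFIG_DEBUG_VM stub still type-checks its argument, which is why the symbol would have to exist in both configurations:

	#include <stdio.h>

	/* Simplified !CONFIG_DEBUG_VM definitions */
	#define BUILD_BUG_ON_INVALID(e)	((void)(sizeof((long)(e))))
	#define VM_BUG_ON(cond)		BUILD_BUG_ON_INVALID(cond)

	static unsigned int flush_tlb_info_idx;	/* remove this line -> build error */

	int main(void)
	{
		/* No code is generated, but flush_tlb_info_idx must still be declared. */
		VM_BUG_ON(++flush_tlb_info_idx != 1);
		printf("no runtime check was performed\n");
		return 0;
	}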
>> +static inline void put_flush_tlb_info(void)
>> +{
>> +#ifdef CONFIG_DEBUG_VM
>> +	/* Complete reentrency prevention checks */
>> +	barrier();
>> +	this_cpu_dec(flush_tlb_info_idx);
>> +#endif
>
> In principle this_cpu_dec() should imply a compiler barrier?
this_cpu_dec() eventually expands to the percpu_add_op() macro, and its inline assembly does not have a "memory" clobber, so I don't think so.
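For illustration, a rough user-space sketch (my own macros, not the kernel's actual percpu_add_op()/barrier() definitions) of the difference between an asm statement without a "memory" clobber and a real compiler barrier:

	/* percpu_add_op()-style: only the variable is an output operand, so the
	 * compiler remains free to move unrelated memory accesses across it. */
	#define my_dec(var)	asm("decl %0" : "+m" (var))

	/* barrier()-style: empty asm with a "memory" clobber, a pure compiler barrier */
	#define my_barrier()	asm volatile("" ::: "memory")

	static unsigned int idx;

	void put_example(void)
	{
		/* The ordering guarantee comes from the explicit barrier;
		 * my_dec() alone would not provide it. */
		my_barrier();
		my_dec(idx);
	}

That is why the explicit barrier() is kept before the this_cpu_dec() in put_flush_tlb_info().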
I will address your other comments. Thanks!