 
    Subject: Re: [tip:x86/mm] x86/mm/tlb: Make lazy TLB mode lazier


    > On Jul 17, 2018, at 7:33 AM, Peter Zijlstra <peterz@infradead.org> wrote:
    >
    > On Tue, Jul 17, 2018 at 02:35:08AM -0700, tip-bot for Rik van Riel wrote:
    >> @@ -242,17 +244,40 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
    >> next->context.ctx_id);
    >>
    >> /*
    >> + * Even in lazy TLB mode, the CPU should stay set in the
    >> + * mm_cpumask. The TLB shootdown code can figure out
    >> + * from cpu_tlbstate.is_lazy whether or not to send an IPI.
    >> */
    >> if (WARN_ON_ONCE(real_prev != &init_mm &&
    >> !cpumask_test_cpu(cpu, mm_cpumask(next))))
    >> cpumask_set_cpu(cpu, mm_cpumask(next));
    >>
    >> + /*
    >> + * If the CPU is not in lazy TLB mode, we are just switching
    >> + * from one thread in a process to another thread in the same
    >> + * process. No TLB flush required.
    >> + */
    >> + if (!was_lazy)
    >> + return;
    >> +
    >> + /*
    >> + * Read the tlb_gen to check whether a flush is needed.
    >> + * If the TLB is up to date, just use it.
    >> + * The barrier synchronizes with the tlb_gen increment in
    >> + * the TLB shootdown code.
    >> + */
    >> + smp_mb();
    >
    > What exactly is this smp_mb() ordering? The above comment is
    > insufficient. Is it the cpumask_set_cpu() vs the atomic64_read() ?
    >
    > If so, should this not be smp_mb__after_atomic() (iow a NO-OP on x86)
    >
    > If it is not so, please fix the comment to explain things.
    >
    >
    The TLB flush code first increments mm->context.tlb_gen, and then sends
    shootdown IPIs to CPUs that have this mm loaded and are not in lazy
    TLB mode.

    At context switch time, we have to ensure that we check the tlb_gen after
    we load the old is_lazy state.
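
    To make that concrete, here is a rough sketch of the two sides
    (simplified and abbreviated, not the exact code):

        /* TLB flush side (flush_tlb_mm_range(), simplified): */
        atomic64_inc_return(&mm->context.tlb_gen);   /* full barrier on x86 */
        for_each_cpu(cpu, mm_cpumask(mm))
                if (!per_cpu(cpu_tlbstate.is_lazy, cpu))
                        /* send a flush IPI to this CPU */;

        /* Context switch side (switch_mm_irqs_off(), simplified): */
        was_lazy = this_cpu_read(cpu_tlbstate.is_lazy);
        this_cpu_write(cpu_tlbstate.is_lazy, false);
        ...
        if (!was_lazy)
                return;
        smp_mb();   /* order the is_lazy accesses above against the read below */
        next_tlb_gen = atomic64_read(&next->context.tlb_gen);
        /* flush locally if this CPU's cached tlb_gen is behind next_tlb_gen */

    Either the flusher sees this CPU as not lazy and sends it an IPI,
    or this CPU sees the incremented tlb_gen and flushes locally.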

    Maybe something like this?

    /*
    * Read the tlb_gen to check whether a flush is needed.
    * If the TLB is up to date, just use it.
    * The TLB shootdown code first increments tlb_gen, and then
    * sends IPIs to CPUs that have this mm loaded and are not
    * in lazy TLB mode. The barrier ensures we access
    * cpu_tlbstate.is_lazy before we read tlb_gen, keeping this code
    * synchronized with the TLB flush code.
    */
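
    In the patch, that comment sits right above the smp_mb(), which is
    immediately followed by the tlb_gen check (paraphrased here rather
    than quoted verbatim):

        smp_mb();
        next_tlb_gen = atomic64_read(&next->context.tlb_gen);
        if (this_cpu_read(cpu_tlbstate.ctxs[prev_asid].tlb_gen) ==
                        next_tlb_gen)
                return;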

