From: Mathieu Desnoyers
Date: Fri, 17 Jul 2020
Subject: Re: [RFC PATCH 4/7] x86: use exit_lazy_tlb rather than membarrier_mm_sync_core_before_usermode
    ----- On Jul 17, 2020, at 1:44 PM, Alan Stern stern@rowland.harvard.edu wrote:

    > On Fri, Jul 17, 2020 at 12:22:49PM -0400, Mathieu Desnoyers wrote:
    >> ----- On Jul 17, 2020, at 12:11 PM, Alan Stern stern@rowland.harvard.edu wrote:
    >>
    >> >> > I agree with Nick: A memory barrier is needed somewhere between the
    >> >> > assignment at 6 and the return to user mode at 8. Otherwise you end up
    >> >> > with the Store Buffer pattern having a memory barrier on only one side,
    >> >> > and it is well known that this arrangement does not guarantee any
    >> >> > ordering.
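
    A minimal userspace sketch of that store-buffer (SB) pattern, using C11
    atomics; the thread and variable names below are invented for illustration
    and are not taken from the kernel. Removing either of the two fences
    re-allows the r0 == 0 && r1 == 0 outcome, which is exactly the "barrier on
    only one side" problem:

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_int x, y;
    static int r0, r1;

    static void *side0(void *arg)
    {
            atomic_store_explicit(&x, 1, memory_order_relaxed);
            atomic_thread_fence(memory_order_seq_cst);  /* barrier on side 0 */
            r0 = atomic_load_explicit(&y, memory_order_relaxed);
            return NULL;
    }

    static void *side1(void *arg)
    {
            atomic_store_explicit(&y, 1, memory_order_relaxed);
            atomic_thread_fence(memory_order_seq_cst);  /* barrier on side 1 */
            r1 = atomic_load_explicit(&x, memory_order_relaxed);
            return NULL;
    }

    int main(void)
    {
            pthread_t t0, t1;

            pthread_create(&t0, NULL, side0, NULL);
            pthread_create(&t1, NULL, side1, NULL);
            pthread_join(t0, NULL);
            pthread_join(t1, NULL);
            /* With fences on both sides, r0 == 0 && r1 == 0 is forbidden. */
            printf("r0=%d r1=%d\n", r0, r1);
            return 0;
    }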
    >> >>
    >> >> Yes, I see this now. I'm still trying to wrap my head around why the memory
    >> >> barrier at the end of membarrier() needs to be paired with a scheduler
    >> >> barrier though.
    >> >
    >> > The memory barrier at the end of membarrier() on CPU0 is necessary in
    >> > order to enforce the guarantee that any writes occurring on CPU1 before
    >> > the membarrier() is executed will be visible to any code executing on
    >> > CPU0 after the membarrier(). Ignoring the kthread issue, we can have:
    >> >
    >> >    CPU0                            CPU1
    >> >                                    x = 1
    >> >                                    barrier()
    >> >                                    y = 1
    >> >    r2 = y
    >> >    membarrier():
    >> >      a: smp_mb()
    >> >      b: send IPI                   IPI-induced mb
    >> >      c: smp_mb()
    >> >    r1 = x
    >> >
    >> > The writes to x and y are unordered by the hardware, so it's possible to
    >> > have r2 = 1 even though the write to x doesn't execute until b. If the
    >> > memory barrier at c is omitted then "r1 = x" can be reordered before b
    >> > (although not before a), so we get r1 = 0. This violates the guarantee
    >> > that membarrier() is supposed to provide.
    >> >
    >> > The timing of the memory barrier at c has to ensure that it executes
    >> > after the IPI-induced memory barrier on CPU1. If it happened before
    >> > then we could still end up with r1 = 0. That's why the pairing matters.
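
    Seen from userspace, this is the usual asymmetric-fence pattern: the writer
    side only needs a compiler barrier, and membarrier() on the reader side
    supplies a, the remote barrier, and c. A hedged sketch using the real
    membarrier(2) syscall (error handling omitted; thread and variable names
    are invented for illustration):

    #include <assert.h>
    #include <linux/membarrier.h>
    #include <pthread.h>
    #include <stdatomic.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static atomic_int x, y;

    /* Plays the role of CPU1: only a compiler barrier between the stores. */
    static void *writer(void *arg)
    {
            atomic_store_explicit(&x, 1, memory_order_relaxed);
            atomic_signal_fence(memory_order_seq_cst);  /* barrier() */
            atomic_store_explicit(&y, 1, memory_order_relaxed);
            return NULL;
    }

    /* Plays the role of CPU0: membarrier() stands in for a..c. */
    static void *reader(void *arg)
    {
            int r2 = atomic_load_explicit(&y, memory_order_relaxed);

            syscall(__NR_membarrier, MEMBARRIER_CMD_PRIVATE_EXPEDITED, 0);

            int r1 = atomic_load_explicit(&x, memory_order_relaxed);
            if (r2 == 1)
                    assert(r1 == 1);  /* the guarantee discussed above */
            return NULL;
    }

    int main(void)
    {
            pthread_t w, r;

            /* Register once per process before using the expedited command. */
            syscall(__NR_membarrier, MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED, 0);
            pthread_create(&w, NULL, writer, NULL);
            pthread_create(&r, NULL, reader, NULL);
            pthread_join(w, NULL);
            pthread_join(r, NULL);
            return 0;
    }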
    >> >
    >> > I hope this helps your head get properly wrapped. :-)
    >>
    >> It does help a bit! ;-)
    >>
    >> This explains this part of the comment near the smp_mb at the end of membarrier:
    >>
    >> * Memory barrier on the caller thread _after_ we finished
    >> * waiting for the last IPI. [...]
    >>
    >> However, it does not explain why it needs to be paired with a barrier in the
    >> scheduler, clearly for the case where the IPI is skipped. I wonder whether
    >> this part of the comment is factually correct:
    >>
    >> * [...] Matches memory barriers around rq->curr modification in scheduler.
    >
    > The reasoning is pretty much the same as above:
    >
    >       CPU0                            CPU1
    >                                       x = 1
    >                                       barrier()
    >                                       y = 1
    >       r2 = y
    >       membarrier():
    >         a: smp_mb()
    >                                       switch to kthread (includes mb)
    >         b: read rq->curr == kthread
    >                                       switch to user (includes mb)
    >         c: smp_mb()
    >       r1 = x
    >
    > Once again, it is possible that x = 1 doesn't become visible to CPU0
    > until shortly before b. But if c is omitted then "r1 = x" can be
    > reordered before b (to any time after a), so we can have r1 = 0.
    >
    > Here the timing requirement is that c executes after the first memory
    > barrier on CPU1 -- which is one of the ones around the rq->curr
    > modification. (In fact, in this scenario CPU1's switch back to the user
    > process is irrelevant.)
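
    The skipped-IPI pairing can be modeled entirely in userspace C11, with an
    atomic variable standing in for rq->curr and seq_cst fences standing in for
    the scheduler's barriers and for c. Everything below (rq_curr, KTHREAD, the
    thread names) is invented for the sketch; it models the argument above and
    is not kernel code:

    #include <assert.h>
    #include <pthread.h>
    #include <stdatomic.h>

    enum { USER_TASK, KTHREAD };

    static atomic_int x, y, rq_curr = USER_TASK;

    static void *cpu1(void *arg)
    {
            /* The user task runs ... */
            atomic_store_explicit(&x, 1, memory_order_relaxed);
            atomic_signal_fence(memory_order_seq_cst);  /* barrier() */
            atomic_store_explicit(&y, 1, memory_order_relaxed);

            /* ... then the scheduler switches to a kthread. */
            atomic_thread_fence(memory_order_seq_cst);  /* mb before rq->curr update */
            atomic_store_explicit(&rq_curr, KTHREAD, memory_order_relaxed);
            atomic_thread_fence(memory_order_seq_cst);  /* mb after rq->curr update */
            return NULL;
    }

    static void *cpu0(void *arg)
    {
            int r2 = atomic_load_explicit(&y, memory_order_relaxed);

            atomic_thread_fence(memory_order_seq_cst);                        /* a */
            int curr = atomic_load_explicit(&rq_curr, memory_order_relaxed);  /* b */
            atomic_thread_fence(memory_order_seq_cst);                        /* c */

            int r1 = atomic_load_explicit(&x, memory_order_relaxed);
            /*
             * If b saw the kthread, c guarantees x == 1 is visible here;
             * drop c and r1 == 0 becomes possible again.
             */
            if (r2 == 1 && curr == KTHREAD)
                    assert(r1 == 1);
            return NULL;
    }

    int main(void)
    {
            pthread_t a, b;

            pthread_create(&a, NULL, cpu1, NULL);
            pthread_create(&b, NULL, cpu0, NULL);
            pthread_join(a, NULL);
            pthread_join(b, NULL);
            return 0;
    }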

    That indeed covers the last scenario I was wondering about. Thanks Alan!

    Mathieu

    >
    > Alan Stern

    --
    Mathieu Desnoyers
    EfficiOS Inc.
    http://www.efficios.com
