    Subject: [PATCH 2/3] locking: Clarify requirements for smp_mb__after_spinlock()

    There are 11 interpretations of the requirements described in the header
    comment for smp_mb__after_spinlock(): one for each LKMM maintainer, and
    one currently encoded in the Cat file. Stick to the latter (until a more
    satisfactory solution is presented/agreed).

    Signed-off-by: Andrea Parri <andrea.parri@amarulasolutions.com>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Ingo Molnar <mingo@redhat.com>
    Cc: Will Deacon <will.deacon@arm.com>
    Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
    ---
    include/linux/spinlock.h | 25 ++-----------------------
    1 file changed, 2 insertions(+), 23 deletions(-)

    diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
    index 1e8a464358384..6737ee2381d50 100644
    --- a/include/linux/spinlock.h
    +++ b/include/linux/spinlock.h
    @@ -114,29 +114,8 @@ do { \
     #endif /*arch_spin_is_contended*/

     /*
    - * This barrier must provide two things:
    - *
    - *   - it must guarantee a STORE before the spin_lock() is ordered against a
    - *     LOAD after it, see the comments at its two usage sites.
    - *
    - *   - it must ensure the critical section is RCsc.
    - *
    - * The latter is important for cases where we observe values written by other
    - * CPUs in spin-loops, without barriers, while being subject to scheduling.
    - *
    - * CPU0                         CPU1                         CPU2
    - *
    - *                              for (;;) {
    - *                                if (READ_ONCE(X))
    - *                                  break;
    - *                              }
    - * X=1
    - *                              <sched-out>
    - *                                                           <sched-in>
    - *                                                           r = X;
    - *
    - * without transitivity it could be that CPU1 observes X!=0 breaks the loop,
    - * we get migrated and CPU2 sees X==0.
    + * smp_mb__after_spinlock() provides a full memory barrier between po-earlier
    + * lock acquisitions and po-later memory accesses.
      *
      * Since most load-store architectures implement ACQUIRE with an smp_mb() after
      * the LL/SC loop, they need no further barriers. Similarly all our TSO
    --
    2.7.4
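
    As a concrete reading of the new wording, consider a store-buffering
    snippet in the style of the comment being removed (a minimal sketch,
    not part of the patch; the lock s and the variables X, Y, r0, r1 are
    illustrative names):

        { X = 0; Y = 0; }

        CPU0                            CPU1

        WRITE_ONCE(X, 1);               WRITE_ONCE(Y, 1);
        spin_lock(&s);                  smp_mb();
        smp_mb__after_spinlock();       r1 = READ_ONCE(X);
        r0 = READ_ONCE(Y);
        spin_unlock(&s);

    spin_lock() alone is only an ACQUIRE, so by itself it would not forbid
    the STORE to X from being reordered with the po-later LOAD from Y; with
    smp_mb__after_spinlock() acting as a full barrier, the outcome
    r0 == 0 && r1 == 0 is forbidden.

    On architectures whose lock acquisition already implies a full barrier
    the hook can remain a no-op, which is what the generic fallback in
    include/linux/spinlock.h provides:

        #ifndef smp_mb__after_spinlock
        #define smp_mb__after_spinlock()        do { } while (0)
        #endif

    whereas weakly ordered architectures such as arm64 and powerpc define
    it as smp_mb().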