From: Andrea Parri <>
Subject: [PATCH v3 2/3] arm64: Remove smp_mb() from arch_spin_is_locked()
Date: Sun, 1 Apr 2018 18:41:51 +0200
Commit 38b850a73034f ("arm64: spinlock: order spin_{is_locked,unlock_wait} against local locks") added an smp_mb() to arch_spin_is_locked(), in order "to ensure that the lock value is always loaded after any other locks have been taken by the current CPU", and reported one example relying on this guarantee (the "insane case" in ipc/sem.c).
It is now understood, however, that spin_is_locked() is not required to provide such an ordering guarantee (a guarantee that is currently not provided by all implementations/architectures), and that callers relying on this ordering should instead insert suitable memory barriers before acting on the result of spin_is_locked().
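For illustration only (a hypothetical caller sketch, not code from this series; the lock names are made up), such a site would make the ordering explicit itself along these lines:

	/*
	 * Hypothetical pattern: CPU0 runs this while CPU1 runs the
	 * mirror image with a and b swapped.  The smp_mb() orders the
	 * store performed by spin_lock(&a->lock) before the load in
	 * spin_is_locked(&b->lock); acquire semantics alone do not
	 * forbid that store->load reordering, so without the barrier
	 * both CPUs could observe the other lock as unlocked.
	 */
	spin_lock(&a->lock);
	smp_mb();	/* order the lock above against the test below */
	if (spin_is_locked(&b->lock)) {
		/* somebody else holds b: take the slow path */
	}

This is the same shape as the ipc/sem.c "insane case" mentioned above, with the barrier now supplied by the caller rather than hidden inside spin_is_locked().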
Following a recent audit [1] of the callers of {,raw_}spin_is_locked(), which revealed that none of them rely on the ordering guarantee anymore, this commit removes the leading smp_mb() from the primitive, thus reverting 38b850a73034f.
[1] https://marc.info/?l=linux-kernel&m=151981440005264&w=2
Signed-off-by: Andrea Parri <andrea.parri@amarulasolutions.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
---
 arch/arm64/include/asm/spinlock.h | 5 -----
 1 file changed, 5 deletions(-)
diff --git a/arch/arm64/include/asm/spinlock.h b/arch/arm64/include/asm/spinlock.h
index ebdae15d665de..26c5bd7d88d8d 100644
--- a/arch/arm64/include/asm/spinlock.h
+++ b/arch/arm64/include/asm/spinlock.h
@@ -122,11 +122,6 @@ static inline int arch_spin_value_unlocked(arch_spinlock_t lock)
 
 static inline int arch_spin_is_locked(arch_spinlock_t *lock)
 {
-	/*
-	 * Ensure prior spin_lock operations to other locks have completed
-	 * on this CPU before we test whether "lock" is locked.
-	 */
-	smp_mb(); /* ^^^ */
 	return !arch_spin_value_unlocked(READ_ONCE(*lock));
 }
 
-- 
2.7.4