Subject: Re: [PATCH 03/10] locking/qspinlock: Kill cmpxchg loop when claiming lock from head of queue
On Thu, Apr 05, 2018 at 05:59:00PM +0100, Will Deacon wrote:
> +
> +	/* In the PV case we might already have _Q_LOCKED_VAL set */
> +	if ((val & _Q_TAIL_MASK) == tail) {
>  		/*
>  		 * The smp_cond_load_acquire() call above has provided the
> +		 * necessary acquire semantics required for locking.
>  		 */
>  		old = atomic_cmpxchg_relaxed(&lock->val, val, _Q_LOCKED_VAL);
>  		if (old == val)
> +			goto release; /* No contention */
>  	}

--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -464,8 +464,7 @@ void queued_spin_lock_slowpath(struct qs
 		 * The smp_cond_load_acquire() call above has provided the
 		 * necessary acquire semantics required for locking.
 		 */
-		old = atomic_cmpxchg_relaxed(&lock->val, val, _Q_LOCKED_VAL);
-		if (old == val)
+		if (atomic_try_cmpxchg_relaxed(&lock->val, &val, _Q_LOCKED_VAL))
 			goto release; /* No contention */
 	}

Does that also work for you? It would generate slightly better code for
x86 (not that it would matter much on this path).
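For anyone following along, here is a minimal stand-alone sketch of the
difference between the two idioms. It uses C11 atomics rather than the
kernel's atomic_t API, and the helper names are made up for
illustration; atomic_compare_exchange_strong_explicit() has the same
success/failure semantics as the kernel's atomic_try_cmpxchg_relaxed():

#include <stdatomic.h>
#include <stdbool.h>

#define _Q_LOCKED_VAL	1

/* Kernel-style cmpxchg(): returns the value that was found at *lock. */
static int cmpxchg_relaxed_sketch(atomic_int *lock, int old, int new)
{
	atomic_compare_exchange_strong_explicit(lock, &old, new,
						memory_order_relaxed,
						memory_order_relaxed);
	return old;	/* on failure, the value currently in *lock */
}

/* Old idiom: fetch the old value, then compare it ourselves. */
static bool claim_lock_cmpxchg(atomic_int *lock, int val)
{
	return cmpxchg_relaxed_sketch(lock, val, _Q_LOCKED_VAL) == val;
}

/*
 * New idiom: branch directly on the boolean result. On x86 the
 * compiler can test the ZF flag that CMPXCHG already set instead of
 * re-comparing the old value handed back in a register; on failure
 * 'val' is updated in place with the value found in the lock word.
 */
static bool claim_lock_try_cmpxchg(atomic_int *lock, int val)
{
	return atomic_compare_exchange_strong_explicit(lock, &val,
						       _Q_LOCKED_VAL,
						       memory_order_relaxed,
						       memory_order_relaxed);
}

The try_cmpxchg() form also hands the observed value back through the
pointer, which is what lets retry loops avoid reloading it; that does
not matter here, since this path only attempts the exchange once.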