From: Waiman Long <Waiman.Long@hp.com>
Subject: Re: [PATCH v2 1/6] locking/pvqspinlock: Unconditional PV kick with _Q_SLOW_VAL
On 07/15/2015 05:10 AM, Peter Zijlstra wrote:
> On Tue, Jul 14, 2015 at 10:13:32PM -0400, Waiman Long wrote:
>> The smp_store_release() is not a full barrier. In order to avoid a
>> missed wakeup, we may need to add memory barriers around the locked
>> and cpu state variables, adding to complexity. As the chance of a
>> spurious wakeup is very low, it is easier and safer to just do an
>> unconditional kick at unlock time.
> I have the below patch. We need that rmb in there anyhow for the hash to
> work.
>
> ---
> Subject: locking/pvqspinlock: Order pv_unhash after cmpxchg on unlock slowpath
> From: Will Deacon <will.deacon@arm.com>
> Date: Mon, 13 Jul 2015 16:58:30 +0100
>
> When we unlock in __pv_queued_spin_unlock, a failed cmpxchg on the lock
> value indicates that we need to take the slow-path and unhash the
> corresponding node blocked on the lock.
>
> Since a failed cmpxchg does not provide any memory-ordering guarantees,
> it is possible that the node data could be read before the cmpxchg on
> weakly-ordered architectures and therefore return a stale value, leading
> to hash corruption and/or a BUG().
>
> This patch adds an smp_rmb() following the failed cmpxchg operation, so
> that the unhashing is ordered after the lock has been checked.
>
> Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
> Cc: Waiman Long <Waiman.Long@hp.com>
> Cc: Steve Capper <Steve.Capper@arm.com>
> Reported-by: Peter Zijlstra <peterz@infradead.org>
> Signed-off-by: Will Deacon <will.deacon@arm.com>
> [peterz: More comments]
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> Link: http://lkml.kernel.org/r/20150713155830.GL2632@arm.com
> ---
> kernel/locking/qspinlock_paravirt.h | 23 ++++++++++++++++++-----
> 1 file changed, 18 insertions(+), 5 deletions(-)
>
> --- a/kernel/locking/qspinlock_paravirt.h
> +++ b/kernel/locking/qspinlock_paravirt.h
> @@ -244,13 +244,17 @@ static void pv_wait_head(struct qspinloc
>                  if (!lp) { /* ONCE */
>                          lp = pv_hash(lock, pn);
>                          /*
> -                         * lp must be set before setting _Q_SLOW_VAL
> +                         * We must hash before setting _Q_SLOW_VAL, such that
> +                         * when we observe _Q_SLOW_VAL in __pv_queued_spin_unlock()
> +                         * we'll be sure to be able to observe our hash entry.
>                           *
> -                         * [S] lp = lock                [RmW] l = l->locked = 0
> -                         *     MB                             MB
> -                         * [S] l->locked = _Q_SLOW_VAL  [L]   lp
> +                         *   [S] pn->state
> +                         *   [S] <hash>                 [Rmw] l->locked == _Q_SLOW_VAL
> +                         *       MB                           RMB
> +                         *   [RmW] l->locked = _Q_SLOW_VAL  [L] <unhash>
> +                         *                                  [L] pn->state
>                           *
> -                         * Matches the cmpxchg() in __pv_queued_spin_unlock().
> +                         * Matches the smp_rmb() in __pv_queued_spin_unlock().
>                           */
>                          if (!cmpxchg(&l->locked, _Q_LOCKED_VAL, _Q_SLOW_VAL)) {
>                                  /*
> @@ -306,6 +310,15 @@ __visible void __pv_queued_spin_unlock(s
>          }
>
>          /*
> +         * A failed cmpxchg doesn't provide any memory-ordering guarantees,
> +         * so we need a barrier to order the read of the node data in
> +         * pv_unhash *after* we've read the lock being _Q_SLOW_VAL.
> +         *
> +         * Matches the cmpxchg() in pv_wait_head() setting _Q_SLOW_VAL.
> +         */
> +        smp_rmb();

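To make the ordering the patch enforces concrete outside the kernel, here is
a minimal standalone C11 model of the two paths. This is only a sketch with
made-up names (waiter, unlocker, hash_entry), not the kernel code; the
acquire fence plays the role of the smp_rmb():

/*
 * cas-order.c: C11 model of the pv unlock slowpath ordering.
 * Build with: cc -std=c11 -pthread cas-order.c
 */
#include <stdatomic.h>
#include <stdio.h>
#include <pthread.h>

#define _Q_LOCKED_VAL  1
#define _Q_SLOW_VAL    3

static atomic_int locked = _Q_LOCKED_VAL;
static atomic_int hash_entry;           /* stands in for the pv hash */

/* Waiter: hash, then advertise _Q_SLOW_VAL (pv_wait_head() analogue). */
static void *waiter(void *arg)
{
        int old = _Q_LOCKED_VAL;

        atomic_store_explicit(&hash_entry, 1, memory_order_relaxed);
        /* Fully ordered CAS publishes the hash entry, like cmpxchg(). */
        atomic_compare_exchange_strong_explicit(&locked, &old, _Q_SLOW_VAL,
                                                memory_order_seq_cst,
                                                memory_order_seq_cst);
        return NULL;
}

/* Unlocker: __pv_queued_spin_unlock() analogue. */
static void *unlocker(void *arg)
{
        int old = _Q_LOCKED_VAL;

        if (atomic_compare_exchange_strong_explicit(&locked, &old, 0,
                                                    memory_order_seq_cst,
                                                    memory_order_relaxed))
                return NULL;    /* fast path: nobody hashed */

        /*
         * The CAS failed and its failure ordering is only relaxed, so
         * without this fence the hash_entry load below may be satisfied
         * before the CAS read the lock word.  This is the smp_rmb().
         */
        atomic_thread_fence(memory_order_acquire);

        printf("slow path: hash_entry=%d\n",
               atomic_load_explicit(&hash_entry, memory_order_relaxed));
        return NULL;
}

int main(void)
{
        pthread_t a, b;

        pthread_create(&a, NULL, waiter, NULL);
        pthread_create(&b, NULL, unlocker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return 0;
}

On x86 the acquire fence compiles to nothing but a compiler barrier; on
arm64 it becomes a DMB, which is exactly where the missing ordering would
bite.
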
According to memory-barriers.txt, cmpxchg() is a full memory barrier. It
doesn't say that a failed cmpxchg loses that guarantee. So is the
documentation right, or is the weaker behavior true only on some
architectures? I think it is not true for x86.
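
For x86 specifically, cmpxchg() boils down to LOCK CMPXCHG, and the LOCK
prefix acts as a full memory barrier whether or not the comparison
succeeds. A rough userspace rendering (a hypothetical helper shaped after
arch/x86/include/asm/cmpxchg.h, for illustration only):

static inline unsigned int x86_cmpxchg(unsigned int *ptr,
                                       unsigned int old, unsigned int new)
{
        unsigned int prev = old;

        /*
         * The LOCK prefix makes this fully ordered even when the
         * comparison fails, which is why x86 gets away with smp_rmb()
         * being a plain compiler barrier.
         */
        asm volatile("lock; cmpxchgl %2, %1"
                     : "+a" (prev), "+m" (*ptr)
                     : "r" (new)
                     : "memory", "cc");
        return prev;    /* caller compares prev against old to detect failure */
}

On a weakly-ordered architecture the story differs: an LL/SC cmpxchg that
fails the comparison never executes its ordered store, so the failed path
provides no ordering at all, hence the explicit smp_rmb() in the patch.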

Cheers,
Longman



