Subject: Re: [PATCH v5 4/4] MCS Lock: Barrier corrections
From: Tim Chen <tim.c.chen@linux.intel.com>
Date: 2013-11-19
On Tue, 2013-11-19 at 11:21 -0800, Paul E. McKenney wrote:
> On Fri, Nov 08, 2013 at 11:52:38AM -0800, Tim Chen wrote:
> > From: Waiman Long <Waiman.Long@hp.com>
> >
> > This patch corrects the way memory barriers are used in the MCS lock,
> > using the smp_load_acquire() and smp_store_release() functions.
> > It removes barriers that are not needed.
> >
> > It uses architecture specific load-acquire and store-release
> > primitives for synchronization, if available. Generic implementations
> > are provided in case they are not defined even though they may not
> > be optimal. These generic implementations could be removed later on
> > once changes are made in all the relevant header files.
> >
> > Suggested-by: Michel Lespinasse <walken@google.com>
> > Signed-off-by: Waiman Long <Waiman.Long@hp.com>
> > Signed-off-by: Jason Low <jason.low2@hp.com>
> > Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
>
> Please see comments below.
>
> Thanx, Paul
>
> > ---
> > kernel/locking/mcs_spinlock.c | 48 +++++++++++++++++++++++++++++++++++------
> > 1 files changed, 41 insertions(+), 7 deletions(-)
> >
> > diff --git a/kernel/locking/mcs_spinlock.c b/kernel/locking/mcs_spinlock.c
> > index b6f27f8..df5c167 100644
> > --- a/kernel/locking/mcs_spinlock.c
> > +++ b/kernel/locking/mcs_spinlock.c
> > @@ -23,6 +23,31 @@
> >  #endif
> >
> >  /*
> > + * Fall back to use the regular atomic operations and memory barrier if
> > + * the acquire/release versions are not defined.
> > + */
> > +#ifndef xchg_acquire
> > +# define xchg_acquire(p, v)	xchg(p, v)
> > +#endif
> > +
> > +#ifndef smp_load_acquire
> > +# define smp_load_acquire(p)				\
> > +	({						\
> > +		typeof(*p) __v = ACCESS_ONCE(*(p));	\
> > +		smp_mb();				\
> > +		__v;					\
> > +	})
> > +#endif
> > +
> > +#ifndef smp_store_release
> > +# define smp_store_release(p, v)	\
> > +	do {				\
> > +		smp_mb();		\
> > +		ACCESS_ONCE(*(p)) = v;	\
> > +	} while (0)
> > +#endif
> > +
> > +/*
> >   * In order to acquire the lock, the caller should declare a local node and
> >   * pass a reference of the node to this function in addition to the lock.
> >   * If the lock has already been acquired, then this will proceed to spin
> > @@ -37,15 +62,19 @@ void mcs_spin_lock(struct mcs_spinlock **lock, struct mcs_spinlock *node)
> >  	node->locked = 0;
> >  	node->next = NULL;
> >
> > -	prev = xchg(lock, node);
> > +	/* xchg() provides a memory barrier */
> > +	prev = xchg_acquire(lock, node);
>
> But if this is xchg_acquire() with only acquire semantics, it need not
> ensure that the initializations of node->locked and node->next above
> will happen before the "ACCESS_ONCE(prev->next) = node" below. This
> therefore needs to remain xchg(). Or you need an smp_store_release()
> below instead of an ACCESS_ONCE() assignment.

Good point. Will keep it as xchg.

>
> As currently written, the poor CPU doing the unlock can be fatally
> disappointed by seeing pre-initialized values of ->locked and ->next.
> This could, among other things, result in a hang where the handoff
> happens before the initialization.
>
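A minimal sketch of that failure mode (hypothetical interleaving, not
part of the patch), assuming xchg_acquire() orders only against later
accesses:

	node->locked = 0;			/* store A */
	node->next = NULL;			/* store B */
	prev = xchg_acquire(lock, node);	/* acquire only: A and B may
						   still sink below the
						   store on the next line */
	ACCESS_ONCE(prev->next) = node;		/* node visible to unlocker */

If store A is allowed to sink below the prev->next store, the previous
holder can observe node, hand off the lock by writing next->locked = 1,
and then have that 1 overwritten by the late "node->locked = 0" -- at
which point the spin loop below waits forever.  A full-barrier xchg()
(or an smp_store_release() on prev->next) rules that out.
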
> >  	if (likely(prev == NULL)) {
> >  		/* Lock acquired */
> >  		return;
> >  	}
> >  	ACCESS_ONCE(prev->next) = node;
> > -	smp_wmb();
> > -	/* Wait until the lock holder passes the lock down */
> > -	while (!ACCESS_ONCE(node->locked))
> > +	/*
> > +	 * Wait until the lock holder passes the lock down.
> > +	 * Using smp_load_acquire() provides a memory barrier that
> > +	 * ensures subsequent operations happen after the lock is acquired.
> > +	 */
> > +	while (!(smp_load_acquire(&node->locked)))
> >  		arch_mutex_cpu_relax();
>
> OK, this smp_load_acquire() makes sense!
>
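For completeness, the guarantee the acquire load provides: no access in
the critical section can be reordered before it, so once the loop
observes locked == 1, everything the previous holder wrote is visible.
A sketch of the pairing (the shared_data variable is illustrative, not
from the patch):

	/* Previous lock holder: */
	shared_data = 42;			/* critical-section store */
	smp_store_release(&node->locked, 1);	/* handoff */

	/* Next waiter: */
	while (!smp_load_acquire(&node->locked))
		arch_mutex_cpu_relax();
	r = shared_data;			/* guaranteed to see 42 */
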
> >  }
> >  EXPORT_SYMBOL_GPL(mcs_spin_lock);
> > @@ -54,7 +83,7 @@ EXPORT_SYMBOL_GPL(mcs_spin_lock);
> >   * Releases the lock. The caller should pass in the corresponding node that
> >   * was used to acquire the lock.
> >   */
> > -static void mcs_spin_unlock(struct mcs_spinlock **lock, struct mcs_spinlock *node)
> > +void mcs_spin_unlock(struct mcs_spinlock **lock, struct mcs_spinlock *node)
> >  {
> >  	struct mcs_spinlock *next = ACCESS_ONCE(node->next);
> >
> > @@ -68,7 +97,12 @@ static void mcs_spin_unlock(struct mcs_spinlock **lock, struct mcs_spinlock *nod
> >  		while (!(next = ACCESS_ONCE(node->next)))
> >  			arch_mutex_cpu_relax();
> >  	}
> > -	ACCESS_ONCE(next->locked) = 1;
> > -	smp_wmb();
> > +	/*
> > +	 * Pass lock to next waiter.
> > +	 * smp_store_release() provides a memory barrier to ensure
> > +	 * all operations in the critical section have been completed
> > +	 * before unlocking.
> > +	 */
> > +	smp_store_release(&next->locked , 1);
>
> This smp_store_release() makes sense as well!
>
> Could you please get rid of the extraneous space before the comma?

Will do.
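
It is worth spelling out what the diff fixes here: the old code issued
smp_wmb() *after* the next->locked store, which cannot order the
critical section's accesses before the handoff.  smp_store_release()
places the barrier on the correct side.  With the generic fallback
defined at the top of the file (assuming no arch-specific version),
the unlock path effectively becomes:

	/* old: barrier on the wrong side of the handoff */
	ACCESS_ONCE(next->locked) = 1;
	smp_wmb();

	/* new: generic smp_store_release() expands to */
	smp_mb();			/* critical section completes... */
	ACCESS_ONCE(next->locked) = 1;	/* ...before the lock is passed */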

>
> >  }
> >  EXPORT_SYMBOL_GPL(mcs_spin_unlock);
> > --
> > 1.7.4.4
> >
> >
>
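For reference, the two functions as they should read after this
exchange -- a sketch assembling the hunks above with the agreed fixes
(full-barrier xchg() retained, stray space before the comma dropped);
the cmpxchg() no-successor path is paraphrased from the surrounding
kernel code rather than quoted from the hunks:

	void mcs_spin_lock(struct mcs_spinlock **lock, struct mcs_spinlock *node)
	{
		struct mcs_spinlock *prev;

		/* Init node before it can become reachable through *lock. */
		node->locked = 0;
		node->next = NULL;

		/* Full barrier: orders the init above before the linking below. */
		prev = xchg(lock, node);
		if (likely(prev == NULL))
			return;		/* Lock acquired. */
		ACCESS_ONCE(prev->next) = node;

		/* Acquire: the critical section cannot move above this load. */
		while (!smp_load_acquire(&node->locked))
			arch_mutex_cpu_relax();
	}

	void mcs_spin_unlock(struct mcs_spinlock **lock, struct mcs_spinlock *node)
	{
		struct mcs_spinlock *next = ACCESS_ONCE(node->next);

		if (likely(!next)) {
			/* No queued successor: try to release by NULLing the lock. */
			if (cmpxchg(lock, node, NULL) == node)
				return;
			/* A new waiter raced in; wait for it to link itself. */
			while (!(next = ACCESS_ONCE(node->next)))
				arch_mutex_cpu_relax();
		}
		/* Release: the critical section completes before the handoff. */
		smp_store_release(&next->locked, 1);
	}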