Subject: Re: [PATCH v5 2/4] MCS Lock: optimizations and extra comments
On Tue, 2013-11-19 at 11:13 -0800, Paul E. McKenney wrote:
> On Fri, Nov 08, 2013 at 11:52:05AM -0800, Tim Chen wrote:
> > From: Jason Low <jason.low2@hp.com>
> >
> > Remove an unnecessary operation and mark the cmpxchg(lock, node, NULL) == node
> > check in mcs_spin_unlock() as likely(), since in most cases no race will have
> > occurred.
> >
> > Also add in more comments describing how the local node is used in MCS locks.
> >
> > Reviewed-by: Tim Chen <tim.c.chen@linux.intel.com>
> > Signed-off-by: Jason Low <jason.low2@hp.com>
> > ---
> > include/linux/mcs_spinlock.h | 13 +++++++++++--
> > 1 files changed, 11 insertions(+), 2 deletions(-)
> >
> > diff --git a/include/linux/mcs_spinlock.h b/include/linux/mcs_spinlock.h
> > index b5de3b0..96f14299 100644
> > --- a/include/linux/mcs_spinlock.h
> > +++ b/include/linux/mcs_spinlock.h
> > @@ -18,6 +18,12 @@ struct mcs_spinlock {
> >  };
> >
> >  /*
> > + * In order to acquire the lock, the caller should declare a local node and
> > + * pass a reference of the node to this function in addition to the lock.
> > + * If the lock has already been acquired, then this will proceed to spin
> > + * on this node->locked until the previous lock holder sets the node->locked
> > + * in mcs_spin_unlock().
> > + *
> >   * We don't inline mcs_spin_lock() so that perf can correctly account for the
> >   * time spent in this lock function.
> >   */
> > @@ -33,7 +39,6 @@ void mcs_spin_lock(struct mcs_spinlock **lock, struct mcs_spinlock *node)
> >  	prev = xchg(lock, node);
> >  	if (likely(prev == NULL)) {
> >  		/* Lock acquired */
> > -		node->locked = 1;
>
> Agreed, no one looks at this field in this case, so no need to initialize
> it, unless for debug purposes.
>
> >  		return;
> >  	}
> >  	ACCESS_ONCE(prev->next) = node;
> > @@ -43,6 +48,10 @@ void mcs_spin_lock(struct mcs_spinlock **lock, struct mcs_spinlock *node)
> >  		arch_mutex_cpu_relax();
> >  }
> >
> > +/*
> > + * Releases the lock. The caller should pass in the corresponding node that
> > + * was used to acquire the lock.
> > + */
> >  static void mcs_spin_unlock(struct mcs_spinlock **lock, struct mcs_spinlock *node)
> >  {
> >  	struct mcs_spinlock *next = ACCESS_ONCE(node->next);
> > @@ -51,7 +60,7 @@ static void mcs_spin_unlock(struct mcs_spinlock **lock, struct mcs_spinlock *node)
> >  	/*
> >  	 * Release the lock by setting it to NULL
> >  	 */
> > -	if (cmpxchg(lock, node, NULL) == node)
> > +	if (likely(cmpxchg(lock, node, NULL) == node))
>
> Agreed here as well. Takes a narrow race to hit this.
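
Right, the only way the cmpxchg here can fail is if another CPU has already
done the xchg(lock, node) in mcs_spin_lock() but has not yet linked
prev->next, i.e. roughly this interleaving:

	releaser (mcs_spin_unlock)         new waiter (mcs_spin_lock)
	--------------------------         --------------------------
	next = node->next  /* NULL */
	                                   prev = xchg(lock, node)
	cmpxchg(lock, node, NULL) fails
	                                   ACCESS_ONCE(prev->next) = node
	spin until node->next != NULL,
	then set next->locked to hand off
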
>
> So, did your testing exercise this path? If the answer is "yes", and
> if the issues that I called out in patch #1 are resolved:

I haven't instrumented the code to check how often this path is hit, but the
slow path will probably only be taken in some cases under heavy contention.
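
For reference, the expected usage pattern is roughly the sketch below
(my_lock and my_work() are placeholder names, not from the patch); the node
lives on the caller's stack, so each waiter spins only on its own node until
its predecessor hands the lock over:

	static struct mcs_spinlock *my_lock;	/* queue tail, NULL when unlocked */

	static void my_work(void)
	{
		struct mcs_spinlock node;	/* local node, per the comment added above */

		mcs_spin_lock(&my_lock, &node);
		/* ... critical section ... */
		mcs_spin_unlock(&my_lock, &node);
	}
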


>
> Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
>
> >  		return;
> >  	/* Wait until the next pointer is set */
> >  	while (!(next = ACCESS_ONCE(node->next)))
> > --
> > 1.7.4.4
> >
> >
> >
>



