    Subject: Re: [RFC PATCH 1/6] kernel: implement queue spinlock API

    On Thu, 2013-02-07 at 14:34 -0800, Paul E. McKenney wrote:
    > On Tue, Jan 22, 2013 at 03:13:30PM -0800, Michel Lespinasse wrote:
    > > Introduce queue spinlocks, to be used in situations where it is desired
    > > to have good throughput even under the occasional high-contention situation.
    > >
    > > This initial implementation is based on the classic MCS spinlock,
    > > because I think this represents the nicest API we can hope for in a
    > > fast queue spinlock algorithm. The MCS spinlock has a known limitation:
    > > it performs very well under high contention, but is not as good as
    > > the ticket spinlock under low contention. I will address this
    > > limitation in a later patch, which will propose an alternative,
    > > higher-performance implementation using (mostly) the same API.
    > >
    > > Sample use case acquiring mystruct->lock:
    > >
    > > struct q_spinlock_node node;
    > >
    > > q_spin_lock(&mystruct->lock, &node);
    > > ...
    > > q_spin_unlock(&mystruct->lock, &node);
    >
    > It is possible to keep the normal API for MCS locks by having the lock
    > holder remember the parameter in the lock word itself. While spinning,
    > the node is on the stack; it is not needed once the lock is acquired.
    > The pointer to the next node in the queue -is- needed, but this can be
    > stored in the lock word.
    >
    > I believe that John Stultz worked on something like this some years back,
    > so added him to CC.
    >
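
    For reference, here is a rough userspace sketch of the classic MCS
    algorithm behind the two-argument API quoted above. The field names
    and the use of GCC atomic builtins (standing in for the kernel's
    xchg() and cmpxchg() primitives) are my own guesses, not the actual
    patch:

    struct q_spinlock_node {
            struct q_spinlock_node *next;
            int locked;
    };

    struct q_spinlock {
            struct q_spinlock_node *tail;
    };

    void q_spin_lock(struct q_spinlock *lock, struct q_spinlock_node *node)
    {
            struct q_spinlock_node *prev;

            node->next = NULL;
            node->locked = 0;

            /* Append ourselves to the queue; prev is the old tail. */
            prev = __atomic_exchange_n(&lock->tail, node, __ATOMIC_ACQ_REL);
            if (!prev)
                    return;         /* queue was empty: lock is ours */

            /* Publish our node to the predecessor, then spin locally. */
            __atomic_store_n(&prev->next, node, __ATOMIC_RELEASE);
            while (!__atomic_load_n(&node->locked, __ATOMIC_ACQUIRE))
                    ;               /* cpu_relax() in kernel code */
    }

    void q_spin_unlock(struct q_spinlock *lock, struct q_spinlock_node *node)
    {
            struct q_spinlock_node *next;

            next = __atomic_load_n(&node->next, __ATOMIC_ACQUIRE);
            if (!next) {
                    /* No visible successor: try to empty the queue. */
                    struct q_spinlock_node *expected = node;

                    if (__atomic_compare_exchange_n(&lock->tail, &expected,
                                                    NULL, 0, __ATOMIC_RELEASE,
                                                    __ATOMIC_RELAXED))
                            return;

                    /* A new waiter won the race; wait for it to link in. */
                    while (!(next = __atomic_load_n(&node->next,
                                                    __ATOMIC_ACQUIRE)))
                            ;
            }
            __atomic_store_n(&next->locked, 1, __ATOMIC_RELEASE);
    }

    The point to note is that q_spin_unlock() must be handed the very node
    that q_spin_lock() queued, since a later waiter may have stored a
    pointer to it; the node therefore has to stay alive for the whole
    critical section. As I understand Paul's suggestion, the successor
    pointer would migrate into the lock word once the lock is acquired, so
    that the stack node could be discarded at that point.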

    Hmm...

    This could easily break if the spin_lock() is embedded in one function
    and the unlock is done in another one.

    (The storage for the node would disappear at the function epilogue.)
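
    Concretely, a hypothetical pattern like this (my_lock()/my_unlock()
    are made-up helpers, for illustration only) would be broken:

    struct mystruct {
            struct q_spinlock lock;
            /* ... */
    };

    void my_lock(struct mystruct *m)
    {
            struct q_spinlock_node node;    /* lives on my_lock()'s stack */

            q_spin_lock(&m->lock, &node);
            /*
             * On return, the storage for node is gone, yet a waiter
             * that queued behind us may still hold a pointer into it,
             * and the eventual unlock needs this same node.
             */
    }

    void my_unlock(struct mystruct *m)
    {
            struct q_spinlock_node node;    /* a different object! */

            q_spin_unlock(&m->lock, &node); /* wrong node: queue corruption */
    }

    So the node either has to be passed explicitly from the locking
    function to the unlocking one, or it cannot live on the stack at all.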




