    Subject: Re: [RFC PATCH 4/6] kernel: faster queue spinlock implementation
    On 01/22/2013 06:13 PM, Michel Lespinasse wrote:

    > Because of these limitations, the MCS queue spinlock implementation does
    > not always compare favorably to ticket spinlocks under moderate contention.
    >
    > This alternative queue spinlock implementation has some nice properties:
    >
    > - One single atomic operation (xchg) during acquire
    > - One single memory store for unlock. No busy looping either.
    > Actually, the unlock is so short that we can just inline it.
    > - Same basic API as with the MCS spinlock

    There is one thing I do not understand about these locks.

    > +static inline void
    > +q_spin_unlock(struct q_spinlock *lock, struct q_spinlock_node *node)
    > +{
    > +	q_spin_unlock_mb();	/* guarantee release store semantics */
    > +	ACCESS_ONCE(node->token->wait) = false;
    > +	preempt_enable();
    > +}

    Here you set wait to false in the queue lock token that is local to
    the current CPU. Afterwards, the same CPU could try to take another
    lock, re-using that same token...
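
    For example, something like the following sequence on one CPU (the
    lock-side call and the lock/node names here are just my guesses,
    mirroring the unlock signature above):

    	q_spin_unlock(&lock_a, &node_a);	/* this cpu's token->wait = false */
    	q_spin_lock(&lock_b, &node_b);		/* same token re-published with wait = true? */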

    > +DEFINE_PER_CPU(struct q_spinlock_token *, q_spinlock_token[2]);
    > +
    > +static inline struct q_spinlock_token *
    > +____q_spin_lock(struct q_spinlock *lock,
    > +		struct q_spinlock_token **percpu_token)
    > {
    > +	/*
    > +	 * Careful about reentrancy here - if we are interrupted and the code
    > +	 * running in that interrupt tries to get another queue spinlock,
    > +	 * it must not use the same percpu_token that we're using here.
    > +	 */
    > +
    > +	struct q_spinlock_token *token, *prev;
    > +
    > +	token = __this_cpu_read(*percpu_token);
    > +	token->wait = true;
    > +	prev = xchg(&lock->token, token);
    > +	__this_cpu_write(*percpu_token, prev);
    > +	while (ACCESS_ONCE(prev->wait))
    > 		cpu_relax();
    > 	q_spin_lock_mb();	/* guarantee acquire load semantics */
    > +	return token;
    > }

    Here a CPU trying to take the lock will spin on the previous
    CPU's token.

    However, the previous CPU can immediately re-use its token.

    It looks like it might be possible for the CPU trying to
    acquire the lock to miss prev->wait being set to false, and
    continue spinning.

    If this lock type is widely used, could that lead to a deadlock?

    Is there something in your code that guarantees the scenario
    I described cannot happen, and I just missed it?
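
    To make the worry concrete, here is a toy userspace model of the
    interleaving I have in mind (my own sketch, not code from the patch;
    the struct and the names are made up, and the two "CPUs" are simply
    replayed one after the other in a single thread):

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    struct token {
    	atomic_bool wait;
    };

    int main(void)
    {
    	/* CPU B queued itself behind CPU A, so B will spin on A's token. */
    	struct token tok_a = { .wait = true };

    	/* CPU A unlocks: one plain store with release semantics. */
    	atomic_store_explicit(&tok_a.wait, false, memory_order_release);

    	/* CPU A immediately takes another lock, re-using the same token. */
    	atomic_store_explicit(&tok_a.wait, true, memory_order_relaxed);

    	/* CPU B only now looks at the token - the false window is gone. */
    	if (atomic_load_explicit(&tok_a.wait, memory_order_acquire))
    		printf("waiter never sees wait == false and keeps spinning\n");

    	return 0;
    }

    If the real code can indeed re-use a token that quickly, the waiter in
    the last step would sit in its cpu_relax() loop forever.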

