Date:	Mon, 10 Jun 2013 14:54:31 -0700
From:	"Paul E. McKenney" <>
Subject:	Re: [PATCH RFC ticketlock] Auto-queued ticketlock
On Mon, Jun 10, 2013 at 02:35:06PM -0700, Eric Dumazet wrote:
> On Sun, 2013-06-09 at 12:36 -0700, Paul E. McKenney wrote:
> > Breaking up locks is better than implementing high-contention locks, but
> > if we must have high-contention locks, why not make them automatically
> > switch between light-weight ticket locks at low contention and queued
> > locks at high contention?
> > 
> > This commit therefore allows ticket locks to automatically switch between
> > pure ticketlock and queued-lock operation as needed.  If too many CPUs
> > are spinning on a given ticket lock, a queue structure will be allocated
> > and the lock will switch to queued-lock operation.  When the lock becomes
> > free, it will switch back into ticketlock operation.  The low-order bit
> > of the head counter is used to indicate that the lock is in queued mode,
> > which forces an unconditional mismatch between the head and tail counters.
> > This approach means that the common-case code path under conditions of
> > low contention is very nearly that of a plain ticket lock.
> > 
> > A fixed number of queueing structures is statically allocated in an
> > array.  The ticket-lock address is used to hash into an initial element,
> > but if that element is already in use, it moves to the next element.  If
> > the entire array is already in use, continue to spin in ticket mode.
> > 
> > This has been only lightly tested in the kernel, though a userspace
> > implementation has survived substantial testing.
> > 
> > Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> 
> This looks a great idea ;)
Glad you like it! Hopefully workloads like it as well. ;-)
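To make the head/tail trick concrete: tickets advance in steps of 2, so a
head with its low-order bit set can never equal any tail, and every spinner
falls through to the queued-mode check.  Below is a sketch of the resulting
lock-side fastpath; tkt_spin_pass() is the patch's slowpath entry point, but
its assumed signature and the exact loop body here are a reconstruction, not
the patch verbatim.

	bool tkt_spin_pass(arch_spinlock_t *ap, struct __raw_tickets inc);

	static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
	{
		register struct __raw_tickets inc = { .tail = 2 };

		inc = xadd(&lock->tickets, inc); /* Take a ticket: tail += 2. */
		for (;;) {
			if (inc.head == inc.tail)
				break;		/* Our turn, lock acquired. */
			if ((inc.head & 0x1) &&	/* Odd head: queued mode. */
			    tkt_spin_pass(lock, inc))
				break;		/* Slowpath handed us the lock. */
			cpu_relax();
			inc.head = ACCESS_ONCE(lock->tickets.head);
		}
		barrier();	/* Keep the critical section below the loop. */
	}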
> > +
> > +static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
> > +{
> > +	__ticket_t head = 2;
> > +
> > +	head = xadd(&lock->tickets.head, 2);
> 
> head = xadd(&lock->tickets.head, head);
Yikes! Good catch, fixed.
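For reference, here is the unlock fastpath with that one-liner folded in,
reassembled from the fragments quoted in this thread (the comments are added):

	static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
	{
		__ticket_t head = 2;

		head = xadd(&lock->tickets.head, head);	/* head += 2, fetch old value. */
		if (head & 0x1)			/* Old head odd: queued mode. */
			tkt_q_do_wake(lock);	/* Hand off to the first queued CPU. */
	}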
> > +	if (head & 0x1)
> > +		tkt_q_do_wake(lock);
> > +}
> > +#endif /* #else #ifndef CONFIG_TICKET_LOCK_QUEUED */
> 
> > + */
> > +void tkt_q_do_wake(arch_spinlock_t *asp)
> > +{
> > +	struct tkt_q_head *tqhp;
> > +	struct tkt_q *tqp;
> > +
> > +	/* If the queue is still being set up, wait for it. */
> > +	while ((tqhp = tkt_q_find_head(asp)) == NULL)
> > +		cpu_relax();
> > +
> > +	for (;;) {
> > +
> > +		/* Find the first queue element. */
> > +		tqp = ACCESS_ONCE(tqhp->spin);
> > +		if (tqp != NULL)
> > +			break;  /* Element exists, hand off lock. */
> > +		if (tkt_q_try_unqueue(asp, tqhp))
> > +			return; /* No element, successfully removed queue. */
> > +		cpu_relax();
> > +	}
> > +	if (ACCESS_ONCE(tqhp->head_tkt) != -1)
> > +		ACCESS_ONCE(tqhp->head_tkt) = -1;
> > +	smp_mb(); /* Order pointer fetch and assignment against handoff. */
> > +	ACCESS_ONCE(tqp->cpu) = -1;
> > +}
> 
> EXPORT_SYMBOL(tkt_q_do_wake) ?
Good point, just in case we want to use spinlocks in modules. ;-) Same for tkt_spin_pass(), I guess.
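For anyone skimming the thread, the lookup that tkt_q_do_wake() spins on
works roughly as the commit message describes: hash on the lock address into
a fixed array, then probe forward.  The sketch below is a guess at that
shape, not the patch's code; TKT_Q_NQUEUES, the ->ref field, and the hash
function are invented for illustration (head_tkt and spin are taken from the
hunk quoted above).

	#define TKT_Q_NQUEUES 64	/* Assumed size of the static array. */

	struct tkt_q_head {
		arch_spinlock_t *ref;	/* Lock this queue serves (assumed field). */
		s64 head_tkt;		/* Ticket of queue head, -1 if none. */
		struct tkt_q *spin;	/* First queued element. */
	};

	static struct tkt_q_head tkt_q_heads[TKT_Q_NQUEUES];

	static struct tkt_q_head *tkt_q_find_head(arch_spinlock_t *asp)
	{
		int i;
		int start = ((unsigned long)asp / sizeof(*asp)) % TKT_Q_NQUEUES;

		/* Linear probe: setup may have skipped already-claimed elements. */
		for (i = 0; i < TKT_Q_NQUEUES; i++) {
			struct tkt_q_head *tqhp =
				&tkt_q_heads[(start + i) % TKT_Q_NQUEUES];

			if (ACCESS_ONCE(tqhp->ref) == asp)
				return tqhp;	/* This element serves our lock. */
		}
		return NULL;	/* Queue still being set up, or already released. */
	}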
> Hmm, unfortunately I lack time this week to fully read the patch !
I suspect that there is very little danger of this patch going in this week, so you should have some additional time. ;-)
Thanx, Paul