Subject: Re: [PATCH RFC ticketlock] Auto-queued ticketlock
On Sun, 2013-06-09 at 12:36 -0700, Paul E. McKenney wrote:
> Breaking up locks is better than implementing high-contention locks, but
> if we must have high-contention locks, why not make them automatically
> switch between light-weight ticket locks at low contention and queued
> locks at high contention?
>
> This commit therefore allows ticket locks to automatically switch between
> pure ticketlock and queued-lock operation as needed. If too many CPUs
> are spinning on a given ticket lock, a queue structure will be allocated
> and the lock will switch to queued-lock operation. When the lock becomes
> free, it will switch back into ticketlock operation. The low-order bit
> of the head counter is used to indicate that the lock is in queued mode,
> which forces an unconditional mismatch between the head and tail counters.
> This approach means that the common-case code path under conditions of
> low contention is very nearly that of a plain ticket lock.
>
> A fixed number of queueing structures is statically allocated in an
> array. The ticket-lock address is used to hash into an initial element,
> but if that element is already in use, it moves to the next element. If
> the entire array is already in use, continue to spin in ticket mode.
>
> This has been only lightly tested in the kernel, though a userspace
> implementation has survived substantial testing.
>
> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
>

This looks like a great idea ;)
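
If I read the head/tail encoding right, the trick is that tickets only ever
advance in steps of 2, so bit 0 of the head counter is free to flag queued
mode, and while it is set no spinner's (even) ticket can ever match head.
A throwaway userspace model of just that encoding (toy names and types,
not the patch's):

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/* Toy stand-in for arch_spinlock_t: 16-bit head/tail ticket counters. */
struct toy_tickets {
	uint16_t head;	/* ticket now being served; bit 0 flags queued mode */
	uint16_t tail;	/* next ticket to hand out; always even */
};

#define TOY_TICKET_INC	2	/* tickets advance by 2, keeping bit 0 free */
#define TOY_QUEUED_FLAG	0x1

static bool toy_in_queued_mode(const struct toy_tickets *t)
{
	return t->head & TOY_QUEUED_FLAG;
}

/* A spinner holding an (even) ticket checks whether it is being served. */
static bool toy_my_turn(const struct toy_tickets *t, uint16_t my_ticket)
{
	return t->head == my_ticket;
}

int main(void)
{
	struct toy_tickets t = {
		.head = 2 * TOY_TICKET_INC,	/* serving ticket 4 */
		.tail = 5 * TOY_TICKET_INC,	/* next ticket is 10 */
	};

	printf("ticket mode: my_turn(4)=%d\n", toy_my_turn(&t, 4));

	t.head |= TOY_QUEUED_FLAG;	/* switch the lock to queued mode */
	printf("queued mode: in_queued_mode=%d my_turn(4)=%d\n",
	       toy_in_queued_mode(&t), toy_my_turn(&t, 4));
	return 0;
}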

> +
> +static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
> +{
> + __ticket_t head = 2;
> +
> + head = xadd(&lock->tickets.head, 2);

head = xadd(&lock->tickets.head, head);

(i.e. reuse the head variable that was just initialized to 2, rather than
repeating the constant)

> + if (head & 0x1)
> + tkt_q_do_wake(lock);
> +}
> +#endif /* #else #ifndef CONFIG_TICKET_LOCK_QUEUED */
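
For what it's worth, here is how I read that unlock fast path: the old head
value comes back from the atomic add, and an odd value means the lock is in
queued mode. A plain userspace model, with __atomic_fetch_add() standing in
for the real xadd() (toy names, not the patch code):

#include <stdio.h>
#include <stdint.h>

static uint16_t toy_head;	/* stands in for lock->tickets.head */

/* Stand-in for the real tkt_q_do_wake(): hand the lock to the first waiter. */
static void toy_tkt_q_do_wake(void)
{
	printf("low bit was set: lock is in queued mode, wake first waiter\n");
}

static void toy_ticket_spin_unlock(void)
{
	/* xadd() returns the old value; model it with the GCC/Clang builtin. */
	uint16_t old = __atomic_fetch_add(&toy_head, 2, __ATOMIC_RELEASE);

	if (old & 0x1)
		toy_tkt_q_do_wake();
}

int main(void)
{
	toy_head = 7;			/* odd head: queued mode */
	toy_ticket_spin_unlock();
	printf("head is now %u\n", (unsigned int)toy_head);	/* prints 9 */
	return 0;
}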

> + */
> +void tkt_q_do_wake(arch_spinlock_t *asp)
> +{
> + struct tkt_q_head *tqhp;
> + struct tkt_q *tqp;
> +
> + /* If the queue is still being set up, wait for it. */
> + while ((tqhp = tkt_q_find_head(asp)) == NULL)
> + cpu_relax();
> +
> + for (;;) {
> +
> + /* Find the first queue element. */
> + tqp = ACCESS_ONCE(tqhp->spin);
> + if (tqp != NULL)
> + break; /* Element exists, hand off lock. */
> + if (tkt_q_try_unqueue(asp, tqhp))
> + return; /* No element, successfully removed queue. */
> + cpu_relax();
> + }
> + if (ACCESS_ONCE(tqhp->head_tkt) != -1)
> + ACCESS_ONCE(tqhp->head_tkt) = -1;
> + smp_mb(); /* Order pointer fetch and assignment against handoff. */
> + ACCESS_ONCE(tqp->cpu) = -1;
> +}

EXPORT_SYMBOL(tkt_q_do_wake) ?
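
Also, if I understand the hand-off at the end of tkt_q_do_wake() correctly,
the queued waiter presumably spins on its own tqp->cpu field until the
unlocker stores -1 into it. A toy two-thread version of that handshake
(my guess at the waiter side, not code from the patch):

#include <stdio.h>
#include <pthread.h>
#include <unistd.h>

/* Toy queue element: cpu == -1 means "you now own the lock". */
struct toy_tkt_q {
	int cpu;
};

static struct toy_tkt_q toy_elem = { .cpu = 3 };

/* Waiter side: spin until the unlocker writes -1 into our element. */
static void *toy_waiter(void *arg)
{
	(void)arg;
	while (__atomic_load_n(&toy_elem.cpu, __ATOMIC_ACQUIRE) != -1)
		;	/* cpu_relax() in the kernel */
	printf("waiter: got the lock\n");
	return NULL;
}

int main(void)
{
	pthread_t tid;

	pthread_create(&tid, NULL, toy_waiter, NULL);
	usleep(1000);	/* let the waiter start spinning */

	/* Unlocker side, as in the quoted tkt_q_do_wake(): hand off. */
	__atomic_store_n(&toy_elem.cpu, -1, __ATOMIC_RELEASE);

	pthread_join(tid, NULL);
	return 0;
}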

Hmm, unfortunately I lack time this week to fully read the patch!




