Subject: Re: Preemptable Ticket Spinlock
On 04/21/2013 03:42 AM, Jiannan Ouyang wrote:
> Hello Everyone,
>
> I recently came up with a spinlock algorithm that can adapt to
> preemption, which you may be interested in.

It is overall a great and clever idea, as Rik already mentioned.

> The intuition is to
> downgrade a fair lock to an unfair lock automatically upon preemption,
> and preserve the fairness otherwise.

I also hope that being a little unfair does not undermine the original
intention of introducing ticket spinlocks.
There were some discussions about this long back, in this thread:
https://lkml.org/lkml/2010/6/3/331
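
To make sure I read that transition right, here is a minimal userspace
sketch of the waiting loop with C11 atomics (the function name and the
bool-return shape are mine, not from the patch; the unfair acquisition
itself is left to the caller):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define TIMEOUT_UNIT (1 << 14)

/* Returns true when our ticket comes up (stay fair), false when the
 * budget runs out (suspected preemption: caller may barge in unfairly). */
static bool wait_for_turn(_Atomic uint16_t *head, uint16_t my_ticket)
{
	uint16_t cur, seen = atomic_load(head);
	/* Budget proportional to our distance from the current holder. */
	uint32_t budget = (uint32_t)TIMEOUT_UNIT * (uint16_t)(my_ticket - seen);

	do {
		cur = atomic_load(head);
		if (cur == my_ticket)
			return true;	/* our turn came up: fairness preserved */
		if (cur != seen) {	/* queue advanced: recompute the budget */
			seen = cur;
			budget = (uint32_t)TIMEOUT_UNIT * (uint16_t)(my_ticket - seen);
		}
	} while (budget--);

	return false;	/* timed out: downgrade to unfair acquisition */
}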

> It is a guest side optimization,
> and can be used as a complementary technique to host side optimizations
> like co-scheduling and Pause-Loop Exiting.
>
> In my experiments, it improves VM performance by 5.32X on average, when
> running on a non paravirtual VMM, and by 7.91X when running on a VMM
> that supports a paravirtual locking interface (using a pv preemptable
> ticket spinlock), when executing a set of microbenchmarks as well as a
> realistic e-commerce benchmark.

AFAIU, the experiments are on non-PLE machines, and it would be worth
experimenting on PLE machines too, and also on bigger machines (we may
get some surprises there otherwise).
I'll wait for your next iteration of the patches with the "using lower
bit" changes.
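
Just so we are talking about the same thing, my guess at the "lower
bit" encoding (purely hypothetical names on my part, along the lines of
the slowpath-flag idea from the pv ticketlock discussions) is:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define TICKET_UNFAIR_FLAG 1u	/* hypothetical: LSB marks "lock went unfair" */
#define TICKET_INC         2u	/* tickets would then advance in steps of 2 */

/* A timed-out waiter advertises preemption by setting the flag... */
static void mark_unfair(_Atomic uint16_t *tail)
{
	atomic_fetch_or(tail, (uint16_t)TICKET_UNFAIR_FLAG);
}

/* ...and ticket comparisons mask it out. */
static bool is_my_turn(uint16_t head, uint16_t my_ticket)
{
	return (uint16_t)(head & ~TICKET_UNFAIR_FLAG) == my_ticket;
}

That would avoid separate timeout-state bookkeeping, at the cost of
halving the ticket space.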


>
> A detailed algorithm description can be found in my VEE 2013 paper,
> Preemptable Ticket Spinlocks: Improving Consolidated Performance in the
> Cloud
> Jiannan Ouyang, John R. Lange
> {ouyang,jacklange}@cs.pitt.edu
> University of Pittsburgh
> http://people.cs.pitt.edu/~ouyang/files/publication/preemptable_lock-ouyang-vee13.pdf
>
> The patch is based on stock Linux kernel 3.5.0, and tested on kernel
> 3.4.41 as well.
> http://www.cs.pitt.edu/~ouyang/files/preemptable_lock.tar.gz
>
> Thanks
> --Jiannan
>
> I'm not familiar with sending patches over email, so I just pasted it
> below; sorry for the inconvenience.
> ======================
> diff --git a/arch/x86/include/asm/spinlock.h
> b/arch/x86/include/asm/spinlock.h
> index b315a33..895d3b3 100644
> --- a/arch/x86/include/asm/spinlock.h
> +++ b/arch/x86/include/asm/spinlock.h
> @@ -48,18 +48,35 @@
>   * in the high part, because a wide xadd increment of the low part would carry
>   * up and contaminate the high part.
>   */
> +#define TIMEOUT_UNIT (1<<14)

This value seems to be at the higher end, but I hope you have
experimented enough to arrive at it. (At 1<<14, for example, a waiter
two slots behind the holder spins through roughly 32K head-reads before
going unfair.) Better again to test all these tunables on PLE machines
too.

> static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
> {
> 	register struct __raw_tickets inc = { .tail = 1 };
> +	unsigned int timeout = 0;
> +	__ticket_t current_head;
>
> 	inc = xadd(&lock->tickets, inc);
> -
> +	if (likely(inc.head == inc.tail))
> +		goto spin;
> +
> +	timeout = TIMEOUT_UNIT * (inc.tail - inc.head);
> +	do {
> +		current_head = ACCESS_ONCE(lock->tickets.head);
> +		if (inc.tail <= current_head) {
> +			goto spin;
> +		} else if (inc.head != current_head) {
> +			inc.head = current_head;
> +			timeout = TIMEOUT_UNIT * (inc.tail - inc.head);

Good idea indeed to base the timeout on the head and tail difference.
But for virtualization I believe this "directly proportional" notion is
a little tricky too.
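
For example (hypothetical numbers, only to reason about the tunable):
if the vCPU next in line is preempted, head stops advancing, and every
waiter behind it goes unfair only after a delay proportional to its
queue distance, so the barging happens in waves:

#include <stdio.h>

#define TIMEOUT_UNIT (1 << 14)

int main(void)
{
	/* Spins each waiter burns before giving up on fairness, per queue
	 * distance, assuming head never advances (front waiter preempted). */
	for (unsigned int dist = 1; dist <= 8; dist++)
		printf("distance %u: ~%u head-reads before going unfair\n",
		       dist, dist * TIMEOUT_UNIT);
	return 0;
}

Whether those per-distance delays line up with typical vCPU scheduling
slices is exactly the kind of thing I would want measured on PLE
hardware.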


