Subject: Re: Preemptable Ticket Spinlock

On Mon, 2013-04-22 at 16:46 -0400, Jiannan Ouyang wrote:
> On Mon, Apr 22, 2013 at 4:08 PM, Peter Zijlstra <peterz@infradead.org> wrote:
>
> >
> > I much prefer the entire series from Jeremy since it maintains the
> > ticket semantics and doesn't degrade the lock to unfair under
> > contention.
> >
> > Now I suppose there's a reason it's not been merged yet, and I suspect
> > it's the !paravirt hotpath impact which wasn't rightly justified or
> > some such, so maybe someone can work on that.. dunno.
> >
> >
>
> In my paper, I compared preemptable-lock and pv_lock on KVM from
> Raghu and Jeremy.

Which pv_lock? The current pv spinlock mess is basically the old unfair
thing. The later patch series I referred to earlier implemented a
paravirt ticket lock, which should perform much better under overcommit.
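
Roughly, the idea (a minimal userspace sketch, not Jeremy's actual
patches; SPIN_THRESHOLD and the pv_wait/pv_kick hooks are made-up
stand-ins for the real hypercalls): keep the FIFO ticket order, and only
drop into a hypervisor-assisted slowpath after spinning for a bounded
time:

#include <stdatomic.h>

#define SPIN_THRESHOLD (1 << 11)            /* arbitrary for this sketch */

struct ticketlock {
	atomic_uint next;                   /* ticket dispenser */
	atomic_uint owner;                  /* ticket currently served */
};

/* Stand-in for a hypercall that blocks this vcpu until 'ticket' is kicked. */
static void pv_wait(struct ticketlock *lock, unsigned int ticket)
{
	(void)lock; (void)ticket;
}

/* Stand-in for a hypercall that wakes the vcpu waiting on 'ticket'. */
static void pv_kick(struct ticketlock *lock, unsigned int ticket)
{
	(void)lock; (void)ticket;
}

static void ticket_lock(struct ticketlock *lock)
{
	unsigned int me = atomic_fetch_add(&lock->next, 1);

	for (;;) {
		for (int i = 0; i < SPIN_THRESHOLD; i++)
			if (atomic_load(&lock->owner) == me)
				return;     /* fastpath: our turn, FIFO order */
		pv_wait(lock, me);          /* slowpath: block this vcpu */
	}
}

static void ticket_unlock(struct ticketlock *lock)
{
	unsigned int next = atomic_fetch_add(&lock->owner, 1) + 1;

	pv_kick(lock, next);                /* wake the next waiter, if any */
}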

> Results show that:
> - preemptable-lock improves performance significantly without paravirt support

But it completely wrecks our native spinlock implementation, so that's
not going to happen of course ;-)

> - preemptable-lock can also be paravirtualized, which outperforms
> pv_lock, especially when overcommitted by 3 or more

See above..

> - pv-preemptable-lock has much less performance variance compared to
> pv_lock, because it adapts to preemption within the VM
> rather than relying on rescheduling, which increases VM interference

I would say it has a _much_ worse worst case (and thus worse variance)
than the paravirt ticket implementation from Jeremy. While the full
paravirt ticket lock results in vcpu scheduling, it does maintain
fairness.

If you drop strict fairness you can end up in unbounded starvation
cases and those are very ugly indeed.
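
To see why, consider the degenerate unfair case (a toy test-and-set
lock, not Jiannan's preemptable algorithm): any spinner may grab the
lock the moment it is released, and nothing bounds how often a given
waiter loses the race:

#include <stdatomic.h>
#include <stdbool.h>

struct tas_lock {
	atomic_bool locked;
};

static void tas_lock_acquire(struct tas_lock *lock)
{
	/* No queue, no ticket: whoever wins the exchange gets the lock.
	 * The same CPU can win every time, so under contention a given
	 * waiter can starve indefinitely. */
	while (atomic_exchange(&lock->locked, true))
		while (atomic_load(&lock->locked))
			;                   /* spin on reads to limit bouncing */
}

static void tas_lock_release(struct tas_lock *lock)
{
	atomic_store(&lock->locked, false);
}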

> It would still be very interesting to conduct more experiments to
> compare these two, to see if the fairness enforced by pv_lock is
> mandatory, whether preemptable-lock outperforms pv_lock in most cases,
> and how they interact with PLE (pause-loop exiting).

Be more specific: pv_lock as currently upstream is a trainwreck, mostly
done because pure ticket spinners under vcpu preemption are even worse.


