Date: 7 Jul 2008
Subject: Re: Spinlocks: Factor our GENERIC_LOCKBREAK in order to avoid spin with irqs disable
Rik van Riel wrote:
> Alternatively, the guest could tell the host which vcpus
> are next in line for a ticket spinlock, or a vcpu that gets
> scheduled but is not supposed to grab the lock yet can give
> some CPU time to the vcpu that should get the lock next.
>

Those are possible, but they would require 1) hypervisor changes, and/or
2) guest changes no less extensive than the ones I had to make anyway.

Thomas's proposal was to modify the scheduler to try to avoid preempting
vcpus while they're in kernel mode. That's nice because it requires no
guest changes, and it seems at least somewhat successful at mitigating
the problem. But it can't completely solve the problem, and you end up
with a bunch of heuristics in the hypervisor to decide which vcpu to
preempt.

The other point, of course, is that ticket locks are massive overkill
for the problem they're trying to solve. It's one thing to introduce an
element of fairness into spinlocks; it's another to impose strict FIFO
ordering. It would be enough to make the locks "polite" by preventing a
newcomer from taking the lock while it's under contention.
Something like:

union lock {
        unsigned short word;
        struct { unsigned char lock, count; };
};

spin_lock:                        # %ebx - lock pointer
        xorw %ax, %ax             # expected value: lock = 0, count = 0
        movw $0x0001, %cx         # new value: lock = 1, count = 0
        lock cmpxchgw %cx, (%ebx) # take lock only if free with no waiters
        jnz slow                  # otherwise fall into the slow path

taken:  ret

        # slow path
slow:   lock incb 1(%ebx)         # count ourselves as a waiter (count byte)

1:      rep;nop
        cmpb $0,(%ebx)
        jnz 1b                    # wait for the lock byte to clear

        movb $1,%al               # attempt to take the lock
        xchgb %al,(%ebx)          # (xchg with memory is implicitly locked)
        testb %al,%al
        jnz 1b                    # lost the race; keep waiting

        lock decb 1(%ebx)         # got the lock; drop our waiter count
        jmp taken

spin_unlock:
        movb $0,(%ebx)            # clear the lock byte
        ret


The uncontended fastpath is similar to that of the pre-ticket locks, but
it refuses to take the lock if there are other waiters, even if the lock
is not currently held. This prevents a rapid lock-unlock cycle on one
CPU from starving another CPU, which I understand was the original
problem ticket locks were trying to solve.

But it also means that all the contended spinners get the lock in
whatever order the system decides to give it to them, rather than
imposing a strict order.
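For anyone who'd rather read the idea in C, here's a rough sketch using
the gcc __sync atomic builtins. It's only an illustration with made-up
names (polite_lock_t, PL_LOCKED, PL_WAITER), not the actual patch, and it
keeps the waiter count in the upper bits of a single word instead of a
separate byte:

        /* Low byte: lock (0 = free, 1 = held).  Upper bits: waiter count. */
        #define PL_LOCKED       0x001u
        #define PL_WAITER       0x100u

        typedef struct { volatile unsigned int word; } polite_lock_t;

        static inline void polite_lock(polite_lock_t *l)
        {
                /* Fast path: take the lock only if it's free and unwanted. */
                if (__sync_val_compare_and_swap(&l->word, 0, PL_LOCKED) == 0)
                        return;

                /* Slow path: count ourselves as a waiter, then spin. */
                __sync_fetch_and_add(&l->word, PL_WAITER);
                for (;;) {
                        while (l->word & PL_LOCKED)
                                ;       /* cpu_relax()/rep;nop would go here */
                        /* Lock looks free: race the other waiters for it. */
                        if (!(__sync_fetch_and_or(&l->word, PL_LOCKED) & PL_LOCKED))
                                break;
                }
                __sync_fetch_and_sub(&l->word, PL_WAITER);
        }

        static inline void polite_unlock(polite_lock_t *l)
        {
                __sync_fetch_and_and(&l->word, ~PL_LOCKED);
        }

As in the assembly above, the fast path bails out to the slow path
whenever the lock is held or anyone is waiting, and the waiters then take
the lock in whatever order they happen to win the race.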

> I believe the IBM PPC64 people have done some work to implement
> just that.
>

Do you have any references?

J

