Date:	Mon, 1 Oct 2018 22:00:28 +0200
From:	Peter Zijlstra <>
Subject:	Re: [RFC][PATCH 3/3] locking/qspinlock: Optimize for x86
On Mon, Oct 01, 2018 at 06:17:00PM +0100, Will Deacon wrote:
> Hi Peter,
>
> Thanks for chewing up my afternoon ;)
I'll get you a beer in EDI ;-)
> On Wed, Sep 26, 2018 at 01:01:20PM +0200, Peter Zijlstra wrote:
> On Wed, Sep 26, 2018 at 01:01:20PM +0200, Peter Zijlstra wrote:
> > /**
> > + * set_pending_fetch_acquire - fetch the whole lock value and set pending and locked
> > + * @lock : Pointer to queued spinlock structure
> > + * Return: The previous lock value
> > + *
> > + * *,*,* -> *,1,*
> > + */
> > +static __always_inline u32 set_pending_fetch_acquire(struct qspinlock *lock)
> > +{
> > +#if defined(_Q_NO_FETCH_OR) && _Q_PENDING_BITS == 8
> > +	u32 val;
> > +
> > +	/*
> > +	 * For the !LL/SC archs that do not have a native atomic_fetch_or
> > +	 * but do have a native xchg (x86) we can do trickery and avoid the
> > +	 * cmpxchg-loop based fetch-or to improve determinism.
> > +	 *
> > +	 * We first xchg the pending byte to set PENDING, and then issue a load
> > +	 * for the rest of the word and mask out the pending byte to compute a
> > +	 * 'full' value.
> > +	 */
> > +	val = xchg_relaxed(&lock->pending, 1) << _Q_PENDING_OFFSET;
>
> If we make this an xchg_acquire(), can we drop the smp_mb__after_atomic()
> and use a plain atomic_read() to get the rest of the word?
I did consider that; but I confused myself so many times I stuck with this one. Primarily because on x86 it doesn't matter one way or the other and smp_mb() are 'easier' to reason about.
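
Concretely, I read your suggestion as something like the below. Sketch
only, not the posted patch: the load-and-mask step follows the comment
in the hunk above, since that part of the hunk wasn't quoted.

	static __always_inline u32 set_pending_fetch_acquire(struct qspinlock *lock)
	{
		u32 val;

		/*
		 * RmW on the pending byte; the ACQUIRE orders the load
		 * below after it, replacing the smp_mb__after_atomic().
		 */
		val = xchg_acquire(&lock->pending, 1) << _Q_PENDING_OFFSET;

		/* load the rest of the word and mask out the pending byte */
		val |= atomic_read(&lock->val) & ~_Q_PENDING_MASK;

		return val;
	}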
> But actually,
> consider this scenario with your patch:
>
> 1. CPU0 sees a locked val, and is about to do your xchg_relaxed() to set
>    pending.
>
> 2. CPU1 comes in and sets pending, spins on locked
>
> 3. CPU2 sees a pending and locked val, and is about to enter the head of
>    the waitqueue (i.e. it's right before xchg_tail()).
>
> 4. The locked holder unlock()s, CPU1 takes the lock() and then unlock()s
>    it, so pending and locked are now 0.
>
> 5. CPU0 sets pending and reads back zeroes for the other fields
>
> 6. CPU0 clears pending and sets locked -- it now has the lock
>
> 7. CPU2 updates tail, sees it's at the head of the waitqueue and spins
>    for locked and pending to go clear. However, it reads a stale value
>    from step (4) and attempts the atomic_try_cmpxchg() to take the lock.
>
> 8. CPU2 will fail the cmpxchg(), but then go ahead and set locked. At this
>    point we're hosed, because both CPU2 and CPU0 have the lock.
Let me draw a picture of that..
        CPU0            CPU1            CPU2            CPU3

0)                                                      lock
                                                          trylock -> (0,0,1)
1)      lock
          trylock /* fail */

2)              lock
                  trylock /* fail */
                  tas-pending -> (0,1,1)
                  wait-locked

3)                              lock
                                  trylock /* fail */
                                  tas-pending /* fail */

4)                                                      unlock -> (0,1,0)
                  clr_pnd_set_lck -> (0,0,1)
                  unlock -> (0,0,0)

5)      tas-pending -> (0,1,0)
        read-val -> (0,1,0)
6)      clr_pnd_set_lck -> (0,0,1)
7)                              xchg_tail -> (n,0,1)
                                load_acquire <- (n,0,0) (from-4)
8)                              cmpxchg /* fail */
                                set_locked()
> Is there something I'm missing that means this can't happen? I suppose
> cacheline granularity ends up giving serialisation between (4) and (7),
> but I'd *much* prefer not to rely on that because it feels horribly
> fragile.
Well, on x86 atomics are fully ordered, so xchg_tail() does in fact imply smp_mb(), and that should provide sufficient order for this not to happen, I think.
But in general, yes ick. Alternatively, making xchg_tail an ACQUIRE doesn't seem too far out..
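
Something like this for the _Q_PENDING_BITS == 8 case. Sketch only; the
current code uses xchg_relaxed() there, since the caller initializes the
MCS node before updating the tail.

	static __always_inline u32 xchg_tail(struct qspinlock *lock, u32 tail)
	{
		/*
		 * The ACQUIRE orders the subsequent load of the whole
		 * word (step 7 above) after publishing the new tail.
		 */
		return (u32)xchg_acquire(&lock->tail,
					 tail >> _Q_TAIL_OFFSET) << _Q_TAIL_OFFSET;
	}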
> Another idea I was playing with was adding test_and_set_bit_acquire()
> for this, because x86 has an instruction for that, right?
LOCK BTS, yes. So it can do a full 32bit RmW, but it cannot return the old value of the word, just the old bit (in CF).
I suppose you get rid of the whole mixed-size thing, but you still have the whole two-instruction thing.
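
For illustration, the shape would be something like the below. Sketch
only: test_and_set_bit_acquire() doesn't exist, so this uses the fully
ordered test_and_set_bit(), and note the generic bitops operate on an
unsigned long, so a real version would want a 32-bit LOCK BTSL rather
than that cast.

	static __always_inline u32 set_pending_fetch_acquire(struct qspinlock *lock)
	{
		u32 val = 0;

		/* LOCK BTS: full-word RmW, but only the old pending bit comes back */
		if (test_and_set_bit(_Q_PENDING_OFFSET, (unsigned long *)&lock->val))
			val |= _Q_PENDING_VAL;

		/* the second instruction: load the rest of the word */
		val |= atomic_read(&lock->val) & ~_Q_PENDING_MASK;

		return val;
	}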
> > +	/*
> > +	 * Ensures the tail load happens after the xchg().
> > +	 *
> > +	 *	   lock unlock    (a)
> > +	 *   xchg ---------------.
> > +	 *    (b)  lock unlock   +----- fetch_or
> > +	 *   load ---------------'
> > +	 *	   lock unlock    (c)
> > +	 *
>
> I failed miserably at parsing this comment :(
>
> I think the main issue is that I don't understand how to read the little
> diagram you've got.
Whereas fetch_or() is indivisible and has happens-before (a) and happens-after (c), the new thing is in fact divisible and also has happens-in-between (b).

In that happens-in-between window (b), we can either get a new concurrent locker, or an extant concurrent locker can make progress because an unlock happened.
But the rest of the text might indeed be very confused. I think I wrote the bulk of that when I was in fact doing an xchg16 on locked_pending, but that's fundamentally broken. I did edit it afterwards, but that might have just made it worse.