Date: Thu, 2 Sep 2021 13:02:01 +0800
From: Boqun Feng <>
Subject: Re: [RFC] locking: rwbase: Take care of ordering guarantee for fastpath reader
On Wed, Sep 01, 2021 at 01:22:42PM -0700, Davidlohr Bueso wrote:
> On Wed, 01 Sep 2021, Boqun Feng wrote:
> > diff --git a/kernel/locking/rwbase_rt.c b/kernel/locking/rwbase_rt.c
> > index 4ba15088e640..a1886fd8bde6 100644
> > --- a/kernel/locking/rwbase_rt.c
> > +++ b/kernel/locking/rwbase_rt.c
> > @@ -41,6 +41,12 @@
> >   * The risk of writer starvation is there, but the pathological use cases
> >   * which trigger it are not necessarily the typical RT workloads.
> >   *
> > + * Fast-path orderings:
> > + * The lock/unlock of readers can run in fast paths: lock and unlock are only
> > + * atomic ops, and there is no inner lock to provide ACQUIRE and RELEASE
> > + * semantics of rwbase_rt. Atomic ops then should be stronger than _acquire()
> > + * and _release() to provide necessary ordering guarantee.
>
> Perhaps the following instead?
>
Thanks.
> + * Ordering guarantees: As with any locking primitive, (load)-ACQUIRE and
> + * (store)-RELEASE semantics are guaranteed for lock and unlock operations,
> + * respectively; such that nothing leaks out of the critical region. When
> + * writers are involved this is provided through the rtmutex. However, for
> + * reader fast-paths, the atomics provide at least such guarantees.
>
However, this is a bit inaccurate: yes, writers always acquire the lock (->readers) inside the ->wait_lock critical section, but if readers run the fast paths, the atomics of the writers have to provide the ordering, because we can rely on the rtmutex orderings only if both sides run the slow paths.
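To spell out the pairing (a sketch in the usual message-passing shape, reusing names from the patch rather than quoting the exact code):

/*
 * CPU 0 (writer unlock, slow path)      CPU 1 (reader lock, fast path)
 * ================================      ==============================
 * *x = 1; // critical-section store
 * atomic_add_return_release(            // succeeds once the bias is back:
 *     READER_BIAS - bias,               atomic_try_cmpxchg(&rwb->readers,
 *     &rwb->readers);                                      &r, r + 1);
 *                                       r1 = *x; // must observe *x == 1
 */

Without the RELEASE on CPU 0, the plain store to *x could be reordered after the atomic_add() on a weakly ordered architecture, so CPU 1 could enter its read-side critical section and still observe *x == 0.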
> Also, I think you could remove most of the comments wrt _acquire or _release
> in the fastpath for each ->readers atomic op, unless it isn't obvious.
>
> > + *
> >   * Common code shared between RT rw_semaphore and rwlock
> >   */
> >
> > @@ -53,6 +59,7 @@ static __always_inline int rwbase_read_trylock(struct rwbase_rt *rwb)
> >  	 * set.
> >  	 */
> >  	for (r = atomic_read(&rwb->readers); r < 0;) {
>
> Unrelated, but we probably wanna get rid of these abusing for-loops throughout.
>
Agreed, let me see what I can do.
> > +	/* Fully-ordered if cmpxchg() succeeds, provides ACQUIRE */
> >  	if (likely(atomic_try_cmpxchg(&rwb->readers, &r, r + 1)))
>
> As Waiman suggested, this can be _acquire() - albeit we're only missing
> an L->L for acquire semantics upon returning, per the control dependency
> already guaranteeing L->S. That way we would loop with _relaxed().
>
_acquire() is fine, I think, but that's probably a separate patch. We should be careful when relaxing the ordering, and ideally have some performance numbers showing the benefits.
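If we do relax it, I'd expect the result to look something like the below (only a sketch of the suggestion, not a tested patch; note that a failed try_cmpxchg() reloads 'r', so the loop needs no explicit re-read):

static __always_inline int rwbase_read_trylock(struct rwbase_rt *rwb)
{
	int r = atomic_read(&rwb->readers);

	/*
	 * Increment reader count, if rwb->readers < 0, i.e. READER_BIAS is
	 * set.
	 */
	while (r < 0) {
		/* Relaxed on failure, ACQUIRE only on the successful cmpxchg. */
		if (likely(atomic_try_cmpxchg_acquire(&rwb->readers, &r, r + 1)))
			return 1;
	}

	return 0;
}

That shape would also get rid of the for-loop mentioned above.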
> >  		return 1;
> >  	}
> > @@ -162,6 +169,8 @@ static __always_inline void rwbase_read_unlock(struct rwbase_rt *rwb,
> >  	/*
> >  	 * rwb->readers can only hit 0 when a writer is waiting for the
> >  	 * active readers to leave the critical section.
> > +	 *
> > +	 * dec_and_test() is fully ordered, provides RELEASE.
> >  	 */
> >  	if (unlikely(atomic_dec_and_test(&rwb->readers)))
> >  		__rwbase_read_unlock(rwb, state);
> > @@ -172,7 +181,11 @@ static inline void __rwbase_write_unlock(struct rwbase_rt *rwb, int bias,
> >  {
> >  	struct rt_mutex_base *rtm = &rwb->rtmutex;
> >
> > -	atomic_add(READER_BIAS - bias, &rwb->readers);
> > +	/*
> > +	 * _release() is needed in case that reader is in fast path, pairing
> > +	 * with atomic_try_cmpxchg() in rwbase_read_trylock(), provides RELEASE
> > +	 */
> > +	(void)atomic_add_return_release(READER_BIAS - bias, &rwb->readers);
>
> Hmmm, while defined, there are no users of atomic_add_return_release() (yet?). I think
There is a usage of atomic_sub_return_release() in queued_spin_unlock() ;-)
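(Quoting from memory, it looked roughly like the below at some point; the generic implementation may have changed since, so treat this as a sketch rather than a verbatim copy:)

static __always_inline void queued_spin_unlock(struct qspinlock *lock)
{
	/*
	 * unlock() needs release semantics:
	 */
	(void)atomic_sub_return_release(_Q_LOCKED_VAL, &lock->val);
}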
> this is because the following is preferred when the return value is not really
> wanted, but only the RMW ordering it provides:
>
> +	smp_mb__before_atomic(); /* provide RELEASE semantics */
>  	atomic_add(READER_BIAS - bias, &rwb->readers);
>  	raw_spin_unlock_irqrestore(&rtm->wait_lock, flags);
>  	rwbase_rtmutex_unlock(rtm);
>
smp_mb__before_atomic() + atomic will be a smp_mb() + atomic on weakly ordered archs (e.g. ARM64 and PowerPC), while atomic_*_return_release() will be a release atomic operation (e.g. ldxr/stlxr on ARM64); the latter is considered cheaper.
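Side by side, the two options for __rwbase_write_unlock() would be (a sketch; the instruction sequences are what I'd expect the compilers to emit, not something I've measured):

	/* Option 1: upgrade to a full barrier plus a relaxed RMW. */
	smp_mb__before_atomic();	/* dmb ish on ARM64, sync on PowerPC */
	atomic_add(READER_BIAS - bias, &rwb->readers);

	/* Option 2: a release RMW (ldxr/stlxr loop, or ldaddl with LSE, on ARM64). */
	(void)atomic_add_return_release(READER_BIAS - bias, &rwb->readers);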
Regards,
Boqun
> >  	raw_spin_unlock_irqrestore(&rtm->wait_lock, flags);
> >  	rwbase_rtmutex_unlock(rtm);
> >  }
>
> Thanks,
> Davidlohr