Date: Wed, 1 Sep 2021 13:22:42 -0700
From: Davidlohr Bueso <>
Subject: Re: [RFC] locking: rwbase: Take care of ordering guarantee for fastpath reader
On Wed, 01 Sep 2021, Boqun Feng wrote:
>diff --git a/kernel/locking/rwbase_rt.c b/kernel/locking/rwbase_rt.c
>index 4ba15088e640..a1886fd8bde6 100644
>--- a/kernel/locking/rwbase_rt.c
>+++ b/kernel/locking/rwbase_rt.c
>@@ -41,6 +41,12 @@
> * The risk of writer starvation is there, but the pathological use cases
> * which trigger it are not necessarily the typical RT workloads.
> *
>+ * Fast-path orderings:
>+ * The lock/unlock of readers can run in fast paths: lock and unlock are only
>+ * atomic ops, and there is no inner lock to provide ACQUIRE and RELEASE
>+ * semantics of rwbase_rt. Atomic ops then should be stronger than _acquire()
>+ * and _release() to provide necessary ordering guarantee.
Perhaps the following instead?
+ * Ordering guarantees: As with any locking primitive, (load)-ACQUIRE and
+ * (store)-RELEASE semantics are guaranteed for lock and unlock operations,
+ * respectively; such that nothing leaks out of the critical region. When
+ * writers are involved this is provided through the rtmutex. However, for
+ * reader fast-paths, the atomics provide at least such guarantees.
Also, I think you could remove most of the comments wrt _acquire or _release in the fastpath for each ->readers atomic op, except where it isn't obvious.
>+ *
> * Common code shared between RT rw_semaphore and rwlock
> */
>
>@@ -53,6 +59,7 @@ static __always_inline int rwbase_read_trylock(struct rwbase_rt *rwb)
>         * set.
>         */
>        for (r = atomic_read(&rwb->readers); r < 0;) {
Unrelated, but we probably wanna get rid of this for-loop abuse throughout.
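Something like the below, perhaps (an illustrative sketch of the rwbase_read_trylock() body only, not a tested patch):

        int r = atomic_read(&rwb->readers);

        /* Increment the reader count only while READER_BIAS is set. */
        while (r < 0) {
                /* try_cmpxchg() refreshes r on failure, so just retry. */
                if (likely(atomic_try_cmpxchg(&rwb->readers, &r, r + 1)))
                        return 1;
        }
        return 0;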
>+               /* Fully-ordered if cmpxchg() succeeds, provides ACQUIRE */
>                if (likely(atomic_try_cmpxchg(&rwb->readers, &r, r + 1)))
As Waiman suggested, this can be _acquire() - albeit we're only missing an L->L for acquire semantics upon returning, per the control dependency already guaranteeing L->S. That way we would loop with _relaxed().
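Concretely, the fast-path increment would then be (sketch):

                /*
                 * ACQUIRE on success; a failed cmpxchg is relaxed and
                 * merely refreshes r for the next iteration.
                 */
                if (likely(atomic_try_cmpxchg_acquire(&rwb->readers, &r, r + 1)))
                        return 1;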
>                        return 1;
>        }
>@@ -162,6 +169,8 @@ static __always_inline void rwbase_read_unlock(struct rwbase_rt *rwb,
>        /*
>         * rwb->readers can only hit 0 when a writer is waiting for the
>         * active readers to leave the critical section.
>+        *
>+        * dec_and_test() is fully ordered, provides RELEASE.
>         */
>        if (unlikely(atomic_dec_and_test(&rwb->readers)))
>                __rwbase_read_unlock(rwb, state);
>@@ -172,7 +181,11 @@ static inline void __rwbase_write_unlock(struct rwbase_rt *rwb, int bias,
> {
>        struct rt_mutex_base *rtm = &rwb->rtmutex;
>
>-       atomic_add(READER_BIAS - bias, &rwb->readers);
>+       /*
>+        * _release() is needed in case that reader is in fast path, pairing
>+        * with atomic_try_cmpxchg() in rwbase_read_trylock(), provides RELEASE
>+        */
>+       (void)atomic_add_return_release(READER_BIAS - bias, &rwb->readers);
Hmmm, while defined, there are no users of atomic_add_return_release() (yet?). I think this is because the following is preferred when the return value is not really wanted, but only the RMW ordering it provides:
+       smp_mb__before_atomic(); /* provide RELEASE semantics */
        atomic_add(READER_BIAS - bias, &rwb->readers);
        raw_spin_unlock_irqrestore(&rtm->wait_lock, flags);
        rwbase_rtmutex_unlock(rtm);
>        raw_spin_unlock_irqrestore(&rtm->wait_lock, flags);
>        rwbase_rtmutex_unlock(rtm);
> }
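For the record, the resulting pairing would then look as follows (illustrative sketch; the data accesses are stand-ins, not from the patch):

        /* Writer unlock side, cf. __rwbase_write_unlock(): */
        /* ... writer's critical-section stores ... */
        smp_mb__before_atomic();        /* full barrier, stronger than RELEASE */
        atomic_add(READER_BIAS - bias, &rwb->readers);

        /* Reader fast path, cf. rwbase_read_trylock(): */
        if (atomic_try_cmpxchg_acquire(&rwb->readers, &r, r + 1)) {
                /* ACQUIRE: guaranteed to observe the writer's stores. */
        }

That is, a reader taking the fast path after the bias is restored cannot miss anything the writer did inside its critical section.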
Thanks,
Davidlohr