Subject: Re: [PATCH] locking/rwsem: use read_acquire in read_slowpath exit when queue is empty
On Tue, Jul 16, 2019 at 12:53:14PM -0400, Waiman Long wrote:
> On 7/16/19 12:04 PM, Jan Stancek wrote:

> > The suspected problem here is that the last *_acquire on the down_read()
> > side happens before the write side issues its *_release:
> > 1. writer: has the lock
> > 2. reader: down_read() issues *read_acquire on entry
> > 3. writer: mm->vmacache_seqnum++; downgrades lock (*fetch_add_release)
> > 4. reader: __rwsem_down_read_failed_common() finds it can take the lock and returns
> > 5. reader: observes stale mm->vmacache_seqnum
> >
> > I can reproduce the problem by running LTP mtest06 in a loop while building
> > a kernel (-j $NCPUS) in parallel. It reproduces from v4.20 up to v5.2 on an
> > arm64 HPE Apollo 70 (224 CPUs, 256GB RAM, 2 nodes) and triggers reliably
> > within ~1 hour. A patched kernel ran fine for 5+ hours with a clean dmesg.
> > Tests were done against v5.2, since commit cf69482d62d9 ("locking/rwsem:
> > Enable readers spinning on writer") makes it much harder to reproduce.
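
A minimal userspace sketch of the suspected acquire/release pairing, for
illustration only: it uses C11 atomics and pthreads rather than the kernel
primitives, and "count", "seqnum" and WRITER_LOCKED merely stand in for
sem->count, mm->vmacache_seqnum and RWSEM_WRITER_MASK. A stale seqnum would
only be observable on weakly ordered hardware such as arm64.

/*
 * Illustrative only: userspace analogue of the suspected race.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define WRITER_LOCKED	1L

static atomic_long count = WRITER_LOCKED;	/* 1. writer holds the lock */
static unsigned long seqnum;			/* data the lock protects   */

static void *writer(void *arg)
{
	seqnum++;				/* 3. update protected data */
	/* downgrade: clear the writer bit with RELEASE semantics */
	atomic_fetch_and_explicit(&count, ~WRITER_LOCKED, memory_order_release);
	return NULL;
}

static void *reader(void *arg)
{
	/*
	 * 4. the slowpath exit re-reads the count.  A relaxed load does not
	 * pair with the writer's RELEASE above, so the seqnum read below may
	 * see a stale value (5.).  memory_order_acquire here corresponds to
	 * the atomic_long_read_acquire() change in the patch.
	 */
	while (atomic_load_explicit(&count, memory_order_relaxed) & WRITER_LOCKED)
		;				/* wait for the downgrade */
	printf("reader sees seqnum=%lu\n", seqnum);	/* may still print 0 */
	return NULL;
}

int main(void)
{
	pthread_t r, w;

	pthread_create(&w, NULL, writer, NULL);
	pthread_create(&r, NULL, reader, NULL);
	pthread_join(w, NULL);
	pthread_join(r, NULL);
	return 0;
}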

> > Fixes: 4b486b535c33 ("locking/rwsem: Exit read lock slowpath if queue empty & no writer")
> > Signed-off-by: Jan Stancek <jstancek@redhat.com>
> > Cc: Waiman Long <longman@redhat.com>
> > Cc: Davidlohr Bueso <dbueso@suse.de>
> > Cc: Will Deacon <will@kernel.org>
> > Cc: Peter Zijlstra <peterz@infradead.org>
> > Cc: Ingo Molnar <mingo@redhat.com>
> > ---
> > kernel/locking/rwsem.c | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
> > index 37524a47f002..757b198d7a5b 100644
> > --- a/kernel/locking/rwsem.c
> > +++ b/kernel/locking/rwsem.c
> > @@ -1030,7 +1030,7 @@ static inline bool rwsem_reader_phase_trylock(struct rw_semaphore *sem,
> >  		 * exit the slowpath and return immediately as its
> >  		 * RWSEM_READER_BIAS has already been set in the count.
> >  		 */
> > -		if (adjustment && !(atomic_long_read(&sem->count) &
> > +		if (adjustment && !(atomic_long_read_acquire(&sem->count) &
> >  		     (RWSEM_WRITER_MASK | RWSEM_FLAG_HANDOFF))) {
> >  			raw_spin_unlock_irq(&sem->wait_lock);
> >  			rwsem_set_reader_owned(sem);
>
> The chance of taking this path is not that high. So instead of
> increasing the cost of the test by adding an acquire barrier, how about
> just adding smp_mb__after_spinlock() before the spin_unlock_irq()? This
> should have the same effect of making sure that no stale data will be
> used in the read-lock critical section.

That's actually more expensive on something like ARM64, I expect.

The far cheaper alternative is smp_acquire__after_ctrl_dep(); however, in
general Will seems to prefer using load-acquire over separate barriers,
and for x86 it doesn't matter anyway. For PowerPC the two are a wash:
both end up as LWSYNC (versus SYNC for your alternative).
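
For reference, a rough, untested sketch of that smp_acquire__after_ctrl_dep()
variant against the same hunk; it keeps the plain atomic_long_read() and
instead upgrades the control dependency on that load to ACQUIRE (the exact
placement shown is only an assumption):

		if (adjustment && !(atomic_long_read(&sem->count) &
		     (RWSEM_WRITER_MASK | RWSEM_FLAG_HANDOFF))) {
			/* pairs with the writer's RELEASE via the ctrl dep */
			smp_acquire__after_ctrl_dep();
			raw_spin_unlock_irq(&sem->wait_lock);
			rwsem_set_reader_owned(sem);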

