Subject: Re: [PATCH 1/2] percpu-rw-semaphores: use light/heavy barriers
On 10/23, Paul E. McKenney wrote:
>
> On Tue, Oct 23, 2012 at 06:59:12PM +0200, Oleg Nesterov wrote:
> > Not really a comment, but a question...
> >
> > On 10/22, Mikulas Patocka wrote:
> > >
> > >  static inline void percpu_down_read(struct percpu_rw_semaphore *p)
> > >  {
> > >  	rcu_read_lock();
> > > @@ -24,22 +27,12 @@ static inline void percpu_down_read(stru
> > >  	}
> > >  	this_cpu_inc(*p->counters);
> > >  	rcu_read_unlock();
> > > +	light_mb(); /* A, between read of p->locked and read of data, paired with D */
> > >  }
> >
> > rcu_read_unlock() (or even preempt_enable) should have compiler barrier
> > semantics... But I agree, this adds more documentation for free.
>
> Although rcu_read_lock() does have compiler-barrier semantics if
> CONFIG_PREEMPT=y, it does not for CONFIG_PREEMPT=n. So the
> light_mb() (which appears to be barrier()) is needed in that case.

Indeed, I missed this.
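
(To make the hazard concrete -- a sketch only, where sem, shared_data and
reader() are hypothetical names, not taken from the patch: because
percpu_down_read() is inline and rcu_read_unlock() provides no compiler
barrier with CONFIG_PREEMPT=n, only light_mb() stops the compiler from
hoisting the caller's data load above the check of p->locked.)

	static struct percpu_rw_semaphore sem;	/* hypothetical */
	static int shared_data;			/* hypothetical */

	static int reader(void)
	{
		int v;

		percpu_down_read(&sem);	/* inlined: checks p->locked, bumps counter */
		v = shared_data;	/* light_mb() (A) is what keeps this load
					 * from moving above the p->locked check
					 * when CONFIG_PREEMPT=n */
		percpu_up_read(&sem);
		return v;
	}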

> > Ignoring the current implementation: according to the documentation,
> > synchronize_sched() would be within its rights to return immediately if
> > there is no active rcu_read_lock_sched() section. If this were possible,
> > then percpu_up_read() lacks an mb.
>
> Even if there happen to be no RCU-sched read-side critical sections
> at the current instant, synchronize_sched() is required to make sure
> that everyone agrees that whatever code is executed by the caller after
> synchronize_sched() returns happens after any of the preceding RCU
> read-side critical sections.
>
> So, if we have this, with x==0 initially:
>
> 	Task 0				Task 1
>
> 	rcu_read_lock_sched();
> 	x = 1;
> 	rcu_read_unlock_sched();
> 					synchronize_sched();
> 					r1 = x;
>
> Then the value of r1 had better be one.

Yes, yes, this too. ("active rcu_read_lock_sched() section" above
was confusing, I agree).
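
(For reference, the unlock path under discussion -- a sketch from my reading
of the patch, not quoted verbatim: percpu_up_read() itself issues only a
compiler barrier and relies on the writer's synchronize_sched() for the real
ordering, which is what the guarantee quoted below provides.)

	static inline void percpu_up_read(struct percpu_rw_semaphore *p)
	{
		light_mb();	/* B: compiler barrier only; the smp_mb() that
				 * would otherwise be needed here is supplied
				 * indirectly by the writer's synchronize_sched() */
		this_cpu_dec(*p->counters);
	}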

> * Note that this guarantee implies a further memory-ordering guarantee.
> * On systems with more than one CPU, when synchronize_sched() returns,
> * each CPU is guaranteed to have executed a full memory barrier since
> * the end of its last RCU read-side critical section whose beginning
> * preceded the call to synchronize_sched(). Note that this guarantee
> * includes CPUs that are offline, idle, or executing in user mode, as
> * well as CPUs that are executing in the kernel. Furthermore, if CPU A
> * invoked synchronize_sched(), which returned to its caller on CPU B,
> * then both CPU A and CPU B are guaranteed to have executed a full memory
> * barrier during the execution of synchronize_sched().

Great!

Thanks Paul.

Oleg.


