Date:    Wed, 2 Oct 2019 17:17:19 +0200
From:    Sebastian Andrzej Siewior <>
Subject: Re: [PATCH] percpu-refcount: Use normal instead of RCU-sched"
On 2019-10-02 08:08:52 [-0700], Paul E. McKenney wrote:
> On Wed, Oct 02, 2019 at 01:22:53PM +0200, Sebastian Andrzej Siewior wrote:
> > This is a revert of commit
> >    a4244454df129 ("percpu-refcount: use RCU-sched insted of normal RCU")
> >
> > which claims the only reason for using RCU-sched is
> >    "rcu_read_[un]lock() … are slightly more expensive than preempt_disable/enable()"
> >
> > and
> >    "As the RCU critical sections are extremely short, using sched-RCU
> >    shouldn't have any latency implications."
> >
> > The problem with using RCU-sched here is that it disables preemption and
> > the callback must not acquire any sleeping locks like spinlock_t on
> > PREEMPT_RT which is the case with some of the users.
>
> Looks good in general, but changing to RCU-preempt does not change the
> fact that the callbacks execute with bh disabled.  There is a newish
> queue_rcu_work() that invokes a workqueue handler after a grace period.
>
> Or am I missing your point here?
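(For context, queue_rcu_work() hands the work off to a workqueue handler that
runs in process context only after an RCU grace period has elapsed. Below is a
minimal sketch of that pattern; the names my_rwork, my_release_workfn and
defer_release are invented for the example and are not part of the patch under
discussion.)

#include <linux/workqueue.h>

static struct rcu_work my_rwork;

/* Runs in process context after a grace period, so sleeping locks are fine here. */
static void my_release_workfn(struct work_struct *work)
{
        pr_info("deferred release after grace period\n");
}

static void defer_release(void)
{
        INIT_RCU_WORK(&my_rwork, my_release_workfn);
        /* The handler is queued on system_wq once a grace period has passed. */
        queue_rcu_work(system_wq, &my_rwork);
}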
That is fine; the issue is not the RCU callback. The problem is that percpu_ref_put_many() as of now does:
        rcu_read_lock_sched(); /* aka preempt_disable(); */

        if (__ref_is_percpu(ref, &percpu_count))
                this_cpu_sub(*percpu_count, nr);
        else if (unlikely(atomic_long_sub_and_test(nr, &ref->count)))
                ref->release(ref);
and then the callback invoked via ref->release() acquires a spinlock_t
with disabled preemption.

> 							Thanx, Paul
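For reference, the revert boils down to switching that read-side section back
to normal RCU. A rough sketch (paraphrased from the patch, not the verbatim
diff) of percpu_ref_put_many() after the change:

static inline void percpu_ref_put_many(struct percpu_ref *ref, unsigned long nr)
{
        unsigned long __percpu *percpu_count;

        rcu_read_lock();        /* preemptible read-side section on PREEMPT_RT */

        if (__ref_is_percpu(ref, &percpu_count))
                this_cpu_sub(*percpu_count, nr);
        else if (unlikely(atomic_long_sub_and_test(nr, &ref->count)))
                ref->release(ref);      /* may now take a spinlock_t safely */

        rcu_read_unlock();
}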
Sebastian