Subject: Re: [BUG] rcu-tasks : should take care of sparse cpu masks
On Thu, Mar 31, 2022 at 03:57:36PM -0700, Eric Dumazet wrote:
> On Thu, Mar 31, 2022 at 3:54 PM Eric Dumazet <edumazet@google.com> wrote:
> >
> > On Thu, Mar 31, 2022 at 3:42 PM Paul E. McKenney <paulmck@kernel.org> wrote:
> > >
> > > On Thu, Mar 31, 2022 at 02:45:25PM -0700, Eric Dumazet wrote:
> > > > Hi Paul
> > > >
> > > > It seems you assume per-cpu ptrs for arbitrary indexes (< nr_cpu_ids) are valid.
> > >
> > > Gah! I knew I was forgetting something...
> > >
> > > But just to check, is this a theoretical problem or something you hit
> > > on real hardware? (For the rest of this email, I am assuming the latter.)
> >
> > Code review really...
> >
> > >
> > > > What do you think of the (untested) following patch ?
> > >
> > > One issue with this patch is that the contention could be unpredictable,
> > > or worse, vary among CPUs, especially if the cpu_possible_mask were
> > > oddly distributed.
> > >
> > > So might it be better to restrict this to all on CPU 0 on the one hand
> > > and completely per-CPU on the other? (Or all on the boot CPU, in case
> > > I am forgetting some misbegotten architecture that can run without a
> > > CPU 0.)
> >
> > If I understand correctly, cblist_init_generic() could set up
> > percpu_enqueue_shift to something smaller than order_base_2(nr_cpu_ids).
> >
> > Meaning that we could reach a non-zero idx in
> > (smp_processor_id() >> percpu_enqueue_shift).
> >
> > So even if CPU 0 is always present (I am not sure this is guaranteed,
> > but this seems reasonable), we could still attempt a
> > per_cpu_ptr(PTR, not_present_cpu), and get garbage.
>
> Also, you mention CPU 0, but I do not see where CPU binding is
> performed for the kthread?
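
To make the failure mode concrete, here is a minimal userspace sketch;
nr_cpu_ids, the possible mask, and the shift are all assumed, illustrative
values rather than anything from a real machine:

#include <stdio.h>

int main(void)
{
	int nr_cpu_ids = 8;			/* assumed */
	unsigned long possible_mask = 0x41;	/* CPUs 0 and 6 possible (assumed) */
	int shift = 2;				/* smaller than order_base_2(8) == 3 */
	int cpu;

	for (cpu = 0; cpu < nr_cpu_ids; cpu++) {
		int idx;

		if (!(possible_mask & (1UL << cpu)))
			continue;		/* skip not-possible CPUs */
		idx = cpu >> shift;
		printf("CPU %d -> idx %d (%s)\n", cpu, idx,
		       (possible_mask & (1UL << idx)) ? "possible" : "NOT possible");
	}
	return 0;
}

Here CPU 6 maps to idx 1, which is not a possible CPU, so a per_cpu_ptr()
lookup on that index would hand back a pointer to memory that was never
set up as a per-CPU area.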

The initial setting of ->percpu_enqueue_shift forces all in-range CPU
IDs to shift down to zero. The grace-period kthread is allowed to run
where it likes. The callback lists are protected by locking, even in
the case of local access, so this should be safe.

Or am I missing your point?

Thanx, Paul
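
The property relied on above can be checked compactly: with the shift set
to order_base_2(nr_cpu_ids), every in-range CPU ID shifts down to index 0.
A minimal userspace sketch, with an assumed nr_cpu_ids of 64:

#include <assert.h>

int main(void)
{
	int nr_cpu_ids = 64;	/* assumed */
	int shift = 6;		/* order_base_2(64) == 6 */
	int cpu;

	/* Every in-range CPU ID maps to queue index 0. */
	for (cpu = 0; cpu < nr_cpu_ids; cpu++)
		assert((cpu >> shift) == 0);
	return 0;
}

Index 0 is then the only per-CPU slot dereferenced at this stage, which is
why the question of whether CPU 0 is guaranteed to be present matters.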

> > > > Thanks.
> > > >
> > > > diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
> > > > index 99cf3a13954cfb17828fbbeeb884f11614a526a9..df3785be4022e903d9682dd403464aa9927aa5c2 100644
> > > > --- a/kernel/rcu/tasks.h
> > > > +++ b/kernel/rcu/tasks.h
> > > > @@ -273,13 +273,17 @@ static void call_rcu_tasks_generic(struct rcu_head *rhp, rcu_callback_t func,
> > > >  	bool needadjust = false;
> > > >  	bool needwake;
> > > >  	struct rcu_tasks_percpu *rtpcp;
> > > > +	int ideal_cpu, chosen_cpu;
> > > >
> > > >  	rhp->next = NULL;
> > > >  	rhp->func = func;
> > > >  	local_irq_save(flags);
> > > >  	rcu_read_lock();
> > > > -	rtpcp = per_cpu_ptr(rtp->rtpcpu,
> > > > -			    smp_processor_id() >> READ_ONCE(rtp->percpu_enqueue_shift));
> > > > +
> > > > +	ideal_cpu = smp_processor_id() >> READ_ONCE(rtp->percpu_enqueue_shift);
> > > > +	chosen_cpu = cpumask_next(ideal_cpu - 1, cpu_online_mask);
> > > > +
> > > > +	rtpcp = per_cpu_ptr(rtp->rtpcpu, chosen_cpu);
> > > >  	if (!raw_spin_trylock_rcu_node(rtpcp)) { // irqs already disabled.
> > > >  		raw_spin_lock_rcu_node(rtpcp); // irqs already disabled.
> > > >  		j = jiffies;
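
For reference, cpumask_next(n, mask) returns the first set CPU strictly
greater than n, so the patch's cpumask_next(ideal_cpu - 1, cpu_online_mask)
yields the first online CPU at or above ideal_cpu. A rough userspace
approximation, with an assumed online mask:

#include <stdio.h>

#define NBITS 8

/* Userspace stand-in for cpumask_next(): first set bit strictly
 * greater than n, or NBITS if there is none. */
static int mask_next(int n, unsigned long mask)
{
	int cpu;

	for (cpu = n + 1; cpu < NBITS; cpu++)
		if (mask & (1UL << cpu))
			return cpu;
	return NBITS;
}

int main(void)
{
	unsigned long online_mask = 0x51;	/* CPUs 0, 4, 6 online (assumed) */
	int ideal_cpu = 1;

	/* First online CPU at or above ideal_cpu == 1: CPU 4. */
	printf("chosen_cpu = %d\n", mask_next(ideal_cpu - 1, online_mask));
	return 0;
}

Note that several ideal CPUs can collapse onto the same online CPU this
way, which is the unpredictable-contention concern raised earlier in the
thread.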
