Subject: Re: [PATCH 4/4] rcu: Name internal polling flag
On Mon, Mar 21, 2022 at 07:11:07PM -0700, Paul E. McKenney wrote:
> On Wed, Mar 16, 2022 at 03:42:55PM +0100, Frederic Weisbecker wrote:
> > Give a proper self-explanatory name to the expedited grace period
> > internal polling flag.
> >
> > Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
> > Cc: Neeraj Upadhyay <quic_neeraju@quicinc.com>
> > Cc: Boqun Feng <boqun.feng@gmail.com>
> > Cc: Uladzislau Rezki <uladzislau.rezki@sony.com>
> > Cc: Joel Fernandes <joel@joelfernandes.org>
> > ---
> >  kernel/rcu/rcu.h      | 5 +++++
> >  kernel/rcu/tree.c     | 2 +-
> >  kernel/rcu/tree_exp.h | 9 +++++----
> >  3 files changed, 11 insertions(+), 5 deletions(-)
> >
> > diff --git a/kernel/rcu/rcu.h b/kernel/rcu/rcu.h
> > index eccbdbdaa02e..8a62bb416ba4 100644
> > --- a/kernel/rcu/rcu.h
> > +++ b/kernel/rcu/rcu.h
> > @@ -30,6 +30,11 @@
> >  #define RCU_GET_STATE_USE_NORMAL 0x2
> >  #define RCU_GET_STATE_BAD_FOR_NORMAL (RCU_GET_STATE_FROM_EXPEDITED | RCU_GET_STATE_USE_NORMAL)
> >  
> > +/*
> > + * Low-order bit definitions for polled grace-period internals.
> > + */
> > +#define RCU_EXP_SEQ_POLL_DONE 0x1
> > +
> >  /*
> >   * Return the counter portion of a sequence number previously returned
> >   * by rcu_seq_snap() or rcu_seq_current().
> > diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> > index 5da381a3cbe5..b3223b365f9f 100644
> > --- a/kernel/rcu/tree.c
> > +++ b/kernel/rcu/tree.c
> > @@ -4679,7 +4679,7 @@ static void __init rcu_init_one(void)
> >  			spin_lock_init(&rnp->exp_lock);
> >  			mutex_init(&rnp->boost_kthread_mutex);
> >  			raw_spin_lock_init(&rnp->exp_poll_lock);
> > -			rnp->exp_seq_poll_rq = 0x1;
> > +			rnp->exp_seq_poll_rq = RCU_EXP_SEQ_POLL_DONE;
> >  			INIT_WORK(&rnp->exp_poll_wq, sync_rcu_do_polled_gp);
> >  		}
> >  	}
> > diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
> > index c4a19c6a83cf..7ccb909d6355 100644
> > --- a/kernel/rcu/tree_exp.h
> > +++ b/kernel/rcu/tree_exp.h
> > @@ -910,14 +910,14 @@ static void sync_rcu_do_polled_gp(struct work_struct *wp)
> >  	unsigned long s;
> >  
> >  	s = READ_ONCE(rnp->exp_seq_poll_rq);
> > -	if (s & 0x1)
> > +	if (s & RCU_EXP_SEQ_POLL_DONE)
> >  		return;
> >  	while (!sync_exp_work_done(s))
> >  		__synchronize_rcu_expedited(true);
>
> One additional question. If we re-read rnp->exp_seq_poll_rq on each pass
> through the loop, wouldn't we have less trouble with counter wrap?

We can indeed do that, though it won't eliminate the possibility of wrapping.
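
Something along these lines, I suppose (untested sketch, just to
illustrate the re-read in sync_rcu_do_polled_gp(); the locked recheck
at the end of the function would stay as it is):

	s = READ_ONCE(rnp->exp_seq_poll_rq);
	if (s & RCU_EXP_SEQ_POLL_DONE)
		return;
	while (!sync_exp_work_done(s)) {
		__synchronize_rcu_expedited(true);
		/* Pick up any newer request rather than looping on a stale snapshot. */
		s = READ_ONCE(rnp->exp_seq_poll_rq);
		if (s & RCU_EXP_SEQ_POLL_DONE)
			return;
	}

Each pass then compares against the most recently requested sequence
number, which should bound how stale the snapshot used by the worker
can get.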

>
> Thanx, Paul
>
> >  	raw_spin_lock_irqsave(&rnp->exp_poll_lock, flags);
> >  	s = rnp->exp_seq_poll_rq;
> > -	if (!(s & 0x1) && sync_exp_work_done(s))
> > -		WRITE_ONCE(rnp->exp_seq_poll_rq, s | 0x1);
> > +	if (!(s & RCU_EXP_SEQ_POLL_DONE) && sync_exp_work_done(s))
> > +		WRITE_ONCE(rnp->exp_seq_poll_rq, s | RCU_EXP_SEQ_POLL_DONE);
> >  	raw_spin_unlock_irqrestore(&rnp->exp_poll_lock, flags);
> >  }
> >
> > @@ -946,7 +946,8 @@ unsigned long start_poll_synchronize_rcu_expedited(void)
> >  	rnp = rdp->mynode;
> >  	if (rcu_init_invoked())
> >  		raw_spin_lock_irqsave(&rnp->exp_poll_lock, flags);
> > -	if ((rnp->exp_seq_poll_rq & 0x1) || ULONG_CMP_LT(rnp->exp_seq_poll_rq, s)) {
> > +	if ((rnp->exp_seq_poll_rq & RCU_EXP_SEQ_POLL_DONE) ||
> > +	    ULONG_CMP_LT(rnp->exp_seq_poll_rq, s)) {
> >  		WRITE_ONCE(rnp->exp_seq_poll_rq, s);
> >  		if (rcu_init_invoked())
> >  			queue_work(rcu_gp_wq, &rnp->exp_poll_wq);
> > --
> > 2.25.1
> >
