Subject: Re: [Patch v2] rcu: simplify the calculation of rcu_state.ncpus
On Sun, Apr 19, 2020 at 06:41:37PM -0700, Paul E. McKenney wrote:
>On Sun, Apr 19, 2020 at 09:57:15PM +0000, Wei Yang wrote:
>> There is only 1 bit set in mask, which means the difference between
>> oldmask and the new one would be at the position where the bit is set in
>> mask.
>>
>> Based on this knowledge, rcu_state.ncpus could be calculated by checking
>> whether mask is already set in rnp->expmaskinitnext.
>>
>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>
>Queued, thank you!
>
>I updated the commit log as shown below, so please let me know if I
>messed something up.
>

Looks pretty good.

> Thanx, Paul
>
>------------------------------------------------------------------------
>
>commit 2ff1b8268456b1a476f8b79672c87d32d4f59024
>Author: Wei Yang <richard.weiyang@gmail.com>
>Date: Sun Apr 19 21:57:15 2020 +0000
>
> rcu: Simplify the calculation of rcu_state.ncpus
>
> There is only 1 bit set in mask, which means that the only difference
> between oldmask and the new one will be at the position where the bit is
> set in mask. This commit therefore updates rcu_state.ncpus by checking
> whether the bit in mask is already set in rnp->expmaskinitnext.
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
>
>diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
>index f288477..6d39485 100644
>--- a/kernel/rcu/tree.c
>+++ b/kernel/rcu/tree.c
>@@ -3732,10 +3732,9 @@ void rcu_cpu_starting(unsigned int cpu)
> {
> 	unsigned long flags;
> 	unsigned long mask;
>-	int nbits;
>-	unsigned long oldmask;
> 	struct rcu_data *rdp;
> 	struct rcu_node *rnp;
>+	bool newcpu;
>
> 	if (per_cpu(rcu_cpu_started, cpu))
> 		return;
>@@ -3747,12 +3746,10 @@ void rcu_cpu_starting(unsigned int cpu)
> 	mask = rdp->grpmask;
> 	raw_spin_lock_irqsave_rcu_node(rnp, flags);
> 	WRITE_ONCE(rnp->qsmaskinitnext, rnp->qsmaskinitnext | mask);
>-	oldmask = rnp->expmaskinitnext;
>+	newcpu = !(rnp->expmaskinitnext & mask);
> 	rnp->expmaskinitnext |= mask;
>-	oldmask ^= rnp->expmaskinitnext;
>-	nbits = bitmap_weight(&oldmask, BITS_PER_LONG);
> 	/* Allow lockless access for expedited grace periods. */
>-	smp_store_release(&rcu_state.ncpus, rcu_state.ncpus + nbits); /* ^^^ */
>+	smp_store_release(&rcu_state.ncpus, rcu_state.ncpus + newcpu); /* ^^^ */
> 	ASSERT_EXCLUSIVE_WRITER(rcu_state.ncpus);
> 	rcu_gpnum_ovf(rnp, rdp); /* Offline-induced counter wrap? */
> 	rdp->rcu_onl_gp_seq = READ_ONCE(rcu_state.gp_seq);
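For anyone reading along, here is a minimal user-space sketch (not kernel
code) of why the two calculations agree when mask has exactly one bit set:
the old path counts the bits that flipped in expmaskinitnext, the new path
just tests whether that single bit was already present. bitmap_weight() is
stood in for by the compiler builtin __builtin_popcountl(), and the example
values are made up for illustration.

#include <assert.h>
#include <stdio.h>

int main(void)
{
	unsigned long oldmask = 0x5;	/* example: CPUs 0 and 2 already recorded */
	unsigned long mask = 1UL << 3;	/* single bit for the incoming CPU */

	/* Old approach: XOR the mask before and after, then count the bits. */
	unsigned long newmask = oldmask | mask;
	int nbits = __builtin_popcountl(oldmask ^ newmask);

	/* New approach: the bit flips iff it was not already set. */
	int newcpu = !(oldmask & mask);

	assert(nbits == newcpu);	/* both 1: this CPU is new */

	mask = 1UL << 2;		/* bit already set in oldmask */
	newmask = oldmask | mask;
	assert(__builtin_popcountl(oldmask ^ newmask) == !(oldmask & mask));	/* both 0 */

	printf("old and new calculations agree\n");
	return 0;
}

Since only one bit can ever flip, the popcount is always 0 or 1, which is
exactly what the boolean newcpu encodes.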

--
Wei Yang
Help you, Help me
