Date: Sun, 2 Oct 2022
From: Paul E. McKenney
Subject: Re: [PATCH RFC v2 rcu 2/8] srcu: Create an srcu_read_lock_nmisafe() and srcu_read_unlock_nmisafe()
On Sun, Oct 02, 2022 at 05:55:16PM +0200, Frederic Weisbecker wrote:
> On Thu, Sep 29, 2022 at 11:07:25AM -0700, Paul E. McKenney wrote:
> > @@ -1090,7 +1121,7 @@ static unsigned long srcu_gp_start_if_needed(struct srcu_struct *ssp,
> >  	int ss_state;
> >
> >  	check_init_srcu_struct(ssp);
> > -	idx = srcu_read_lock(ssp);
> > +	idx = __srcu_read_lock_nmisafe(ssp);
>
> Why do we need to force the atomic based version here (even if CONFIG_NEED_SRCU_NMI_SAFE=y)?

In kernels built with CONFIG_NEED_SRCU_NMI_SAFE=n, we of course need it.
As you say, in kernels built with CONFIG_NEED_SRCU_NMI_SAFE=y, we don't.
But it doesn't hurt to always use __srcu_read_lock_nmisafe() here, and
this is nowhere near a fastpath, so there is little benefit to using
__srcu_read_lock() even when it is safe to do so.

In addition, note that a given srcu_struct structure's first grace
period can execute before its first reader, in which case we have no
way of knowing whether __srcu_read_lock_nmisafe() or __srcu_read_lock()
is the right choice.

So this code always does it the slow(ish) safe way.
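
For reference, the difference between the two flavors comes down to how
the per-CPU lock count is incremented. Roughly, and simplified (this is
a sketch in the spirit of this series, not the exact patch text, so
take the field names with a grain of salt):

	/* Ordinary flavor: this_cpu_inc() is cheap, but on architectures
	 * lacking NMI-safe this_cpu operations (CONFIG_NEED_SRCU_NMI_SAFE=y)
	 * an NMI arriving in the middle of the read-modify-write could
	 * corrupt the count.
	 */
	int __srcu_read_lock(struct srcu_struct *ssp)
	{
		int idx;

		idx = READ_ONCE(ssp->srcu_idx) & 0x1;
		this_cpu_inc(ssp->sda->srcu_lock_count[idx].counter);
		smp_mb(); /* B */  /* Avoid leaking the critical section. */
		return idx;
	}

	/* NMI-safe flavor: a full atomic increment works from any context,
	 * at the cost of a heavier read-side operation.
	 */
	int __srcu_read_lock_nmisafe(struct srcu_struct *ssp)
	{
		int idx;
		struct srcu_data *sdp = raw_cpu_ptr(ssp->sda);

		idx = READ_ONCE(ssp->srcu_idx) & 0x1;
		atomic_long_inc(&sdp->srcu_lock_count[idx]);
		smp_mb__after_atomic(); /* B */  /* Avoid leaking the critical section. */
		return idx;
	}

Either way the same counter gets incremented, so having the
grace-period path take the atomic route unconditionally works no
matter which flavor this srcu_struct's readers later turn out to use,
which is exactly the property we need when the first grace period can
beat the first reader.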

> >  	ss_state = smp_load_acquire(&ssp->srcu_size_state);
> >  	if (ss_state < SRCU_SIZE_WAIT_CALL)
> >  		sdp = per_cpu_ptr(ssp->sda, 0);
> > @@ -1123,7 +1154,7 @@ static unsigned long srcu_gp_start_if_needed(struct srcu_struct *ssp,
> >  		srcu_funnel_gp_start(ssp, sdp, s, do_norm);
> >  	else if (needexp)
> >  		srcu_funnel_exp_start(ssp, sdp_mynode, s);
> > -	srcu_read_unlock(ssp, idx);
> > +	__srcu_read_unlock_nmisafe(ssp, idx);
> >  	return s;
> >  }
> >
> > @@ -1427,13 +1458,13 @@ void srcu_barrier(struct srcu_struct *ssp)
> >  	/* Initial count prevents reaching zero until all CBs are posted. */
> >  	atomic_set(&ssp->srcu_barrier_cpu_cnt, 1);
> >
> > -	idx = srcu_read_lock(ssp);
> > +	idx = __srcu_read_lock_nmisafe(ssp);
>
> And same here?

Yes, same here. ;-)

							Thanx, Paul

> Thanks.
>
> >  	if (smp_load_acquire(&ssp->srcu_size_state) < SRCU_SIZE_WAIT_BARRIER)
> >  		srcu_barrier_one_cpu(ssp, per_cpu_ptr(ssp->sda, 0));
> >  	else
> >  		for_each_possible_cpu(cpu)
> >  			srcu_barrier_one_cpu(ssp, per_cpu_ptr(ssp->sda, cpu));
> > -	srcu_read_unlock(ssp, idx);
> > +	__srcu_read_unlock_nmisafe(ssp, idx);
> >
> >  	/* Remove the initial count, at which point reaching zero can happen. */
> >  	if (atomic_dec_and_test(&ssp->srcu_barrier_cpu_cnt))
> > --
> > 2.31.1.189.g2e36527f23
> >
