 
Subject: [PATCH 03/10] rcu/nocb: Make rcu_core() callbacks acceleration preempt-safe

From: Thomas Gleixner <tglx@linutronix.de>

While reporting a quiescent state for a given CPU, rcu_core() takes
advantage of the freshly loaded grace-period sequence number and the
locked rnp to accelerate the callbacks whose sequence numbers have been
assigned a stale value.

This action is only necessary when the rdp isn't offloaded; otherwise
the NOCB kthreads already take care of callback progression.

However, the check of the offloaded state is racy because it is
performed outside the IRQs-disabled section. On PREEMPT_RT, the
offloading process can preempt rcu_core() at that point and change the
rdp's offloaded state.

This is dangerous because rcu_core() may then end up accelerating
callbacks concurrently with the NOCB kthreads, without the appropriate
locking.
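
To make the race window concrete, here is a condensed, hand-written
sketch of the pre-patch ordering in rcu_report_qs_rdp() (the _sketch
name and the trimmed-down body are illustrative only, not a verbatim
copy of kernel/rcu/tree.c; the actual quiescent-state reporting is
omitted):

static void rcu_report_qs_rdp_sketch(struct rcu_data *rdp)
{
        unsigned long flags;
        bool needwake = false;
        /* Snapshot taken while IRQs and preemption are still enabled. */
        const bool offloaded = rcu_rdp_is_offloaded(rdp);
        struct rcu_node *rnp = rdp->mynode;

        /*
         * On PREEMPT_RT, the (de-)offloading process can preempt this
         * CPU right here and change the rdp's offloaded state...
         */
        raw_spin_lock_irqsave_rcu_node(rnp, flags);

        /*
         * ...so "offloaded" may be stale by now and callbacks can be
         * accelerated concurrently with the NOCB kthreads.
         */
        if (!offloaded)
                needwake = rcu_accelerate_cbs(rnp, rdp);

        raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
        if (needwake)
                rcu_gp_kthread_wake();
}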

Fix this by moving the offloaded check inside the rnp locking section.

Reported-and-tested-by: Valentin Schneider <valentin.schneider@arm.com>
Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
Tested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Josh Triplett <josh@joshtriplett.org>
Cc: Joel Fernandes <joel@joelfernandes.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Neeraj Upadhyay <neeraju@codeaurora.org>
Cc: Uladzislau Rezki <urezki@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
 kernel/rcu/tree.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index b236271b9022..4869a6856bf1 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2288,7 +2288,6 @@ rcu_report_qs_rdp(struct rcu_data *rdp)
         unsigned long flags;
         unsigned long mask;
         bool needwake = false;
-        const bool offloaded = rcu_rdp_is_offloaded(rdp);
         struct rcu_node *rnp;
 
         WARN_ON_ONCE(rdp->cpu != smp_processor_id());
@@ -2315,8 +2314,10 @@ rcu_report_qs_rdp(struct rcu_data *rdp)
                 /*
                  * This GP can't end until cpu checks in, so all of our
                  * callbacks can be processed during the next GP.
+                 *
+                 * NOCB kthreads have their own way to deal with that.
                  */
-                if (!offloaded)
+                if (!rcu_rdp_is_offloaded(rdp))
                         needwake = rcu_accelerate_cbs(rnp, rdp);
 
                 rcu_disable_urgency_upon_qs(rdp);
-- 
2.25.1