Subject: [PATCH tip/urgent] rcu: Permit call_rcu() from CPU_DYING notifiers
As of commit 29494be7, RCU adopts callbacks from the dying CPU in its
CPU_DYING notifier, which means that any callbacks posted by later
CPU_DYING notifiers are ignored until the CPU comes back online.
A WARN_ON_ONCE() was added to __call_rcu() by commit e5601400 to check
for this condition. Although this condition did not trigger (at least
as far as I know) during -next testing, it did recently trigger in mainline
(https://lkml.org/lkml/2012/4/2/34).
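
For illustration only, here is a minimal sketch of the kind of notifier
that can hit this case. It is not taken from the linked report; the
foo_* names, the foo_state per-CPU pointer, and the payload are
invented. It simply posts a callback with call_rcu() from its CPU_DYING
leg on the outgoing CPU, after RCU's own CPU_DYING notifier may already
have run:

/*
 * Illustrative sketch, not from the report above: foo_* and foo_state
 * are made-up names.  Written against the 2012-era CPU-hotplug
 * notifier API.
 */
#include <linux/cpu.h>
#include <linux/kernel.h>
#include <linux/notifier.h>
#include <linux/percpu.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct foo {
	struct rcu_head rcu;
	/* ... per-CPU payload ... */
};

static DEFINE_PER_CPU(struct foo *, foo_state);

static void foo_free_rcu(struct rcu_head *head)
{
	kfree(container_of(head, struct foo, rcu));
}

static int foo_cpu_notify(struct notifier_block *self,
			  unsigned long action, void *hcpu)
{
	struct foo *fp;

	switch (action & ~CPU_TASKS_FROZEN) {
	case CPU_DYING:
		/*
		 * Runs on the outgoing CPU, possibly after RCU's own
		 * CPU_DYING notifier has already adopted this CPU's
		 * callbacks, so the callback posted here is stranded
		 * until the CPU returns.
		 */
		fp = __this_cpu_read(foo_state);
		__this_cpu_write(foo_state, NULL);
		if (fp)
			call_rcu(&fp->rcu, foo_free_rcu);
		break;
	}
	return NOTIFY_OK;
}

/* Registered elsewhere via register_cpu_notifier(&foo_cpu_nb). */
static struct notifier_block foo_cpu_nb = {
	.notifier_call = foo_cpu_notify,
};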

This commit therefore causes RCU's CPU_DEAD notifier to adopt any
callbacks that were posted by CPU_DYING notifiers and removes the
WARN_ON_ONCE() from __call_rcu(). A more targeted warning for callback
posting from offline CPUs will be added as a separate commit.

Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Tested-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>

---
 rcutree.c | 31 ++++++++++++++++++++++++++++++-
 1 file changed, 30 insertions(+), 1 deletion(-)

diff --git a/kernel/rcutree.c b/kernel/rcutree.c
index 1050d6d..4c927e6 100644
--- a/kernel/rcutree.c
+++ b/kernel/rcutree.c
@@ -1406,11 +1406,41 @@ static void rcu_cleanup_dying_cpu(struct rcu_state *rsp)
 static void rcu_cleanup_dead_cpu(int cpu, struct rcu_state *rsp)
 {
 	unsigned long flags;
+	int i;
 	unsigned long mask;
 	int need_report = 0;
 	struct rcu_data *rdp = per_cpu_ptr(rsp->rda, cpu);
 	struct rcu_node *rnp = rdp->mynode;  /* Outgoing CPU's rnp. */
 
+	/* If a CPU_DYING notifier has enqueued callbacks, adopt them. */
+	if (rdp->nxtlist != NULL) {
+		struct rcu_data *receive_rdp;
+
+		local_irq_save(flags);
+		receive_rdp = per_cpu_ptr(rsp->rda, smp_processor_id());
+
+		/* Adjust the counts. */
+		receive_rdp->qlen_lazy += rdp->qlen_lazy;
+		receive_rdp->qlen += rdp->qlen;
+		rdp->qlen_lazy = 0;
+		rdp->qlen = 0;
+
+		/*
+		 * Adopt all callbacks.  The outgoing CPU was in no shape
+		 * to advance them, so make them all go through a full
+		 * grace period.
+		 */
+		*receive_rdp->nxttail[RCU_NEXT_TAIL] = rdp->nxtlist;
+		receive_rdp->nxttail[RCU_NEXT_TAIL] =
+				rdp->nxttail[RCU_NEXT_TAIL];
+		local_irq_restore(flags);
+
+		/* Initialize the outgoing CPU's callback list. */
+		rdp->nxtlist = NULL;
+		for (i = 0; i < RCU_NEXT_SIZE; i++)
+			rdp->nxttail[i] = &rdp->nxtlist;
+	}
+
 	/* Adjust any no-longer-needed kthreads. */
 	rcu_stop_cpu_kthread(cpu);
 	rcu_node_kthread_setaffinity(rnp, -1);
@@ -1820,7 +1850,6 @@ __call_rcu(struct rcu_head *head, void (*func)(struct rcu_head *rcu),
 	 * a quiescent state betweentimes.
 	 */
 	local_irq_save(flags);
-	WARN_ON_ONCE(cpu_is_offline(smp_processor_id()));
 	rdp = this_cpu_ptr(rsp->rda);
 
 	/* Add the callback to our list. */
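
As a postscript for readers unfamiliar with RCU's tail-pointer callback
lists, the sketch below (plain userspace C; the cb/cb_list names are
made up and nothing here is kernel code) shows the same splice idiom
the new CPU_DEAD path uses: append one list to another by writing
through the destination's tail pointer, take over the source's tail
pointer, then reinitialize the source.

/* Minimal standalone sketch of the tail-pointer splice; not kernel code. */
#include <stddef.h>
#include <stdio.h>

struct cb {
	struct cb *next;
	int id;
};

struct cb_list {
	struct cb *head;	/* First callback, or NULL. */
	struct cb **tail;	/* Points at the last ->next field (or at head). */
};

static void cb_list_init(struct cb_list *l)
{
	l->head = NULL;
	l->tail = &l->head;
}

static void cb_list_enqueue(struct cb_list *l, struct cb *c)
{
	c->next = NULL;
	*l->tail = c;		/* Link after the current last element. */
	l->tail = &c->next;	/* Tail now points at the new element's ->next. */
}

/* Splice all of 'from' onto the end of 'to', then empty 'from'. */
static void cb_list_adopt(struct cb_list *to, struct cb_list *from)
{
	if (from->head == NULL)
		return;
	*to->tail = from->head;	/* *receive_rdp->nxttail[RCU_NEXT_TAIL] = rdp->nxtlist; */
	to->tail = from->tail;	/* receive_rdp->nxttail[RCU_NEXT_TAIL] = rdp->nxttail[...]; */
	cb_list_init(from);	/* rdp->nxtlist = NULL; rdp->nxttail[i] = &rdp->nxtlist; */
}

int main(void)
{
	struct cb_list online, dying;
	struct cb a = { .id = 1 }, b = { .id = 2 }, c = { .id = 3 };
	struct cb *p;

	cb_list_init(&online);
	cb_list_init(&dying);
	cb_list_enqueue(&online, &a);
	cb_list_enqueue(&dying, &b);
	cb_list_enqueue(&dying, &c);

	cb_list_adopt(&online, &dying);	/* online now holds 1, 2, 3. */
	for (p = online.head; p != NULL; p = p->next)
		printf("%d\n", p->id);
	return 0;
}

The kernel version does the same thing, except that rdp->nxttail[]
keeps one tail pointer per callback segment, which is why the loop at
the end of the new code resets all RCU_NEXT_SIZE of them to
&rdp->nxtlist.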

