Date:    Tue, 31 Oct 2023 09:53:08 +0100
From:    Peter Zijlstra <>
Subject: Re: [PATCH] sched: Don't call any kfree*() API in do_set_cpus_allowed()
On Mon, Oct 30, 2023 at 08:14:18PM -0400, Waiman Long wrote:
> Commit 851a723e45d1 ("sched: Always clear user_cpus_ptr in
> do_set_cpus_allowed()") added a kfree() call to free any user-provided
> affinity mask, if present. It was changed later to use kfree_rcu() in
> commit 9a5418bc48ba ("sched/core: Use kfree_rcu() in
> do_set_cpus_allowed()") to avoid a circular locking dependency problem.
>
> It turns out that even kfree_rcu() isn't safe for avoiding the
> circular locking problem. As reported by the kernel test robot,
> the following circular locking dependency still exists:
>
>   &rdp->nocb_lock --> rcu_node_0 --> &rq->__lock
>
> So no kfree*() API can be used in do_set_cpus_allowed(). To prevent
> memory leaks, the unused user-provided affinity mask is now saved on a
> lockless list to be reused later by subsequent sched_setaffinity() calls.
>
> Without kfree_rcu(), the internal cpumask_rcuhead union can be removed
> too, as a lockless list entry only holds a single pointer.
>
> Fixes: 851a723e45d1 ("sched: Always clear user_cpus_ptr in do_set_cpus_allowed()")
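[ For reference, the reuse scheme the changelog describes could look
  roughly like the sketch below. This is illustrative only, not the
  actual patch: the names free_masks, stash_user_mask() and
  get_user_mask() are made up, and the consumer-side spinlock is one
  way to satisfy llist_del_first()'s single-consumer requirement. ]

/*
 * Sketch of the changelog's scheme: a mask retired under rq->__lock
 * (where no kfree*() is safe) is parked on a lockless list; a later
 * sched_setaffinity() call pops it for reuse instead of allocating.
 */
#include <linux/llist.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/cpumask.h>

static LLIST_HEAD(free_masks);			/* retired user masks */
static DEFINE_SPINLOCK(free_masks_del_lock);	/* serializes llist_del_first() */

/* Producer side: runs with rq->__lock held, so it must not free or sleep. */
static void stash_user_mask(struct cpumask *mask)
{
	/* Assumes a cpumask allocation can hold one llist_node pointer. */
	llist_add((struct llist_node *)mask, &free_masks);
}

/* Consumer side: sched_setaffinity() context, where allocation is fine. */
static struct cpumask *get_user_mask(void)
{
	struct llist_node *node;

	spin_lock(&free_masks_del_lock);
	node = llist_del_first(&free_masks);
	spin_unlock(&free_masks_del_lock);

	return node ? (struct cpumask *)node : kmalloc(cpumask_size(), GFP_KERNEL);
}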
Bah, or we fix RCU... Paul, how insane is the below?
---
 kernel/rcu/tree.c | 31 +++++++++++++++++++++----------
 1 file changed, 21 insertions(+), 10 deletions(-)
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index cb1caefa8bd0..4b8e26a028ee 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -754,15 +754,20 @@ static int dyntick_save_progress_counter(struct rcu_data *rdp)
 }
 
 /*
- * Return true if the specified CPU has passed through a quiescent
- * state by virtue of being in or having passed through an dynticks
- * idle state since the last call to dyntick_save_progress_counter()
- * for this same CPU, or by virtue of having been offline.
+ * Returns positive if the specified CPU has passed through a quiescent state
+ * by virtue of being in or having passed through an dynticks idle state since
+ * the last call to dyntick_save_progress_counter() for this same CPU, or by
+ * virtue of having been offline.
+ *
+ * Returns negative if the specified CPU needs a force resched.
+ *
+ * Returns zero otherwise.
  */
 static int rcu_implicit_dynticks_qs(struct rcu_data *rdp)
 {
-	unsigned long jtsq;
 	struct rcu_node *rnp = rdp->mynode;
+	unsigned long jtsq;
+	int ret = 0;
 
 	/*
 	 * If the CPU passed through or entered a dynticks idle phase with
@@ -847,8 +852,8 @@ static int rcu_implicit_dynticks_qs(struct rcu_data *rdp)
 	    (time_after(jiffies, READ_ONCE(rdp->last_fqs_resched) + jtsq * 3) ||
 	     rcu_state.cbovld)) {
 		WRITE_ONCE(rdp->rcu_urgent_qs, true);
-		resched_cpu(rdp->cpu);
 		WRITE_ONCE(rdp->last_fqs_resched, jiffies);
+		ret = -1;
 	}
 
 	/*
@@ -891,7 +896,7 @@ static int rcu_implicit_dynticks_qs(struct rcu_data *rdp)
 		}
 	}
 
-	return 0;
+	return ret;
 }
 
 /* Trace-event wrapper function for trace_rcu_future_grace_period. */
@@ -2255,11 +2260,11 @@ void rcu_sched_clock_irq(int user)
  */
 static void force_qs_rnp(int (*f)(struct rcu_data *rdp))
 {
-	int cpu;
+	unsigned long mask, rsmask = 0;
 	unsigned long flags;
-	unsigned long mask;
 	struct rcu_data *rdp;
 	struct rcu_node *rnp;
+	int cpu, ret;
 
 	rcu_state.cbovld = rcu_state.cbovldnext;
 	rcu_state.cbovldnext = false;
@@ -2284,10 +2289,13 @@ static void force_qs_rnp(int (*f)(struct rcu_data *rdp))
 		}
 		for_each_leaf_node_cpu_mask(rnp, cpu, rnp->qsmask) {
 			rdp = per_cpu_ptr(&rcu_data, cpu);
-			if (f(rdp)) {
+			ret = f(rdp);
+			if (ret > 0) {
 				mask |= rdp->grpmask;
 				rcu_disable_urgency_upon_qs(rdp);
 			}
+			if (ret < 0)
+				rsmask |= 1UL << (cpu - rnp->grplo);
 		}
 		if (mask != 0) {
 			/* Idle/offline CPUs, report (releases rnp->lock). */
@@ -2296,6 +2304,9 @@ static void force_qs_rnp(int (*f)(struct rcu_data *rdp))
 			/* Nothing to do here, so just drop the lock. */
 			raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 		}
+
+		for_each_leaf_node_cpu_mask(rnp, cpu, rsmask)
+			resched_cpu(cpu);
 	}
 }
 
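[ The shape of the fix, spelled out: rcu_implicit_dynticks_qs() no
  longer calls resched_cpu() (which takes rq->__lock) while
  force_qs_rnp() still holds rnp->lock. It just returns negative,
  force_qs_rnp() collects those CPUs in rsmask, and the resched_cpu()
  calls happen only after rnp->lock has been released, so the
  rcu_node_0 --> &rq->__lock step of the reported chain disappears.
  A stand-alone userspace sketch of that "record under the lock, act
  after it" pattern, with made-up names: ]

/*
 * Illustrative userspace analogue, not kernel code: lock_a plays the
 * role of rnp->lock, lock_b of rq->__lock. Under lock_a we only
 * *record* which CPUs need a kick; the kicks, which take lock_b, are
 * issued after lock_a is dropped, so no a --> b ordering is created.
 */
#include <stdint.h>
#include <stdio.h>
#include <pthread.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

static int needs_kick(int cpu)		/* safe to call under lock_a */
{
	return cpu & 1;			/* stand-in predicate */
}

static void kick_cpu(int cpu)		/* takes lock_b: never run under lock_a */
{
	pthread_mutex_lock(&lock_b);
	printf("kick cpu %d\n", cpu);
	pthread_mutex_unlock(&lock_b);
}

static void scan_and_kick(int ncpus)
{
	uint64_t pending = 0;
	int cpu;

	pthread_mutex_lock(&lock_a);
	for (cpu = 0; cpu < ncpus && cpu < 64; cpu++)
		if (needs_kick(cpu))
			pending |= UINT64_C(1) << cpu;	/* record only */
	pthread_mutex_unlock(&lock_a);

	/* lock_a dropped: taking lock_b here creates no lock ordering. */
	for (cpu = 0; cpu < ncpus && cpu < 64; cpu++)
		if (pending & (UINT64_C(1) << cpu))
			kick_cpu(cpu);
}

int main(void)
{
	scan_and_kick(8);
	return 0;
}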