Subject: Re: [PATCH] sched: Prevent raising SCHED_SOFTIRQ when CPU is !active
On Tue, 15 Dec 2020 11:44:00 +0100
Anna-Maria Behnsen <anna-maria@linutronix.de> wrote:

> SCHED_SOFTIRQ is raised to trigger periodic load balancing. When a CPU is
> not active, it should not participate in load balancing.
>
> The scheduler uses nohz.idle_cpus_mask to keep track of the CPUs which can
> do idle load balancing. When a CPU is brought up, it is added to the mask
> once it reaches the active state, but on teardown the CPU stays in the mask
> until it goes offline and invokes sched_cpu_dying().
>
> When SCHED_SOFTIRQ is raised on a !active CPU, there might be a pending
> softirq when stopping the tick, which triggers a warning in the NOHZ code.
> SCHED_SOFTIRQ can also be raised by the scheduler tick, which has the same
> issue.
>
> Therefore remove the CPU from nohz.idle_cpus_mask when it is marked
> inactive and also prevent the scheduler_tick() from raising SCHED_SOFTIRQ
> after this point.
>
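For readers not familiar with the warning mentioned above: the NOHZ idle code
refuses to stop the tick while a softirq is still pending and warns about it.
Below is a paraphrased sketch of that check, roughly following
can_stop_idle_tick() in kernel/time/tick-sched.c; the helper name
(*_sketch), the trimmed ratelimiting, and the exact warning text are
illustrative, not the literal upstream code.

	/*
	 * Paraphrased sketch of the pending-softirq check in
	 * can_stop_idle_tick() (kernel/time/tick-sched.c). Details and the
	 * exact warning text are from memory, not the upstream source.
	 */
	static bool can_stop_idle_tick_sketch(int cpu)
	{
		if (unlikely(local_softirq_pending())) {
			/*
			 * A softirq is still pending, e.g. a SCHED_SOFTIRQ that
			 * was raised on a !active CPU during hotplug teardown.
			 * The tick cannot be stopped and the (ratelimited)
			 * warning fires.
			 */
			pr_warn("NOHZ tick-stop error: local softirq work is pending, handler #%02x!!!\n",
				(unsigned int)local_softirq_pending());
			return false;
		}
		return true;
	}
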

Makes sense.

Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>

-- Steve

> Signed-off-by: Anna-Maria Behnsen <anna-maria@linutronix.de>
> ---
> kernel/sched/core.c | 7 ++++++-
> kernel/sched/fair.c | 7 +++++--
> 2 files changed, 11 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 21b548b69455..69284dc121d3 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -7492,6 +7492,12 @@ int sched_cpu_deactivate(unsigned int cpu)
> struct rq_flags rf;
> int ret;
>
> + /*
> + * Remove CPU from nohz.idle_cpus_mask to prevent participating in
> + * load balancing when not active
> + */
> + nohz_balance_exit_idle(rq);
> +
> set_cpu_active(cpu, false);
> /*
> * We've cleared cpu_active_mask, wait for all preempt-disabled and RCU
> @@ -7598,7 +7604,6 @@ int sched_cpu_dying(unsigned int cpu)
>
> calc_load_migrate(rq);
> update_max_interval();
> - nohz_balance_exit_idle(rq);
> hrtick_clear(rq);
> return 0;
> }
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 04a3ce20da67..fd422b8eb859 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -10700,8 +10700,11 @@ static __latent_entropy void run_rebalance_domains(struct softirq_action *h)
> */
> void trigger_load_balance(struct rq *rq)
> {
> - /* Don't need to rebalance while attached to NULL domain */
> - if (unlikely(on_null_domain(rq)))
> + /*
> + * Don't need to rebalance while attached to NULL domain or
> + * runqueue CPU is not active
> + */
> + if (unlikely(on_null_domain(rq) || !cpu_active(cpu_of(rq))))
> return;
>
> if (time_after_eq(jiffies, rq->next_balance))
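For context on the second hunk: scheduler_tick() ends up calling
trigger_load_balance(), which is where SCHED_SOFTIRQ gets raised. The
following is a condensed, paraphrased sketch of that path with the new check
in place (accounting and non-SMP details trimmed; not the literal upstream
code).

	/* Condensed, paraphrased sketch of the tick -> softirq path (SMP case). */
	void scheduler_tick(void)
	{
		int cpu = smp_processor_id();
		struct rq *rq = cpu_rq(cpu);

		/* ... clock update, task_tick(), load tracking elided ... */

		trigger_load_balance(rq);	/* may raise SCHED_SOFTIRQ */
	}

	void trigger_load_balance(struct rq *rq)
	{
		/* With the patch: bail out on a NULL domain or a !active CPU. */
		if (unlikely(on_null_domain(rq) || !cpu_active(cpu_of(rq))))
			return;

		if (time_after_eq(jiffies, rq->next_balance))
			raise_softirq(SCHED_SOFTIRQ);	/* periodic load balancing */

		nohz_balancer_kick(rq);		/* may kick an idle CPU to balance */
	}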
