Subject: Re: [PATCH 2/2] sched/fair: Skip update_blocked_averages if we are deferring load balance
On Mon, Oct 04, 2021 at 07:14:51PM +0200, Vincent Guittot wrote:
> In newidle_balance(), the scheduler skips the load balance for the new idle CPU
> when, for the first sched domain (sd) of this_rq:
>
> this_rq->avg_idle < sd->max_newidle_lb_cost
>
> When this condition is true, the costly call to update_blocked_averages()
> is not useful and simply adds overhead.
>
> Check the condition early in newidle_balance() to skip
> update_blocked_averages() when possible.
>
> Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
> Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
> ---
> kernel/sched/fair.c | 9 ++++++---
> 1 file changed, 6 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 1f78b2e3b71c..1294b78503d9 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -10841,17 +10841,20 @@ static int newidle_balance(struct rq *this_rq, struct rq_flags *rf)
> */
> rq_unpin_lock(this_rq, rf);
>
> + rcu_read_lock();
> + sd = rcu_dereference_check_sched_domain(this_rq->sd);
> +
> if (this_rq->avg_idle < sysctl_sched_migration_cost ||
> - !READ_ONCE(this_rq->rd->overload)) {
> + !READ_ONCE(this_rq->rd->overload) ||
> + (sd && this_rq->avg_idle < sd->max_newidle_lb_cost)) {

set cino=(0:0, please.

Also, people have tried in the past to get rid of the first clause here;
perhaps this new check can replace it instead of augmenting it?
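
Something like this, perhaps (an untested sketch; whether the
sd->max_newidle_lb_cost check alone covers everything the
sysctl_sched_migration_cost clause was guarding against would need
measuring):

	rcu_read_lock();
	sd = rcu_dereference_check_sched_domain(this_rq->sd);

	/* Sketch: drop the sysctl_sched_migration_cost clause entirely */
	if (!READ_ONCE(this_rq->rd->overload) ||
	    (sd && this_rq->avg_idle < sd->max_newidle_lb_cost)) {

		if (sd)
			update_next_balance(sd, &next_balance);
		rcu_read_unlock();

		goto out;
	}
	rcu_read_unlock();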

>
> - rcu_read_lock();
> - sd = rcu_dereference_check_sched_domain(this_rq->sd);
> if (sd)
> update_next_balance(sd, &next_balance);
> rcu_read_unlock();
>
> goto out;
> }
> + rcu_read_unlock();

There's another rcu_read_lock section right below this; at the very least
we can merge them.
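
Roughly like so (an untested sketch; the code between the two sections is
abbreviated from the surrounding newidle_balance(), and keeping the
read-side section open across update_blocked_averages() assumes nothing
in there sleeps):

	rcu_read_lock();
	sd = rcu_dereference_check_sched_domain(this_rq->sd);

	if (this_rq->avg_idle < sysctl_sched_migration_cost ||
	    !READ_ONCE(this_rq->rd->overload) ||
	    (sd && this_rq->avg_idle < sd->max_newidle_lb_cost)) {

		if (sd)
			update_next_balance(sd, &next_balance);
		rcu_read_unlock();

		goto out;
	}

	raw_spin_rq_unlock(this_rq);

	update_blocked_averages(this_cpu);

	/* ... second section, now sharing the one read-side critical section */
	for_each_domain(this_cpu, sd) {
		...
	}
	rcu_read_unlock();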

Also, IIRC we're running all this with preemption disabled, and since
rcu-sched got folded into rcu, all that rcu_read_*lock() stuff isn't
strictly required anymore.

(We've come full circle there: back in the day RCU implied RCU-sched and
the scheduler relied on preempt-disable for lots of this stuff, then Paul
split them, and I spent a fair amount of time adding all this
rcu_read_*lock() crud, and now he's merged them again, so it can all go
away again.)

Except, of course, I think we need to make rcu_dereference_check() happy
first :/
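
One possible direction (untested, and assuming the wrapper really is just
rcu_dereference_check() against sched_domains_mutex): widen the lockdep
condition so a non-preemptible context counts as a read-side section, now
that RCU-sched is folded into RCU:

	/*
	 * Hypothetical: accept preemption-disabled as a read-side
	 * section instead of requiring rcu_read_lock().
	 */
	#define rcu_dereference_check_sched_domain(p) \
		rcu_dereference_check((p), \
				      lockdep_is_held(&sched_domains_mutex) || \
				      rcu_read_lock_sched_held())

Then the explicit rcu_read_lock()/rcu_read_unlock() pairs around the sd
lookup could go away, provided preemption really is disabled on every
path into newidle_balance().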
