From: Vincent Guittot <>
Date: Thu, 8 Apr 2021 16:51:04 +0200
Subject: Re: [PATCH] sched/fair: Rate limit calls to update_blocked_averages() for NOHZ
On Wed, 7 Apr 2021 at 19:19, Tim Chen <tim.c.chen@linux.intel.com> wrote:
>
> On 4/7/21 7:02 AM, Vincent Guittot wrote:
> > Hi Tim,
> >
> > On Wed, 24 Mar 2021 at 17:05, Tim Chen <tim.c.chen@linux.intel.com> wrote:
> >>
> >> On 3/24/21 6:44 AM, Vincent Guittot wrote:
> >>> Hi Tim,
> >>>
> >>> IIUC your problem, we call update_blocked_averages() but because of:
> >>>
> >>>         if (this_rq->avg_idle < curr_cost + sd->max_newidle_lb_cost) {
> >>>                 update_next_balance(sd, &next_balance);
> >>>                 break;
> >>>         }
> >>>
> >>> the for_each_domain loop stops even before running load_balance on the 1st
> >>> sched domain level which means that update_blocked_averages() was called
> >>> unnecessarily.
> >>>
> >>
> >> That's right
> >>
> >>> And this is even more true with a small sysctl_sched_migration_cost which allows newly
> >>> idle LB for very small this_rq->avg_idle. We could wonder why you set such a low value
> >>> for sysctl_sched_migration_cost which is lower than the max_newidle_lb_cost of the
> >>> smallest domain but that's probably because of task_hot().
> >>>
> >>> if avg_idle is lower than the sd->max_newidle_lb_cost of the 1st sched_domain, we should
> >>> skip spin_unlock/lock and for_each_domain() loop entirely
> >>>
> >>> Maybe something like below:
> >>>
> >>
> >> The patch makes sense. I'll ask our benchmark team to queue this patch for testing.
> >
> > Do you have feedback from your benchmark team ?
> >
>
> Vincent,
>
> Thanks for following up. I just got some data back from the benchmark team.
> The performance didn't change with your patch, and the overall cpu% of
> update_blocked_averages also remained at about the same level. My first
> thought was that perhaps this update still didn't catch all the calls to
> update_blocked_averages:
>
>         if (this_rq->avg_idle < sysctl_sched_migration_cost ||
> -           !READ_ONCE(this_rq->rd->overload)) {
> +           !READ_ONCE(this_rq->rd->overload) ||
> +           (sd && this_rq->avg_idle < sd->max_newidle_lb_cost)) {
>
> To experiment, I added one more check on next_balance to further limit
> the path that actually does an idle load balance, using the next_balance time:
>
>         if (this_rq->avg_idle < sysctl_sched_migration_cost ||
> -           !READ_ONCE(this_rq->rd->overload)) {
> +           time_before(jiffies, this_rq->next_balance) ||
> +           !READ_ONCE(this_rq->rd->overload) ||
> +           (sd && this_rq->avg_idle < sd->max_newidle_lb_cost)) {
>
> I was surprised to find that the overall cpu% consumption of update_blocked_averages
> and the throughput of the benchmark still didn't change much. So I took a
> peek into the profile and found that the update_blocked_averages calls had
> shifted to the idle load balancer. The calls to update_blocked_averages were
> reduced in newidle_balance, so the patch did what we intended. But the
> overall rate of calls to
At least we have removed the useless call to update_blocked_averages() in newidle_balance() when we will not perform any newly idle load balance.
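For reference, a rough sketch of how the quoted check sits at the top of newidle_balance() with that change applied (simplified from kernel/sched/fair.c around v5.12; locking and statistics are omitted, so treat it as an illustration, not the exact upstream code):

	/* Sketch: early bail-out in newidle_balance(), simplified */
	static int newidle_balance(struct rq *this_rq, struct rq_flags *rf)
	{
		unsigned long next_balance = jiffies + HZ;
		int this_cpu = this_rq->cpu;
		struct sched_domain *sd;

		rcu_read_lock();
		sd = rcu_dereference_check_sched_domain(this_rq->sd);

		if (this_rq->avg_idle < sysctl_sched_migration_cost ||
		    !READ_ONCE(this_rq->rd->overload) ||
		    (sd && this_rq->avg_idle < sd->max_newidle_lb_cost)) {
			/*
			 * Expected idle time is too short to run a newly idle
			 * load balance: record when to balance next and bail
			 * out before calling update_blocked_averages().
			 */
			if (sd)
				update_next_balance(sd, &next_balance);
			rcu_read_unlock();
			goto out;
		}
		rcu_read_unlock();

		update_blocked_averages(this_cpu);
		/* ... for_each_domain() loop running load_balance() ... */
	out:
		return 0;
	}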
> update_blocked_averages remained roughly the same, shifting from
> newidle_balance to run_rebalance_domains.
>
>    100.00%  (ffffffff810cf070)
>            |
>            ---update_blocked_averages
>               |
>               |--95.47%--run_rebalance_domains
>               |          __do_softirq
>               |          |
>               |          |--94.27%--asm_call_irq_on_stack
>               |          |          do_softirq_own_stack
So the calls to update_blocked_averages() mainly come from SCHED_SOFTIRQ, and as a result not from the new path do_idle()->nohz_run_idle_balance(), which this patch added to defer the call to update_nohz_stats() until after newidle_balance() has run and the CPU is about to enter idle.
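For context, the deferred path added by the patch looks roughly like this (a sketch based on the patch under discussion; the code that was finally merged may differ slightly):

	/* kernel/sched/idle.c: do_idle() runs the deferred update (sketch) */
	static void do_idle(void)
	{
		int cpu = smp_processor_id();

		/* ... */
		nohz_run_idle_balance(cpu);	/* after newidle_balance(), before idling */
		/* ... idle loop ... */
	}

	/* kernel/sched/fair.c: deferred update of blocked load (sketch) */
	void nohz_run_idle_balance(int cpu)
	{
		unsigned int flags;

		flags = atomic_fetch_andnot(NOHZ_NEWILB_KICK, nohz_flags(cpu));

		/*
		 * Update the blocked load only if no SCHED_SOFTIRQ is about
		 * to happen; that softirq would do the same update anyway.
		 */
		if ((flags == NOHZ_NEWILB_KICK) && !need_resched())
			_nohz_idle_balance(cpu_rq(cpu), NOHZ_STATS_KICK, CPU_IDLE);
	}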
>               |          |          |
>               |          |          |--93.74%--irq_exit_rcu
>               |          |          |          |
>               |          |          |          |--88.20%--sysvec_apic_timer_interrupt
>               |          |          |          |          asm_sysvec_apic_timer_interrupt
>               |          |          |          |          |
> ...
>               |
>               |
>                --4.53%--newidle_balance
>                          pick_next_task_fair
>
> I was expecting the idle load balancer to be rate limited to 60 Hz, which
Why 60 Hz?
> should be 15 jiffies apart on the test system with CONFIG_HZ_250.
> When I did a trace on a single CPU, I saw that update_blocked_averages
> was often called 1 to 4 jiffies apart, which is a much higher rate
> than I expected. I haven't taken a closer look yet. But you may
Two things can trigger a SCHED_SOFTIRQ/run_rebalance_domains:
- the need for an update of blocked load, which should not happen more than
  once every 32ms, i.e. a rate of around 30Hz
- the need for a load balance of a sched_domain. The min interval for a
  sched_domain is its weight when the CPU is idle, which is usually a few
  jiffies
Both triggers are raised from the scheduler tick, as sketched below.
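A simplified sketch of trigger_load_balance() in kernel/sched/fair.c (details and SMP guards omitted):

	/* Sketch of trigger_load_balance(), called on every scheduler tick */
	void trigger_load_balance(struct rq *rq)
	{
		/*
		 * Periodic load balance: rq->next_balance is derived from the
		 * per-sched_domain balance intervals, which start at
		 * sd->weight jiffies when the CPU is idle.
		 */
		if (time_after_eq(jiffies, rq->next_balance))
			raise_softirq(SCHED_SOFTIRQ);

		/*
		 * NOHZ path: may kick the idle load balancer, among other
		 * things to update the blocked load; this is rate limited
		 * internally to about once every 32ms (~30Hz).
		 */
		nohz_balancer_kick(rq);
	}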
The only idea that I have for now is that we spend less time in newidle_balance(), which changes the dynamics of your system.
In your trace, could you check whether update_blocked_averages() is called during the tick? And is the current task the idle task?
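One hypothetical way to log that from inside the kernel (a debugging sketch, not part of any patch; ubla_trace() is an invented helper meant to be called at the top of update_blocked_averages()):

	/* Debugging sketch: log the context update_blocked_averages() runs in */
	static inline void ubla_trace(int cpu)
	{
		trace_printk("ubla: cpu=%d comm=%s hardirq=%d softirq=%d idle=%d\n",
			     cpu, current->comm, !!in_irq(),
			     !!in_serving_softirq(), is_idle_task(current));
	}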
Vincent
> have a better idea. I won't have access to the test system and workload
> till probably next week.
>
> Thanks.
>
> Tim