Subject: Re: [PATCH 2/2] sched/fair: Avoid calling sync_entity_load_avg() unnecessarily
Hi Viresh,

On 04/26/2018 12:30 PM, Viresh Kumar wrote:
> Call sync_entity_load_avg() directly from find_idlest_cpu() instead of
> select_task_rq_fair(), as that's where we need to use task's utilization
> value. And call sync_entity_load_avg() only after making sure sched
> domain spans over one of the allowed CPUs for the task.
>
> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>

The patch looks correct to me, but for EAS we also want the waking task
synced against its previous rq, i.e. for find_energy_efficient_cpu(),
which will sit next to find_idlest_cpu().

https://marc.info/?l=linux-kernel&m=152302907327168&w=2

The comment on top of the if condition would have to be changed though,
since it currently only mentions capacity_spare_wake().
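Something along these lines maybe (only a sketch, exact wording up to
you), mentioning both users of the synced utilization:

	/*
	 * We're going to need the task's util for capacity_spare_wake()
	 * in find_idlest_group() and for the energy estimation in
	 * find_energy_efficient_cpu(). Sync it up to prev_cpu's
	 * last_update_time.
	 */
	if (!(sd_flag & SD_BALANCE_FORK))
		sync_entity_load_avg(&p->se);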

I would suggest we leave the call to sync_entity_load_avg() in the slow
path of select_task_rq_fair() so that we're not forced to call it in
find_energy_efficient_cpu().
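
I.e. keep roughly what the patch removes from the slow path (again only
a sketch, with the comment reworded as above so it also covers
find_energy_efficient_cpu()):

	if (unlikely(sd)) {
		/* Slow path */

		if (!(sd_flag & SD_BALANCE_FORK))
			sync_entity_load_avg(&p->se);

		new_cpu = find_idlest_cpu(sd, p, cpu, prev_cpu, sd_flag);
	}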

> ---
> kernel/sched/fair.c | 16 +++++++---------
> 1 file changed, 7 insertions(+), 9 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 84fc74ddbd4b..5b1b4f91f132 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6199,6 +6199,13 @@ static inline int find_idlest_cpu(struct sched_domain *sd, struct task_struct *p
> if (!cpumask_intersects(sched_domain_span(sd), &p->cpus_allowed))
> return prev_cpu;
>
> + /*
> + * We need task's util for capacity_spare_wake, sync it up to prev_cpu's
> + * last_update_time.
> + */
> + if (!(sd_flag & SD_BALANCE_FORK))
> + sync_entity_load_avg(&p->se);
> +
> while (sd) {
> struct sched_group *group;
> struct sched_domain *tmp;
> @@ -6651,15 +6658,6 @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int sd_flag, int wake_f
>
> if (unlikely(sd)) {
> /* Slow path */
> -
> - /*
> - * We're going to need the task's util for capacity_spare_wake
> - * in find_idlest_group. Sync it up to prev_cpu's
> - * last_update_time.
> - */
> - if (!(sd_flag & SD_BALANCE_FORK))
> - sync_entity_load_avg(&p->se);
> -
> new_cpu = find_idlest_cpu(sd, p, cpu, prev_cpu, sd_flag);
> } else if (sd_flag & SD_BALANCE_WAKE) { /* XXX always ? */
> /* Fast path */
>
