Subject: Re: [RFC PATCH] sched/fair: Choose the CPU where short task is running during wake up
Hi Prateek,
On 2022-09-29 at 22:28:38 +0530, K Prateek Nayak wrote:
> Hello Gautham and Chenyu,
>
> On 9/26/2022 8:09 PM, Gautham R. Shenoy wrote:
> > Hello Prateek,
> >
> > On Mon, Sep 26, 2022 at 11:20:16AM +0530, K Prateek Nayak wrote:
> >
> > [..snip..]
> >
> >>> @@ -6050,7 +6063,8 @@ wake_affine_idle(int this_cpu, int prev_cpu, int sync)
> >>> if (available_idle_cpu(this_cpu) && cpus_share_cache(this_cpu, prev_cpu))
> >>> return available_idle_cpu(prev_cpu) ? prev_cpu : this_cpu;
> >>>
> >>> - if (sync && cpu_rq(this_cpu)->nr_running == 1)
> >>> + if ((sync && cpu_rq(this_cpu)->nr_running == 1) ||
> >>> + is_short_task(cpu_curr(this_cpu)))
> >>
> >> This change seems to optimize for affine wakeups, which benefits
> >> tasks with a producer-consumer pattern but is not ideal for Stream.
> >> Currently the logic will end up doing an affine wakeup even if the
> >> sync flag is not set:
> >>
> >> stream-4135 [029] d..2. 353.580953: sched_waking: comm=stream pid=4129 prio=120 target_cpu=082
> >> stream-4135 [029] d..2. 353.580957: select_task_rq_fair: wake_affine_idle: Select this_cpu: sync(0) rq->nr_running(1) is_short_task(1)
> >> stream-4135 [029] d..2. 353.580960: sched_migrate_task: comm=stream pid=4129 prio=120 orig_cpu=82 dest_cpu=30
> >> <idle>-0 [030] dNh2. 353.580993: sched_wakeup: comm=stream pid=4129 prio=120 target_cpu=030
> >>
> >> I believe the sync flag should be taken into consideration when
> >> going for an affine wakeup. Also, the check for a short running task
> >> could be at the end, after checking if prev_cpu is an available_idle_cpu.
> >
> > We need to check whether moving the is_short_task() check to a later
> > point, after checking the availability of the previous CPU, solves the
> > problem for the workloads which showed regressions on AMD EPYC systems.
>
> I've done some testing with moving the condition check for the short
> running task to the end of wake_affine_idle(), and with checking if the
> length of the run queue is 1, similar to what Tim suggested in the
> thread, but doing it upfront in wake_affine_idle().
Thanks for the investigation. On second thought, for the will-it-scale
context_switch test case all the tasks have the SYNC flag, so I wonder if
putting the check at the end of wake_affine_idle() would make any
difference for the will-it-scale test, because will-it-scale might have
already returned this_cpu via 'if (sync && cpu_rq(this_cpu)->nr_running == 1)'.
I'll run some tests on this tomorrow.
> There are a few variations I've tested:
>
> v1: move the check for short running task on current CPU to end of wake_affine_idle
>
> v2: move the check for short running task on current CPU to end of wake_affine_idle
> + remove entire hunk in select_idle_cpu
>
> v3: move the check for short running task on current CPU to end of wake_affine_idle
> + check if run queue of current CPU only has 1 task
>
> v4: move the check for short running task on current CPU to end of wake_affine_idle
> + check if run queue of current CPU only has 1 task
> + remove entire hunk in select_idle_cpu
>
> Adding diff for v3 below:
> --
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 0ad8e7183bf2..dad9bfb0248d 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6074,13 +6074,15 @@ wake_affine_idle(int this_cpu, int prev_cpu, int sync)
> if (available_idle_cpu(this_cpu) && cpus_share_cache(this_cpu, prev_cpu))
> return available_idle_cpu(prev_cpu) ? prev_cpu : this_cpu;
>
> - if ((sync && cpu_rq(this_cpu)->nr_running == 1) ||
> - is_short_task(cpu_curr(this_cpu)))
> + if (sync && cpu_rq(this_cpu)->nr_running == 1)
> return this_cpu;
>
> if (available_idle_cpu(prev_cpu))
> return prev_cpu;
>
> + if (cpu_rq(this_cpu)->nr_running == 1 && is_short_task(cpu_curr(this_cpu)))
> + return this_cpu;
> +
I'm also thinking of adding this check, together with a ttwu_pending flag
check, in SIS (see the sketch after the diff below).
> return nr_cpumask_bits;
> }
>
> --
>
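Roughly something like this in select_idle_sibling(), before the scan for
an idle CPU (just a sketch, the exact placement and conditions, especially
the ttwu_pending check, still need to be tested):

	/*
	 * Sketch only: pick the wakeup target directly if its current task
	 * is a short task, it is the only task on that runqueue, and no
	 * remote wakeup is already queued for that CPU.
	 */
	if (cpu_rq(target)->nr_running == 1 &&
	    !cpu_rq(target)->ttwu_pending &&
	    is_short_task(cpu_curr(target)))
		return target;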
[cut]
>
> We still see a pileup with v1 and v2 but not with v3 and v4, suggesting
> that the second hunk is not the reason for the pileup, but rather
> choosing the current CPU in wake_affine_idle() on the basis that the
> currently running task is a short running task. To prevent a pileup, we
> must only choose the current rq if the short running task is the only
> task running there.
>
OK, I see.

[cut]
>
> A point to note is that Stream is more sensitive initially, when the
> tasks have not run for long enough: if a kworker or another short
> running task is running on the previous CPU during wakeup, the logic
> will favor an affine wakeup, since initially the scheduler might not
> realize that Stream is a long running task.
Maybe we can add a restriction so that the is_short_task() check only
kicks in after the task has run for a while?
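Something along these lines (just a sketch: the 1 msec cut-off is an
arbitrary example value, and the dur_avg comparison is a placeholder for
whatever per-task duration metric is_short_task() actually uses):

	/*
	 * Sketch only: require that the task has accumulated some total
	 * runtime before trusting the short-task classification, so that a
	 * task like Stream is not treated as short right after it starts.
	 */
	static inline bool is_short_task(struct task_struct *p)
	{
		/* Too little history yet, do not classify it as short. */
		if (p->se.sum_exec_runtime < NSEC_PER_MSEC)
			return false;

		/* Placeholder for the existing short-task duration check. */
		return p->se.dur_avg <= sysctl_sched_min_granularity;
	}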
>
> Let me know if you would like me to gather more data on the test system
> for the modified kernels discussed above.
While waiting for Vincent's feedback, I'll refine the patch per your experiment
and modify the code in SIS per Tim's suggestion.

thanks,
Chenyu
> --
> Thanks and Regards,
> Prateek
