Subject: Re: [PATCH] sched/fair: Load balance aggressively for SCHED_IDLE CPUs

On Tue, 7 Jan 2020 16:55:18 +0530
Viresh Kumar <viresh.kumar@linaro.org> wrote:

> Hi Steven,
>
> On 02-01-20, 12:29, Steven Rostedt wrote:
> > On Tue, 24 Dec 2019 10:43:30 +0530
> > Viresh Kumar <viresh.kumar@linaro.org> wrote:
> >
> > > This is tested on an ARM64 Hikey620 platform (octa-core) with the help
> > > of rt-app, and it is verified, using kernel traces, that the newly
> > > SCHED_IDLE CPU does load balancing shortly after it becomes SCHED_IDLE
> > > and pulls tasks from other busy CPUs.
> >
> > Can you post the actual steps you used to test this and show the before
> > and after results? Then others can reproduce what you have shown and
> > even run other tests to see if this change has any other side effects.
>
> I have attached to this email the json file I used on my octa-core hikey
> platform, along with before/after kernelshark screenshots.
>
> The json file does the following:
>
> - It first creates 8 always-running sched_idle tasks (thread-idle-X) and lets
> them spread across all 8 CPUs.
>
> - It then creates 8 cfs tasks (thread-cfs-X) that run for 50ms every 100ms and
> also spread across the 8 cores.
>
> One of these threads (thread-cfs2-7) runs for only 1ms instead of 50ms once
> every 6 periods. During this 6th period, a 9th task (thread-cfs3-8) wakes up.
>
> - The 9th cfs task (thread-cfs3-8) is timed such that it wakes up only during
> the 6th period of thread-cfs2-7. This thread runs for 50ms every 600ms.
>
> Most of the time, thread-cfs3-8 doesn't wake up on the CPU running the short
> thread-cfs2-7 task, so after 1ms we have one CPU running only sched_idle
> tasks while, on another CPU, 2 CFS tasks compete for 100ms.
>
> - The 9th task has to wait a full sched slice (12ms) before it is scheduled
> for the first time.
> - The 2 cfs tasks that compete for the same CPU need 100ms to complete
> instead of 50ms (51ms in this case).
>
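
(For readers without the attachment: an rt-app json in the same spirit as the
description above might look roughly like the sketch below. This is not the
attached file; task names and values are illustrative, and the 1ms-every-6th-
period behaviour of thread-cfs2-7 would additionally need rt-app "phases",
which the sketch leaves out. "run"/"period" values are in microseconds,
"duration" is in seconds.)

{
    "global": {
        "duration": 10,
        "default_policy": "SCHED_OTHER"
    },
    "tasks": {
        "thread-idle": {
            "instance": 8,
            "policy": "SCHED_IDLE",
            "loop": -1,
            "run": 100000
        },
        "thread-cfs": {
            "instance": 8,
            "loop": -1,
            "run": 50000,
            "timer": { "ref": "unique", "period": 100000 }
        },
        "thread-cfs3": {
            "instance": 1,
            "loop": -1,
            "run": 50000,
            "timer": { "ref": "unique", "period": 600000 }
        }
    }
}

Running such a file is simply "$ rt-app <file>.json".
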
> The before.jpg image shows what happened before this patch was applied.
> thread-cfs3-8 doesn't migrate to CPU4, which was running only sched_idle
> tasks during the 6th period of thread-cfs2-7. The migration did happen the
> next time thread-cfs3-8 woke up (after 600 ms), but that isn't shown in the
> picture.
>
> The after.jpg image shows what happened after this patch was applied. The
> very first time thread-cfs3-8 gets a chance to run, the load balancer starts
> balancing the CPUs. It first migrates a lot of sched_idle tasks to CPU7
> (which was running thread-cfs2-7 at the time), and finally migrates the
> thread-cfs3-8 task to CPU7.
>
> I have also marked up the jpg files to show the tasks and the migration
> points.
>
> Please let me know if anyone needs further clarification. Thanks.
>

Thanks. I think I was able to reproduce it. Speaking of which, I'd
recommend that you download and install the latest KernelShark
(https://www.kernelshark.org), as it looks like you're still using the
pre-1.0 version (which is now deprecated). One nice feature of the latest
version is that it has json session files that you can pass to others.
If you install KernelShark 1.0, then you can do the following:

1) download the data:
$ cd /tmp
$ wget http://rostedt.org/private/sched_idle_ks_data.tar.bz2
2) extract it:
$ tar xvf sched_idle_ks_data.tar.bz2
$ cd sched_idle_ks_data
3) open each of the data files; the saved sessions will bring you right
to where you want to be:
$ kernelshark -s sched_idle_ks-before.json &
$ kernelshark -s sched_idle_ks-after.json &

And you can see if I duplicated what you explained ;-)
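
If you want to generate your own before/after data instead of using the
tarball above, one possible way (a sketch, assuming trace-cmd is installed;
"workload.json" is a placeholder name, and the event list is just a guess at
what is relevant here, not necessarily what was used for these files) is:

$ trace-cmd record -e sched_switch -e sched_wakeup -e sched_migrate_task \
        rt-app workload.json
$ kernelshark trace.dat &

A session exported from KernelShark can then be passed around like the json
session files above.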

-- Steve
