Subject: Re: [PATCH v2 3/6] sched/deadline: Add dl_bw_capacity()
On 06/05/20 17:09, Dietmar Eggemann wrote:
> On 06/05/2020 14:37, Juri Lelli wrote:
> > On 06/05/20 12:54, Dietmar Eggemann wrote:
> >> On 27/04/2020 10:37, Dietmar Eggemann wrote:
>
> [...]
>
> >> There is an issue w/ excl. cpusets and cpuset.sched_load_balance=0. The
> >> latter is needed to demonstrate the problem since DL task affinity can't
> >> be altered.
> >>
> >> A CPU in such a cpuset has its rq attached to def_root_domain which does
> >> not have its 'sum_cpu_capacity' properly set.
> >
> > Hummm, but if sched_load_balance is disabled it means that we've now got
> > a subset of CPUs which (from a DL AC PoV) are partitioned. So, I'd tend
>
> Yes, the CPUs of the cpuset w/ cpuset.sched_load_balance=0 (cpuset B in
> the example).
>
> > to say that we actually want to check new tasks bw requirement against
> > the available bandwidth of the particular CPU they happen to be running
> > (and will continue to run) when setscheduler is called.
>
> By 'available bandwidth of the particular CPU' you refer to
> '\Sum_{cpu_rq(i)->rd->span} CPU capacity', right?

No. I was referring to the single CPU capacity: the capacity of the CPU
where a task is running when setscheduler is called for it (and DL AC is
performed). See below; maybe it's clearer why I wondered about this case.
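
Concretely, something along these lines is what I had in mind (just an
untested sketch to illustrate the idea; __dl_bw_capacity() being the
'\Sum_{rd->span} CPU capacity' helper showing up in your debug output
below):

	static inline unsigned long dl_bw_capacity(int i)
	{
		/*
		 * A CPU attached to def_root_domain is not load
		 * balanced with anybody else, so a DL task admitted
		 * there can't count on capacity from other CPUs;
		 * check its bandwidth against this CPU only.
		 */
		if (cpu_rq(i)->rd == &def_root_domain)
			return capacity_orig_of(i);

		return __dl_bw_capacity(i);
	}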

> This is what this fix tries to achieve. Regardless of whether cpu_rq(i)->rd
> is a 'real' rd or the def_root_domain, dl_bw_capacity() will now always
> return '\Sum_{cpu_rq(i)->rd->span} CPU capacity'
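
For reference, i.e. something like this (reconstructed from the
description and the debug output below, not necessarily the exact
patch):

	static inline unsigned long __dl_bw_capacity(int i)
	{
		struct root_domain *rd = cpu_rq(i)->rd;
		unsigned long cap = 0;
		int c;

		/* \Sum capacity_orig of the active CPUs in rd->span */
		for_each_cpu_and(c, rd->span, cpu_active_mask)
			cap += capacity_orig_of(c);

		return cap;
	}
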
>
> > If then load balance is enabled again, AC check we did above should
> > still be valid for all tasks admitted in the meantime, no?
>
> Example (w/ this fix) on Juno [L b b L L L], capacity_orig_of(L)=446:
>
> mkdir /sys/fs/cgroup/cpuset/A
> echo 0 > /sys/fs/cgroup/cpuset/A/cpuset.mems
> echo 1 > /sys/fs/cgroup/cpuset/A/cpuset.cpu_exclusive
> echo 0-2 > /sys/fs/cgroup/cpuset/A/cpuset.cpus
>
> mkdir /sys/fs/cgroup/cpuset/B
> echo 0 > /sys/fs/cgroup/cpuset/B/cpuset.mems
> echo 1 > /sys/fs/cgroup/cpuset/B/cpuset.cpu_exclusive
> echo 3-5 > /sys/fs/cgroup/cpuset/B/cpuset.cpus
>
> echo 0 > /sys/fs/cgroup/cpuset/B/cpuset.sched_load_balance
> echo 0 > /sys/fs/cgroup/cpuset/cpuset.sched_load_balance
>
> echo $$ > /sys/fs/cgroup/cpuset/B/tasks
> chrt -d --sched-runtime 8000 --sched-period 16000 -p 0 $$
>
> ...
> [ 144.920102] __dl_bw_capacity CPU3 rd->span=3-5 return 1338
> [ 144.925607] sched_dl_overflow: [bash 1999] task_cpu(p)=3 cap=1338 cpus_ptr=3-5

So, here you are checking the new task's bandwidth against 1338, which
is 3x the L capacity. However, since load balancing is disabled at this
point for CPUs 3-5, once admitted the task will only be able to run on
CPU 3. Now, if more tasks on CPU 3 are admitted the same way (up to
1338), I believe they will start to experience deadline misses, because
only 446 worth of capacity will actually be available to them until
load balancing is enabled below and they are then free to migrate to
CPUs 4 and 5.
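
To put rough numbers on it (assuming the default 95% global RT limit
and capacities in units of SCHED_CAPACITY_SCALE=1024):

	per-task bw:           8000/16000       =  0.50
	AC limit w/ cap=1338:  0.95 * 1338/1024 ~= 1.24 -> two such tasks pass AC
	usable on CPU 3 alone: 446/1024         ~= 0.44 -> even one task is over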

Does it make sense?

> [ 144.932841] __dl_bw_capacity CPU3 rd->span=3-5 return 1338
> ...
>
> echo 1 > /sys/fs/cgroup/cpuset/B/cpuset.sched_load_balance
>
> echo $$ > /sys/fs/cgroup/cpuset/B/tasks
> chrt -d --sched-runtime 8000 --sched-period 16000 -p 0 $$
>
> ...
> [ 254.367982] __dl_bw_capacity CPU5 rd->span=3-5 return 1338
> [ 254.373487] sched_dl_overflow: [bash 2052] task_cpu(p)=5 cap=1338 cpus_ptr=3-5
> [ 254.380721] __dl_bw_capacity CPU5 rd->span=3-5 return 1338
> ...
>
> Regardless of 'B/cpuset.sched_load_balance',
> '\Sum_{cpu_rq(i)->rd->span} CPU capacity' stays 1338 (3*446)
>
> So IMHO, DL AC check stays intact.
>
