Subject: Re: [PATCH v5 2/2] sched/fair: update scale invariance of PELT
On 10/26/18 6:11 PM, Vincent Guittot wrote:
> The current implementation of load tracking invariance scales the
> contribution with current frequency and uarch performance (only for
> utilization) of the CPU. One main result of this formula is that the
> figures are capped by the current capacity of the CPU. Another one is
> that the load_avg is not invariant because it is not scaled with uarch.
>
> The util_avg of a periodic task that runs r time slots every p time slots
> varies in the range:
>
> U * (1-y^r)/(1-y^p) * y^i < Utilization < U * (1-y^r)/(1-y^p)
>
> where U is the max util_avg value = SCHED_CAPACITY_SCALE
>
> At a lower capacity, the range becomes:
>
> U * C * (1-y^r')/(1-y^p) * y^i' < Utilization < U * C * (1-y^r')/(1-y^p)
>
> with C reflecting the compute capacity ratio between current capacity and
> max capacity.
>
> so C tries to compensate for changes in (1-y^r'), but it can't be
> accurate.
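
Just to make the bounds above concrete: a minimal userspace sketch
(purely illustrative, not kernel code), assuming the PELT decay factor
y with y^32 = 0.5, r and p expressed in 1024us PELT periods, and
i = p - r (the idle time):

  #include <math.h>
  #include <stdio.h>

  int main(void)
  {
          const double U = 1024.0;              /* SCHED_CAPACITY_SCALE */
          const double y = pow(0.5, 1.0 / 32);  /* PELT decay: y^32 = 0.5 */
          const int r = 16, p = 32;             /* e.g. 16ms running every 32ms */

          /* upper bound: right after the running phase */
          double hi = U * (1.0 - pow(y, r)) / (1.0 - pow(y, p));
          /* lower bound: decayed over the idle time i = p - r */
          double lo = hi * pow(y, p - r);

          printf("util_avg oscillates between %.0f and %.0f\n", lo, hi);
          return 0;
  }
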
>
> Instead of scaling the contribution value of PELT algo, we should scale the
> running time. The PELT signal aims to track the amount of computation of
> tasks and/or rq so it seems more correct to scale the running time to
> reflect the effective amount of computation done since the last update.
>
> In order to be fully invariant, we need to apply the same amount of
> running time and idle time whatever the current capacity. Because running
> at lower capacity implies that the task will run longer, we have to ensure
> that the same amount of idle time will be applied when the system
> becomes idle and no idle time has been "stolen". But reaching the
> maximum utilization value (SCHED_CAPACITY_SCALE) means that the task
> is seen as an always-running task whatever the capacity of the CPU
> (even at max compute capacity). In this case, we can discard these
> "stolen" idle times, which become meaningless.
>
> In order to achieve this time scaling, a new clock_pelt is created per rq.
> The increase of this clock scales with the current capacity when
> something is running on the rq and synchronizes with clock_task when
> the rq is idle. With this mechanism, we ensure the same running and
> idle time whatever the current capacity.
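
To make sure I read this right, here is a toy userspace model of the
clock_pelt idea described above (my own sketch, not the actual patch
code; struct and helper names are made up):

  #include <stdbool.h>
  #include <stdio.h>

  #define SCHED_CAPACITY_SCALE 1024

  struct toy_rq {
          unsigned long clock_task;   /* wall "task" clock, in us */
          unsigned long clock_pelt;   /* scaled clock seen by PELT */
          unsigned long capacity;     /* current capacity, 0..1024 */
  };

  static void update_clock_pelt(struct toy_rq *rq, unsigned long delta_us,
                                bool idle)
  {
          rq->clock_task += delta_us;

          if (idle) {
                  /* sync with clock_task: the time "stolen" while running
                     at lower capacity is applied as idle time */
                  rq->clock_pelt = rq->clock_task;
                  return;
          }

          /* busy: time advances at the rate of the current capacity */
          rq->clock_pelt += delta_us * rq->capacity / SCHED_CAPACITY_SCALE;
  }

  int main(void)
  {
          struct toy_rq rq = { .capacity = 512 }; /* a LITTLE CPU */

          update_clock_pelt(&rq, 32000, false);   /* 32ms busy */
          printf("busy: task=%luus pelt=%luus\n", rq.clock_task, rq.clock_pelt);

          update_clock_pelt(&rq, 16000, true);    /* 16ms idle */
          printf("idle: task=%luus pelt=%luus\n", rq.clock_task, rq.clock_pelt);

          return 0;
  }

PELT then sees 16ms of running and 32ms of idle over this 32ms busy +
16ms idle window, which is what a big CPU would see for the same work
done in 16ms + 32ms, if I understand the intention correctly.
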

Thinking about this new approach on a big.LITTLE platform:

CPU Capacities big: 1024 LITTLE: 512, performance CPUfreq governor

A 50% (runtime/period) task on a big CPU will become an always-running
task on the little CPU. The utilization signals of the task and of the
cfs_rq of the little CPU converge to 1024.

With contrib scaling the utilization signal of the 50% task converges
to 512 on the little CPU, even though it is always running on it, and
so does that of the cfs_rq.
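
Rough numbers to illustrate the difference (a back-of-the-envelope
approximation that ignores the period segmentation and the
util_sum/LOAD_AVG_MAX details of the real implementation; n is the
number of wall-clock 1024us PELT periods of continuous running on the
little CPU):

  #include <math.h>
  #include <stdio.h>

  int main(void)
  {
          const double y = pow(0.5, 1.0 / 32);  /* PELT decay: y^32 = 0.5 */
          int n;

          for (n = 32; n <= 2048; n *= 4) {
                  /* time scaling: only half of the wall time counts as
                     running time, but no idle time is ever seen */
                  double time_scaled = 1024.0 * (1.0 - pow(y, n / 2.0));
                  /* contrib scaling: every period counts as running, but
                     the contribution is halved */
                  double contrib_scaled = 512.0 * (1.0 - pow(y, n));

                  printf("n=%5d time-scaled=%6.1f contrib-scaled=%6.1f\n",
                         n, time_scaled, contrib_scaled);
          }
          return 0;
  }

One signal heads towards 1024, the other towards 512.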

Two 25% tasks on a big CPU will become two 50% tasks on a little CPU.
The utilization signal of each task converges to 512 and that of the
cfs_rq of the little CPU converges to 1024.
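
(If I plug the time-scaled values into the range formula from the
commit message - each task runs r' = 8 out of p' = 16 PELT periods,
since clock_pelt advances at half speed while the rq is busy - I get
1024 * (1-y^8)/(1-y^16) ~ 556 at the peak, decaying to ~468 at the end
of the wait, i.e. a per-task signal oscillating around ~512, while the
cfs_rq never sees idle time and so heads towards 1024.)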

With contrib scaling the utilization signal of the 25% tasks converges
to 256 on the little CPU, even though they each run 50% on it, and that
of the cfs_rq converges to 512.

So what do we consider to be system-wide invariance? I thought that
e.g. a 25% task should have a utilization value of 256 no matter which
CPU it is running on?

In both cases, the little CPU is not going idle whereas the big CPU does.
