    Subject: [PATCH 4.18 189/235] sched/fair: Fix util_avg of new tasks for asymmetric systems

    4.18-stable review patch.  If anyone has any objections, please let me know.

    ------------------

    From: Quentin Perret <quentin.perret@arm.com>

    [ Upstream commit 8fe5c5a937d0f4e84221631833a2718afde52285 ]

    When a new task wakes up for the first time, its initial utilization
    is set to half of the spare capacity of its CPU. The current
    implementation of post_init_entity_util_avg() uses SCHED_CAPACITY_SCALE
    directly as a capacity reference. As a result, on a big.LITTLE system, a
    new task waking up on an idle little CPU will be given ~512 of util_avg,
    even if the CPU's capacity is significantly less than that.

    Fix this by computing the spare capacity with arch_scale_cpu_capacity().
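
    To illustrate the arithmetic, here is a minimal user-space sketch (not
    kernel code) comparing the old and new seeds for an idle cfs_rq, assuming
    a little CPU whose arch_scale_cpu_capacity() returns 446 (an assumed
    value):

    #include <stdio.h>

    #define SCHED_CAPACITY_SCALE 1024L

    /* Old seed: spare capacity measured against SCHED_CAPACITY_SCALE. */
    static long seed_old(long cfs_rq_util_avg)
    {
            return (SCHED_CAPACITY_SCALE - cfs_rq_util_avg) / 2;
    }

    /* New seed: spare capacity measured against the CPU's own capacity. */
    static long seed_new(long cpu_scale, long cfs_rq_util_avg)
    {
            return (cpu_scale - cfs_rq_util_avg) / 2;
    }

    int main(void)
    {
            long little_cpu_scale = 446;    /* assumed little-CPU capacity */

            /* Idle CPU: cfs_rq->avg.util_avg == 0 */
            printf("old seed: %ld\n", seed_old(0));                      /* 512 */
            printf("new seed: %ld\n", seed_new(little_cpu_scale, 0));    /* 223 */
            return 0;
    }

    With the fix, the new task's seed stays within the little CPU's capacity
    (223 < 446), whereas the old seed (512) exceeded it.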

    Signed-off-by: Quentin Perret <quentin.perret@arm.com>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Acked-by: Vincent Guittot <vincent.guittot@linaro.org>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: dietmar.eggemann@arm.com
    Cc: morten.rasmussen@arm.com
    Cc: patrick.bellasi@arm.com
    Link: http://lkml.kernel.org/r/20180612112215.25448-1-quentin.perret@arm.com
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    ---
    kernel/sched/fair.c | 10 ++++++----
    1 file changed, 6 insertions(+), 4 deletions(-)

    --- a/kernel/sched/fair.c
    +++ b/kernel/sched/fair.c
    @@ -735,11 +735,12 @@ static void attach_entity_cfs_rq(struct
      * To solve this problem, we also cap the util_avg of successive tasks to
      * only 1/2 of the left utilization budget:
      *
    - * util_avg_cap = (1024 - cfs_rq->avg.util_avg) / 2^n
    + * util_avg_cap = (cpu_scale - cfs_rq->avg.util_avg) / 2^n
      *
    - * where n denotes the nth task.
    + * where n denotes the nth task and cpu_scale the CPU capacity.
      *
    - * For example, a simplest series from the beginning would be like:
    + * For example, for a CPU with 1024 of capacity, a simplest series from
    + * the beginning would be like:
      *
      * task   util_avg: 512, 256, 128, 64, 32, 16, 8, ...
      * cfs_rq util_avg: 512, 768, 896, 960, 992, 1008, 1016, ...
    @@ -751,7 +752,8 @@ void post_init_entity_util_avg(struct sc
     {
             struct cfs_rq *cfs_rq = cfs_rq_of(se);
             struct sched_avg *sa = &se->avg;
    -        long cap = (long)(SCHED_CAPACITY_SCALE - cfs_rq->avg.util_avg) / 2;
    +        long cpu_scale = arch_scale_cpu_capacity(NULL, cpu_of(rq_of(cfs_rq)));
    +        long cap = (long)(cpu_scale - cfs_rq->avg.util_avg) / 2;

     	if (cap > 0) {
     		if (cfs_rq->avg.util_avg != 0) {
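
    For reference, a standalone sketch (not kernel code) of the capping series
    described in the comment above; cpu_scale == 446 is an assumed little-CPU
    capacity, and each task is assumed to keep its full initial utilization
    when the next one arrives:

    #include <stdio.h>

    int main(void)
    {
            long cpu_scale = 446;   /* assumed little-CPU capacity */
            long cfs_rq_util = 0;
            int n;

            for (n = 1; n <= 6; n++) {
                    /* Seed the nth task with half of the remaining budget. */
                    long cap = (cpu_scale - cfs_rq_util) / 2;

                    printf("task %d: util_avg %ld, cfs_rq util_avg %ld\n",
                           n, cap, cfs_rq_util + cap);
                    cfs_rq_util += cap;
            }
            /* The series converges toward cpu_scale instead of 1024. */
            return 0;
    }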
