    Subject: [PATCH 5.17 102/146] sched/pelt: Fix attach_entity_load_avg() corner case
    From: kuyo chang <kuyo.chang@mediatek.com>

    [ Upstream commit 40f5aa4c5eaebfeaca4566217cb9c468e28ed682 ]

    The warning in cfs_rq_is_decayed() triggered:

        SCHED_WARN_ON(cfs_rq->avg.load_avg ||
                      cfs_rq->avg.util_avg ||
                      cfs_rq->avg.runnable_avg)

    There exists a corner case in attach_entity_load_avg() which will
    cause load_sum to be zero while load_avg will not be.

    Consider se_weight is 88761, as per the sched_prio_to_weight[] table.
    Further assume that get_pelt_divider() returns 47742 and that
    se->avg.load_avg is 1.

    However, calculating load_sum:

    se->avg.load_sum = div_u64(se->avg.load_avg * se->avg.load_sum, se_weight(se));
    se->avg.load_sum = 1*47742/88761 = 0.
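
    As an illustration (not part of the patch), a minimal userspace sketch of
    the same arithmetic, with a plain 64-bit division standing in for the
    kernel's div_u64() helper and the example values above, reproduces the
    truncation:

        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
                uint64_t load_avg  = 1;      /* se->avg.load_avg             */
                uint64_t divider   = 47742;  /* get_pelt_divider()           */
                uint64_t se_weight = 88761;  /* sched_prio_to_weight[] entry */

                /* Same arithmetic as div_u64(load_avg * divider, se_weight). */
                uint64_t load_sum = (load_avg * divider) / se_weight;

                /* Prints 0: 47742/88761 truncates to zero. */
                printf("load_sum = %llu\n", (unsigned long long)load_sum);
                return 0;
        }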

    Then enqueue_load_avg() adds this to the cfs_rq totals:

    cfs_rq->avg.load_avg += se->avg.load_avg;
    cfs_rq->avg.load_sum += se_weight(se) * se->avg.load_sum;

    Resulting in load_avg being 1 while load_sum is 0, which will trigger
    the WARN.
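
    For contrast (again as an illustration, not part of the patch), the same
    sketch with the corrected ordering used by the patch below, multiplying by
    load_avg before dividing and clamping the result to at least 1, yields
    load_sum = 1 for the same inputs:

        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
                uint64_t load_avg  = 1;      /* se->avg.load_avg             */
                uint64_t divider   = 47742;  /* get_pelt_divider()           */
                uint64_t se_weight = 88761;  /* sched_prio_to_weight[] entry */

                /* Fixed ordering: scale first, divide only when the result
                 * cannot truncate to zero, otherwise clamp to 1. */
                uint64_t load_sum = load_avg * divider;
                if (se_weight < load_sum)
                        load_sum /= se_weight;
                else
                        load_sum = 1;

                /* Prints 1: load_sum stays non-zero whenever load_avg is. */
                printf("load_sum = %llu\n", (unsigned long long)load_sum);
                return 0;
        }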

    Fixes: f207934fb79d ("sched/fair: Align PELT windows between cfs_rq and its se")
    Signed-off-by: kuyo chang <kuyo.chang@mediatek.com>
    [peterz: massage changelog]
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
    Tested-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
    Link: https://lkml.kernel.org/r/20220414090229.342-1-kuyo.chang@mediatek.com
    Signed-off-by: Sasha Levin <sashal@kernel.org>
    ---
    kernel/sched/fair.c | 10 +++++-----
    1 file changed, 5 insertions(+), 5 deletions(-)

    diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
    index cddcf2f4f525..2f461f059278 100644
    --- a/kernel/sched/fair.c
    +++ b/kernel/sched/fair.c
    @@ -3776,11 +3776,11 @@ static void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
    
     	se->avg.runnable_sum = se->avg.runnable_avg * divider;
    
    -	se->avg.load_sum = divider;
    -	if (se_weight(se)) {
    -		se->avg.load_sum =
    -			div_u64(se->avg.load_avg * se->avg.load_sum, se_weight(se));
    -	}
    +	se->avg.load_sum = se->avg.load_avg * divider;
    +	if (se_weight(se) < se->avg.load_sum)
    +		se->avg.load_sum = div_u64(se->avg.load_sum, se_weight(se));
    +	else
    +		se->avg.load_sum = 1;
    
     	enqueue_load_avg(cfs_rq, se);
     	cfs_rq->avg.util_avg += se->avg.util_avg;
    --
    2.35.1

