Subject: [PATCH 5.6 125/126] sched/fair: Fix reordering of enqueue/dequeue_task_fair()
Date: 26 May 2020

From: Vincent Guittot <vincent.guittot@linaro.org>

[ Upstream commit 5ab297bab984310267734dfbcc8104566658ebef ]

Even when a cgroup is throttled, the group scheduling entity (se) of a
child cgroup can still be enqueued, and its gse->on_rq stays true. When
a task is enqueued on such a child, we still have to update the load_avg
and increase the h_nr_running of the throttled cfs_rq. Nevertheless, the
1st for_each_sched_entity() loop is skipped because gse->on_rq == true,
and the 2nd loop bails out because the cfs_rq is throttled, whereas in
this case we have to both update load_avg with the old h_nr_running and
increase h_nr_running.

The same sequence can happen during dequeue, when the se moves to its
parent before breaking out of the 1st loop.

Note that the update of load_avg will effectively happen only once, in
order to sync up to the throttled time. The next call to update load_avg
will stop early because the clock stays unchanged.
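
To make the ordering problem concrete, below is a stand-alone toy model
of the 2nd loop only (illustrative, not kernel code: struct toy_cfs_rq
and both functions are simplified stand-ins introduced here). With the
throttle check first, the throttled level never gets its h_nr_running
bump; with the accounting first, it does, and the walk still stops at
the throttled level:

/* Toy model, not kernel code: why the throttle check must come
 * after the accounting in the 2nd enqueue loop. */
#include <stdio.h>
#include <stdbool.h>

struct toy_cfs_rq {
	int h_nr_running;
	bool throttled;
	struct toy_cfs_rq *parent;
};

/* Old ordering: bail out before accounting, so the throttled
 * level's h_nr_running is never increased. */
static void enqueue_check_first(struct toy_cfs_rq *cfs_rq)
{
	for (; cfs_rq; cfs_rq = cfs_rq->parent) {
		if (cfs_rq->throttled)
			return;
		cfs_rq->h_nr_running++;
	}
}

/* Fixed ordering: account first, then stop at the throttled level. */
static void enqueue_update_first(struct toy_cfs_rq *cfs_rq)
{
	for (; cfs_rq; cfs_rq = cfs_rq->parent) {
		cfs_rq->h_nr_running++;
		if (cfs_rq->throttled)
			return;
	}
}

int main(void)
{
	struct toy_cfs_rq root = { .parent = NULL };
	struct toy_cfs_rq throttled = { .throttled = true, .parent = &root };
	struct toy_cfs_rq child = { .parent = &throttled };

	enqueue_check_first(&child);
	printf("check first:  child=%d throttled=%d root=%d\n",
	       child.h_nr_running, throttled.h_nr_running, root.h_nr_running);
	/* child=1 throttled=0 root=0: throttled level's count is lost */

	child.h_nr_running = throttled.h_nr_running = 0;
	enqueue_update_first(&child);
	printf("update first: child=%d throttled=%d root=%d\n",
	       child.h_nr_running, throttled.h_nr_running, root.h_nr_running);
	/* child=1 throttled=1 root=0: accounted, and walk still stops */
	return 0;
}

The same reasoning applies symmetrically to the dequeue path, with the
counters decremented instead of incremented.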

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Fixes: 6d4d22468dae ("sched/fair: Reorder enqueue/dequeue_task_fair path")
Link: https://lkml.kernel.org/r/20200306084208.12583-1-vincent.guittot@linaro.org
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 kernel/sched/fair.c | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index a486bf3d5078..7cd86641b44b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5289,15 +5289,15 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 	for_each_sched_entity(se) {
 		cfs_rq = cfs_rq_of(se);
 
-		/* end evaluation on encountering a throttled cfs_rq */
-		if (cfs_rq_throttled(cfs_rq))
-			goto enqueue_throttle;
-
 		update_load_avg(cfs_rq, se, UPDATE_TG);
 		update_cfs_group(se);
 
 		cfs_rq->h_nr_running++;
 		cfs_rq->idle_h_nr_running += idle_h_nr_running;
+
+		/* end evaluation on encountering a throttled cfs_rq */
+		if (cfs_rq_throttled(cfs_rq))
+			goto enqueue_throttle;
 	}
 
 enqueue_throttle:
@@ -5386,15 +5386,16 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 	for_each_sched_entity(se) {
 		cfs_rq = cfs_rq_of(se);
 
-		/* end evaluation on encountering a throttled cfs_rq */
-		if (cfs_rq_throttled(cfs_rq))
-			goto dequeue_throttle;
-
 		update_load_avg(cfs_rq, se, UPDATE_TG);
 		update_cfs_group(se);
 
 		cfs_rq->h_nr_running--;
 		cfs_rq->idle_h_nr_running -= idle_h_nr_running;
+
+		/* end evaluation on encountering a throttled cfs_rq */
+		if (cfs_rq_throttled(cfs_rq))
+			goto dequeue_throttle;
+
 	}
 
 dequeue_throttle:
-- 
2.25.1

