From: Chengming Zhou <zhouchengming@bytedance.com>
Subject: [PATCH 7/8] sched/fair: delete superfluous set_task_rq_fair()
Date: Sat, 9 Jul 2022
set_task_rq() is used when a task moves across CPUs or cgroups to change
its cfs_rq and parent entity, and it calls set_task_rq_fair() to sync the
task's blocked load_avg just before changing its cfs_rq.
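
For reference, the call site being removed looks like this (trimmed from
the set_task_rq() hunk at the end of this patch):

#ifdef CONFIG_FAIR_GROUP_SCHED
	/* sync blocked load_avg, then switch cfs_rq and parent entity */
	set_task_rq_fair(&p->se, p->se.cfs_rq, tg->cfs_rq[cpu]);
	p->se.cfs_rq = tg->cfs_rq[cpu];
	p->se.parent = tg->se[cpu];
#endif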

1. Task migrates to another CPU: the task is detached/removed from its
prev cfs_rq and its sched_avg last_update_time is reset to 0, so no sync
is needed again.

2. Task migrates to another cgroup: the task is detached from its prev
cfs_rq and its sched_avg last_update_time is reset to 0, so no sync is
needed here either.

3. !fair task migrates across CPUs/cgroups: load tracking is stopped for
!fair tasks and sched_avg last_update_time is reset to 0 in
switched_from_fair(), so no sync is needed in this case either.

So set_task_rq_fair() is not needed anymore; this patch deletes it.
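
The following stand-alone sketch (toy names, not kernel code) models the
reasoning above: the removed function's early-return guard (visible in the
hunk below) makes the call a no-op once last_update_time has been reset to
0, which is exactly the state left behind in all three cases.

/* Toy model of the "last_update_time == 0 means nothing to sync" argument. */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct toy_se {
	uint64_t last_update_time;	/* 0 means load already detached */
	unsigned long load_avg;		/* blocked load that would be aged */
};

/* Mirrors the guard in the removed set_task_rq_fair(): only sync when the
 * entity still carries attached blocked load and has a prev cfs_rq. */
static bool toy_set_task_rq_fair(struct toy_se *se, bool have_prev)
{
	if (!(se->last_update_time && have_prev))
		return false;	/* no-op: nothing left to sync */
	/* ...would age se->load_avg to prev's clock and restamp it with
	 * next's clock, as the removed code below does... */
	return true;
}

int main(void)
{
	struct toy_se se = { .last_update_time = 0, .load_avg = 1024 };

	/* CPU migration, cgroup migration and switched_from_fair() all leave
	 * last_update_time == 0, so the sync would never do anything. */
	assert(!toy_set_task_rq_fair(&se, true));

	/* Only an entity whose load is still attached would be synced. */
	se.last_update_time = 42;
	assert(toy_set_task_rq_fair(&se, true));
	return 0;
}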

Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
---
 kernel/sched/fair.c  | 31 -------------------------------
 kernel/sched/sched.h |  8 --------
 2 files changed, 39 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index ca714eedeec5..b0bde895ba96 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3430,37 +3430,6 @@ static inline void update_tg_load_avg(struct cfs_rq *cfs_rq)
 	}
 }

-/*
- * Called within set_task_rq() right before setting a task's CPU. The
- * caller only guarantees p->pi_lock is held; no other assumptions,
- * including the state of rq->lock, should be made.
- */
-void set_task_rq_fair(struct sched_entity *se,
-		      struct cfs_rq *prev, struct cfs_rq *next)
-{
-	u64 p_last_update_time;
-	u64 n_last_update_time;
-
-	if (!sched_feat(ATTACH_AGE_LOAD))
-		return;
-
-	/*
-	 * We are supposed to update the task to "current" time, then its up to
-	 * date and ready to go to new CPU/cfs_rq. But we have difficulty in
-	 * getting what current time is, so simply throw away the out-of-date
-	 * time. This will result in the wakee task is less decayed, but giving
-	 * the wakee more load sounds not bad.
-	 */
-	if (!(se->avg.last_update_time && prev))
-		return;
-
-	p_last_update_time = cfs_rq_last_update_time(prev);
-	n_last_update_time = cfs_rq_last_update_time(next);
-
-	__update_load_avg_blocked_se(p_last_update_time, se);
-	se->avg.last_update_time = n_last_update_time;
-}
-
 /*
  * When on migration a sched_entity joins/leaves the PELT hierarchy, we need to
  * propagate its contribution. The key to this propagation is the invariant
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 19e0076e4245..a8ec7af4bd51 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -505,13 +505,6 @@ extern int sched_group_set_shares(struct task_group *tg, unsigned long shares);

 extern int sched_group_set_idle(struct task_group *tg, long idle);

-#ifdef CONFIG_SMP
-extern void set_task_rq_fair(struct sched_entity *se,
-			     struct cfs_rq *prev, struct cfs_rq *next);
-#else /* !CONFIG_SMP */
-static inline void set_task_rq_fair(struct sched_entity *se,
-				    struct cfs_rq *prev, struct cfs_rq *next) { }
-#endif /* CONFIG_SMP */
 #endif /* CONFIG_FAIR_GROUP_SCHED */

 #else /* CONFIG_CGROUP_SCHED */
@@ -1937,7 +1930,6 @@ static inline void set_task_rq(struct task_struct *p, unsigned int cpu)
 #endif

 #ifdef CONFIG_FAIR_GROUP_SCHED
-	set_task_rq_fair(&p->se, p->se.cfs_rq, tg->cfs_rq[cpu]);
 	p->se.cfs_rq = tg->cfs_rq[cpu];
 	p->se.parent = tg->se[cpu];
 	p->se.depth = tg->se[cpu] ? tg->se[cpu]->depth + 1 : 0;
--
2.36.1