    Subject: Re: [PATCH v3 11/22] sched: consider runnable load average in effective_load

    On Sat, Jan 05, 2013 at 08:37:40AM +0000, Alex Shi wrote:
    > effective_load() calculates the load change as seen from the
    > root_task_group. It needs to be multiplied by the cfs_rq's
    > tg_runnable_contrib when we switch to runnable load average balancing.
    >
    > Signed-off-by: Alex Shi <alex.shi@intel.com>
    > ---
    > kernel/sched/fair.c | 11 ++++++++---
    > 1 file changed, 8 insertions(+), 3 deletions(-)
    >
    > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
    > index cab62aa..247d6a8 100644
    > --- a/kernel/sched/fair.c
    > +++ b/kernel/sched/fair.c
    > @@ -2982,7 +2982,8 @@ static void task_waking_fair(struct task_struct *p)
    >
    > #ifdef CONFIG_FAIR_GROUP_SCHED
    > /*
    > - * effective_load() calculates the load change as seen from the root_task_group
    > + * effective_load() calculates the runnable load average change as seen from
    > + * the root_task_group
    > *
    > * Adding load to a group doesn't make a group heavier, but can cause movement
    > * of group shares between cpus. Assuming the shares were perfectly aligned one
    > @@ -3030,13 +3031,17 @@ static void task_waking_fair(struct task_struct *p)
    > * Therefore the effective change in loads on CPU 0 would be 5/56 (3/8 - 2/7)
    > * times the weight of the group. The effect on CPU 1 would be -4/56 (4/8 -
    > * 4/7) times the weight of the group.
    > + *
    > + * After getting the effective_load of the load being moved, multiply it
    > + * by the cpu's own cfs_rq's runnable contrib to the root_task_group.
    > */
    > static long effective_load(struct task_group *tg, int cpu, long wl, long wg)
    > {
    > struct sched_entity *se = tg->se[cpu];
    >
    > if (!tg->parent) /* the trivial, non-cgroup case */
    > - return wl;
    > + return wl * tg->cfs_rq[cpu]->tg_runnable_contrib
    > + >> NICE_0_SHIFT;

    Why do we need to scale the load of the task (wl) by runnable_contrib
    when the task is in the root task group? Wouldn't the load change still
    just be wl?
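
    As far as I can tell, tg_runnable_contrib is roughly the fraction of time
    the cfs_rq has been runnable, scaled up to NICE_0_LOAD, so the proposed
    return value is wl scaled by that fraction. A minimal user-space sketch of
    the arithmetic (scale_by_runnable() is a hypothetical stand-in for the
    expression in the patch, not kernel code):

        #include <stdio.h>

        #define NICE_0_SHIFT 10 /* assuming the default load resolution */

        /*
         * Hypothetical stand-in for the expression the patch adds to
         * effective_load(); not kernel code.
         */
        static long scale_by_runnable(long wl, unsigned long tg_runnable_contrib)
        {
                return wl * tg_runnable_contrib >> NICE_0_SHIFT;
        }

        int main(void)
        {
                printf("%ld\n", scale_by_runnable(1024, 1024)); /* fully runnable -> 1024 (= wl) */
                printf("%ld\n", scale_by_runnable(1024, 512));  /* runnable half the time -> 512 */
                return 0;
        }

    So for a root-group task on a cfs_rq that has only been runnable half the
    time, the reported load change would be halved rather than being just wl.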

    >
    > for_each_sched_entity(se) {
    > long w, W;
    > @@ -3084,7 +3089,7 @@ static long effective_load(struct task_group *tg, int cpu, long wl, long wg)
    > wg = 0;
    > }
    >
    > - return wl;
    > + return wl * tg->cfs_rq[cpu]->tg_runnable_contrib >> NICE_0_SHIFT;

    I believe that effective_load() is only used in wake_affine() to compare
    load scenarios for the same task group. Since the task group is the same,
    the effective load is scaled by the same factor on both sides of the
    comparison, so should it not make any difference?

    Also, in wake_affine() the result of effective_load() is added to
    target_load(), which is the load.weight of the cpu and not a tracked load
    based on runnable_avg_*/contrib?
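
    For context, this is roughly how the two load metrics end up being mixed
    in wake_affine() in this kernel. I am paraphrasing and abridging the code
    from memory, so treat the exact expressions as approximate rather than
    verbatim:

        this_load = target_load(this_cpu, idx); /* rq load.weight based */
        load      = source_load(prev_cpu, idx);

        this_eff_load  = 100;
        this_eff_load *= power_of(prev_cpu);
        this_eff_load *= this_load +
                         effective_load(tg, this_cpu, weight, weight);

        prev_eff_load  = 100 + (sd->imbalance_pct - 100) / 2;
        prev_eff_load *= power_of(this_cpu);
        prev_eff_load *= load + effective_load(tg, prev_cpu, 0, weight);

        balanced = this_eff_load <= prev_eff_load;

    Both sides add an effective_load() term for the same task group tg, while
    the target_load()/source_load() terms remain unscaled load.weight sums,
    which is what the two questions above are getting at.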

    Finally, you have not scaled the result of effective_load() in the
    function used when FAIR_GROUP_SCHED is disabled. Should that be scaled
    too?
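
    For reference, if I remember the !CONFIG_FAIR_GROUP_SCHED variant right
    (quoting from memory, so take the exact signature as an assumption), it
    still returns wl unscaled:

        static inline unsigned long effective_load(struct task_group *tg, int cpu,
                                                   unsigned long wl, unsigned long wg)
        {
                return wl;
        }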

    Morten

    > }
    > #else
    >
    > --
    > 1.7.12
    >
    > --
    > To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
    > the body of a message to majordomo@vger.kernel.org
    > More majordomo info at http://vger.kernel.org/majordomo-info.html
    > Please read the FAQ at http://www.tux.org/lkml/
    >

