Date: 14 Jan 2013
From: Morten Rasmussen
Subject: Re: [PATCH v3 11/22] sched: consider runnable load average in effective_load
On Fri, Jan 11, 2013 at 03:26:59AM +0000, Alex Shi wrote:
> On 01/10/2013 07:28 PM, Morten Rasmussen wrote:
> > On Sat, Jan 05, 2013 at 08:37:40AM +0000, Alex Shi wrote:
> >> effective_load() calculates the load change as seen from the
> >> root_task_group. It needs to be multiplied by the cfs_rq's
> >> tg_runnable_contrib when we switch to runnable load average balancing.
> >>
> >> Signed-off-by: Alex Shi <alex.shi@intel.com>
> >> ---
> >> kernel/sched/fair.c | 11 ++++++++---
> >> 1 file changed, 8 insertions(+), 3 deletions(-)
> >>
> >> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> >> index cab62aa..247d6a8 100644
> >> --- a/kernel/sched/fair.c
> >> +++ b/kernel/sched/fair.c
> >> @@ -2982,7 +2982,8 @@ static void task_waking_fair(struct task_struct *p)
> >>
> >> #ifdef CONFIG_FAIR_GROUP_SCHED
> >> /*
> >> - * effective_load() calculates the load change as seen from the root_task_group
> >> + * effective_load() calculates the runnable load average change as seen from
> >> + * the root_task_group
> >> *
> >> * Adding load to a group doesn't make a group heavier, but can cause movement
> >> * of group shares between cpus. Assuming the shares were perfectly aligned one
> >> @@ -3030,13 +3031,17 @@ static void task_waking_fair(struct task_struct *p)
> >> * Therefore the effective change in loads on CPU 0 would be 5/56 (3/8 - 2/7)
> >> * times the weight of the group. The effect on CPU 1 would be -4/56 (4/8 -
> >> * 4/7) times the weight of the group.
> >> + *
> >> + * After getting the effective_load of the load movement, multiply it by
> >> + * the cpu's own cfs_rq's runnable contrib as seen from the root_task_group.
> >> */
> >> static long effective_load(struct task_group *tg, int cpu, long wl, long wg)
> >> {
> >> struct sched_entity *se = tg->se[cpu];
> >>
> >> if (!tg->parent) /* the trivial, non-cgroup case */
> >> - return wl;
> >> + return wl * tg->cfs_rq[cpu]->tg_runnable_contrib
> >> + >> NICE_0_SHIFT;
> >
> > Why do we need to scale the load of the task (wl) by runnable_contrib
> > when the task is in the root task group? Wouldn't the load change still
> > just be wl?
> >
>
> Here, wl is the load weight, while runnable_contrib accounts for the
> runnable time.
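
For context, the patch's scaling works out like this (hypothetical
numbers; tg_runnable_contrib saturates at NICE_0_LOAD, i.e.
1 << NICE_0_SHIFT, when the cfs_rq is runnable all of the time):

	long wl = 1024;		/* nice-0 task load weight */
	long contrib = 512;	/* cfs_rq runnable ~50% of the time */
	long wl_scaled = wl * contrib >> NICE_0_SHIFT;	/* == 512 */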

Yes, wl is the load weight of the task. But I don't understand why you
multiply it by the tg_runnable_contrib of the group you want to insert
it into. Since effective_load() is supposed to return the load change
caused by adding the task to the cpu, it would make more sense to
multiply by the runnable_avg_sum / runnable_avg_period of the task in
question.
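
A minimal sketch of that alternative, using the 3.8-era per-entity load
tracking fields (illustrative only, not a tested patch; the helper name
task_runnable_load() is made up):

	static long task_runnable_load(struct task_struct *p)
	{
		struct sched_avg *avg = &p->se.avg;

		/* no history yet: fall back to the raw load weight */
		if (!avg->runnable_avg_period)
			return p->se.load.weight;

		/* load.weight scaled by the task's own runnable fraction */
		return (long)p->se.load.weight * avg->runnable_avg_sum /
			avg->runnable_avg_period;
	}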

Morten

> >>
> >> for_each_sched_entity(se) {
> >> long w, W;
> >> @@ -3084,7 +3089,7 @@ static long effective_load(struct task_group *tg, int cpu, long wl, long wg)
> >> wg = 0;
> >> }
> >>
> >> - return wl;
> >> + return wl * tg->cfs_rq[cpu]->tg_runnable_contrib >> NICE_0_SHIFT;
> >
> > I believe that effective_load() is only used in wake_affine() to compare
> > load scenarios of the same task group. Since the task group is the same,
> > the effective load is scaled by the same factor, so the scaling should
> > not make any difference?
> >
> > Also, in wake_affine() the result of effective_load() is added to
> > target_load(), which is the load.weight of the cpu and not a tracked
> > load based on runnable_avg_*/contrib?
> >
> > Finally, you have not scaled the result of effective_load() in the
> > function used when FAIR_GROUP_SCHED is disabled. Should that be scaled
> > too?
>
> It should be, thanks for the reminder.
>
> The wakeup path does not do well on burst wakeup benchmarks. I am
> thinking of rewriting this part.
>
>
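
To make the wake_affine() point quoted above concrete, this is roughly
how the 3.8-era wake_affine() combines the per-cpu weights with
effective_load() (heavily condensed, ignoring the sync hint and the
imbalance_pct weighting; see kernel/sched/fair.c for the real code):

	static int wake_affine(struct sched_domain *sd, struct task_struct *p)
	{
		int this_cpu = smp_processor_id();
		int prev_cpu = task_cpu(p);
		struct task_group *tg = task_group(p);
		unsigned long weight = p->se.load.weight;
		unsigned long load, this_load;

		/* both based on cpu_rq(cpu)->load.weight, not on averages */
		load = source_load(prev_cpu, sd->wake_idx);
		this_load = target_load(this_cpu, sd->wake_idx);

		/*
		 * effective_load() results are added to these raw weights,
		 * so both terms need to use the same load metric for the
		 * comparison below to be meaningful.
		 */
		this_load += effective_load(tg, this_cpu, weight, weight);
		load += effective_load(tg, prev_cpu, 0, weight);

		return this_load <= load;
	}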


