Subject: Re: [PATCH 4/4] sched: bias to target cpu load to reduce task moving
On Tue, Dec 17, 2013 at 02:10:12PM +0000, Morten Rasmussen wrote:
> > @@ -4135,7 +4141,7 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p, int this_cpu)
> > if (local_group)
> > load = source_load(i);
> > else
> > - load = target_load(i);
> > + load = target_load(i, sd->imbalance_pct);
>
> Don't you apply imbalance_pct twice here? Later on in
> find_idlest_group() you have:
>
> if (!idlest || 100*this_load < imbalance*min_load)
> return NULL;
>
> where min_load comes from target_load().

Yes, exactly! Applying imbalance_pct twice here doesn't make any sense.
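
For the record, here is a minimal standalone sketch (userspace C, illustrative
names only, not the actual fair.c code) of the double application Morten
describes, assuming the patched target_load() scales the raw CPU load by
imbalance_pct/100 as the quoted hunk suggests:

	/*
	 * Illustrative sketch only, not the kernel code. It assumes the
	 * patched target_load() biases the remote CPU's load upward by
	 * imbalance_pct/100; all names and numbers are made up.
	 */
	#include <stdio.h>

	/* Patched helper as the hunk implies: bias the remote load upward. */
	static unsigned long target_load(unsigned long raw_load, int imbalance_pct)
	{
		return raw_load * imbalance_pct / 100;	/* 1st application */
	}

	int main(void)
	{
		int imbalance_pct = 125;		/* e.g. a typical sd->imbalance_pct */
		unsigned long this_load = 1000;		/* local group load  */
		unsigned long min_load  = target_load(900, imbalance_pct); /* remote */

		/*
		 * find_idlest_group()-style check: min_load already carries one
		 * factor of imbalance_pct, and the comparison multiplies it in
		 * again (2nd application).
		 */
		if (100 * this_load < imbalance_pct * min_load)
			printf("stay local: remote not cheap enough\n");
		else
			printf("move to remote group\n");

		return 0;
	}

With imbalance_pct = 125 the remote group would have to be roughly
(125/100)^2 = 1.56x lighter than the local one before find_idlest_group()
picks it, a much larger bias than the single imbalance_pct the check was
written for.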

