Date: Fri, 25 Jan 2013 09:03:31 +0800
Subject: Re: [PATCH 2/4] sched: compute runnable load avg in cpu_load and cpu_avg_load_per_task
From: Alex Shi <>
On Thu, Jan 24, 2013 at 11:16 PM, Alex Shi <alex.shi@intel.com> wrote:
> On 01/24/2013 06:08 PM, Ingo Molnar wrote:
>>
>> * Alex Shi <alex.shi@intel.com> wrote:
>>
>>> @@ -2539,7 +2539,11 @@ static void __update_cpu_load(struct rq *this_rq, unsigned long this_load,
>>>  void update_idle_cpu_load(struct rq *this_rq)
>>>  {
>>>  	unsigned long curr_jiffies = ACCESS_ONCE(jiffies);
>>> +#if defined(CONFIG_SMP) && defined(CONFIG_FAIR_GROUP_SCHED)
>>> +	unsigned long load = (unsigned long)this_rq->cfs.runnable_load_avg;
>>> +#else
>>>  	unsigned long load = this_rq->load.weight;
>>> +#endif
>>
>> I'd not make it conditional - just calculate runnable_load_avg
>> all the time (even if group scheduling is disabled) and use it
>> consistently. The last thing we want is to bifurcate scheduler
>> balancer behavior even further.
>
> Very glad to see you back, Ingo! :)
>
> This patch set follows my power-aware scheduling patchset. But a
> separate, workable runnable-load-based balancing only needs the
> other 3 patches, which I already sent you in another patchset:
>
> [patch v4 06/18] sched: give initial value for runnable avg of sched
> [patch v4 07/18] sched: set initial load avg of new forked task
> [patch v4 08/18] Revert "sched: Introduce temporary FAIR_GROUP_SCHED
You're right, Ingo! The last revert patch missed the above 2 points. I will resend new patches as a full version.
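
For reference, a minimal sketch (mine, not from the thread) of the unconditional variant Ingo is asking for: the #ifdef disappears and the load is always read from the tracked average through a small helper. The helper name get_rq_runnable_load() is hypothetical, and the sketch assumes cfs.runnable_load_avg is maintained even with CONFIG_FAIR_GROUP_SCHED disabled, which is what reverting the temporary dependency would provide:

/* Hypothetical helper, not from the thread: always report the
 * tracked, decayed runnable load of this runqueue, regardless of
 * whether group scheduling is configured. */
static unsigned long get_rq_runnable_load(struct rq *rq)
{
	return (unsigned long)rq->cfs.runnable_load_avg;
}

void update_idle_cpu_load(struct rq *this_rq)
{
	unsigned long curr_jiffies = ACCESS_ONCE(jiffies);
	unsigned long load = get_rq_runnable_load(this_rq);
	/* ... decay and update this_rq->cpu_load[] as before ... */
}

This keeps a single balancer code path, so SMP load balancing behaves the same with and without CONFIG_FAIR_GROUP_SCHED.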