Date: Tue, 04 Jun 2013 10:44:42 +0800
From: Alex Shi <>
Subject: Re: [DISCUSSION] removing variety rq->cpu_load ?
On 06/04/2013 10:33 AM, Michael Wang wrote:
> Hi, Alex
>
> On 06/04/2013 09:51 AM, Alex Shi wrote:
>> resend with a new subject.
>
> Forgive me but I'm a little lost on this thread...
>
> So we are planning to rely on the instant 'cpu_load[0]' and the decayed
> 'runnable_load_avg' only, do we?
cpu_load[] is a set of time-decayed cpu load values, but after the runnable
load average was introduced, that decay functionality largely duplicates it.
So removing cpu_load[] will make the code simpler. The idea was raised by
Paul, Peter and me.
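For reference, the duplication looks roughly like this: each cpu_load[]
index keeps a geometrically decayed history of the instant load. Below is a
minimal sketch of the idea behind __update_cpu_load() in kernel/sched/core.c
(simplified from memory, not the exact kernel code, which also rounds rising
loads up so increases show promptly):

	#define CPU_LOAD_IDX_MAX 5

	/*
	 * Sketch: each index i tracks a slower-moving average,
	 *   cpu_load[i] = cpu_load[i] * (2^i - 1)/2^i + this_load / 2^i
	 * so higher indexes decay more slowly.
	 */
	static void update_cpu_load_sketch(unsigned long cpu_load[CPU_LOAD_IDX_MAX],
					   unsigned long this_load)
	{
		int i;

		cpu_load[0] = this_load;	/* index 0 is the instant load */
		for (i = 1; i < CPU_LOAD_IDX_MAX; i++) {
			unsigned long scale = 1UL << i;

			cpu_load[i] = (cpu_load[i] * (scale - 1) + this_load) >> i;
		}
	}

Since per-entity load tracking already gives runnable_load_avg its own
geometrically decayed history, the cpu_load[1..4] averages carry much the
same information.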
The following is Peter's comment on this:
> Agreed, esp. the plethora of weird idx things we currently have. If we need to
> re-introduce something it would likely only be the busy case and for that we
> can immediately link to the balance interval or so.
>
> Regards,
> Michael Wang
>
>>
>>> Peter,
>>>
>>> I just tried to remove the variety rq.cpu_load with the following patch.
>>> Because forkexec_idx and busy_idx are all zero, after the patch the
>>> system just keeps cpu_load[0] and drops the other values.
>>> I tried the patch on 3.10-rc3 and the latest tip/sched/core with the
>>> benchmarks dbench, tbench, aim7, hackbench, and oltp of sysbench.
>>> Performance doesn't seem to change clearly.
>>> So for my tested machines (core2, NHM, SNB, with 2 or 4 CPU sockets) and
>>> the above benchmarks, we are fine to remove the variety of cpu_load.
>>> I don't know if there are other concerns in other scenarios.
>>>
>>> ---
>>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>>> index 590d535..f0ca983 100644
>>> --- a/kernel/sched/fair.c
>>> +++ b/kernel/sched/fair.c
>>> @@ -4626,7 +4626,7 @@ static inline void update_sd_lb_stats(struct lb_env *env,
>>>  	if (child && child->flags & SD_PREFER_SIBLING)
>>>  		prefer_sibling = 1;
>>>  
>>> -	load_idx = get_sd_load_idx(env->sd, env->idle);
>>> +	load_idx = 0; //get_sd_load_idx(env->sd, env->idle);
>>>  
>>>  	do {
>>>  		int local_group;
>>>
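For context on the hunk above: get_sd_load_idx() only chooses which
cpu_load[] index the balancer then reads via source_load()/target_load(),
based on the sched domain's *_idx tunables. A sketch of the 3.10-era helper
in kernel/sched/fair.c (from memory, so treat it as illustrative; the real
function assigns to a local variable rather than returning directly):

	static inline int get_sd_load_idx(struct sched_domain *sd,
					  enum cpu_idle_type idle)
	{
		switch (idle) {
		case CPU_NOT_IDLE:
			return sd->busy_idx;	/* busy balancing */
		case CPU_NEWLY_IDLE:
			return sd->newidle_idx;	/* cpu just went idle */
		default:
			return sd->idle_idx;	/* periodic idle balance */
		}
	}

With the relevant indexes already zero on the tested setups (as noted above
for forkexec_idx and busy_idx), hard-coding load_idx = 0 just makes that
default explicit, which is why no clear performance change is expected.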
--
Thanks
    Alex