Date: Mon, 06 May 2013 13:39:53 +0800
From: Alex Shi <>
Subject: Re: [PATCH v5 7/7] sched: consider runnable load average in effective_load
On 05/06/2013 11:34 AM, Michael Wang wrote:
>> > @@ -3045,7 +3045,7 @@ static long effective_load(struct task_group *tg, int cpu, long wl, long wg)
>> >  	/*
>> >  	 * w = rw_i + @wl
>> >  	 */
>> > -	w = se->my_q->load.weight + wl;
>> > +	w = se->my_q->tg_load_contrib + wl;
>
> I've tested the patch set; it seems the last patch caused a big
> regression on pgbench:
>
>                        base         patch 1~6    patch 1~7
> | db_size | clients |  tps  |    |  tps  |    |  tps  |
> +---------+---------+-------+    +-------+    +-------+
> | 22 MB   |      32 | 43420 |    | 53387 |    | 41625 |
>
> I guess some magic happened in effective_load() while calculating the
> group decay combined with the load decay; what's your opinion?
Thanks for testing, Michael!
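For context, the changed line sits in effective_load()'s walk up the
sched_entity hierarchy. A from-memory sketch of the surrounding loop
(not verbatim; the exact code in your tree may differ):

	for_each_sched_entity(se) {
		long w, W;

		tg = se->my_q->tg;

		/* W = @wg + \Sum rw_j: total weight of the group */
		W = wg + calc_tg_weight(tg, se->my_q);

		/* w = rw_i + @wl: this cpu's queue weight plus the change */
		w = se->my_q->load.weight + wl;	/* <- the line patch 7 touches */

		/* scale the group shares by this cpu's fraction w/W */
		if (W > 0 && w < W)
			wl = (w * tg->shares) / W;
		else
			wl = tg->shares;
		...
	}

So both fixes below are about keeping w and the W computed in
calc_tg_weight() consistent with each other.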
Maybe two fixes are worth trying.
1. Change tg_weight in calc_tg_weight() back to using tg_load_contrib
instead of the instantaneous load.weight:
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 6f4f14b..c770f8d 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1037,8 +1037,8 @@ static inline long calc_tg_weight(struct task_group *tg, struct cfs_rq *cfs_rq)
 	 * update_cfs_rq_load_contribution().
 	 */
 	tg_weight = atomic64_read(&tg->load_avg);
-	tg_weight -= cfs_rq->tg_load_contrib;
-	tg_weight += cfs_rq->load.weight;
+	//tg_weight -= cfs_rq->tg_load_contrib;
+	//tg_weight += cfs_rq->load.weight;
 
 	return tg_weight;
 }
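With that change, calc_tg_weight() would return the pure decayed group
load. A sketch of the resulting function under the diff above (assuming
nothing else in it changes):

	static inline long calc_tg_weight(struct task_group *tg,
					  struct cfs_rq *cfs_rq)
	{
		long tg_weight;

		/*
		 * tg->load_avg already includes this cfs_rq's decayed
		 * tg_load_contrib, so use it as-is rather than swapping
		 * in the instantaneous cfs_rq->load.weight.
		 */
		tg_weight = atomic64_read(&tg->load_avg);

		return tg_weight;
	}

Then W and the patched w = se->my_q->tg_load_contrib + wl would both be
built from decayed averages.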
2. Alternatively, follow the current calc_tg_weight() and drop the
following change from patch 7:

>> > @@ -3045,7 +3045,7 @@ static long effective_load(struct task_group *tg, int cpu, long wl, long wg)
>> >  	/*
>> >  	 * w = rw_i + @wl
>> >  	 */
>> > -	w = se->my_q->load.weight + wl;
>> > +	w = se->my_q->tg_load_contrib + wl;
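The point of keeping the load.weight version is consistency: w and W
then use the same kind of load for the local cpu. Roughly (assuming the
unmodified calc_tg_weight() shown under fix 1):

	/*
	 * With patch 7 reverted, the local cpu's term is instantaneous
	 * on both sides of the w/W ratio:
	 *
	 *	W = wg + tg->load_avg - cfs_rq->tg_load_contrib
	 *	       + cfs_rq->load.weight;
	 *	w = cfs_rq->load.weight + wl;
	 *
	 * while patch 7 mixed a decayed w (tg_load_contrib) with this
	 * partly-instantaneous W; maybe that mismatch is what hurt
	 * pgbench.
	 */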
Would you like to try them?
--
Thanks
    Alex