Date: 2013-06-24 18:21
From: Alex Shi
Subject: Re: [Resend patch v8 0/13] use runnable load in schedule balance
On 06/24/2013 06:40 PM, Paul Turner wrote:
>> > Ingo & Peter,
>> >
>> > This patch set was discussed widely and deeply.
>> >
>> > Now only the 6th/8th patches still have arguments against them. Paul
>> > thinks it is better to consider blocked_load_avg in balance, since it is
>> > helpful in some scenarios, but I think that in most scenarios
>> > blocked_load_avg just causes load imbalance among cpus, and testing
>> > shows that with blocked_load_avg the performance is worse on some
>> > benchmarks. So I still prefer to keep it out of balance.
> I think you have perhaps misunderstood what I was trying to explain.
>
> I have no problems with not including blocked load in load-balance, in
> fact, I encouraged not accumulating it in an average of averages in
> CPU load.
>

Many thanks for the clarification!
> The problem is that your current approach has removed it both from
> load-balance _and_ from shares distribution; isolation matters as much
> as performance in the cgroup case (otherwise you would just not use
> cgroups). I would expect the latter to have quite negative effects on
> fairness, this is my primary concern.
>

So the argument is just about the patch 'sched/tg: remove blocked_load_avg in balance'. :)

I understand your correctness concern, but blocked_load_avg will still be decayed to zero within a few hundred ms, so that correctness only matters for a few hundred ms (while causing a performance drop), as the sketch below illustrates.
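
Just to illustrate the time scale, here is a standalone sketch (not kernel code); it assumes the per-entity load tracking half-life of 32ms (y^32 = 1/2) that the tracking code documents:

/*
 * Standalone sketch: how fast blocked load decays under per-entity
 * load tracking, assuming the 32ms half-life (y^32 = 1/2).
 */
#include <stdio.h>
#include <math.h>

int main(void)
{
	double y = pow(0.5, 1.0 / 32.0);	/* per-ms decay factor */
	int ms;

	for (ms = 0; ms <= 320; ms += 64)
		printf("after %3d ms: %.4f of the original blocked load\n",
		       ms, pow(y, ms));
	return 0;
}

After 320ms only about 1/1000 of the load remains, i.e. it is effectively zero within a few hundred ms.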
Also, blocked_load_avg is decayed to the same degree as the runnable load, which makes it a bit overweighted once a task has slept, since the task may be woken up on another cpu. So, to relieve this overweighting, could we use half or a quarter of blocked_load_avg's weight? Like the following:

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index ddbc19f..395f57c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1358,7 +1358,7 @@ static inline void __update_cfs_rq_tg_load_contrib(struct cfs_rq *cfs_rq,
 	struct task_group *tg = cfs_rq->tg;
 	s64 tg_contrib;
 
-	tg_contrib = cfs_rq->runnable_load_avg + cfs_rq->blocked_load_avg;
+	tg_contrib = cfs_rq->runnable_load_avg + cfs_rq->blocked_load_avg / 2;
 	tg_contrib -= cfs_rq->tg_load_contrib;
 
 	if (force_update || abs64(tg_contrib) > cfs_rq->tg_load_contrib / 8) {
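
To see what the halving would do to shares distribution (your isolation concern), here is a hypothetical, much-simplified example; it assumes shares are split roughly as tg_shares * contrib_i / tg_load, where contrib_i is what __update_cfs_rq_tg_load_contrib() publishes, and the cpu0/cpu1 load numbers are made up:

/*
 * Simplified illustration (not kernel code) of how the halved
 * blocked_load_avg affects per-cpu group shares.  The load numbers
 * are invented: cpu0's group tasks just blocked, cpu1's are runnable.
 */
#include <stdio.h>

static long contrib(long runnable, long blocked, int blocked_div)
{
	/* blocked_div == 0 means: drop blocked load entirely */
	return runnable + (blocked_div ? blocked / blocked_div : 0);
}

int main(void)
{
	const long tg_shares = 1024;
	long runnable[2] = { 0, 512 };	/* cpu0 slept, cpu1 running */
	long blocked[2] = { 512, 0 };
	int div;

	for (div = 0; div <= 2; div += 2) {
		long c0 = contrib(runnable[0], blocked[0], div);
		long c1 = contrib(runnable[1], blocked[1], div);
		long tg_load = c0 + c1;

		printf("blocked_load_avg/%d: cpu0 share=%4ld cpu1 share=%4ld\n",
		       div, tg_shares * c0 / tg_load,
		       tg_shares * c1 / tg_load);
	}
	return 0;
}

With blocked load dropped entirely, cpu0's group loses all its weight the moment its tasks sleep (share 0 vs 1024); with the halved blocked load it keeps 341 of 1024, so wakeups on cpu0 still find some share there.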
>> >
>> > http://www.mail-archive.com/linux-kernel@vger.kernel.org/msg455196.html
>> >
>> > Is it time to make a decision, or to give more comments? Thanks!


--
Thanks
Alex

