Subject: Re: [Resend patch v8 0/13] use runnable load in schedule balance
On 06/20/2013 10:18 AM, Alex Shi wrote:
> Resend of the patchset for more convenient pick up.
> This patch set combines the 'use runnable load in balance' series and the
> 'change 64bit variables to long type' series. It also collects the
> Reviewed-bys and Tested-bys.
>
> The only code change is a fix for the load to load_avg conversion in UP mode,
> which was found by PeterZ in task_h_load().
>
> Paul still has some concern about blocked_load_avg being left out of the
> balance consideration, but I haven't seen the blocked_load_avg usage thought
> through yet, nor a strong reason to bring it into balancing.
> So, based on the benchmark testing results, I am keeping the patches unchanged.

Ingo & Peter,

This patchset has been discussed widely and in depth.

Now only the 6th and 8th patches still have open arguments. Paul thinks it is
better to consider blocked_load_avg in balancing, since it is helpful in
some scenarios, but I think that in most scenarios blocked_load_avg
just causes load imbalance among CPUs; in addition, testing shows that with
blocked_load_avg the performance is worse on some benchmarks. So I
still prefer to keep it out of balancing.

http://www.mail-archive.com/linux-kernel@vger.kernel.org/msg455196.html
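
To make the disagreement concrete, here is a small user-space sketch (not the
actual kernel code; load_for_balance(), struct cfs_rq_stats and the example
values are made up for illustration). If the per-CPU load figure fed to the
balancer includes blocked_load_avg, a CPU whose tasks are mostly sleeping
still looks heavily loaded, so the balancer may try to even out "load" that
is not actually runnable:

#include <stdbool.h>
#include <stdio.h>

/* Illustrative only -- these names are not the kernel's. */
struct cfs_rq_stats {
	unsigned long runnable_load_avg;	/* load of tasks currently runnable */
	unsigned long blocked_load_avg;		/* decayed load of tasks that are sleeping */
};

/*
 * Load value fed to the balancer. The position taken by this patchset:
 * use only runnable_load_avg, so balancing reflects work that can run now.
 */
static unsigned long load_for_balance(const struct cfs_rq_stats *cfs,
				      bool include_blocked)
{
	unsigned long load = cfs->runnable_load_avg;

	if (include_blocked)
		load += cfs->blocked_load_avg;
	return load;
}

int main(void)
{
	/* CPU0: one runnable task; CPU1: one runnable task plus many sleepers. */
	struct cfs_rq_stats cpu0 = { .runnable_load_avg = 1024, .blocked_load_avg = 0 };
	struct cfs_rq_stats cpu1 = { .runnable_load_avg = 1024, .blocked_load_avg = 3072 };

	printf("runnable only: cpu0=%lu cpu1=%lu -> looks balanced\n",
	       load_for_balance(&cpu0, false), load_for_balance(&cpu1, false));
	printf("with blocked : cpu0=%lu cpu1=%lu -> cpu1 looks 4x busier\n",
	       load_for_balance(&cpu0, true), load_for_balance(&cpu1, true));
	return 0;
}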

Is it time to make a decision, or to give more comments? Thanks!
>
> Regards
> Alex
>
> [Resend patch v8 01/13] Revert "sched: Introduce temporary
> [Resend patch v8 02/13] sched: move few runnable tg variables into
> [Resend patch v8 03/13] sched: set initial value of runnable avg for
> [Resend patch v8 04/13] sched: fix slept time double counting in
> [Resend patch v8 05/13] sched: update cpu load after task_tick.
> [Resend patch v8 06/13] sched: compute runnable load avg in cpu_load
> [Resend patch v8 07/13] sched: consider runnable load average in
> [Resend patch v8 08/13] sched/tg: remove blocked_load_avg in balance
> [Resend patch v8 09/13] sched: change cfs_rq load avg to unsigned
> [Resend patch v8 10/13] sched/tg: use 'unsigned long' for load
> [Resend patch v8 11/13] sched/cfs_rq: change atomic64_t removed_load
> [Resend patch v8 12/13] sched/tg: remove tg.load_weight
> [Resend patch v8 13/13] sched: get_rq_runnable_load() can be static
>


--
Thanks
Alex

