From: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Subject: Re: [RFC PATCH 1/4] sched: Disable lb_bias feature for full dynticks

On Thu, Jun 20, 2013 at 10:45:38PM +0200, Frederic Weisbecker wrote:
> When we run in full dynticks mode, we currently have no way to
> correctly update the secondary decaying indexes of the CPU
> load stats, as these are normally maintained by update_cpu_load_active()
> at each tick.
>
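For readers following along: each scheduler tick folds the instantaneous
runqueue load into a set of moving averages with progressively heavier
decay, which is what the "secondary decaying indexes" above refers to.
Here is a minimal user-space model of that per-tick update, simplified
from the kernel's __update_cpu_load(); the standalone harness and names
below are illustrative, not kernel code:

#include <stdio.h>

#define NR_LOAD_IDX 5	/* rq->cpu_load[] has 5 indexes */

/*
 * Model of the per-tick update: index i is a moving average with
 * decay factor (2^i - 1) / 2^i, so higher indexes react more slowly.
 */
static void update_cpu_load_model(unsigned long cpu_load[NR_LOAD_IDX],
				  unsigned long this_load)
{
	int i;

	cpu_load[0] = this_load;	/* index 0 is the raw current load */
	for (i = 1; i < NR_LOAD_IDX; i++) {
		unsigned long scale = 1UL << i;
		unsigned long old_load = cpu_load[i];
		unsigned long new_load = this_load;

		/* Round up when load rises so increases show up at once. */
		if (new_load > old_load)
			new_load += scale - 1;
		cpu_load[i] = (old_load * (scale - 1) + new_load) >> i;
	}
}

int main(void)
{
	unsigned long load[NR_LOAD_IDX] = { 0 };
	int tick;

	/* Ten ticks at load 1024, then ten ticks idle. */
	for (tick = 0; tick < 20; tick++)
		update_cpu_load_model(load, tick < 10 ? 1024 : 0);

	printf("cpu_load: %lu %lu %lu %lu %lu\n",
	       load[0], load[1], load[2], load[3], load[4]);
	return 0;
}

Without a tick, this update simply never runs, which is the problem
being described.
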
> We have an existing infrastructure that handles tickless loads
> (cf. decay_load_missed()), but it only works for idle tickless
> loads, i.e. it only applies if the CPU has run nothing but the
> idle task over the tickless period.
>
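The idle case is tractable because the load contribution over the
missed ticks is known to be zero, so the decay can be replayed after
the fact. A naive model of what decay_load_missed() computes follows
(the kernel uses precomputed degrade-factor tables instead of a loop;
this harness is illustrative):

#include <stdio.h>

/* Replay 'missed' idle ticks of decay on index 'idx'. */
static unsigned long decay_load_missed_model(unsigned long load,
					     unsigned long missed, int idx)
{
	/* Each missed idle tick multiplies load by (2^idx - 1) / 2^idx. */
	while (missed--)
		load = (load * ((1UL << idx) - 1)) >> idx;
	return load;
}

int main(void)
{
	/* A load of 1024 at index 3, after 8 ticks spent fully idle. */
	printf("%lu\n", decay_load_missed_model(1024, 8, 3));
	return 0;
}

With a busy full-dynticks CPU, the load over the missed ticks is
unknown, so no such closed-form catch-up applies.
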
> Until we can provide a sane mathematical solution to handle full
> dynticks loads, let's simply deactivate the LB_BIAS sched feature
> under CONFIG_NO_HZ_FULL, as it is currently the only user of the
> decayed load records.
>
> The first load index, which represents the current runqueue load
> weight, is still maintained and usable.
>
> Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
> Cc: Ingo Molnar <mingo@kernel.org>
> Cc: Li Zhong <zhong@linux.vnet.ibm.com>
> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>

Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>

> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Steven Rostedt <rostedt@goodmis.org>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Borislav Petkov <bp@alien8.de>
> Cc: Alex Shi <alex.shi@intel.com>
> Cc: Paul Turner <pjt@google.com>
> Cc: Mike Galbraith <efault@gmx.de>
> Cc: Vincent Guittot <vincent.guittot@linaro.org>
> ---
> kernel/sched/fair.c | 13 +++++++++++--
> kernel/sched/features.h | 3 +++
> 2 files changed, 14 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index c0ac2c3..2e8df6f 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -2937,6 +2937,15 @@ static unsigned long weighted_cpuload(const int cpu)
> return cpu_rq(cpu)->load.weight;
> }
>
> +static inline int sched_lb_bias(void)
> +{
> +#ifndef CONFIG_NO_HZ_FULL
> + return sched_feat(LB_BIAS);
> +#else
> + return 0;
> +#endif
> +}
> +
> /*
> * Return a low guess at the load of a migration-source cpu weighted
> * according to the scheduling class and "nice" value.
> @@ -2949,7 +2958,7 @@ static unsigned long source_load(int cpu, int type)
> struct rq *rq = cpu_rq(cpu);
> unsigned long total = weighted_cpuload(cpu);
>
> - if (type == 0 || !sched_feat(LB_BIAS))
> + if (type == 0 || !sched_lb_bias())
> return total;
>
> return min(rq->cpu_load[type-1], total);
> @@ -2964,7 +2973,7 @@ static unsigned long target_load(int cpu, int type)
> struct rq *rq = cpu_rq(cpu);
> unsigned long total = weighted_cpuload(cpu);
>
> - if (type == 0 || !sched_feat(LB_BIAS))
> + if (type == 0 || !sched_lb_bias())
> return total;
>
> return max(rq->cpu_load[type-1], total);
> diff --git a/kernel/sched/features.h b/kernel/sched/features.h
> index 99399f8..635f902 100644
> --- a/kernel/sched/features.h
> +++ b/kernel/sched/features.h
> @@ -43,7 +43,10 @@ SCHED_FEAT(ARCH_POWER, true)
>
> SCHED_FEAT(HRTICK, false)
> SCHED_FEAT(DOUBLE_TICK, false)
> +
> +#ifndef CONFIG_NO_HZ_FULL
> SCHED_FEAT(LB_BIAS, true)
> +#endif
>
> /*
> * Decrement CPU power based on time not spent running tasks
> --
> 1.7.5.4
>


