Subject: [PATCH 27/32] sched: Update clock of nohz busiest rq before balancing
From: Frederic Weisbecker <fweisbec@gmail.com>

move_tasks() and active_load_balance_cpu_stop() both need
the busiest rq clock to be up to date, because they may end
up calling can_migrate_task(), which uses rq->clock_task
to determine whether a task on the busiest runqueue
is cache hot.

Hence, if the busiest runqueue is tickless, update its clock
before reading it.
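
For illustration only (not part of the patch): a minimal userspace
sketch of the cache-hot test that can_migrate_task() relies on. The
types and names below (fake_rq, fake_task, task_hot) are simplified
stand-ins, not the kernel's actual definitions; the 0.5 ms threshold
mirrors the default sysctl_sched_migration_cost. The point is only
the comparison of rq->clock_task against the task's exec_start.

/*
 * Illustrative sketch, NOT kernel code: shows why a stale rq clock
 * makes a long-idle task still look cache hot.
 */
#include <stdio.h>
#include <stdint.h>

/* 0.5 ms in ns: default value of sysctl_sched_migration_cost */
static const int64_t sysctl_sched_migration_cost = 500000;

struct fake_rq {
	uint64_t clock_task;	/* ns; advances only when the rq clock is updated */
};

struct fake_task {
	uint64_t exec_start;	/* ns; last time the task started executing */
};

/* Simplified stand-in for task_hot(): is the task's cache still warm? */
static int task_hot(const struct fake_task *p, uint64_t now)
{
	int64_t delta = (int64_t)(now - p->exec_start);

	return delta < sysctl_sched_migration_cost;
}

int main(void)
{
	struct fake_task p = { .exec_start = 1000000000ULL };	/* ran at t = 1s */
	struct fake_rq busiest;

	/*
	 * Tickless CPU: the rq clock was last updated just after the task
	 * started, so it barely moved even though 10 ms of real time passed.
	 */
	busiest.clock_task = 1000100000ULL;	/* stale: t = 1s + 0.1 ms */
	printf("stale clock: task_hot = %d\n", task_hot(&p, busiest.clock_task));

	/* After the clock is refreshed, it reflects real time again. */
	busiest.clock_task = 1010000000ULL;	/* fresh: t = 1s + 10 ms */
	printf("fresh clock: task_hot = %d\n", task_hot(&p, busiest.clock_task));

	return 0;
}

With the stale clock the delta since exec_start looks tiny, so the task
is reported cache hot and can_migrate_task() would refuse to move it;
once the clock is refreshed the same task is correctly seen as cold.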

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Alessio Igor Bogani <abogani@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Avi Kivity <avi@redhat.com>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: Geoff Levand <geoff@infradead.org>
Cc: Gilad Ben Yossef <gilad@benyossef.com>
Cc: Hakan Akkan <hakanakkan@gmail.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Kevin Hilman <khilman@ti.com>
Cc: Max Krasnyansky <maxk@qualcomm.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephen Hemminger <shemminger@vyatta.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Sven-Thorsten Dietrich <thebigcorporation@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
[ Forward port conflicts ]
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
kernel/sched/fair.c | 17 +++++++++++++++++
1 file changed, 17 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index f320922..a63e641 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4231,6 +4231,7 @@ static int load_balance(int this_cpu, struct rq *this_rq,
 {
 	int ld_moved, cur_ld_moved, active_balance = 0;
 	int lb_iterations, max_lb_iterations;
+	int clock_updated;
 	struct sched_group *group;
 	struct rq *busiest;
 	unsigned long flags;
@@ -4274,6 +4275,7 @@ redo:
 
 	ld_moved = 0;
 	lb_iterations = 1;
+	clock_updated = 0;
 	if (busiest->nr_running > 1) {
 		/*
 		 * Attempt to move tasks. If find_busiest_group has found
@@ -4297,6 +4299,14 @@ more_balance:
 		 */
 		cur_ld_moved = move_tasks(&env);
 		ld_moved += cur_ld_moved;
+
+		/*
+		 * move_tasks() may end up calling can_migrate_task() which
+		 * requires an uptodate value of the rq clock.
+		 */
+		update_nohz_rq_clock(busiest);
+		clock_updated = 1;
+
 		double_rq_unlock(env.dst_rq, busiest);
 		local_irq_restore(flags);
 
@@ -4392,6 +4402,13 @@ more_balance:
 			busiest->active_balance = 1;
 			busiest->push_cpu = this_cpu;
 			active_balance = 1;
+			/*
+			 * active_load_balance_cpu_stop may end up calling
+			 * can_migrate_task() which requires an uptodate
+			 * value of the rq clock.
+			 */
+			if (!clock_updated)
+				update_nohz_rq_clock(busiest);
 		}
 		raw_spin_unlock_irqrestore(&busiest->lock, flags);
 
--
1.7.10.4


