From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Subject: [RFC/RFT][PATCH v2 2/6] sched: idle: Do not stop the tick upfront in the idle loop

Push the decision whether or not to stop the tick somewhat deeper
into the idle loop.

Stopping the tick upfront leads to unpleasant outcomes if the idle
governor doesn't agree with the timekeeping code on the duration of
the upcoming idle period. Specifically, if the tick has been stopped
and the idle governor predicts a short idle duration, the situation
is bad regardless of whether or not the prediction is accurate. If it
is accurate, the tick has been stopped unnecessarily, which means
excessive overhead. If it is not accurate, the CPU is likely to
spend too much time in the (shallow, because a short idle duration
has been predicted) idle state selected by the governor [1].
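For illustration only (a condensed paraphrase of the current code,
not part of this patch), the problematic ordering today is roughly

	__current_set_polling();
	tick_nohz_idle_enter();		/* the tick may already be stopped here */

	while (!need_resched()) {
		...
		cpuidle_idle_call();	/* the governor picks an idle state only here */
		...
	}

so by the time the governor makes its prediction, the stop-the-tick
decision has already been taken.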

As the first step towards addressing this problem, change the code
to make the tick-stopping decision inside the loop in do_idle().
In particular, do not stop the tick in the cpu_idle_poll() code path.
Also don't stop it in tick_nohz_irq_exit(), which doesn't really have
enough information on whether or not the tick should be stopped.
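For reference, the resulting flow in do_idle() (condensed from the
diff below; comments added here for illustration) becomes

	__current_set_polling();
	tick_nohz_idle_prepare();

	while (!need_resched()) {
		...
		if (cpu_idle_force_poll || tick_check_broadcast_expired()) {
			tick_nohz_idle_go_idle(false);	/* keep the tick running */
			cpu_idle_poll();
		} else {
			tick_nohz_idle_go_idle(true);	/* the tick may be stopped */
			cpuidle_idle_call();
		}
		arch_cpu_idle_exit();
	}

so the polling path and tick_nohz_irq_exit() no longer stop the tick,
and the stop-the-tick call now sits right next to cpuidle_idle_call(),
where subsequent changes can refine the decision.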

Link: https://marc.info/?l=linux-pm&m=150116085925208&w=2 # [1]
Link: https://tu-dresden.de/zih/forschung/ressourcen/dateien/projekte/haec/powernightmares.pdf
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---

-> v2: No changes.

---
kernel/sched/idle.c | 13 ++++++++++---
kernel/time/tick-sched.c | 2 +-
2 files changed, 11 insertions(+), 4 deletions(-)

Index: linux-pm/kernel/sched/idle.c
===================================================================
--- linux-pm.orig/kernel/sched/idle.c
+++ linux-pm/kernel/sched/idle.c
@@ -220,13 +220,17 @@ static void do_idle(void)
 	 */
 
 	__current_set_polling();
-	tick_nohz_idle_enter();
+	tick_nohz_idle_prepare();
 
 	while (!need_resched()) {
 		check_pgt_cache();
 		rmb();
 
 		if (cpu_is_offline(cpu)) {
+			local_irq_disable();
+			tick_nohz_idle_go_idle(true);
+			local_irq_enable();
+
 			cpuhp_report_idle_dead();
 			arch_cpu_idle_dead();
 		}
@@ -240,10 +244,13 @@ static void do_idle(void)
 		 * broadcast device expired for us, we don't want to go deep
 		 * idle as we know that the IPI is going to arrive right away.
 		 */
-		if (cpu_idle_force_poll || tick_check_broadcast_expired())
+		if (cpu_idle_force_poll || tick_check_broadcast_expired()) {
+			tick_nohz_idle_go_idle(false);
 			cpu_idle_poll();
-		else
+		} else {
+			tick_nohz_idle_go_idle(true);
 			cpuidle_idle_call();
+		}
 		arch_cpu_idle_exit();
 	}
 
Index: linux-pm/kernel/time/tick-sched.c
===================================================================
--- linux-pm.orig/kernel/time/tick-sched.c
+++ linux-pm/kernel/time/tick-sched.c
@@ -1007,7 +1007,7 @@ void tick_nohz_irq_exit(void)
 	struct tick_sched *ts = this_cpu_ptr(&tick_cpu_sched);
 
 	if (ts->inidle)
-		__tick_nohz_idle_enter(ts, true);
+		__tick_nohz_idle_enter(ts, false);
 	else
 		tick_nohz_full_update_tick(ts);
 }