Subject: [PATCH for v3.18.25 3/6] sched,rt: Remove return value from pull_rt_task()
From: Peter Zijlstra <peterz@infradead.org>

In order to be able to use pull_rt_task() from a callback, we need to
do away with the return value.

Since the return value indicates whether we should reschedule, do this
inside the function. Since not all callers currently do this, this can
increase the number of reschedules due to rt balancing.

Too many reschedules are not a correctness issue; too few are.
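
Concretely, a call site changes shape like this (a sketch of the
before/after convention only; the switched_from_rt() hunk below is the
real instance of it):

	/* Before: the caller inspects the return value and reschedules. */
	if (pull_rt_task(rq))
		resched_curr(rq);

	/* After: pull_rt_task() calls resched_curr() itself when it has
	 * pulled a task, so callers (and callbacks, which have no way to
	 * propagate a return value) simply invoke it. */
	pull_rt_task(rq);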

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: ktkhai@parallels.com
Cc: rostedt@goodmis.org
Cc: juri.lelli@gmail.com
Cc: pang.xunlei@linaro.org
Cc: oleg@redhat.com
Cc: wanpeng.li@linux.intel.com
Cc: umgwanakikbuti@gmail.com
Link: http://lkml.kernel.org/r/20150611124742.679002000@infradead.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Byungchul Park <byungchul.park@lge.com>

Conflicts:
kernel/sched/rt.c
---
kernel/sched/rt.c | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 5a91237..ce807aa 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -244,7 +244,7 @@ int alloc_rt_sched_group(struct task_group *tg, struct task_group *parent)
 
 #ifdef CONFIG_SMP
 
-static int pull_rt_task(struct rq *this_rq);
+static void pull_rt_task(struct rq *this_rq);
 
 static inline bool need_pull_rt_task(struct rq *rq, struct task_struct *prev)
 {
@@ -399,9 +399,8 @@ static inline bool need_pull_rt_task(struct rq *rq, struct task_struct *prev)
 	return false;
 }
 
-static inline int pull_rt_task(struct rq *this_rq)
+static inline void pull_rt_task(struct rq *this_rq)
 {
-	return 0;
 }
 
 static inline void queue_push_tasks(struct rq *rq)
@@ -1757,14 +1756,15 @@ static void push_rt_tasks(struct rq *rq)
 		;
 }
 
-static int pull_rt_task(struct rq *this_rq)
+static void pull_rt_task(struct rq *this_rq)
 {
-	int this_cpu = this_rq->cpu, ret = 0, cpu;
+	int this_cpu = this_rq->cpu, cpu;
+	bool resched = false;
 	struct task_struct *p;
 	struct rq *src_rq;
 
 	if (likely(!rt_overloaded(this_rq)))
-		return 0;
+		return;
 
 	/*
	 * Match the barrier from rt_set_overloaded; this guarantees that if we
@@ -1821,7 +1821,7 @@ static int pull_rt_task(struct rq *this_rq)
 			if (p->prio < src_rq->curr->prio)
 				goto skip;
 
-			ret = 1;
+			resched = true;
 
 			deactivate_task(src_rq, p, 0);
 			set_task_cpu(p, this_cpu);
@@ -1837,7 +1837,8 @@ skip:
 		double_unlock_balance(this_rq, src_rq);
 	}
 
-	return ret;
+	if (resched)
+		resched_curr(this_rq);
 }
 
 /*
@@ -1933,8 +1934,7 @@ static void switched_from_rt(struct rq *rq, struct task_struct *p)
 	if (!p->on_rq || rq->rt.rt_nr_running)
 		return;
 
-	if (pull_rt_task(rq))
-		resched_curr(rq);
+	pull_rt_task(rq);
 }
 
 void __init init_sched_rt_class(void)
--
1.9.1

