Subject: Re: [RT] BUG in sched/cpupri.c
On 20/12/21 18:35, Dietmar Eggemann wrote:
> diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
> index ef8228d19382..798887f1eeff 100644
> --- a/kernel/sched/rt.c
> +++ b/kernel/sched/rt.c
> @@ -1895,9 +1895,17 @@ static int push_rt_task(struct rq *rq, bool pull)
>          struct task_struct *push_task = NULL;
>          int cpu;
>
> +        if (WARN_ON_ONCE(!rt_task(rq->curr))) {
> +                printk("next_task=[%s %d] rq->curr=[%s %d]\n",
> +                       next_task->comm, next_task->pid, rq->curr->comm, rq->curr->pid);
> +        }
> +
>          if (!pull || rq->push_busy)
>                  return 0;
>
> +        if (!rt_task(rq->curr))
> +                return 0;
> +

If current is a DL/stopper task, fair enough; but if it's CFS (which IIUC is
your case), that's buggered: we shouldn't be trying to pull RT tasks when we
already have queued RT tasks and a less-than-RT current; we should be
rescheduling right now.
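
To spell the condition out: a purely illustrative helper (my naming, not part
of the patch below) for the state being complained about would look like:

  /*
   * Illustrative only: runnable RT tasks are queued on this rq while
   * rq->curr runs in a lower class (e.g. CFS). In that state the right
   * response is resched_curr(rq), not an RT pull.
   */
  static inline bool rq_rt_work_but_lower_class_curr(struct rq *rq)
  {
          return rq->rt.rt_nr_running && !rt_task(rq->curr);
  }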

I'm thinking this can happen via rt_mutex_setprio() when we demote an RT-boosted
CFS task (or straight up sched_setscheduler()):
check_class_changed()->switched_from_rt() doesn't trigger a resched_curr(),
so I suspect we get to the push/pull callback before getting a
resched (I actually don't see where we'd get a resched in that case other
than at the next tick).
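
For reference, the path in question looks roughly like the below (paraphrased
from kernel/sched/core.c, details may differ between versions); the
switched_from_rt() leg only queues a pull callback and never calls
resched_curr() itself:

  static inline void check_class_changed(struct rq *rq, struct task_struct *p,
                                         const struct sched_class *prev_class,
                                         int oldprio)
  {
          if (prev_class != p->sched_class) {
                  if (prev_class->switched_from)
                          prev_class->switched_from(rq, p); /* e.g. switched_from_rt() */

                  p->sched_class->switched_to(rq, p);
          } else if (oldprio != p->prio || dl_task(p)) {
                  p->sched_class->prio_changed(rq, p, oldprio);
          }
  }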

IOW, feels like we want the below. Unfortunately I can't reproduce the
issue locally (yet), so that's untested.

---
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index fd7c4f972aaf..7d61ceec1a3b 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -2467,10 +2467,13 @@ static void switched_from_dl(struct rq *rq, struct task_struct *p)
          * this is the right place to try to pull some other one
          * from an overloaded CPU, if any.
          */
-        if (!task_on_rq_queued(p) || rq->dl.dl_nr_running)
+        if (!task_on_rq_queued(p))
                 return;
 
-        deadline_queue_pull_task(rq);
+        if (!rq->dl.dl_nr_running)
+                deadline_queue_pull_task(rq);
+        else if (task_current(rq, p) && (p->sched_class < &dl_sched_class))
+                resched_curr(rq);
 }
 
 /*
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index ef8228d19382..1ea2567612fb 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -2322,10 +2322,13 @@ static void switched_from_rt(struct rq *rq, struct task_struct *p)
          * we may need to handle the pulling of RT tasks
          * now.
          */
-        if (!task_on_rq_queued(p) || rq->rt.rt_nr_running)
+        if (!task_on_rq_queued(p))
                 return;
 
-        rt_queue_pull_task(rq);
+        if (!rq->rt.rt_nr_running)
+                rt_queue_pull_task(rq);
+        else if (task_current(rq, p) && (p->sched_class < &rt_sched_class))
+                resched_curr(rq);
 }
 
 void __init init_sched_rt_class(void)