Subject: Re: 5.6-rc3: WARNING: CPU: 48 PID: 17435 at kernel/sched/fair.c:380 enqueue_task_fair+0x328/0x440
From: Dietmar Eggemann <dietmar.eggemann@arm.com>
Date: Wed, 4 Mar 2020
Hi Christian,

On 04/03/2020 18:42, Christian Borntraeger wrote:
>
>
> On 04.03.20 16:26, Vincent Guittot wrote:
>> On Tue, 3 Mar 2020 at 08:55, Vincent Guittot <vincent.guittot@linaro.org> wrote:
>>>
>>> On Tue, 3 Mar 2020 at 08:37, Christian Borntraeger
>>> <borntraeger@de.ibm.com> wrote:
>>>>
>>>>
>>>>
>> [...]
>>>>>>> ---
>>>>>>> kernel/sched/fair.c | 2 +-
>>>>>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>>>>>
>>>>>>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>>>>>>> index 3c8a379c357e..beb773c23e7d 100644
>>>>>>> --- a/kernel/sched/fair.c
>>>>>>> +++ b/kernel/sched/fair.c
>>>>>>> @@ -4035,8 +4035,8 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
>>>>>>> __enqueue_entity(cfs_rq, se);
>>>>>>> se->on_rq = 1;
>>>>>>>
>>>>>>> + list_add_leaf_cfs_rq(cfs_rq);
>>>>>>> if (cfs_rq->nr_running == 1) {
>>>>>>> - list_add_leaf_cfs_rq(cfs_rq);
>>>>>>> check_enqueue_throttle(cfs_rq);
>>>>>>> }
>>>>>>> }
>>>>>>
>>>>>> Now running for 3 hours. I have not seen the issue yet. I can tell tomorrow if this fixes
>>>>>> the issue.
>>>>>
>>>>>
>>>>> Still running fine. I can tell for sure tomorrow, but I have the impression that this makes the
>>>>> WARN_ON go away.
>>>>
>>>> So I guess this change "fixed" the issue. If you want me to test additional patches, let me know.
>>>
>>> Thanks for the test. For now, I don't have any other patch to test. I
>>> have to look more deeply into how this situation happens.
>>> I will let you know if I have another patch to test.
>>
>> So I haven't been able to figure out how we reach this situation yet.
>> In the meantime I'm going to make a clean patch with the fix above.
>>
>> Is it OK if I add a Reported-by and a Tested-by from you?
>
> Sure.
> I just realized that this system has something special. Some months ago I created 2 slices:
> $ head /etc/systemd/system/*.slice
> ==> /etc/systemd/system/machine-production.slice <==
> [Unit]
> Description=VM production
> Before=slices.target
> Wants=machine.slice
> [Slice]
> CPUQuota=2000%
> CPUWeight=1000
>
> ==> /etc/systemd/system/machine-test.slice <==
> [Unit]
> Description=VM production
> Before=slices.target
> Wants=machine.slice
> [Slice]
> CPUQuota=300%
> CPUWeight=100
>
>
> And the guests are then put into these slices. That also means that this test will never use more than 2300%,
> no matter how many CPUs the system has.
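
For reference (an addition of mine, not from the quoted mail): with the cgroup v1 cpu controller, systemd implements CPUQuota via CFS bandwidth and CPUWeight via cpu.shares, so, assuming the default 100ms period, the two slices should roughly end up as:

  CPUQuota=2000%, CPUWeight=1000  ->  cpu.cfs_quota_us=2000000, cpu.cfs_period_us=100000, cpu.shares=10240
  CPUQuota=300%,  CPUWeight=100   ->  cpu.cfs_quota_us=300000,  cpu.cfs_period_us=100000, cpu.shares=1024

In other words, both slices are subject to CFS bandwidth throttling, which is the machinery the debug patch below instruments.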

If you could run this debug patch on top of your unpatched kernel, it would tell us which task (in the enqueue case)
and which task group is causing this.

You could then further dump the appropriate task group directory under the cpu cgroup mountpoint
(to see e.g. the CFS bandwidth data).
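
A hedged example of what to dump (assuming a cgroup v1 cpu controller mounted at /sys/fs/cgroup/cpu,cpuacct and the slice names from your mail; adjust the path to whatever task group the printk reports):

  $ cd /sys/fs/cgroup/cpu,cpuacct/machine.slice/machine-production.slice
  $ grep . cpu.cfs_period_us cpu.cfs_quota_us cpu.shares cpu.stat

cpu.stat should show nr_periods/nr_throttled/throttled_time, i.e. whether that group is actually being throttled.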

I expect more than one hit from these printk's, since assert_list_leaf_cfs_rq() uses SCHED_WARN_ON, hence WARN_ONCE, and the warning itself only fires once.
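
For context, this is roughly what the assertion boils down to (paraphrased from kernel/sched/sched.h and kernel/sched/fair.c of that era; check your tree for the exact definitions):

  /* kernel/sched/sched.h (approximate) */
  #ifdef CONFIG_SCHED_DEBUG
  # define SCHED_WARN_ON(x)	WARN_ONCE(x, #x)
  #else
  # define SCHED_WARN_ON(x)	({ (void)(x), false; })
  #endif

  /* kernel/sched/fair.c (approximate) */
  static inline void assert_list_leaf_cfs_rq(struct rq *rq)
  {
  	SCHED_WARN_ON(rq->tmp_alone_branch != &rq->leaf_cfs_rq_list);
  }

So the WARN itself fires at most once per boot, while the printk's added above trigger on every occurrence.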

--8<--
From b709758f476ee4cfc260eceedc45ebcc50d93074 Mon Sep 17 00:00:00 2001
From: Dietmar Eggemann <dietmar.eggemann@arm.com>
Date: Sat, 29 Feb 2020 11:07:05 +0000
Subject: [PATCH] test: rq->tmp_alone_branch != &rq->leaf_cfs_rq_list

Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
---
kernel/sched/fair.c | 21 +++++++++++++++++++++
1 file changed, 21 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 3c8a379c357e..69fc30db7440 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4619,6 +4619,15 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
break;
}

+ if (rq->tmp_alone_branch != &rq->leaf_cfs_rq_list) {
+ char path[64];
+
+ sched_trace_cfs_rq_path(cfs_rq, path, 64);
+
+ printk("CPU%d path=%s on_list=%d nr_running=%d\n",
+ cpu_of(rq), path, cfs_rq->on_list, cfs_rq->nr_running);
+ }
+
assert_list_leaf_cfs_rq(rq);

if (!se)
@@ -5320,6 +5329,18 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
}
}

+ if (rq->tmp_alone_branch != &rq->leaf_cfs_rq_list) {
+ char path[64];
+
+ cfs_rq = cfs_rq_of(&p->se);
+
+ sched_trace_cfs_rq_path(cfs_rq, path, 64);
+
+ printk("CPU%d path=%s on_list=%d nr_running=%d p=[%s %d]\n",
+ cpu_of(rq), path, cfs_rq->on_list, cfs_rq->nr_running,
+ p->comm, p->pid);
+ }
+
assert_list_leaf_cfs_rq(rq);

hrtick_update(rq);
--
2.17.1
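
A possible way to run it (the patch file name below is made up; any method of applying the diff and rebuilding the kernel works):

  $ git am 0001-test-rq-tmp_alone_branch.patch    # hypothetical file name
  $ # rebuild, boot, reproduce the workload, then watch for the debug output:
  $ dmesg -w | grep 'path='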