Date:	Fri, 3 Jul 2020 12:40:33 +0200
From:	Peter Zijlstra <>
Subject: Re: weird loadavg on idle machine post 5.7
On Fri, Jul 03, 2020 at 11:02:26AM +0200, Peter Zijlstra wrote:
> On Thu, Jul 02, 2020 at 10:36:27PM +0100, Mel Gorman wrote:
>
> > > commit c6e7bd7afaeb3af55ffac122828035f1c01d1d7b (refs/bisect/bad)
> > > Author: Peter Zijlstra <peterz@infradead.org>
>
> > Peter, I'm not supremely confident about this but could it be because
> > "p->sched_contributes_to_load = !!task_contributes_to_load(p)" potentially
> > happens while a task is still being dequeued? In the final stages of a
> > task switch we have
> >
> >	prev_state = prev->state;
> >	vtime_task_switch(prev);
> >	perf_event_task_sched_in(prev, current);
> >	finish_task(prev);
> >
> > finish_task is when p->on_cpu is cleared after the state is updated.
> > With the patch, we potentially update sched_contributes_to_load while
> > p->state is transient so if the check below is true and ttwu_queue_wakelist
> > is used then sched_contributes_to_load was based on a transient value
> > and potentially wrong.
>
> I'm not seeing it. Once a task hits schedule(), p->state doesn't change,
> except through wakeup.
>
> And while dequeue depends on p->state, it doesn't change it.
>
> At this point in ttwu() we know p->on_rq == 0, which implies dequeue has
> started, which means we've (at least) stopped executing the task -- we
> started or finished schedule().
>
> Let me stare at this more...
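To restate the window under discussion (a rough sketch, simplified from
kernel/sched/core.c of that era):

	/*
	 * CPU0 (prev, going to sleep)		CPU1 (try_to_wake_up(p))
	 *
	 * p->state = TASK_UNINTERRUPTIBLE;
	 * schedule()
	 *   deactivate_task()
	 *     p->on_rq = 0;			LOAD p->on_rq (observes 0)
	 *   context_switch()
	 *     finish_task()			task_contributes_to_load(p)
	 *       p->on_cpu = 0;			  (reads p->state)
	 *
	 * Once CPU1 observes p->on_rq == 0, dequeue has started and
	 * p->state can only change through a wakeup; the question is
	 * whether the p->state load is actually ordered after the
	 * p->on_rq load.
	 */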
So ARM/Power/etc. can speculate the load such that the task_contributes_to_load() value is from before the ->on_rq load.
The compiler might similarly re-order things -- although I've not found it doing so in the few builds I looked at.
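To illustrate the reordering, a litmus-test sketch in the herd7 C style
(my simplification, not from the thread: "state" stands for p->state,
"dequeued" for the on_rq transition, and the write side's smp_wmb()
stands in for the rq->lock ordering in schedule()):

	C ttwu-stale-state

	(*
	 * P0 is the task going to sleep: it sets TASK_UNINTERRUPTIBLE
	 * and then dequeues itself.  P1 is try_to_wake_up(): it sees
	 * the dequeue and then samples the state.  Without a barrier
	 * between P1's two loads the "exists" clause is allowed on
	 * ARM/Power: P1 observes the dequeue yet a stale, pre-sleep
	 * state.  Inserting smp_rmb() between P1's loads, which is
	 * what moving the task_contributes_to_load() read below the
	 * existing smp_rmb() amounts to, forbids it; smp_rmb() also
	 * implies a compiler barrier, covering that case too.
	 *)

	{}

	P0(int *state, int *dequeued)
	{
		WRITE_ONCE(*state, 2);		/* TASK_UNINTERRUPTIBLE */
		smp_wmb();			/* stands in for rq->lock ordering */
		WRITE_ONCE(*dequeued, 1);	/* models p->on_rq = 0 */
	}

	P1(int *state, int *dequeued)
	{
		int r0;
		int r1;

		r0 = READ_ONCE(*dequeued);
		r1 = READ_ONCE(*state);		/* task_contributes_to_load() */
	}

	exists (1:r0=1 /\ 1:r1=0)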
So I think at the very least we should do something like this. But I've no idea how to reproduce this problem.
Mel's patch placed it too far down: the WF_ON_CPU path also relies on this, and by not resetting p->sched_contributes_to_load it would skew the accounting even worse.
---
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index fcd56f04b706..cba8a56d0f7f 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2799,9 +2799,6 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
 	}
 
 #ifdef CONFIG_SMP
-	p->sched_contributes_to_load = !!task_contributes_to_load(p);
-	p->state = TASK_WAKING;
-
 	/*
 	 * Ensure we load p->on_cpu _after_ p->on_rq, otherwise it would be
 	 * possible to, falsely, observe p->on_cpu == 0.
@@ -2823,6 +2820,9 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
 	 */
 	smp_rmb();
 
+	p->sched_contributes_to_load = !!task_contributes_to_load(p);
+	p->state = TASK_WAKING;
+
 	/*
 	 * If the owning (remote) CPU is still in the middle of schedule() with
 	 * this task as prev, considering queueing p on the remote CPUs wake_list
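Schematically, the wakeup side then does (a sketch, not the full
function):

	/*
	 * LOAD p->on_rq			(observes 0: dequeue started)
	 * ...
	 * smp_rmb()				(orders the loads below)
	 * p->sched_contributes_to_load = !!task_contributes_to_load(p);
	 * p->state = TASK_WAKING;
	 * ...					(WF_ON_CPU / wake_list paths
	 *					 rely on the value set above)
	 */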