Subject: Re: Loadavg accounting error on arm64

On Mon, Nov 16, 2020 at 11:49:38AM +0000, Mel Gorman wrote:
> On Mon, Nov 16, 2020 at 09:10:54AM +0000, Mel Gorman wrote:
> > I'll be looking again today to see whether I can find a mistake in the
> > ordering of how sched_contributes_to_load is handled, but again, my lack
> > of knowledge of the arm64 memory model means I'm a bit stuck and a second
> > set of eyes would be nice :(
> >
>
> This morning, it's not particularly clear what orders the visibility of
> sched_contributes_to_load, the way other task fields are ordered, between
> the schedule and try_to_wake_up paths. I thought the rq lock would have
> ordered them, but something is clearly off or loadavg would not be getting
> screwed. It could be done with an rmb and wmb (under test, and it hasn't
> blown up so far), but that's far too heavy.
> smp_load_acquire/smp_store_release on the field might be sufficient,
> although it's less clear whether arm64 gives the necessary guarantees.
>
> (This is still at the stage of chucking out ideas, as I haven't context
> switched all the memory barrier rules back in.)
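
For illustration only (this is not the patch that was tested, and the
placement against ->on_cpu is an assumption made for the sketch), the
rmb/wmb variant described above would pair a write barrier in the schedule
path with a read barrier in the wakeup path, roughly:

    /* schedule() side, illustrative placement only */
    prev->sched_contributes_to_load = X;
    smp_wmb();                      /* order the flag write before ->on_cpu */
    WRITE_ONCE(prev->on_cpu, 0);

    /* wakeup side, illustrative placement only */
    while (READ_ONCE(p->on_cpu))
        cpu_relax();
    smp_rmb();                      /* order ->on_cpu before the flag read */
    if (p->sched_contributes_to_load)
        ...

smp_store_release()/smp_load_acquire() would express the same pairing as
part of the store and load themselves rather than as separate barriers.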

IIRC it should be so ordered by ->on_cpu.

We have:

    schedule()
        prev->sched_contributes_to_load = X;
        smp_store_release(&prev->on_cpu, 0);

on the one hand, and:

    sched_ttwu_pending()
        if (WARN_ON_ONCE(p->on_cpu))
            smp_cond_load_acquire(&p->on_cpu, !VAL);

    ttwu_do_activate()
        if (p->sched_contributes_to_load)
            ...

on the other (for the remote case, which is the only 'interesting' one).
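
To see why that release/acquire pairing is sufficient, here is a standalone
analogue using plain C11 atomics and pthreads in place of the kernel
primitives; the names are invented for the example and it only sketches the
ordering argument, not the scheduler code:

    #include <assert.h>
    #include <pthread.h>
    #include <stdatomic.h>

    /* Stand-ins for p->sched_contributes_to_load and p->on_cpu. */
    static int contributes_to_load;
    static atomic_int on_cpu = 1;

    /* schedule() analogue: write the flag, then release "on_cpu". */
    static void *prev_cpu(void *arg)
    {
        contributes_to_load = 1;
        /* like smp_store_release(&prev->on_cpu, 0) */
        atomic_store_explicit(&on_cpu, 0, memory_order_release);
        return NULL;
    }

    /* sched_ttwu_pending() analogue: acquire "on_cpu", then read the flag. */
    static void *waking_cpu(void *arg)
    {
        /* like smp_cond_load_acquire(&p->on_cpu, !VAL) */
        while (atomic_load_explicit(&on_cpu, memory_order_acquire))
            ;
        /*
         * The acquire pairs with the release above, so the flag written
         * before the release must be visible here.
         */
        assert(contributes_to_load == 1);
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;

        pthread_create(&a, NULL, prev_cpu, NULL);
        pthread_create(&b, NULL, waking_cpu, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return 0;
    }

The store to contributes_to_load is ordered before the release store to
on_cpu, and the acquire load that observes 0 pairs with that release, so the
assert cannot fire. Without that pairing, a weakly ordered CPU such as arm64
could let the wakeup side observe on_cpu == 0 while still missing the flag,
which is the kind of lost sched_contributes_to_load update that would skew
the loadavg accounting.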
