Subject: Re: [RFC PATCH 1/2] Fix: sched/membarrier: p->mm->membarrier_state racy load

On Wed, Sep 04, 2019 at 11:19:00AM -0400, Mathieu Desnoyers wrote:
> ----- On Sep 3, 2019, at 4:36 PM, Linus Torvalds torvalds@linux-foundation.org wrote:

> > I wonder if the easiest model might be to just use a percpu variable
> > instead for the membarrier stuff? It's not like it has to be in
> > 'struct task_struct' at all, I think. We only care about the current
> > runqueues, and those are percpu anyway.
>
> One issue here is that membarrier iterates over all runqueues without
> grabbing any runqueue lock. If we copy that state from mm to rq on
> sched switch prepare, we would need to ensure we have the proper
> memory barriers between:
>
> prior user-space memory accesses / setting the runqueue membarrier state
>
> and
>
> setting the runqueue membarrier state / following user-space memory accesses
>
> Copying the membarrier state into the task struct leverages the fact that
> we have documented and guaranteed those barriers around the rq->curr update
> in the scheduler.

Should be the same as the barriers we already rely on for rq->curr, no?
That is, if we put this before switch_mm() then we have
smp_mb__after_spinlock() and switch_mm() itself.
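
Roughly, with rq->membarrier_state standing in for the hypothetical
per-runqueue copy (a sketch of the ordering, not the actual __schedule()
code):

        /*
         * Full barrier; orders prev's user-space accesses before
         * everything done below.
         */
        rq_lock(rq, &rf);
        smp_mb__after_spinlock();

        /* ... pick next ... */

        if (likely(prev != next)) {
                /*
                 * The copy is ordered against prev's prior user-space
                 * accesses by smp_mb__after_spinlock() above, and against
                 * next's subsequent user-space accesses by switch_mm()
                 * itself.
                 */
                if (next->mm)
                        WRITE_ONCE(rq->membarrier_state,
                                   atomic_read(&next->mm->membarrier_state));

                rq->curr = next;
                /* context_switch() -> switch_mm_irqs_off() ... */
        }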

Also, if we place mm->membarrier_state in the same cacheline as mm->pgd
(which switch_mm() is bound to load) then we should be fine, I think.
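
I.e. something along these lines (fields abbreviated, just sketching the
placement):

        struct mm_struct {
                /* ... */
                pgd_t *pgd;                     /* switch_mm() has to load this */
                atomic_t membarrier_state;      /* same cacheline as pgd */
                /* ... */
        };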
