 
Subject: Re: [PATCH 2/2] [PATCH] sched: Add smp_rmb() in task rq locking cycles
On Sun, Apr 26, 2015 at 03:52:13AM -0700, Paul E. McKenney wrote:

> And then an smp_read_barrier_depends() would be needed either here
> or embedded in spin_unlock_wait(). But we also need to check the
> spin_unlock_wait() implementations to see if any are potentially
> vulnerable to compiler misbehavior due to lack of ACCESS_ONCE(),
> READ_ONCE(), or other sources of the required volatility:
>

> o tile: For 32-bit, looks like a bug. Compares ->current_ticket and
> ->next_ticket with no obvious protection. The compiler is free to
> load them in either order, so it is possible that the two fields
> could compare equal despite never having actually been equal at
> any given time. Needs something like arm, arm64, mips, or x86
> to do single fetch, then compare fields in quantity fetched.
>
> Except that this appears to be using int on a 32-bit system,
> thus might not have a 64-bit load. If that is the case, the
> trick would be to load them in order. Except that this can be
> defeated by overflow. Are there really 32-bit tile systems with
> enough CPUs to overflow an unsigned short?
>
> For 64-bit, a READ_ONCE() appears to be in order -- no obvious
> volatility present.
>
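
For reference, a minimal sketch of the "single fetch, then compare fields in
the quantity fetched" pattern Paul describes for arm/arm64/mips/x86. This is
not actual tile code; the type and field names below are hypothetical, and a
local READ_ONCE() stand-in is defined so the snippet is self-contained:

	/*
	 * Sketch only: illustrates reading the whole lock word with one
	 * volatile load and then comparing the ticket fields inside that
	 * single snapshot.
	 */
	#include <stdint.h>

	#define READ_ONCE(x)	(*(volatile typeof(x) *)&(x))

	typedef union {
		uint32_t head_tail;		/* both tickets in one 32-bit word */
		struct {
			uint16_t current_ticket;
			uint16_t next_ticket;
		};
	} example_ticket_lock_t;

	static inline void example_spin_unlock_wait(example_ticket_lock_t *lock)
	{
		for (;;) {
			example_ticket_lock_t tmp;

			/* One volatile load: both fields come from the same
			 * fetch, so they cannot compare equal unless they
			 * really were equal at that instant. */
			tmp.head_tail = READ_ONCE(lock->head_tail);
			if (tmp.current_ticket == tmp.next_ticket)
				break;
			/* spin */
		}
	}

The point of the single fetch is that the comparison is made against a
self-consistent snapshot; separate loads of the two fields, even with
READ_ONCE() on each, could still observe values taken at different times.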

Chris?

