Subject: Re: [PATCH v4 12/12] sched,signal,ptrace: Rework TASK_TRACED, TASK_STOPPED state
On Tue, Jun 28, 2022 at 10:39:59PM -0500, Eric W. Biederman wrote:

> > That is, the two paths should already be synchronized, and the memory
> > barriers will not help anything inside the locks. The locking should (and
> > must) handle all that.
>
> I would presume so too. However, the READ_ONCE that is going astray
> does not look like it is honoring that.
>
> So perhaps there is a bug in the s390 spin_lock barriers? Perhaps there
> is a subtle detail in the barriers that spin locks provide that we are
> overlooking?

So the thing is, s390 is, like x86, a TSO architecture with SC atomics.
Or at least it used to be; I'm not entirely solid on the Z196 features.

I've been looking at this and I can't find anything obviously wrong.
arch_spin_trylock_once() has what seems to be a spurious barrier(), but
that's not going to cause this.

Specifically, s390 is using a simple test-and-set spinlock based on
their Compare-and-Swap (CS) instruction (so no Z196 funnies there).
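
For reference, a minimal sketch of what such a CS-style test-and-set
lock boils down to (not the actual arch/s390 code; the ts_* names are
made up and C11 atomics stand in for the CS instruction):

#include <stdatomic.h>
#include <stdbool.h>

struct ts_spinlock {
	atomic_int lock;		/* 0 = free, non-zero = owner cookie */
};

static inline bool ts_trylock(struct ts_spinlock *lp, int owner)
{
	int old = 0;

	/* one compare-and-swap: 0 -> owner, acquire on success */
	return atomic_compare_exchange_strong_explicit(&lp->lock, &old, owner,
						       memory_order_acquire,
						       memory_order_relaxed);
}

static inline void ts_lock(struct ts_spinlock *lp, int owner)
{
	/* spin until the compare-and-swap succeeds */
	while (!ts_trylock(lp, owner))
		while (atomic_load_explicit(&lp->lock, memory_order_relaxed))
			;	/* back-off / cpu_relax() would go here */
}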

The exception is perhaps arch_spin_unlock(); I can't grok the magic
there. It does something weird before the presumably regular TSO store
of 0 into the lock word.
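
That is, on a TSO machine I would expect the unlock to be nothing more
than a release store of 0 into the lock word, something like (same
hypothetical ts_* sketch as above):

static inline void ts_unlock(struct ts_spinlock *lp)
{
	/* plain release store; on TSO this compiles to a regular store */
	atomic_store_explicit(&lp->lock, 0, memory_order_release);
}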

Ooohh.. /me finds arch_spin_lock_queued().. *urfh* because obviously a
copy of queued spinlocks is what we need.

rwlock_t OTOH is using __atomic_add_*() and that's all Z196 magic.
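
(For comparison, a generic sketch of the add-based reader fast path
that kind of rwlock implies; the ts_* names and the write-bias value
are my own invention, not the s390 ones:)

#include <stdatomic.h>

#define TS_WRITE_BIAS	0x80000000u	/* hypothetical writer-held bit */

struct ts_rwlock {
	atomic_uint cnts;		/* reader count plus writer bias */
};

static inline void ts_read_lock(struct ts_rwlock *rw)
{
	/* readers take the lock with a single atomic add ... */
	while (atomic_fetch_add_explicit(&rw->cnts, 1,
					 memory_order_acquire) & TS_WRITE_BIAS) {
		/* ... and back out and wait if a writer is in */
		atomic_fetch_sub_explicit(&rw->cnts, 1, memory_order_relaxed);
		while (atomic_load_explicit(&rw->cnts,
					    memory_order_relaxed) & TS_WRITE_BIAS)
			;
	}
}

static inline void ts_write_lock(struct ts_rwlock *rw)
{
	unsigned int old = 0;

	/* writer swings the word from 0 to the bias value */
	while (!atomic_compare_exchange_weak_explicit(&rw->cnts, &old,
						      TS_WRITE_BIAS,
						      memory_order_acquire,
						      memory_order_relaxed))
		old = 0;
}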

Sven, does all this still reproduce if you take out
CONFIG_HAVE_MARCH_Z196_FEATURES? Also, could you please explain the
Z196 bits, or point me to the relevant section in the PoO? Additionally,
what's that _niai[48] stuff?

And I'm assuming s390 has hardware fairness on competing CS?
