Date: 2022-11-07
From: Sebastian Andrzej Siewior
Subject: Re: Crash with PREEMPT_RT on aarch64 machine
On 2022-11-07 17:30:16 [+0100], Jan Kara wrote:
> On Mon 07-11-22 16:10:34, Sebastian Andrzej Siewior wrote:
> > + locking, arm64
> >
> > On 2022-11-07 14:56:36 [+0100], Jan Kara wrote:
> > > > spinlock_t and raw_spinlock_t differ slightly in terms of locking.
> > > > rt_spin_lock() has the fast path via try_cmpxchg_acquire(). If you
> > > > enable CONFIG_DEBUG_RT_MUTEXES then you would force the slow path which
> > > > always acquires the rt_mutex_base::wait_lock (which is a raw_spinlock_t)
> > > > while the actual lock is modified via cmpxchg.
> > >
> > > So I've tried enabling CONFIG_DEBUG_RT_MUTEXES and indeed the corruption
> > > stops happening as well. So do you suspect some bug in the CPU itself?
> >
> > If it is only enabling CONFIG_DEBUG_RT_MUTEXES (and not the whole of
> > lockdep) then it looks very suspicious.
>
> Just to confirm, CONFIG_DEBUG_RT_MUTEXES is the only thing I've enabled and
> the list corruption disappeared.
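
For reference, the fast path in question is more or less the following
(a simplified sketch, not the verbatim code from kernel/locking/
spinlock_rt.c / rtmutex.c):

	static __always_inline void rtlock_lock(struct rt_mutex_base *rtm)
	{
		struct task_struct *owner = NULL;

		/*
		 * Fast path: one 64bit cmpxchg on ::owner, wait_lock is not
		 * touched. With CONFIG_DEBUG_RT_MUTEXES the cmpxchg helper
		 * is compiled to always fail, so every acquisition goes
		 * through the slowpath and takes ::wait_lock.
		 */
		if (unlikely(!try_cmpxchg_acquire(&rtm->owner, &owner, current)))
			rtlock_slowlock(rtm);
	}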

I don't know if this works, but: if you create task_struct_cachep with
SLAB_CACHE_DMA32 then the task_struct pointers should only have the
lower 32bit set. With this you could make rt_mutex_base::owner an
atomic_t. You could then replace try_cmpxchg_acquire() with
atomic_try_cmpxchg_acquire() and do the 32bit cmpxchg. You would then
need to set the constant upper 32bit of the pointer when returning it.
I have no idea how much sense this makes, but you would avoid the 64bit
cmpxchg, making the two lock types a little more alike :)
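
Very rough and completely untested sketch of what I mean
(TASK_PTR_HIGH is a made-up name here, it would be whatever the
constant upper 32bit of the task_struct pointers turns out to be):

	/*
	 * Illustration only: assumes task_struct_cachep is created with
	 * SLAB_CACHE_DMA32 so the variable part of a task pointer fits
	 * into 32bit, and rt_mutex_base::owner becomes an atomic_t.
	 */
	static __always_inline struct task_struct *rt_mutex_owner_decode(int val)
	{
		/* Re-attach the constant upper 32bit when returning the pointer. */
		return (struct task_struct *)(TASK_PTR_HIGH | (u32)val);
	}

	static __always_inline bool rt_mutex_try_acquire32(struct rt_mutex_base *lock)
	{
		int old = 0;

		/* 32bit cmpxchg instead of the 64bit one on the full pointer. */
		return atomic_try_cmpxchg_acquire(&lock->owner, &old,
						  (int)(unsigned long)current);
	}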

> Honza
>

Sebastian
