Date: Mon, 14 Nov 2022 12:41:48 +0000
From: Will Deacon <>
Subject: Re: Crash with PREEMPT_RT on aarch64 machine
On Fri, Nov 11, 2022 at 03:27:42PM +0100, Jan Kara wrote:
> On Wed 09-11-22 16:40:23, Jan Kara wrote:
> > On Wed 09-11-22 12:57:57, Will Deacon wrote:
> > > On Mon, Nov 07, 2022 at 11:49:01AM -0500, Waiman Long wrote:
> > > > On 11/7/22 10:10, Sebastian Andrzej Siewior wrote:
> > > > > + locking, arm64
> > > > >
> > > > > On 2022-11-07 14:56:36 [+0100], Jan Kara wrote:
> > > > > > > spinlock_t and raw_spinlock_t differ slightly in terms of locking.
> > > > > > > rt_spin_lock() has the fast path via try_cmpxchg_acquire(). If you
> > > > > > > enable CONFIG_DEBUG_RT_MUTEXES then you would force the slow path which
> > > > > > > always acquires the rt_mutex_base::wait_lock (which is a raw_spinlock_t)
> > > > > > > while the actual lock is modified via cmpxchg.
> > > > > > So I've tried enabling CONFIG_DEBUG_RT_MUTEXES and indeed the corruption
> > > > > > stops happening as well. So do you suspect some bug in the CPU itself?
> > > > > If it is only enabling CONFIG_DEBUG_RT_MUTEXES (and not the whole of
> > > > > lockdep) then it looks very suspicious.
> > > > > CONFIG_DEBUG_RT_MUTEXES enables a few additional checks, but the main
> > > > > part is that rt_mutex_cmpxchg_acquire() + rt_mutex_cmpxchg_release()
> > > > > always fail (and so the slowpath under a raw_spinlock_t is taken).
> > > > >
> > > > > So if it is really the fast path (rt_mutex_cmpxchg_acquire()) then it
> > > > > somehow smells like the CPU is misbehaving.
> > > > >
> > > > > Could someone from the locking/arm64 department check whether the
> > > > > locking in RT-mutex (rtlock_lock()) is correct?
> > > > >
> > > > > rtmutex locking uses try_cmpxchg_acquire(, ptr, ptr) for the fastpath
> > > > > (and try_cmpxchg_release(, ptr, ptr) for unlock).
> > > > > Now looking at it again, I don't see much difference compared to what
> > > > > queued_spin_trylock() does, except that the latter always operates on a
> > > > > 32-bit value instead of a pointer.
> > > > Both the fast path of the queued spinlock and rt_spin_lock use
> > > > try_cmpxchg_acquire(); the only difference I saw is the size of the data
> > > > to be cmpxchg'ed. qspinlock uses a 32-bit integer whereas rt_spin_lock
> > > > uses a 64-bit pointer. So I believe it comes down to how arm64 does
> > > > cmpxchg. I believe there are two different ways of doing it depending on
> > > > whether LSE atomics are available on the platform. So exactly what arm64
> > > > system is being used here, and what hardware capabilities does it have?
> > > I'd be more inclined to be suspicious of the slowpath tbh, as we need to
> > > make sure that we have acquire semantics on all paths where the lock can
> > > be taken. Looking at the rtmutex code, this really isn't obvious to me --
> > > for example, try_to_take_rt_mutex() appears to be able to return via the
> > > 'takeit' label without acquire semantics and it looks like we might be
> > > relying on the caller's subsequent _unlock_ of the wait_lock for ordering,
> > > but that will give us release semantics which aren't correct.
> > >
> > > As a quick hack, can you try chucking a barrier into rt_mutex_set_owner()?
> >
> > Bingo! This patch fixes the crashes for me.
>
> So I suppose this is not an official fix, is it? Sebastian, it appears to
> be a bug in the rtmutex implementation in the end AFAIU ;)
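For context, the two fast paths being compared look like this, condensed from
kernel/locking/rtmutex.c and include/asm-generic/qspinlock.h of that era (the
helper and field names are the kernel's own; the comments are annotations
added here, not kernel text):

/* rtmutex fast path: acquire-ordered cmpxchg on the 64-bit owner pointer. */
static __always_inline bool rt_mutex_cmpxchg_acquire(struct rt_mutex_base *lock,
						     struct task_struct *old,
						     struct task_struct *new)
{
	return try_cmpxchg_acquire(&lock->owner, &old, new);
}

/* qspinlock fast path: acquire-ordered cmpxchg on the 32-bit lock word. */
static __always_inline int queued_spin_trylock(struct qspinlock *lock)
{
	int val = atomic_read(&lock->val);

	if (unlikely(val))
		return 0;

	return likely(atomic_try_cmpxchg_acquire(&lock->val, &val, _Q_LOCKED_VAL));
}

Both provide ACQUIRE ordering on success; the operand width (and hence the
LL/SC or LSE instruction sequence arm64 emits) is the only structural
difference, which is why the discussion turns to the slow path.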
Right, somebody needs to go audit all the acquisition paths on the slow-path and make sure they all have acquire semantics. The trick is doing that without incurring unnecessary overhead, e.g. by making use of dependency ordering where it already exists.
Will
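The slow path Will is pointing at, heavily condensed (the real
try_to_take_rt_mutex() in kernel/locking/rtmutex.c is far longer; the comment
paraphrases the analysis above rather than the kernel sources):

/* Slow path, called with lock->wait_lock (a raw_spinlock_t) held. */
static int __sched try_to_take_rt_mutex(struct rt_mutex_base *lock,
					struct task_struct *task,
					struct rt_mutex_waiter *waiter)
{
	/* ... ownership, waiter and priority checks elided ... */

takeit:
	/*
	 * rt_mutex_set_owner() is a plain WRITE_ONCE() of the owner, so
	 * nothing here is an ACQUIRE, and the caller's subsequent
	 * raw_spin_unlock_irq(&lock->wait_lock) is only a RELEASE: the
	 * new owner's critical section is not ordered against the
	 * previous owner's stores.
	 */
	rt_mutex_set_owner(lock, task);
	return 1;
}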
>
> > > diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
> > > index 7779ee8abc2a..dd6a66c90f53 100644
> > > --- a/kernel/locking/rtmutex.c
> > > +++ b/kernel/locking/rtmutex.c
> > > @@ -98,6 +98,7 @@ rt_mutex_set_owner(struct rt_mutex_base *lock, struct task_struct *owner)
> > >  		val |= RT_MUTEX_HAS_WAITERS;
> > >
> > >  	WRITE_ONCE(lock->owner, (struct task_struct *)val);
> > > +	smp_mb();
> > >  }
> > >
> > >  static __always_inline void clear_rt_mutex_waiters(struct rt_mutex_base *lock)
>
> 							Honza
> --
> Jan Kara <jack@suse.com>
> SUSE Labs, CR
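The smp_mb() above is deliberately a big hammer, useful for confirming the
diagnosis. A lighter-weight shape an eventual fix could take, sketched purely
as an illustration (xchg_acquire() is an existing kernel primitive, but
whether upgrading rt_mutex_set_owner() alone covers every slow-path
acquisition is exactly what the audit Will asks for would need to establish):

static __always_inline void
rt_mutex_set_owner(struct rt_mutex_base *lock, struct task_struct *owner)
{
	unsigned long val = (unsigned long)owner;

	if (rt_mutex_has_waiters(lock))
		val |= RT_MUTEX_HAS_WAITERS;

	/*
	 * An acquire RMW on lock->owner orders the new owner's critical
	 * section after the lock acquisition itself, without imposing a
	 * full barrier on every acquisition.
	 */
	xchg_acquire(&lock->owner, (struct task_struct *)val);
}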