 
Subject: Re: [PATCH v2] rt_spin_lock: To list the correct owner of rt_spin_lock
On Thu, Dec 07, 2023 at 10:31:30PM +0530, Mintu Patel wrote:
> On Wed, Dec 06, 2023 at 07:58:37PM +0100, Peter Zijlstra wrote:
> > On Mon, Jun 27, 2022 at 09:41:38PM +0530, Mintu Patel wrote:
> > > rt_spin_lock is actually a mutex on the RT kernel, so threads contend
> > > for the lock. Currently, the owner of an rt_spin_lock is recorded before
> > > the lock is actually acquired. This patch makes the traces show the
> > > correct owner of the rt_spin_lock, which helps in debugging crashes and
> > > deadlocks caused by races on the lock.
> > >
> > > acquiring rt_spin_lock     acquired the lock     released the lock
> > >            <------------------>     <----------------->
> > >             contention period           held period
> > >
> > > Thread1                                 Thread2
> > > _try_to_take_rt_mutex+0x95c/0x74        enqueue_task_dl+0x8cc/0x8dc
> > > rt_spin_lock_slowlock_locked+0xac/0x2   rt_mutex_setprio+0x28c/0x574
> > > rt_spin_lock_slowlock+0x5c/0x90         task_blocks_rt_mutex+0x240/0x310
> > > rt_spin_lock+0x58/0x5c                  rt_spin_lock_slowlock_locked+0xac/0x2
> > > driverA_acquire_lock+0x28/0x56          rt_spin_lock_slowlock+0x5c/0x90
> > >                                         rt_spin_lock+0x58/0x5c
> > >                                         driverB_acquire_lock+0x48/0x6c
> > >
> > > As per the above sample call traces, Thread1 acquired the rt_spin_lock
> > > and entered its critical section, while Thread2 kept trying to acquire
> > > the same rt_spin_lock held by Thread1, i.e. the contention period was
> > > too high. Finally Thread2 entered the dl queue because Thread1 held the
> > > lock for so long. The patch below helps us identify the correct owner
> > > of the rt_spin_lock and points us at the driver's critical section; the
> > > respective driver then needs to be debugged for the long lock-hold time.
> > >
> > > ex: cat /sys/kernel/debug/tracing/trace
> > >
> > > kworker/u13:0-150 [003] .....11 202.761025: rt_spinlock_acquire: Process: kworker/u13:0 is acquiring lock: &kbdev->hwaccess_lock
> > > kworker/u13:0-150 [003] .....11 202.761039: rt_spinlock_acquired: Process: kworker/u13:0 has acquired lock: &kbdev->hwaccess_lock
> > > kworker/u13:0-150 [003] .....11 202.761042: rt_spinlock_released: Process: kworker/u13:0 has released lock: &kbdev->hwaccess_lock
> > >
> >
> > The above is word salad and makes no sense. No other lock has special
> > tracing like this, so rt_lock doesn't need it either.
> >
> Hi Peter,
>
> As per the current implementation of the rt_spin_lock tracing mechanism
> on RT Linux, if more than one thread is trying to acquire an
> rt_spin_lock, multiple threads are reported as owners of the same lock,
> even though only one thread is the actual owner and the others are still
> contending for it. Such trace logs can mislead developers when they
> debug critical issues like deadlocks and crashes.
>
> The above patch generates rt_spin_lock locking traces that show the
> correct owner of the lock, along with details of the other threads that
> are trying to acquire it.
>
> Regards,
> Mintu Patel

Hi Peter,

Hope you got a chance to check the reply.
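
For reference, here is a minimal sketch of how one of the three events in
the trace sample above could be defined. The event name and message format
are taken from the sample output; the lock-name argument and the hook
placement are assumptions for illustration only, not the actual patch:

/*
 * Sketch of the rt_spinlock_acquired event; rt_spinlock_acquire and
 * rt_spinlock_released would follow the same pattern. In a real tree
 * this would live in a trace header under include/trace/events/ and
 * be instantiated with CREATE_TRACE_POINTS.
 */
#include <linux/tracepoint.h>

TRACE_EVENT(rt_spinlock_acquired,

	TP_PROTO(struct task_struct *task, const char *lock_name),

	TP_ARGS(task, lock_name),

	TP_STRUCT__entry(
		__string(comm, task->comm)
		__string(lockname, lock_name)
	),

	TP_fast_assign(
		__assign_str(comm, task->comm);
		__assign_str(lockname, lock_name);
	),

	/* Matches the sample: "Process: %s has acquired lock: %s" */
	TP_printk("Process: %s has acquired lock: %s",
		  __get_str(comm), __get_str(lockname))
);

The key point, as described above, is that rt_spinlock_acquired would be
emitted only after the slowpath has actually taken the lock (for example
at the end of rt_spin_lock_slowlock_locked()), rather than when a thread
merely enters the slowpath, so the traced owner is the real owner.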

Regards,
Mintu Patel
