Subject: Re: [PATCH 2/4] locking: Apply contention tracepoints in the slow path
On Tue, Mar 1, 2022 at 1:04 AM Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Mon, Feb 28, 2022 at 05:04:10PM -0800, Namhyung Kim wrote:
> > @@ -1718,9 +1726,11 @@ static __always_inline void __sched rtlock_slowlock(struct rt_mutex_base *lock)
> > {
> >         unsigned long flags;
> >
> > +       trace_contention_begin(lock, _RET_IP_, LCB_F_RT | TASK_RTLOCK_WAIT);
> >         raw_spin_lock_irqsave(&lock->wait_lock, flags);
> >         rtlock_slowlock_locked(lock);
> >         raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
> > +       trace_contention_end(lock);
> > }
>
> Same, if you do it one level in, you can have the tracepoint itself look
> at current->__state.
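
If "one level in" means calling it from rtlock_slowlock_locked() after the
wait state has been set, I guess the placement would be roughly like this
(just a sketch of the placement, eliding the rest of the function and
ignoring how the return address would be passed down):

        static void __sched rtlock_slowlock_locked(struct rt_mutex_base *lock)
        {
                ...
                rt_mutex_init_rtlock_waiter(&waiter);

                /* Save current state and set state to TASK_RTLOCK_WAIT */
                current_save_and_set_rtlock_wait_state();

                /* current->__state is TASK_RTLOCK_WAIT here, so the event could read it */
                trace_contention_begin(lock, _RET_IP_, LCB_F_RT);

                task_blocks_on_rt_mutex(lock, &waiter, current, NULL,
                                        RT_MUTEX_MIN_CHAINWALK);
                ...
        }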

So I tried this by reading the state in the tracepoint, like below:

+       TP_fast_assign(
+               __entry->lock_addr = lock;
+               __entry->flags = flags | get_current_state();
+       ),

But I sometimes see unrelated values containing __TASK_TRACED or
__TASK_STOPPED, and unexpected ones like TASK_UNINTERRUPTIBLE for
rwlocks. Maybe I missed something.
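
For reference, that fragment would sit in an event definition along these
lines (only a sketch: the prototype follows the three-argument
trace_contention_begin() in this series, and the field names are just
illustrative):

        TRACE_EVENT(contention_begin,

                TP_PROTO(void *lock, unsigned long ip, unsigned int flags),

                TP_ARGS(lock, ip, flags),

                TP_STRUCT__entry(
                        __field(void *, lock_addr)
                        __field(unsigned long, ip)
                        __field(unsigned int, flags)
                ),

                TP_fast_assign(
                        __entry->lock_addr = lock;
                        __entry->ip = ip;
                        /* fold the current task state into the flags at trace time */
                        __entry->flags = flags | get_current_state();
                ),

                TP_printk("%p (ip=%pS) flags=0x%x",
                          __entry->lock_addr, (void *)__entry->ip, __entry->flags)
        );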

Anyway, I think it's confusing and complicates things unnecessarily.
It'd probably be better to use new flags like LCB_F_SPIN and LCB_F_WAIT.
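For example (the names are just the ones above, and the bit values below
are made up), the callers would say explicitly how they wait instead of
leaking a task state into the event:

        /* placeholder bit values, not from the posted series */
        #define LCB_F_SPIN      (1U << 8)       /* busy-waiting: spinlock, rwlock */
        #define LCB_F_WAIT      (1U << 9)       /* sleeping wait: mutex, rwsem, rtlock */

        /* sleeping rtlock waiter */
        trace_contention_begin(lock, _RET_IP_, LCB_F_RT | LCB_F_WAIT);

        /* busy-waiting (e.g. spinning) waiter */
        trace_contention_begin(lock, _RET_IP_, LCB_F_SPIN);

That way the tracepoint doesn't have to guess what current->__state means
at the point it fires.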

Thanks,
Namhyung
