Date:	Tue, 1 Mar 2022 09:45:23 -0500
From:	Steven Rostedt <>
Subject:	Re: [PATCH 2/4] locking: Apply contention tracepoints in the slow path
On Tue, 1 Mar 2022 10:03:54 +0100 Peter Zijlstra <peterz@infradead.org> wrote:
> diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
> index 8555c4efe97c..18b9f4bf6f34 100644
> --- a/kernel/locking/rtmutex.c
> +++ b/kernel/locking/rtmutex.c
> @@ -1579,6 +1579,8 @@ static int __sched __rt_mutex_slowlock(struct rt_mutex_base *lock,
>  
>  	set_current_state(state);
>  
> +	trace_contention_begin(lock, _RET_IP_, LCB_F_RT);
I guess one issue with this is that, if this function is not inlined,
_RET_IP_ will resolve to an address inside the rt_mutex code itself
rather than the lock user's call site, making the _RET_IP_ rather
useless.
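To make the problem concrete: _RET_IP_ boils down to
__builtin_return_address(0), so a tiny (untested) userspace demo with
made-up function names shows what gets recorded:

	#include <stdio.h>

	/* Userspace stand-in for the kernel's _RET_IP_. */
	#define _RET_IP_ ((unsigned long)__builtin_return_address(0))

	/* Stand-in for __rt_mutex_slowlock(): not inlined into its caller. */
	static __attribute__((noinline)) void slowpath(void)
	{
		/* Prints an address inside lock_api(), not inside main(). */
		printf("slowpath: _RET_IP_ = %#lx\n", _RET_IP_);
	}

	/* Stand-in for the locking API the lock user actually calls. */
	static __attribute__((noinline)) void lock_api(void)
	{
		/* Prints an address inside main(), which is what we want. */
		printf("lock_api: _RET_IP_ = %#lx\n", _RET_IP_);
		slowpath();
	}

	int main(void)
	{
		lock_api();
		return 0;
	}

Built with gcc, the slowpath line reports a return address inside
lock_api() rather than the user's call site, which is exactly what the
tracepoint would record here.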
Now, if we can pass the _RET_IP_ into __rt_mutex_slowlock(), then we
could reference that instead.
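Something like this, perhaps (completely untested sketch; the extra
'ip' parameter and the abbreviated outer function are hypothetical,
not part of the posted patch -- the other parameters are taken from
the quoted hunks):

	static int __sched __rt_mutex_slowlock(struct rt_mutex_base *lock,
					       struct ww_acquire_ctx *ww_ctx,
					       unsigned int state,
					       enum rtmutex_chainwalk chwalk,
					       struct rt_mutex_waiter *waiter,
					       unsigned long ip)
	{
		set_current_state(state);

		/* 'ip' is the caller's _RET_IP_, captured further out. */
		trace_contention_begin(lock, ip, LCB_F_RT);

		/* ... rest of the slow path unchanged ... */
	}

	static int __sched rt_mutex_slowlock(struct rt_mutex_base *lock,
					     struct ww_acquire_ctx *ww_ctx,
					     unsigned int state)
	{
		struct rt_mutex_waiter waiter;
		/* Capture as far out as possible, while the return
		 * address still identifies the actual lock user. */
		unsigned long ip = _RET_IP_;

		/* ... wait_lock handling and waiter setup elided ... */

		return __rt_mutex_slowlock(lock, ww_ctx, state,
					   RT_MUTEX_MIN_CHAINWALK,
					   &waiter, ip);
	}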
-- Steve
> +
>  	ret = task_blocks_on_rt_mutex(lock, waiter, current, ww_ctx, chwalk);
>  	if (likely(!ret))
>  		ret = rt_mutex_slowlock_block(lock, ww_ctx, state, NULL, waiter);
> @@ -1601,6 +1603,9 @@ static int __sched __rt_mutex_slowlock(struct rt_mutex_base *lock,
>  	 * unconditionally. We might have to fix that up.
>  	 */
>  	fixup_rt_mutex_waiters(lock);
> +
> +	trace_contention_end(lock, ret);
> +
>  	return ret;
>  }