Date: Thu, 17 Mar 2022
From: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Subject: Re: [PATCH 2/2] locking: Apply contention tracepoints in the slow path
----- On Mar 17, 2022, at 12:10 PM, rostedt <rostedt@goodmis.org> wrote:

> On Thu, 17 Mar 2022 09:45:28 -0400 (EDT)
> Mathieu Desnoyers <mathieu.desnoyers@efficios.com> wrote:
>
>> > *sem, bool reader)
>> >  		schedule();
>> >  	}
>> >  	__set_current_state(TASK_RUNNING);
>> > +	trace_contention_end(sem, 0);
>>
>> So for the reader-writer locks, and percpu rwlocks, the "trace contention end"
>> will always have ret=0. Likewise for qspinlock, qrwlock, and rtlock. It seems
>> to be a waste of trace buffer space to always have space for a return value
>> that is always 0.
>>
>> Sorry if I missed prior discussions of that topic, but why introduce this
>> single "trace contention begin/end" muxer tracepoint with flags rather than
>> per-locking-type tracepoints? The per-locking-type tracepoints could be tuned
>> to only have the fields that are needed for each locking type.
>
> Per-locking-type tracepoints will also add a bigger footprint.

If you are talking about code and data size footprint in the kernel, yes, we agree
there.
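
For reference, here is a rough sketch of the two call-site shapes being
compared. The LCB_F_READ flag and the per-type tracepoint names below are
illustrative assumptions on my part, not quoted from the patch:

	/* (a) Single "muxer" pair: the lock type is encoded in a flags word. */
	trace_contention_begin(sem, LCB_F_READ);	/* e.g. rwsem read slow path */
	/* ... wait loop ... */
	trace_contention_end(sem, 0);			/* ret has no meaning here, always 0 */

	/* (b) Hypothetical per-locking-type pair: only the fields this lock
	 * type needs, and no ret for types that cannot fail or time out.
	 */
	trace_rwsem_read_contention_begin(sem);
	/* ... wait loop ... */
	trace_rwsem_read_contention_end(sem);

Shape (b) writes a smaller record and gives a more specific event name, at
the cost of an extra static key, call-site stub, and event format per
locking type in kernel text/data, which is the footprint being discussed.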

>
> One extra byte is not an issue.

It is not one extra byte, though: the implementation uses a 32-bit integer for that field.

But given that this only traces contention, it's probably not as important to
shrink the event size as it would be if we were tracing every uncontended lock/unlock.
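
Concretely, I would expect the contention_end event to end up with a layout
roughly like the sketch below (field names and the printk format are my
guesses, not quoted from the patch):

TRACE_EVENT(contention_end,

	TP_PROTO(void *lock, int ret),

	TP_ARGS(lock, ret),

	TP_STRUCT__entry(
		__field(void *, lock_addr)
		__field(int, ret)	/* 32 bits per record, even when always 0 */
	),

	TP_fast_assign(
		__entry->lock_addr = lock;
		__entry->ret = ret;
	),

	TP_printk("%p (ret=%d)", __entry->lock_addr, __entry->ret)
);

Since only contended slow paths emit the event, carrying those 4 bytes per
record is a modest cost compared to tracing every lock/unlock.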

> This is just the tracepoints. You can still
> attach your own specific LTTng trace events that ignore the zero
> parameter, and can multiplex into specific types of trace events on your
> end.

Indeed, I could, as I do for system call entry/exit tracing. But I suspect it would
not be worth it for contended locks, because I don't expect those events to be frequent
enough in the trace to justify the added code/data footprint, as you pointed out.
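
As a sketch of the kind of demultiplexing Steven describes, a tracer module
could hook the generic tracepoint and re-emit narrower per-type events.
Everything below is illustrative: it assumes the tracepoints end up exported
to modules through <trace/events/lock.h>, the LCB_F_READ flag name is a
guess, and the trace_printk() calls merely stand in for real LTTng event
emission:

#include <linux/module.h>
#include <linux/tracepoint.h>
#include <trace/events/lock.h>

/* Stand-ins for tracer-specific event emission. */
static void emit_read_lock_contention(void *lock)
{
	trace_printk("read lock contention: %p\n", lock);
}

static void emit_other_lock_contention(void *lock)
{
	trace_printk("write/other lock contention: %p\n", lock);
}

/* Probe attached to the single generic tracepoint; demultiplexes on flags. */
static void probe_contention_begin(void *data, void *lock, unsigned int flags)
{
	if (flags & LCB_F_READ)
		emit_read_lock_contention(lock);
	else
		emit_other_lock_contention(lock);
}

static int __init lock_demux_init(void)
{
	return register_trace_contention_begin(probe_contention_begin, NULL);
}

static void __exit lock_demux_exit(void)
{
	unregister_trace_contention_begin(probe_contention_begin, NULL);
	tracepoint_synchronize_unregister();
}

module_init(lock_demux_init);
module_exit(lock_demux_exit);
MODULE_LICENSE("GPL");

A probe like this keeps the in-kernel tracepoint footprint as-is and moves
the per-type split into the consumer, which is the trade-off being pointed at.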

>
> I prefer the current approach as it keeps the tracing footprint down.

Likewise. I just wanted to make sure this was done with the trace-buffer vs. kernel
code/data overhead trade-off in mind.

Thanks,

Mathieu

>
> -- Steve

--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
