Date: Tue, 6 Aug 2013 18:08:47 +0200
From: Oleg Nesterov <oleg@redhat.com>
Subject: [PATCH v2 3/3] tracing/perf: Avoid perf_trace_buf_*() in perf_trace_##call() when possible
perf_trace_buf_prepare() + perf_trace_buf_submit(task => NULL) make no sense if hlist_empty(head). Change perf_trace_##call() to check ->perf_events beforehand and do nothing if it is empty.
This removes the overhead for tasks without events associated with them. For example, "perf record -e sched:sched_switch -p1" attaches the counter(s) to a single task, but every task in the system will do perf_trace_buf_prepare/submit() just to realize that it was not attached to this event.
However, we can only do this if __task == NULL, so we also add the __builtin_constant_p(__task) check.
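For readers unfamiliar with the __builtin_constant_p() trick, below is a minimal, self-contained userspace sketch, not kernel code: TRACE_HANDLER, the one-field hlist_head, and hlist_empty() are hypothetical stand-ins for the macro-generated perf_trace_##call() body. Because the check sits in a macro, a literal NULL task is visible to the compiler at the point of expansion, so the whole test folds down to a single emptiness check; when the compiler cannot prove constness, the condition is simply false and the handler falls through to the normal path, so the check can only skip work, never change behaviour.

#include <stdio.h>

/* Simplified stand-in for the kernel's hlist_head; just enough for the demo. */
struct hlist_head {
	void *first;
};

static int hlist_empty(const struct hlist_head *h)
{
	return !h->first;
}

/*
 * Hypothetical stand-in for the macro-generated perf_trace_##call() body.
 * With a literal NULL task, !(task) is a compile-time constant, so
 * __builtin_constant_p(!(task)) evaluates to 1 and the early exit reduces
 * to one hlist_empty() test.  With a runtime task the condition is false
 * and the full path runs, exactly as before the patch.
 */
#define TRACE_HANDLER(head, task)					\
do {									\
	if (__builtin_constant_p(!(task)) && !(task) &&			\
	    hlist_empty(head))						\
		break;	/* no events on this CPU: skip prepare/submit */\
	printf("slow path: prepare/submit, task=%p\n", (void *)(task));\
} while (0)

int main(void)
{
	struct hlist_head empty = { .first = NULL };
	int t;
	void *some_task = &t;

	TRACE_HANDLER(&empty, NULL);       /* fast path: prints nothing */
	TRACE_HANDLER(&empty, some_task);  /* per-task event: slow path */
	return 0;
}

Compiled with, say, gcc -O2, only the second call prints; the first call's test is typically folded away entirely, which is the overhead this patch removes from the common no-task case.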
With this patch, "perf bench sched pipe" shows approximately a 4% improvement when "perf record -p1" runs in parallel; many thanks to Steven for the testing.
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Tested-by: David Ahern <dsahern@gmail.com>
Reviewed-by: Steven Rostedt <rostedt@goodmis.org>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
---
 include/trace/ftrace.h |    7 ++++++-
 1 files changed, 6 insertions(+), 1 deletions(-)
diff --git a/include/trace/ftrace.h b/include/trace/ftrace.h
index 4163d93..5c7ab17 100644
--- a/include/trace/ftrace.h
+++ b/include/trace/ftrace.h
@@ -667,6 +667,12 @@ perf_trace_##call(void *__data, proto)			\
 	int rctx;							\
 									\
 	__data_size = ftrace_get_offsets_##call(&__data_offsets, args); \
+									\
+	head = this_cpu_ptr(event_call->perf_events);			\
+	if (__builtin_constant_p(!__task) && !__task &&			\
+				hlist_empty(head))			\
+		return;							\
+									\
 	__entry_size = ALIGN(__data_size + sizeof(*entry) + sizeof(u32),\
 			     sizeof(u64));				\
 	__entry_size -= sizeof(u32);					\
@@ -681,7 +687,6 @@ perf_trace_##call(void *__data, proto)			\
 									\
 	{ assign; }							\
 									\
-	head = this_cpu_ptr(event_call->perf_events);			\
 	perf_trace_buf_submit(entry, __entry_size, rctx, __addr,	\
 			      __count, &__regs, head, __task);		\
 }
-- 
1.5.5.1