Date:    Thu, 13 Sep 2018 09:53:39 +0200
From:    Jiri Olsa <>
Subject: Re: [PATCH] perf: Prevent recursion in ring buffer
On Thu, Sep 13, 2018 at 09:40:42AM +0200, Peter Zijlstra wrote:
> On Wed, Sep 12, 2018 at 09:33:17PM +0200, Jiri Olsa wrote:
>
> >   # perf record -e 'sched:sched_switch,sched:sched_wakeup' perf bench sched messaging
>
> > The reason for the corruptions is that some of the scheduling tracepoints
> > have __perf_task defined and thus allow storing data to another
> > cpu's ring buffer:
> >
> >   sched_waking
> >   sched_wakeup
> >   sched_wakeup_new
> >   sched_stat_wait
> >   sched_stat_sleep
> >   sched_stat_iowait
> >   sched_stat_blocked
>
> > The code then iterates the events of the 'task' and stores the sample
> > for any of the task's events that pass the tracepoint checks:
> >
> >   ctx = rcu_dereference(task->perf_event_ctxp[perf_sw_context]);
> >
> >   list_for_each_entry_rcu(event, &ctx->event_list, event_entry) {
> >     if (event->attr.type != PERF_TYPE_TRACEPOINT)
> >       continue;
> >     if (event->attr.config != entry->type)
> >       continue;
> >
> >     perf_swevent_event(event, count, &data, regs);
> >   }
> >
> > The above code can race with the same code running on another cpu,
> > ending up with 2 cpus trying to store under the same ring
> > buffer, which is not handled at the moment.
>
> It can yes, however the only way I can see this breaking is if we use
> !inherited events with a strict per-task buffer, but your record command
> doesn't use that.
>
> Now, your test-case uses inherited events, which would all share the
> buffer, however IIRC inherited events require per-task-per-cpu buffers,
that's what perf record always does when monitoring a task.. there's an
event/rb pair for each cpu and the given task, and all of the task's
events (sched:*) on a given cpu share that single cpu ring buffer via
PERF_EVENT_IOC_SET_OUTPUT
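
fwiw, a minimal user-space sketch of that wiring for one (task, cpu) pair;
the tracepoint ids and the pid/cpu values are placeholders, error handling
is omitted, and perf record does this through its evlist/mmap code rather
than raw syscalls like below:

  #include <linux/perf_event.h>
  #include <sys/ioctl.h>
  #include <sys/mman.h>
  #include <sys/syscall.h>
  #include <sys/types.h>
  #include <string.h>
  #include <unistd.h>

  static int perf_event_open(struct perf_event_attr *attr, pid_t pid,
                             int cpu, int group_fd, unsigned long flags)
  {
          return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
  }

  /* hypothetical ids, read from .../tracing/events/sched/<name>/id */
  static long long id_sched_switch, id_sched_wakeup;

  void setup_cpu(pid_t pid, int cpu)
  {
          struct perf_event_attr attr;
          int fd_switch, fd_wakeup;
          void *rb;

          memset(&attr, 0, sizeof(attr));
          attr.size        = sizeof(attr);
          attr.type        = PERF_TYPE_TRACEPOINT;
          attr.config      = id_sched_switch;
          attr.sample_type = PERF_SAMPLE_TIME | PERF_SAMPLE_RAW;
          attr.inherit     = 1;

          /* the first event for this (task, cpu) owns the ring buffer:
             1 control page + 8 data pages */
          fd_switch = perf_event_open(&attr, pid, cpu, -1, 0);
          rb = mmap(NULL, (1 + 8) * 4096, PROT_READ | PROT_WRITE,
                    MAP_SHARED, fd_switch, 0);

          /* the second event on the same cpu gets no buffer of its own,
             its output is redirected into fd_switch's ring buffer */
          attr.config = id_sched_wakeup;
          fd_wakeup = perf_event_open(&attr, pid, cpu, -1, 0);
          ioctl(fd_wakeup, PERF_EVENT_IOC_SET_OUTPUT, fd_switch);

          (void)rb;
  }

the point being that fd_wakeup never maps pages of its own, it only ever
writes into fd_switch's buffer for that cpu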
> because there is already no guarantee the various tasks run on the same
> CPU in the first place.
>
> This means we _should_ write to the @task's local CPU buffer, and that
> would work again.
>
> Let me try and figure out where this is going wrong.
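
to picture what "2 cpus trying to store under the same ring buffer" does,
here's a stand-alone toy model (plain pthread demo, not the kernel code):
two writers doing the usual single-writer, non-atomic head update on one
shared buffer:

  #include <pthread.h>
  #include <stdio.h>

  #define RECORD_SIZE 8
  #define RECORDS     1000000

  /* write offset into the shared "ring buffer" */
  static volatile unsigned long head;

  static void *writer(void *arg)
  {
          (void)arg;

          for (long i = 0; i < RECORDS; i++) {
                  /* non-atomic read-modify-write of the head: fine as long
                     as only one writer ever touches this buffer, broken once
                     a second writer shows up - both can read the same head
                     and hand out the same region */
                  unsigned long offset = head;
                  head = offset + RECORD_SIZE;
                  /* a real writer would now store a record at 'offset' */
          }
          return NULL;
  }

  int main(void)
  {
          pthread_t a, b;

          pthread_create(&a, NULL, writer, NULL);
          pthread_create(&b, NULL, writer, NULL);
          pthread_join(a, NULL);
          pthread_join(b, NULL);

          /* with exclusive writers head ends up at 2 * RECORDS * RECORD_SIZE;
             anything smaller means two reservations overlapped, i.e. a
             corrupted record */
          printf("expected %lu, got %lu\n",
                 2UL * RECORDS * RECORD_SIZE, (unsigned long)head);
          return 0;
  }

with a single writer the final head always matches the expected value,
with two it regularly comes up short, which is the same class of overlap
we end up with in the rb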