Date: 2014-10-30
From: Peter Zijlstra <peterz@infradead.org>
Subject: Re: [PATCH v5 18/20] perf: Allocate ring buffers for inherited per-task kernel events
On Fri, Oct 24, 2014 at 10:44:54AM +0300, Alexander Shishkin wrote:
> Peter Zijlstra <peterz@infradead.org> writes:
>
> > On Mon, Oct 13, 2014 at 04:45:46PM +0300, Alexander Shishkin wrote:
> >> Normally, per-task events can't inherit their parents' ring buffers, to
> >> avoid multiple events contending for the same buffer. And since buffer
> >> allocation is typically done by the userspace consumer, there is no
> >> practical interface to allocate new buffers for inherited counters.
> >>
> >> However, for kernel users we can allocate new buffers for inherited
> >> events as soon as they are created (and also reap them on event
> >> destruction). This pattern has a number of use cases, such as event
> >> sample annotation and process core dump annotation.
> >>
> >> When a new event is inherited from a per-task kernel event that has a
> >> ring buffer, allocate a new buffer for this event so that data from the
> >> child task is collected and can later be retrieved for sample annotation
> >> or core dump inclusion. This ring buffer is released when the event is
> >> freed, for example, when the child task exits.
> >>
> >
> > This causes a pinned memory explosion, not at all nice that.
> >
> > I think I see why and all, but it would be ever so good to not have to
> > allocate so much memory.
>
> Are there any controls we could use to limit such memory usage?

I'd say the same limit we're already accounting the mmap()s against. But
the question is: what do we do when we run out?
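
For reference, that limit is the per-user pinned-page budget that
perf_mmap() already charges buffers against; roughly (a simplified
sketch, leaving out the RLIMIT_MEMLOCK spill-over the real code
permits, and with account_rb_pages() being a made-up name):

	/*
	 * Charge nr_pages of ring buffer against the per-user pinned
	 * memory budget, modeled on the perf_mmap() accounting.
	 * sysctl_perf_event_mlock_kb and user->locked_vm are the real
	 * kernel symbols; the helper itself is illustrative only.
	 */
	static int account_rb_pages(struct user_struct *user, long nr_pages)
	{
		unsigned long limit, locked;

		/* kernel.perf_event_mlock_kb, converted to pages */
		limit = sysctl_perf_event_mlock_kb >> (PAGE_SHIFT - 10);
		/* increase the limit linearly with more CPUs */
		limit *= num_online_cpus();

		locked = atomic_long_read(&user->locked_vm) + nr_pages;
		if (locked > limit && !capable(CAP_IPC_LOCK))
			return -EPERM;

		atomic_long_add(nr_pages, &user->locked_vm);
		return 0;
	}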

Will we fail clone()? That might 'surprise' quite a few people whose
applications stop working when profiled.

In any case, let's focus on the other parts of this work and delay this
feature till later.
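
For completeness, the pattern the quoted changelog describes amounts to
something like this on the inherit path (a hypothetical sketch:
inherit_rb() is a made-up helper, while rb_alloc() and the rb/nr_pages
fields are the existing ones from kernel/events/):

	/*
	 * Hypothetical sketch: when inheriting from a per-task kernel
	 * event that owns a ring buffer, give the child its own buffer
	 * of the same size.  The buffer would be dropped again via
	 * ring_buffer_put() when the child event is freed.
	 */
	static int inherit_rb(struct perf_event *parent_event,
			      struct perf_event *child_event)
	{
		struct ring_buffer *rb = ACCESS_ONCE(parent_event->rb);

		if (!rb)
			return 0;	/* parent has no buffer, nothing to do */

		child_event->rb = rb_alloc(rb->nr_pages, 0,
					   child_event->cpu, 0);
		if (!child_event->rb)
			return -ENOMEM;

		return 0;
	}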

