From: Jiri Olsa <jolsa@redhat.com>
Date: Mon, 10 Sep 2018
Subject: Re: [PATCH v8 0/3]: perf: reduce data loss when profiling highly parallel CPU bound workloads
On Mon, Sep 10, 2018 at 12:13:25PM +0200, Ingo Molnar wrote:
>
> * Jiri Olsa <jolsa@redhat.com> wrote:
>
> > On Mon, Sep 10, 2018 at 12:03:03PM +0200, Ingo Molnar wrote:
> > >
> > > * Jiri Olsa <jolsa@redhat.com> wrote:
> > >
> > > > > Per-CPU threading the record session would have so many other advantages as well (scalability,
> > > > > etc.).
> > > > >
> > > > > Jiri did per-CPU recording patches a couple of months ago, not sure how usable they are at the
> > > > > moment?
> > > >
> > > > it's still usable, I can rebase it and post a branch pointer,
> > > > the problem is I haven't been able to find a case with a real
> > > > performance benefit yet.. ;-)
> > > >
> > > > perhaps because I haven't tried it on a server with a really
> > > > big CPU count
> > >
> > > Maybe Alexey could pick up from there? Your concept looked fairly mature to me
> > > and I tried it on a big-CPU box back then and there were real improvements.
> >
> > too bad you did not share your results, it could have been in already ;-)
>
> Yeah :-/ Had a proper round of testing on my TODO, then the big box I'd have tested it on
> broke ...
>
> > let me rebase/repost once more and let's see
>
> Thanks!
>
> > I think we could benefit from both multi-threaded event reading
> > and AIO writing for perf.data.. the two could be merged together
>
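
As an illustrative aside on the AIO part: a minimal sketch of what asynchronous
writing of the trace data could look like, assuming POSIX AIO (<aio.h>); the
record_aio_* names are hypothetical and not the actual perf implementation.

    /* Minimal sketch of asynchronous trace writing with POSIX AIO.
     * The record_aio_* names are hypothetical, not the actual perf code. */
    #include <aio.h>
    #include <errno.h>
    #include <string.h>
    #include <sys/types.h>

    struct record_aio {
            struct aiocb cb;        /* in-flight control block */
            int          busy;      /* 1 while a write is queued */
    };

    /* Queue one asynchronous write of a drained ring-buffer chunk. */
    static int record_aio_write(struct record_aio *a, int fd,
                                void *buf, size_t size, off_t off)
    {
            memset(&a->cb, 0, sizeof(a->cb));
            a->cb.aio_fildes = fd;
            a->cb.aio_buf    = buf;
            a->cb.aio_nbytes = size;
            a->cb.aio_offset = off;

            if (aio_write(&a->cb))
                    return -errno;
            a->busy = 1;
            return 0;
    }

    /* Wait for the queued write so the chunk's memory can be reused. */
    static ssize_t record_aio_sync(struct record_aio *a)
    {
            const struct aiocb * const list[1] = { &a->cb };

            while (aio_error(&a->cb) == EINPROGRESS)
                    aio_suspend(list, 1, NULL);

            a->busy = 0;
            return aio_return(&a->cb);
    }

The drained chunk cannot be reused until record_aio_sync() (or some
double-buffering scheme) confirms completion, which is where most of the
complexity of an AIO path would live.
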
> So instead of AIO writing perf.data, why not just turn perf.data into a directory structure
> with per CPU files? That would allow all sorts of neat future performance features such as

that's basically what the multiple-thread record patchset does

jirka

> mmap() or splice() based zero-copy.
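
For illustration, a minimal sketch of such a layout, assuming one writer
thread per CPU, each appending its ring-buffer data to its own file under a
perf.data/ directory; all names here (cpu_stream, drain_ring_buffer, etc.)
are hypothetical and not the actual perf code.

    /* Hypothetical per-CPU writer threads: each drains its own ring buffer
     * into perf.data/data.<cpu>.  Not the actual perf implementation. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <sys/types.h>

    struct cpu_stream {
            int       cpu;          /* CPU this thread services */
            int       fd;           /* perf.data/data.<cpu>     */
            pthread_t thread;
    };

    static void *cpu_writer(void *arg)
    {
            struct cpu_stream *s = arg;
            cpu_set_t mask;

            /* Pin the writer to its CPU so it stays next to the ring
             * buffer (and the pages) it is draining. */
            CPU_ZERO(&mask);
            CPU_SET(s->cpu, &mask);
            pthread_setaffinity_np(pthread_self(), sizeof(mask), &mask);

            /* drain_ring_buffer() stands in for reading the CPU's mmap'ed
             * ring buffer and writing it out to s->fd:
             *
             *      while (!done)
             *              drain_ring_buffer(s);
             */
            return NULL;
    }

    static int start_cpu_streams(struct cpu_stream *streams, int nr_cpus)
    {
            char path[64];
            int cpu;

            mkdir("perf.data", 0755);  /* a directory instead of one flat file */

            for (cpu = 0; cpu < nr_cpus; cpu++) {
                    snprintf(path, sizeof(path), "perf.data/data.%d", cpu);
                    streams[cpu].cpu = cpu;
                    streams[cpu].fd  = open(path, O_CREAT | O_WRONLY | O_TRUNC, 0644);
                    pthread_create(&streams[cpu].thread, NULL,
                                   cpu_writer, &streams[cpu]);
            }
            return 0;
    }

A zero-copy variant could then replace the plain write path with
vmsplice()/splice() through a pipe, or an mmap()-based scheme, per file.
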
>
> User-space post-processing can then read the files and put them into global order - or use the
> per CPU nature of them, which would be pretty useful too.
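
A minimal sketch of that post-processing step: a k-way merge of the per-CPU
files by event timestamp, assuming each file is already time-ordered and that
some hypothetical decoder keeps next_time filled in for each stream.

    /* Hypothetical k-way merge of per-CPU event files into global time
     * order; the decoding of individual records is left out. */
    #include <stdbool.h>
    #include <stdint.h>

    struct cpu_file {
            int      fd;
            bool     eof;
            uint64_t next_time;     /* timestamp of the next undelivered event */
    };

    /* Return the index of the stream holding the globally oldest event,
     * or -1 when every stream is exhausted. */
    static int pick_next_event(struct cpu_file *files, int nr)
    {
            uint64_t best_time = UINT64_MAX;
            int best = -1, i;

            for (i = 0; i < nr; i++) {
                    if (files[i].eof)
                            continue;
                    if (files[i].next_time < best_time) {
                            best_time = files[i].next_time;
                            best = i;
                    }
            }
            return best;
    }

    /*
     * Post-processing loop:
     *
     *      while ((i = pick_next_event(files, nr)) >= 0)
     *              deliver(pop_event(&files[i]));  // refills files[i].next_time
     *
     * Tools that only want the per-CPU view can read one file and skip
     * the merge entirely.
     */
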
>
> Also note how well this works on NUMA, as the backing pages would be
> allocated in a NUMA-local fashion.
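
A small sketch of that NUMA point, assuming libnuma (link with -lnuma) and a
hypothetical per-CPU staging buffer; without libnuma the same effect falls out
of first-touch allocation by a CPU-pinned writer thread.

    /* Hypothetical NUMA-local allocation of a per-CPU staging buffer,
     * using libnuma (link with -lnuma). */
    #include <numa.h>
    #include <stdlib.h>

    static void *alloc_stream_buffer(int cpu, size_t size)
    {
            int node;

            if (numa_available() < 0)
                    return malloc(size);    /* no NUMA support: plain allocation */

            node = numa_node_of_cpu(cpu);
            if (node < 0)
                    return malloc(size);

            /* Backing pages come from the node this CPU sits on, so the
             * pinned writer never pulls trace data across a NUMA link. */
            return numa_alloc_onnode(size, node);
    }
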
>
> I.e. the whole per-CPU threading would enable such a separation of the tracing/event streams
> and would allow true scalability.
>
> Thanks,
>
> Ingo
