    Subject: Re: [RFCv2 00/48] perf tools: Add threads to record command
    On Thu, Sep 13, 2018 at 07:10:35PM +0300, Alexey Budankov wrote:
    > Hi,
    >
    > On 13.09.2018 15:54, Jiri Olsa wrote:
    > > hi,
    > > sending *RFC* for threads support in perf record command.
    > >
    > > In the big picture, this patchset adds a perf record --threads
    > > option that allows creating threads in the following modes:
    > >
    > > 1) single thread mode (current)
    > >
    > > $ perf record ...
    > > $ perf record --threads=1 ...
    > >
    > > - all maps are read/stored under process thread
    > >
    > > 2) mode with specific (X) number of threads
    > >
    > > $ perf record --threads=X ...
    > >
    > > - maps are spread equally among the threads
    > >
    > > 3) mode that creates a thread for every monitored memory map
    > >
    > > $ perf record --threads ...
    > >
    > > - the number of threads in perf record then equals the number
    > > of CPUs, and each thread is pinned to its map's CPU (a sketch
    > > of the pinning follows this list)
    > >
    > > 4) TODO - NUMA aware threads/maps separation
    > > ...
    > >
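    A minimal sketch of what pinning a per-map reader thread to its map's
    CPU could look like with the standard pthread affinity API; this is
    illustrative only, not code from the patchset, and record_reader_fn
    and start_pinned_reader are made-up names:

    #define _GNU_SOURCE          /* for CPU_SET() and pthread_setaffinity_np() */
    #include <pthread.h>
    #include <sched.h>

    /* Made-up per-map reader loop; the real patchset code differs. */
    static void *record_reader_fn(void *arg)
    {
            /* ... read events from this map's ring buffer ... */
            return NULL;
    }

    /* Start one reader thread and pin it to the CPU that owns its map. */
    static int start_pinned_reader(pthread_t *thread, int cpu)
    {
            cpu_set_t mask;
            int err;

            err = pthread_create(thread, NULL, record_reader_fn, NULL);
            if (err)
                    return err;

            CPU_ZERO(&mask);
            CPU_SET(cpu, &mask);
            return pthread_setaffinity_np(*thread, sizeof(mask), &mask);
    }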
    > > The perf.data stays as a single file.
    > >
    > > v2 changes:
    > > - rebased to current Arnaldo's perf/core
    > > (also based on few fixes from my perf/core, see the branch details below)
    > >
    > > This patchset contains a lot of preparation changes needed to
    > > make threaded record possible:
    > >
    > > - Namhyung's changes to create multiple data streams in the
    > > perf data file, which allow each thread's data to be stored
    > > in a separate file and merged into a single perf.data file
    > > afterwards
    > >
    > > - Namhyung's changes to create track mmaps for auxiliary
    > > events
    > >
    > > - Namhyung's changes to search for threads/mmaps/comms by
    > > timestamp. This is needed because we now have multiple data
    > > streams that are processed separately, but they all need
    > > access to the complete auxiliary event data
    > > (threads/mmaps/comms). That's also the reason the auxiliary
    > > events are stored in a separate data stream, which is
    > > processed before the real data (a lookup sketch follows
    > > this list).
    > >
    > > - the rest of the code, which adds the threads abstraction to
    > > the record command and allows creating threads and
    > > distributing maps among them
    > >
    > > - other preparatory changes
    > >
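    To illustrate the time-based lookup mentioned above, here is a minimal
    sketch; comm_entry and comm_find_by_time are made-up names, not from
    Namhyung's patches. The idea is to keep a thread's comm changes sorted
    by timestamp and, for each sample, pick the newest entry that is not
    newer than the sample:

    #include <stddef.h>
    #include <stdint.h>

    /* Made-up time-ordered comm history for one thread. */
    struct comm_entry {
            uint64_t timestamp;      /* time the comm was set (PERF_RECORD_COMM) */
            char     comm[16];       /* task name at that time */
    };

    /*
     * Return the comm active at 'sample_time': the last entry whose
     * timestamp is <= sample_time.  Entries are sorted by timestamp,
     * so a binary search keeps the lookup cheap.
     */
    static const char *comm_find_by_time(const struct comm_entry *entries,
                                         size_t nr, uint64_t sample_time)
    {
            size_t lo = 0, hi = nr;
            const char *found = NULL;

            while (lo < hi) {
                    size_t mid = lo + (hi - lo) / 2;

                    if (entries[mid].timestamp <= sample_time) {
                            found = entries[mid].comm;
                            lo = mid + 1;
                    } else {
                            hi = mid;
                    }
            }
            return found;    /* NULL if the sample predates any known comm */
    }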
    > > The threaded monitoring currently can't monitor backward maps
    > > and there are probably more limitations which I haven't spotted
    > > yet.
    > >
    > > So far I have tested on a laptop:
    > > http://people.redhat.com/~jolsa/record_threads/test-4CPU.txt
    > >
    > > and on one bigger server:
    > > http://people.redhat.com/~jolsa/record_threads/test-208CPU.txt
    > >
    > > I can see a decrease in recorded LOST events, but both the benchmark
    > > and the monitoring must be carefully configured wrt:
    > > - number of events (frequency)
    > > - size of the memory maps
    > > - size of events (callchains)
    > > - final perf.data size
    > >
    > > It's also available in:
    > > git://git.kernel.org/pub/scm/linux/kernel/git/jolsa/perf.git
    > > perf/record_threads
    > >
    > > thoughts? ;-) thanks
    > > jirka
    >
    > It would be preferable to split this into smaller pieces that each
    > bring some improvement proven by metric numbers and are ready for
    > merging upstream. Do we have any metrics other than the data loss
    > number from the trace AIO patches?

    well, the primary focus is to get more events in,
    so the LOST metric is the main one
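    For reference, the LOST count comes from PERF_RECORD_LOST records,
    which the kernel emits when it has to drop events because the ring
    buffer is full.  A rough sketch of their layout, following the
    perf_event ABI in include/uapi/linux/perf_event.h:

    #include <linux/perf_event.h>    /* struct perf_event_header, PERF_RECORD_LOST */
    #include <linux/types.h>

    /* Sketch of a PERF_RECORD_LOST record; field names follow the ABI docs. */
    struct lost_event {
            struct perf_event_header header;   /* header.type == PERF_RECORD_LOST */
            __u64 id;                          /* sample id the loss applies to   */
            __u64 lost;                        /* number of events dropped        */
            /* optional sample_id fields follow when sample_id_all is set */
    };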

    >
    > The series uses the POSIX threading API directly, but provides no
    > abstraction of its own over it, which would avoid a dependency on
    > externally coded designs in the core of the tool.

    well, we use pthreads in here, but it's really not that
    much code.. we could make that generic in the future if needed
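    If that generic layer were wanted later, a thin wrapper might look
    something like the sketch below; record_thread and
    record_thread__start are made-up names, not from the patchset:

    #include <pthread.h>

    /* Made-up thread abstraction; only this struct would know about pthreads. */
    struct record_thread {
            pthread_t        pt;     /* underlying pthread handle        */
            int              cpu;    /* CPU whose maps this thread reads */
            void            *maps;   /* memory maps assigned to it       */
    };

    /* Start the thread; callers never touch the pthread API directly. */
    static inline int record_thread__start(struct record_thread *t,
                                           void *(*fn)(void *))
    {
            return pthread_create(&t->pt, NULL, fn, t);
    }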

    jirka
