    From:    Jiri Olsa <jolsa@redhat.com>
    Date:    Mon, 18 Nov 2013
    Subject: Re: [PATCH] perf top: Make -g refer to callchains

    On Mon, Nov 18, 2013 at 03:26:53PM +0100, Ingo Molnar wrote:
    >
    > * Jiri Olsa <jolsa@redhat.com> wrote:
    >
    > > On Mon, Nov 18, 2013 at 09:59:45AM -0300, Arnaldo Carvalho de Melo wrote:
    > > > Em Fri, Nov 15, 2013 at 06:46:09AM +0100, Ingo Molnar escreveu:
    > > > > btw., here's some 'perf top' call graph performance and profiling
    > > > > quality feedback, with the latest perf code:
    > > > >
    > > > > 'perf top --call-graph fp' now works very well, using just 0.2%
    > > > > of CPU time on a fast system:
    > > > >
    > > > > 4676 mingo 20 0 612m 56m 9948 S 1 0.2 0:00.68 perf
    > > > >
    > > > > 'perf top --call-graph dwarf' on the other hand is horrendously
    > > > > slow, using 20% of CPU time on a 4 GHz CPU:
    > > > >
    > > > > PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
    > > > > 4646 mingo 20 0 658m 81m 12m R 19 0.3 0:18.17 perf
    > > > >
    > > > > On another system with a 2.4GHz CPU it's taking up 100% of CPU
    > > > > time (!):
    > > > >
    > > > > PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
    > > > > 8018 mingo 20 0 290320 45220 8520 R 99.5 0.3 0:58.81 perf
    > > > >
    > > > > Profiling 'perf top' shows all sorts of very high dwarf
    > > > > processing overhead:
    > > >
    > > > Yeah, the 'top' dwarf callchain support has so far been a proof
    > > > of concept; it exacerbates problems that can also be seen in
    > > > 'report', but since it's live, we see them more clearly.
    > > >
    > > > The work on improving callchain processing (rb_tree'ing, the new
    > > > comm infrastructure) alleviated the problem a bit.
    > > >
    > > > Tuning the stack size requested from the kernel and using
    > > > --max-stack can help when it is really needed, but yes, work on it
    > > > is *badly* needed.
    > >
    > > agreed ;-)
    > >
    > > also there's a new remote unwind interface recently added to libdw,
    > > which seems to be faster than libunwind.
    > >
    > > I plan on adding this soon.
    >
    > If the main source of overhead is libunwind (which needs independent
    > confirmation) then would it make sense to implement dwarf stack unwind
    > support ourselves?
    >
    > I think SysProf does that and it appears to be faster - its unwind.c
    > is only 400 lines long as it only implements the small subset needed
    > to walk the stack - AFAICS.

    I think it's an option.. but it'll be simpler to try the libdw
    interface first and see if it's good/fast enough..
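
    for reference, a rough sketch of what that libdw(fl) unwind interface
    looks like -- this is not perf code, just an illustration of how
    dwfl_linux_proc_attach / dwfl_getthread_frames / dwfl_frame_pc fit
    together when unwinding a live process by pid (assumes elfutils >=
    0.158, build with -ldw); perf would instead have to feed the sampled
    registers and stack copy through its own memory_read /
    set_initial_registers callbacks:

        #include <stdio.h>
        #include <stdlib.h>
        #include <stdbool.h>
        #include <inttypes.h>
        #include <elfutils/libdwfl.h>

        static char *debuginfo_path;

        static const Dwfl_Callbacks proc_callbacks = {
            .find_elf        = dwfl_linux_proc_find_elf,
            .find_debuginfo  = dwfl_standard_find_debuginfo,
            .debuginfo_path  = &debuginfo_path,
        };

        /* called once per frame by dwfl_getthread_frames() */
        static int frame_cb(Dwfl_Frame *frame, void *arg)
        {
            Dwarf_Addr pc;
            bool isactivation;

            if (!dwfl_frame_pc(frame, &pc, &isactivation))
                return DWARF_CB_ABORT;

            /* !isactivation means pc is a return address: step back
             * before symbolizing, as usual */
            printf("  pc = %#" PRIx64 "%s\n", pc,
                   isactivation ? "" : " (return address)");
            return DWARF_CB_OK;
        }

        int main(int argc, char **argv)
        {
            if (argc < 2)
                return 1;

            pid_t pid = atoi(argv[1]);
            Dwfl *dwfl = dwfl_begin(&proc_callbacks);

            /* map the target's modules, then attach for unwinding */
            if (dwfl_linux_proc_report(dwfl, pid) ||
                dwfl_report_end(dwfl, NULL, NULL) ||
                dwfl_linux_proc_attach(dwfl, pid, false))
                return 1;

            /* walk the stack of the main thread (tid == pid) */
            dwfl_getthread_frames(dwfl, pid, frame_cb, NULL);

            dwfl_end(dwfl);
            return 0;
        }

    AFAICS the interesting part for perf is that all target access goes
    through callbacks, so the same dwfl_getthread_frames() walk could be
    driven from the recorded register dump and stack copy instead of a
    live process, and symbolization stays entirely on our side.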

    also I recall discussing the speed with libdw developer
    Jan Kratochvil (CC-ed) and AFAICS they're open to
    suggestions/optimizations
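
    btw, wrt the stack-size / --max-stack tuning Arnaldo mentions above:
    assuming 'perf top' takes the same dwarf,<dump size> syntax that
    'perf record --call-graph' does, something like

        perf top --call-graph dwarf,2048 --max-stack 16

    should already trim the per-sample cost, since less user stack gets
    copied out per sample and fewer frames get processed (2048/16 are
    just placeholder values, not a recommendation)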

    jirka

