Subject: Re: [PATCHSET 00/12] perf tools: Apply percent-limit to callchains
On Tue, Jan 26, 2016 at 01:14:47PM +0100, Jiri Olsa wrote:
> On Sun, Jan 24, 2016 at 10:53:23PM +0900, Namhyung Kim wrote:
> > Hello,
> >
> > This patchset implements a percent limit for callchains, as
> > requested by Andi Kleen. For some reason, limiting callchains by
> > (overhead) percentage didn't work well before. This patchset fixes
> > that and makes --percent-limit work for callchains as well as hist
> > entries.
> >
> > This is available on 'perf/callchain-limit-v1' branch in my tree:
> >
> > git://git.kernel.org/pub/scm/linux/kernel/git/namhyung/linux-perf.git
> >
> > Any comments are welcome,
> >
> > Thanks,
> > Namhyung
> >
> >
> > Namhyung Kim (12):
> > perf report: Apply --percent-limit to callchains also
> > perf report: Apply callchain percent limit on --stdio
> > perf report: Get rid of hist_entry__callchain_fprintf()
> > perf report: Fix percent calculation on --stdio
> > perf report: Hide output pipe for percent-limited callchains on stdio
> > perf hists browser: Fix dump to show correct callchain style
> > perf hists browser: Fix callchain_node__count_rows()
> > perf hists browser: Apply callchain percent limit
> > perf hists browser: Fix callchain counting when press ENTER key
> > perf hists browser: Fix counting callchains when expand/collapse all
> > perf hists browser: Update percent base for fractal callchain mode
> > perf report: Fix callchain percent limit on --gtk
>
> is 0.5 the default, or does one have to use --percent-limit 0.5
> for the limit to be effective?

Yes, it's effective now. I also think we need to change the default
limit of 0.5. AFAIK it was set initially for 'fractal' mode, where the
percentage is relative to each node. In that case a 0.5% limit makes
sense because it corresponds to a very small absolute value.

But with 'graph' mode (now the default), there are many entries under
0.5% overhead, and their callchains are silently not shown anymore.
Actually, I was confused by this myself while working on this patchset.

What about 0.005% for the new default?
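
To make the difference concrete, here is a minimal sketch of the two
percent calculations; the helper names and standalone types are
illustrative for this example only, not the actual perf internals:

#include <stdint.h>

/* Illustrative helpers; perf uses its own u64 type and callchain code. */
double fractal_percent(uint64_t node_hits, uint64_t parent_hits)
{
	/* 'fractal' mode: percent is relative to the parent node,
	 * so a 0.5% limit still lets most child chains through. */
	return 100.0 * node_hits / parent_hits;
}

double graph_percent(uint64_t cumul_hits, uint64_t total_samples)
{
	/* 'graph' mode: percent is relative to all samples, so the
	 * same 0.5% cut-off silently hides far more callchains. */
	return 100.0 * cumul_hits / total_samples;
}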


>
> without the option I'm getting empty callchains that are below 0.5%,
> but only in TUI mode (attached).. --stdio shows them all unfolded

It should not show them all. But I found that I missed a check for
the stdio case. Could you please test the patch below?


From 9026b85537cf31af43124c957867f42b34262f2e Mon Sep 17 00:00:00 2001
From: Namhyung Kim <namhyung@kernel.org>
Date: Tue, 26 Jan 2016 21:40:39 +0900
Subject: [PATCH] perf report: Check percent limit of single callchain on stdio

While the previous commit ("perf report: Apply callchain percent limit
on --stdio") checked the percent limit of callchains, it missed checking
single-path callchains. This resulted in callchains under the limit
being shown when an entry has only a single call-graph path.

Reported-by: Jiri Olsa <jolsa@redhat.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
---
 tools/perf/ui/stdio/hist.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/tools/perf/ui/stdio/hist.c b/tools/perf/ui/stdio/hist.c
index eae25efa684e..35964579627b 100644
--- a/tools/perf/ui/stdio/hist.c
+++ b/tools/perf/ui/stdio/hist.c
@@ -199,6 +199,7 @@ static size_t callchain__fprintf_graph(FILE *fp, struct rb_root *root,
 	int i = 0;
 	int ret = 0;
 	char bf[1024];
+	double percent;
 
 	/*
 	 * If have one single callchain root, don't bother printing
@@ -208,6 +209,11 @@ static size_t callchain__fprintf_graph(FILE *fp, struct rb_root *root,
 	node = rb_first(root);
 	if (node && !rb_next(node)) {
 		cnode = rb_entry(node, struct callchain_node, rb_node);
+
+		percent = 100.0 * callchain_cumul_hits(cnode) / total_samples;
+		if (percent < callchain_param.min_percent)
+			return 0;
+
 		list_for_each_entry(chain, &cnode->val, list) {
 			/*
 			 * If we sort by symbol, the first entry is the same than
--
2.6.4
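
For reference, a simple way to exercise the stdio path would be
something like the following ('./workload' is just a placeholder for
whatever produces the profile):

  perf record -g -- ./workload
  perf report --stdio --percent-limit 0.5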