Subject: Re: [perf-tools] Build-error in tools/perf/util/annotate.c with LLVM-14
On Sun, Jul 3, 2022 at 6:51 PM Andres Freund <andres@anarazel.de> wrote:
>
> Hi,
>
> On 2022-07-03 13:54:41 +0200, Sedat Dilek wrote:
> > Andres, do you have some test cases showing how you verified that the built perf is OK?
>
> I ran an intentionally expensive workload, monitored it with bpftrace, and
> then took a perf profile. Then I annotated the bpf "function" and verified it
> looked the same before / after, using a perf built in a container (where it
> still compiles).
>
>
> Similarly with bpftool: I dumped a jited program with a bpftool built with /
> without the patches (entering the container with nsenter -t $pid -m -p for
> the version without the patches, so I could build it) and compared both the
> json and non-json output before / after.
>
> V=4; nsenter -t 847325 -m -p /usr/src/linux/tools/bpf/bpftool/bpftool -j -d prog dump jited id 22 > /tmp/22.jit.json.$V; nsenter -t 847325 -m -p /usr/src/linux/tools/bpf/bpftool/bpftool -d prog dump jited id 22 > /tmp/22.jit.txt.$V
>
> and then diffed the results.
>
>
> bpf_jit_disasm was harder, because bpf_jit_enable = 2 is currently broken. So
> I gathered output in a VM running an older kernel, and used bpf_jit_disasm -f ...
> before / after the patches.
>
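
For reference, the kind of before/after check described above would look
roughly like this. This is only a sketch, not the exact invocations used:
the bpftrace probe, the profiling duration and the bpf symbol name are
placeholders.

  # attach some bpf program and keep it running
  bpftrace -e 'kprobe:vfs_read { @[comm] = count(); }' &
  # profile the expensive workload system-wide for a while
  perf record -a -g -- sleep 30
  # find the jited bpf symbol (bpf_prog_<tag>_<name>) and annotate it
  perf report
  perf annotate bpf_prog_<tag>_<name>
  # for bpf_jit_disasm, on a kernel where bpf_jit_enable = 2 still works:
  sysctl -w net.core.bpf_jit_enable=2
  bpf_jit_disasm -o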

My test case was to build Linux v5.19-rc4 plus custom patches,
including your v1 patchset.

Using my self-built perf:

$ ~/bin/perf -vv
perf version 5.19.0-rc4
dwarf: [ on ] # HAVE_DWARF_SUPPORT
dwarf_getlocations: [ on ] # HAVE_DWARF_GETLOCATIONS_SUPPORT
glibc: [ on ] # HAVE_GLIBC_SUPPORT
syscall_table: [ on ] # HAVE_SYSCALL_TABLE_SUPPORT
libbfd: [ on ] # HAVE_LIBBFD_SUPPORT
debuginfod: [ OFF ] # HAVE_DEBUGINFOD_SUPPORT
libelf: [ on ] # HAVE_LIBELF_SUPPORT
libnuma: [ on ] # HAVE_LIBNUMA_SUPPORT
numa_num_possible_cpus: [ on ] # HAVE_LIBNUMA_SUPPORT
libperl: [ on ] # HAVE_LIBPERL_SUPPORT
libpython: [ on ] # HAVE_LIBPYTHON_SUPPORT
libslang: [ on ] # HAVE_SLANG_SUPPORT
libcrypto: [ on ] # HAVE_LIBCRYPTO_SUPPORT
libunwind: [ on ] # HAVE_LIBUNWIND_SUPPORT
libdw-dwarf-unwind: [ on ] # HAVE_DWARF_SUPPORT
zlib: [ on ] # HAVE_ZLIB_SUPPORT
lzma: [ on ] # HAVE_LZMA_SUPPORT
get_cpuid: [ on ] # HAVE_AUXTRACE_SUPPORT
bpf: [ on ] # HAVE_LIBBPF_SUPPORT
aio: [ on ] # HAVE_AIO_SUPPORT
zstd: [ on ] # HAVE_ZSTD_SUPPORT
libpfm4: [ OFF ] # HAVE_LIBPFM

make-line:

/home/dileks/bin/perf stat make V=1 -j4 LLVM=1 LLVM_IAS=1 \
  PAHOLE=/opt/pahole/bin/pahole LOCALVERSION=-1-amd64-clang14-lto \
  KBUILD_BUILD_HOST=iniza KBUILD_BUILD_USER=sedat.dilek@gmail.com \
  KBUILD_BUILD_TIMESTAMP=2022-07-03 bindeb-pkg \
  KDEB_PKGVERSION=5.19.0~rc4-1~bookworm+dileks1

Performance counter stats for 'make V=1 -j4 LLVM=1 LLVM_IAS=1
PAHOLE=/opt/pahole/bin/pahole LOCALVERSION=-1-amd64-clang14-lto
KBUILD_BUILD_HOST=iniza KBUILD_BUILD_USER=sedat.dilek@gmail.com
KBUILD_BUILD_TIMESTAMP=2022-07-03 bindeb-pkg
KDEB_PKGVERSION=5.19.0~rc4-1~bookworm+dileks1':

     49180053.86 msec task-clock                #    3.371 CPUs utilized
        11647016      context-switches          #  236.824 /sec
          341509      cpu-migrations            #    6.944 /sec
       341092829      page-faults               #    6.936 K/sec
  86858202428205      cycles                    #    1.766 GHz
  63272333662538      stalled-cycles-frontend   #   72.85% frontend cycles idle
  45610931269521      stalled-cycles-backend    #   52.51% backend cycles idle
  58841762567958      instructions              #    0.68  insn per cycle
                                                #    1.08  stalled cycles per insn
  10469937534492      branches                  #  212.890 M/sec
    558492683589      branch-misses             #    5.33% of all branches

  14587.639724247 seconds time elapsed

  45568.184531000 seconds user
   3656.227306000 seconds sys

Hmmm, it took a bit longer than usual.
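
(For scale: the 3.371 "CPUs utilized" figure is just task-clock divided by
wall-clock time, 49180053.86 msec / 14587.64 s ≈ 3.37, i.e. roughly 4 hours
of wall time for about 13.7 CPU-hours of work.)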

But hey:

$ cat /proc/version
Linux version 5.19.0-rc4-1-amd64-clang14-lto
(sedat.dilek@gmail.com@iniza) (dileks clang version 14.0.5
(https://github.com/llvm/llvm-project.git
c12386ae247c0d46e1d513942e322e3a0510b126), LLD 14.0.5)
#1~bookworm+dileks1 SMP PREEMPT_DYNAMIC 2022-07-03

-Sedat-
