Subject: Re: [PATCH] perf stat: Support old kernels for bperf cgroup counting
On Fri, Sep 30, 2022 at 7:31 PM Namhyung Kim <namhyung@kernel.org> wrote:
>
> Hello,
>
> On Fri, Sep 30, 2022 at 3:48 PM Andrii Nakryiko
> <andrii.nakryiko@gmail.com> wrote:
> >
> > On Wed, Sep 21, 2022 at 9:21 PM Namhyung Kim <namhyung@kernel.org> wrote:
> > >
> > > The recent change in the cgroup code will break backward compatibility in
> > > the BPF program.  It should support both old and new kernels using the
> > > BPF CO-RE technique.
> > >
> > > Like the task_struct->__state handling in the off-CPU analysis, we can
> > > check which field name exists in the cgroup struct.
> > >
> > > Signed-off-by: Namhyung Kim <namhyung@kernel.org>
> > > ---
> > > Arnaldo, I think this should go through the cgroup tree since it depends
> > > on the earlier change there. I don't think it'd conflict with other
> > > perf changes but please let me know if you see any trouble, thanks!
> > >
> > > tools/perf/util/bpf_skel/bperf_cgroup.bpf.c | 29 ++++++++++++++++++++-
> > > 1 file changed, 28 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/tools/perf/util/bpf_skel/bperf_cgroup.bpf.c b/tools/perf/util/bpf_skel/bperf_cgroup.bpf.c
> > > index 488bd398f01d..4fe61043de04 100644
> > > --- a/tools/perf/util/bpf_skel/bperf_cgroup.bpf.c
> > > +++ b/tools/perf/util/bpf_skel/bperf_cgroup.bpf.c
> > > @@ -43,12 +43,39 @@ struct {
> > >  	__uint(value_size, sizeof(struct bpf_perf_event_value));
> > >  } cgrp_readings SEC(".maps");
> > >
> > > +/* new kernel cgroup definition */
> > > +struct cgroup___new {
> > > +	int level;
> > > +	struct cgroup *ancestors[];
> > > +} __attribute__((preserve_access_index));
> > > +
> > > +/* old kernel cgroup definition */
> > > +struct cgroup___old {
> > > +	int level;
> > > +	u64 ancestor_ids[];
> > > +} __attribute__((preserve_access_index));
> > > +
> > >  const volatile __u32 num_events = 1;
> > >  const volatile __u32 num_cpus = 1;
> > >
> > >  int enabled = 0;
> > >  int use_cgroup_v2 = 0;
> > >
> > > +static inline __u64 get_cgroup_v1_ancestor_id(struct cgroup *cgrp, int level)
> > > +{
> > > +	/* recast pointer to capture new type for compiler */
> > > +	struct cgroup___new *cgrp_new = (void *)cgrp;
> > > +
> > > +	if (bpf_core_field_exists(cgrp_new->ancestors)) {
> > > +		return BPF_CORE_READ(cgrp_new, ancestors[level], kn, id);
> >
> > Have you checked the generated BPF code for this ancestors[level] access?
> > I'd expect a CO-RE relocation for the ancestors offset and then just
> > plain + level * 8 arithmetic, but it would be nice to confirm. Apart
> > from this, looks good to me:
> >
> > Acked-by: Andrii Nakryiko <andrii@kernel.org>
>
> Thanks for your review!
>
> How can I check the generated code? Do you have something that works with
> skeletons, or do I have to save the BPF object somehow during the build?
>

The skeleton is generated from the ELF BPF object file. You can run
llvm-objdump -d <obj.bpf.o> to see the instructions. Unfortunately you
can't see BPF CO-RE relocations this way; you'd have to use something
like my custom tool ([0]).

But anyway, I checked a similar code pattern locally and I think it's
all good from the BPF CO-RE perspective. I see the appropriate relocations
in all the necessary places, so this should work.

Acked-by: Andrii Nakryiko <andrii@kernel.org>

[0] https://github.com/anakryiko/btfdump
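
For readers following the thread, here is a minimal standalone sketch of the
CO-RE "flavor" plus bpf_core_field_exists() pattern being discussed. It is
illustrative only: the program section, attach point, and printed field are
assumptions and not taken from the patch, and the old-kernel fallback branch
is omitted since it is cut off in the quote above.

// SPDX-License-Identifier: GPL-2.0
/* flavor_example.bpf.c -- illustrative sketch, not part of the patch */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>
#include <bpf/bpf_core_read.h>

/*
 * "Flavor" of struct cgroup as it looks on newer kernels.  Only the
 * fields we access need to be declared; libbpf strips the ___new
 * suffix when matching the flavor against the kernel's BTF.
 */
struct cgroup___new {
	int level;
	struct cgroup *ancestors[];
} __attribute__((preserve_access_index));

SEC("tp_btf/cgroup_mkdir")
int BPF_PROG(on_cgroup_mkdir, struct cgroup *cgrp, const char *path)
{
	struct cgroup___new *cgrp_new = (void *)cgrp;
	__u64 id = 0;

	/*
	 * bpf_core_field_exists() is a CO-RE relocation: libbpf patches it
	 * to a constant 0 or 1 at load time, depending on whether the
	 * running kernel's struct cgroup has an 'ancestors' array.
	 */
	if (bpf_core_field_exists(cgrp_new->ancestors))
		id = BPF_CORE_READ(cgrp_new, ancestors[0], kn, id);

	bpf_printk("root ancestor cgroup id: %llu", id);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";

Because the field-exists check collapses to a constant at load time, the same
object file loads on kernels with either struct cgroup layout, which is the
property the patch relies on.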

> Thanks,
> Namhyung
