Subject: Re: [PATCH v3] perf tools: Get a perf cgroup more portably in BPF
On Thu, Sep 22, 2022 at 1:53 PM Namhyung Kim <namhyung@kernel.org> wrote:
>
> The perf_event_cgrp_id can be different on other configurations.
> To be more portable as CO-RE, it needs to get the cgroup subsys id
> using the bpf_core_enum_value() helper.
>
> Suggested-by: Ian Rogers <irogers@google.com>
> Signed-off-by: Namhyung Kim <namhyung@kernel.org>
> ---
> v3 changes)
> * check compiler features for enum value
>
> v2 changes)
> * fix off_cpu.bpf.c too
> * get perf_subsys_id only once
>
>  tools/perf/util/bpf_skel/bperf_cgroup.bpf.c | 11 ++++++++++-
>  tools/perf/util/bpf_skel/off_cpu.bpf.c      | 12 ++++++++----
>  2 files changed, 18 insertions(+), 5 deletions(-)
>
> diff --git a/tools/perf/util/bpf_skel/bperf_cgroup.bpf.c b/tools/perf/util/bpf_skel/bperf_cgroup.bpf.c
> index 292c430768b5..8e7520e273db 100644
> --- a/tools/perf/util/bpf_skel/bperf_cgroup.bpf.c
> +++ b/tools/perf/util/bpf_skel/bperf_cgroup.bpf.c
> @@ -48,6 +48,7 @@ const volatile __u32 num_cpus = 1;
>
> int enabled = 0;
> int use_cgroup_v2 = 0;
> +int perf_subsys_id = -1;
>
> static inline int get_cgroup_v1_idx(__u32 *cgrps, int size)
> {
> @@ -58,7 +59,15 @@ static inline int get_cgroup_v1_idx(__u32 *cgrps, int size)
>  	int level;
>  	int cnt;
>
> -	cgrp = BPF_CORE_READ(p, cgroups, subsys[perf_event_cgrp_id], cgroup);
> +	if (perf_subsys_id == -1) {
> +#if __has_builtin(__builtin_preserve_enum_value)
> +		perf_subsys_id = bpf_core_enum_value(enum cgroup_subsys_id,
> +						     perf_event_cgrp_id);
> +#else
> +		perf_subsys_id = perf_event_cgrp_id;
> +#endif
> +	}
> +	cgrp = BPF_CORE_READ(p, cgroups, subsys[perf_subsys_id], cgroup);
>  	level = BPF_CORE_READ(cgrp, level);
>
>  	for (cnt = 0; i < MAX_LEVELS; i++) {
> diff --git a/tools/perf/util/bpf_skel/off_cpu.bpf.c b/tools/perf/util/bpf_skel/off_cpu.bpf.c
> index c4ba2bcf179f..e917ef7b8875 100644
> --- a/tools/perf/util/bpf_skel/off_cpu.bpf.c
> +++ b/tools/perf/util/bpf_skel/off_cpu.bpf.c
> @@ -94,6 +94,8 @@ const volatile bool has_prev_state = false;
> const volatile bool needs_cgroup = false;
> const volatile bool uses_cgroup_v1 = false;
>
> +int perf_subsys_id = -1;
> +
> /*
> * Old kernel used to call it task_struct->state and now it's '__state'.
> * Use BPF CO-RE "ignored suffix rule" to deal with it like below:
> @@ -119,11 +121,13 @@ static inline __u64 get_cgroup_id(struct task_struct *t)
> {
>  	struct cgroup *cgrp;
>
> -	if (uses_cgroup_v1)
> -		cgrp = BPF_CORE_READ(t, cgroups, subsys[perf_event_cgrp_id], cgroup);
> -	else
> -		cgrp = BPF_CORE_READ(t, cgroups, dfl_cgrp);
> +	if (!uses_cgroup_v1)
> +		return BPF_CORE_READ(t, cgroups, dfl_cgrp, kn, id);
> +
> +	if (perf_subsys_id == -1)
> +		perf_subsys_id = bpf_core_enum_value(enum cgroup_subsys_id, perf_event_cgrp_id);

Should the "#if __has_builtin(__builtin_preserve_enum_value)" test also be here?
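
For illustration, the guarded version of that off_cpu.bpf.c block would presumably mirror the bperf_cgroup.bpf.c hunk above, something like (just a sketch, not part of the posted patch):

	if (perf_subsys_id == -1) {
#if __has_builtin(__builtin_preserve_enum_value)
		/* resolve the subsys id against the running kernel via CO-RE */
		perf_subsys_id = bpf_core_enum_value(enum cgroup_subsys_id,
						     perf_event_cgrp_id);
#else
		/* older clang: fall back to the compile-time enum value */
		perf_subsys_id = perf_event_cgrp_id;
#endif
	}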

It feels a shame that bpf_core_enum_value isn't defined something like:

#if __has_builtin(__builtin_preserve_enum_value)
#define bpf_core_enum_value(enum_type, enum_value) \
	__builtin_preserve_enum_value(*(typeof(enum_type) *)enum_value, \
				      BPF_ENUMVAL_VALUE)
#else
#define bpf_core_enum_value(enum_type, enum_value) enum_value
#endif

for backward clang compatibility, but I could see why an error would
be preferable.
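
With a fallback like that, the call sites could then drop the #if guard entirely and just do, e.g.:

	if (perf_subsys_id == -1)
		perf_subsys_id = bpf_core_enum_value(enum cgroup_subsys_id,
						     perf_event_cgrp_id);

(only a sketch of the idea; with the current definition, older clang errors out as noted above, which is arguably the safer behavior).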

Thanks,
Ian

>
> +	cgrp = BPF_CORE_READ(t, cgroups, subsys[perf_subsys_id], cgroup);
>  	return BPF_CORE_READ(cgrp, kn, id);
> }
>
> --
> 2.37.3.998.g577e59143f-goog
>
