Subject: Re: [PATCH bpf-next v1 2/5] cgroup: bpf: add cgroup_rstat_updated() and cgroup_rstat_flush() kfuncs
On Fri, May 20, 2022 at 8:15 AM Yonghong Song <yhs@fb.com> wrote:
>
>
>
> On 5/19/22 6:21 PM, Yosry Ahmed wrote:
> > Add cgroup_rstat_updated() and cgroup_rstat_flush() kfuncs to bpf
> > tracing programs. bpf programs that make use of rstat can use these
> > functions to inform rstat when they update stats for a cgroup, and when
> > they need to flush the stats.
> >
> > Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
> > ---
> > kernel/cgroup/rstat.c | 35 ++++++++++++++++++++++++++++++++++-
> > 1 file changed, 34 insertions(+), 1 deletion(-)
> >
> > diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
> > index e7a88d2600bd..a16a851bc0a1 100644
> > --- a/kernel/cgroup/rstat.c
> > +++ b/kernel/cgroup/rstat.c
> > @@ -3,6 +3,11 @@
> >
> > #include <linux/sched/cputime.h>
> >
> > +#include <linux/bpf.h>
> > +#include <linux/btf.h>
> > +#include <linux/btf_ids.h>
> > +
> > +
> > static DEFINE_SPINLOCK(cgroup_rstat_lock);
> > static DEFINE_PER_CPU(raw_spinlock_t, cgroup_rstat_cpu_lock);
> >
> > @@ -141,7 +146,12 @@ static struct cgroup *cgroup_rstat_cpu_pop_updated(struct cgroup *pos,
> > return pos;
> > }
> >
> > -/* A hook for bpf stat collectors to attach to and flush their stats */
> > +/*
> > + * A hook for bpf stat collectors to attach to and flush their stats.
> > + * Together with providing bpf kfuncs for cgroup_rstat_updated() and
> > + * cgroup_rstat_flush(), this enables a complete workflow where bpf progs that
> > + * collect cgroup stats can integrate with rstat for efficient flushing.
> > + */
> > __weak noinline void bpf_rstat_flush(struct cgroup *cgrp,
> > struct cgroup *parent, int cpu)
> > {
> > @@ -476,3 +486,26 @@ void cgroup_base_stat_cputime_show(struct seq_file *seq)
> > "system_usec %llu\n",
> > usage, utime, stime);
> > }
> > +
> > +/* Add bpf kfuncs for cgroup_rstat_updated() and cgroup_rstat_flush() */
> > +BTF_SET_START(bpf_rstat_check_kfunc_ids)
> > +BTF_ID(func, cgroup_rstat_updated)
> > +BTF_ID(func, cgroup_rstat_flush)
> > +BTF_SET_END(bpf_rstat_check_kfunc_ids)
> > +
> > +BTF_SET_START(bpf_rstat_sleepable_kfunc_ids)
> > +BTF_ID(func, cgroup_rstat_flush)
> > +BTF_SET_END(bpf_rstat_sleepable_kfunc_ids)
> > +
> > +static const struct btf_kfunc_id_set bpf_rstat_kfunc_set = {
> > +	.owner = THIS_MODULE,
> > +	.check_set = &bpf_rstat_check_kfunc_ids,
> > +	.sleepable_set = &bpf_rstat_sleepable_kfunc_ids,
>
> There is a compilation error here:
>
> kernel/cgroup/rstat.c:503:3: error: ‘const struct btf_kfunc_id_set’ has
> no member named ‘sleepable_set’; did you mean ‘release_set’?
> 503 | .sleepable_set = &bpf_rstat_sleepable_kfunc_ids,
> | ^~~~~~~~~~~~~
> | release_set
> kernel/cgroup/rstat.c:503:19: warning: excess elements in struct
> initializer
> 503 | .sleepable_set = &bpf_rstat_sleepable_kfunc_ids,
> | ^
> kernel/cgroup/rstat.c:503:19: note: (near initialization for
> ‘bpf_rstat_kfunc_set’)
> make[3]: *** [scripts/Makefile.build:288: kernel/cgroup/rstat.o] Error 1
>
> Please fix.

This patch series is rebased on top of 2 patches in the mailing list:
- bpf/btf: also allow kfunc in tracing and syscall programs
- btf: Add a new kfunc set which allows to mark a function to be
sleepable

I specified this in the cover letter. Do I need to do something else
in this situation? Re-send those patches as part of my series?
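
(For context, the build failure above is expected when this patch is
built without those two dependencies: the second one is assumed to add
a sleepable id set to struct btf_kfunc_id_set in include/linux/btf.h,
roughly along these lines -- a sketch only, the exact upstream layout
may differ:

struct btf_kfunc_id_set {
	struct module *owner;
	/* ... existing id sets: check_set, acquire_set, release_set, ... */
	struct btf_id_set *sleepable_set; /* kfuncs callable from sleepable progs */
};

With that member in place, the .sleepable_set initializer in this patch
compiles.)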



>
> > +};
> > +
> > +static int __init bpf_rstat_kfunc_init(void)
> > +{
> > +	return register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING,
> > +					 &bpf_rstat_kfunc_set);
> > +}
> > +late_initcall(bpf_rstat_kfunc_init);
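
For illustration, a rough BPF-side sketch of the workflow the commit
message describes could look like the following. Everything other than
the two kfuncs and the bpf_rstat_flush() hook is made up for this
example (program names, the map, and the cgroup_attach_task attach
point), and it assumes the tracing-kfunc dependency patch mentioned
above:

/* Illustrative sketch only -- not part of this series. */
#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char _license[] SEC("license") = "GPL";

/* kfuncs exposed to tracing programs by this patch */
extern void cgroup_rstat_updated(struct cgroup *cgrp, int cpu) __ksym;
extern void cgroup_rstat_flush(struct cgroup *cgrp) __ksym;

/* example per-cgroup, per-cpu counter */
struct {
	__uint(type, BPF_MAP_TYPE_PERCPU_HASH);
	__uint(max_entries, 1024);
	__type(key, __u64);	/* cgroup id */
	__type(value, __u64);
} attach_counters SEC(".maps");

/* Collector: bump a per-cpu counter for the target cgroup and tell
 * rstat that this (cgroup, cpu) has pending updates to flush. */
SEC("fentry/cgroup_attach_task")
int BPF_PROG(count_attach, struct cgroup *dst_cgrp,
	     struct task_struct *leader, bool threadgroup)
{
	__u64 key = dst_cgrp->kn->id, one = 1, *cnt;

	cnt = bpf_map_lookup_elem(&attach_counters, &key);
	if (cnt)
		*cnt += 1;
	else
		bpf_map_update_elem(&attach_counters, &key, &one, BPF_ANY);

	cgroup_rstat_updated(dst_cgrp, bpf_get_smp_processor_id());
	return 0;
}

/* Flusher: rstat invokes bpf_rstat_flush() for every (cgroup, cpu)
 * with pending updates; propagate per-cpu deltas to the parent here. */
SEC("fentry/bpf_rstat_flush")
int BPF_PROG(example_flush, struct cgroup *cgrp, struct cgroup *parent, int cpu)
{
	/* fold this cgroup's per-cpu counters into the parent's totals */
	return 0;
}

A reader-side program would then call cgroup_rstat_flush(cgrp) before
dumping the aggregated values; since this patch puts that kfunc in the
sleepable set, the call has to come from a sleepable program.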
