    Subject: Re: [PATCH bpf-next 1/2] bpf, test_run: Add PROG_TEST_RUN support to kprobe

    On Sun, May 29, 2022 at 3:06 PM Daniel Xu <dxu@dxuuu.xyz> wrote:
    >
    > This commit adds PROG_TEST_RUN support to BPF_PROG_TYPE_KPROBE progs. On
    > top of being generally useful for unit testing kprobe progs, this commit
    > more specifically helps solve a reliability problem with bpftrace BEGIN
    > and END probes.
    >
    > BEGIN and END probes are run exactly once at the beginning and end of a
    > bpftrace tracing session, respectively. bpftrace currently implements
    > the probes by creating two dummy functions and attaching the BEGIN and
    > END progs, if defined, to those functions and calling the dummy
    > functions as appropriate. This works pretty well most of the time except
    > for when distros strip symbols from bpftrace. Every now and then this
    > happens and users get confused. Having PROG_TEST_RUN support will help
    > solve this issue by allowing us to directly trigger uprobes from
    > userspace.
    >
    > Admittedly, this is a pretty specific problem and could probably be
    > solved other ways. However, PROG_TEST_RUN also makes unit testing more
    > convenient, especially as users start building more complex tracing
    > applications. So I see this as killing two birds with one stone.
    >
    > Signed-off-by: Daniel Xu <dxu@dxuuu.xyz>
    > ---
    >  include/linux/bpf.h      | 10 ++++++++++
    >  kernel/trace/bpf_trace.c |  1 +
    >  net/bpf/test_run.c       | 36 ++++++++++++++++++++++++++++++++++++
    >  3 files changed, 47 insertions(+)
    >
    > diff --git a/include/linux/bpf.h b/include/linux/bpf.h
    > index 2b914a56a2c5..dec3082ee158 100644
    > --- a/include/linux/bpf.h
    > +++ b/include/linux/bpf.h
    > @@ -1751,6 +1751,9 @@ int bpf_prog_test_run_raw_tp(struct bpf_prog *prog,
    >  int bpf_prog_test_run_sk_lookup(struct bpf_prog *prog,
    >                                  const union bpf_attr *kattr,
    >                                  union bpf_attr __user *uattr);
    > +int bpf_prog_test_run_kprobe(struct bpf_prog *prog,
    > +                             const union bpf_attr *kattr,
    > +                             union bpf_attr __user *uattr);
    >  bool btf_ctx_access(int off, int size, enum bpf_access_type type,
    >                      const struct bpf_prog *prog,
    >                      struct bpf_insn_access_aux *info);
    > @@ -1998,6 +2001,13 @@ static inline int bpf_prog_test_run_sk_lookup(struct bpf_prog *prog,
    >          return -ENOTSUPP;
    >  }
    >
    > +static inline int bpf_prog_test_run_kprobe(struct bpf_prog *prog,
    > +                                           const union bpf_attr *kattr,
    > +                                           union bpf_attr __user *uattr)
    > +{
    > +        return -ENOTSUPP;
    > +}

    As the kernel test bot reported, this is not enough to cover all config
    combinations. We can follow the pattern used for bpf_prog_test_run_tracing().
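
    A minimal sketch of that shape (the exact #if condition below is an
    assumption; it should mirror however bpf_prog_test_run_tracing() is
    guarded in include/linux/bpf.h, so that configs which build kprobe progs
    without net/bpf/test_run.c still link):

        /* Sketch only: real declaration when net/bpf/test_run.c is built,
         * static inline -ENOTSUPP stub otherwise. */
        #if defined(CONFIG_BPF_SYSCALL) && defined(CONFIG_NET)
        int bpf_prog_test_run_kprobe(struct bpf_prog *prog,
                                     const union bpf_attr *kattr,
                                     union bpf_attr __user *uattr);
        #else
        static inline int bpf_prog_test_run_kprobe(struct bpf_prog *prog,
                                                   const union bpf_attr *kattr,
                                                   union bpf_attr __user *uattr)
        {
                return -ENOTSUPP;
        }
        #endif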

    Otherwise, this looks good to me.

    Song

    > +
    >  static inline void bpf_map_put(struct bpf_map *map)
    >  {
    >  }
    > diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
    > index 10b157a6d73e..b452e84b9ba4 100644
    > --- a/kernel/trace/bpf_trace.c
    > +++ b/kernel/trace/bpf_trace.c
    > @@ -1363,6 +1363,7 @@ const struct bpf_verifier_ops kprobe_verifier_ops = {
    >  };
    >
    >  const struct bpf_prog_ops kprobe_prog_ops = {
    > +        .test_run = bpf_prog_test_run_kprobe,
    >  };
    >
    >  BPF_CALL_5(bpf_perf_event_output_tp, void *, tp_buff, struct bpf_map *, map,
    > diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
    > index 56f059b3c242..0b6fc17ce901 100644
    > --- a/net/bpf/test_run.c
    > +++ b/net/bpf/test_run.c
    > @@ -1622,6 +1622,42 @@ int bpf_prog_test_run_syscall(struct bpf_prog *prog,
    >          return err;
    >  }
    >
    > +int bpf_prog_test_run_kprobe(struct bpf_prog *prog,
    > +                             const union bpf_attr *kattr,
    > +                             union bpf_attr __user *uattr)
    > +{
    > +        void __user *ctx_in = u64_to_user_ptr(kattr->test.ctx_in);
    > +        __u32 ctx_size_in = kattr->test.ctx_size_in;
    > +        u32 repeat = kattr->test.repeat;
    > +        struct pt_regs *ctx = NULL;
    > +        u32 retval, duration;
    > +        int err = 0;
    > +
    > +        if (kattr->test.data_in || kattr->test.data_out ||
    > +            kattr->test.ctx_out || kattr->test.flags ||
    > +            kattr->test.cpu || kattr->test.batch_size)
    > +                return -EINVAL;
    > +
    > +        if (ctx_size_in != sizeof(struct pt_regs))
    > +                return -EINVAL;
    > +
    > +        ctx = memdup_user(ctx_in, ctx_size_in);
    > +        if (IS_ERR(ctx))
    > +                return PTR_ERR(ctx);
    > +
    > +        err = bpf_test_run(prog, ctx, repeat, &retval, &duration, false);
    > +        if (err)
    > +                goto out;
    > +
    > +        if (copy_to_user(&uattr->test.retval, &retval, sizeof(retval)) ||
    > +            copy_to_user(&uattr->test.duration, &duration, sizeof(duration))) {
    > +                err = -EFAULT;
    > +        }
    > +out:
    > +        kfree(ctx);
    > +        return err;
    > +}
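
    For completeness, a userspace caller would drive this roughly as in the
    sketch below. It is not part of the patch: it assumes libbpf's
    bpf_prog_test_run_opts(), a prog_fd from an earlier program load, and a
    struct pt_regs whose size matches what the running kernel expects
    (asm/ptrace.h gives a matching layout on x86_64; other arches may differ).

        /* Hypothetical usage sketch: run a loaded BPF_PROG_TYPE_KPROBE prog
         * once through BPF_PROG_TEST_RUN. */
        #include <asm/ptrace.h>         /* userspace struct pt_regs */
        #include <bpf/bpf.h>            /* bpf_prog_test_run_opts(), LIBBPF_OPTS */

        static int trigger_kprobe_prog(int prog_fd)
        {
                struct pt_regs regs = {};       /* copied in as the prog's ctx */
                LIBBPF_OPTS(bpf_test_run_opts, opts,
                        .ctx_in = &regs,
                        .ctx_size_in = sizeof(regs),
                );
                int err;

                err = bpf_prog_test_run_opts(prog_fd, &opts);
                if (err)
                        return err;             /* negative error from libbpf */
                return opts.retval;             /* the BPF prog's return value */
        }

    bpftrace's BEGIN/END progs could then be run this way instead of being
    attached to, and fired through, dummy uprobe'd functions.
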
    > +
    >  static const struct btf_kfunc_id_set bpf_prog_test_kfunc_set = {
    >          .owner = THIS_MODULE,
    >          .check_set = &test_sk_check_kfunc_ids,
    > --
    > 2.36.1
    >
