    From: Song Liu <songliubraving@fb.com>
    Subject: Re: [PATCH v5 bpf-next 2/3] bpf: introduce helper bpf_get_branch_snapshot
    Date: 2 Sep 2021


    > On Sep 2, 2021, at 1:56 PM, John Fastabend <john.fastabend@gmail.com> wrote:
    >
    > Song Liu wrote:
    >> Introduce bpf_get_branch_snapshot(), which allows a tracing program to get
    >> the branch trace from hardware (e.g. Intel LBR). To use the feature, the
    >> user needs to create a perf_event with proper branch_record filtering
    >> on each cpu, and then call bpf_get_branch_snapshot in the bpf function.
    >> On Intel CPUs, the VLBR event (raw event 0x1b00) can be used for this.
    >>
    >> Signed-off-by: Song Liu <songliubraving@fb.com>
    >> ---
    >
    > [...]
    >
    >>
    >> +BPF_CALL_3(bpf_get_branch_snapshot, void *, buf, u32, size, u64, flags)
    >> +{
    >> +#ifndef CONFIG_X86
    >> +        return -ENOENT;
    >> +#else
    >> +        static const u32 br_entry_size = sizeof(struct perf_branch_entry);
    >> +        u32 entry_cnt = size / br_entry_size;
    >> +
    >> +        if (unlikely(flags))
    >> +                return -EINVAL;
    >> +
    >> +        if (!buf || (size % br_entry_size != 0))
    >> +                return -EINVAL;
    >
    > LGTM, but why fail if the buffer is slightly larger than expected? I guess it's a slightly
    > buggy program that would do this, but it's not actually harmful, right?

    This check was added because bpf_read_branch_records() has a similar check.
    I guess it is OK either way.
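
    For illustration only, here is a minimal sketch of the BPF-program side,
    assuming the helper signature from this patch and a buffer declared as a
    whole number of perf_branch_entry records so the size check above passes.
    The section name, program name, and MAX_LBR_ENTRIES constant are made up
    for the example, and the helper declaration is assumed to be generated
    into bpf_helper_defs.h once the patch lands:

        /* Sketch: BPF program calling the new helper. */
        #include "vmlinux.h"
        #include <bpf/bpf_helpers.h>
        #include <bpf/bpf_tracing.h>

        #define MAX_LBR_ENTRIES 32      /* hypothetical upper bound */

        /* Buffer sized as a whole multiple of sizeof(struct perf_branch_entry),
         * so the (size % br_entry_size) check in the helper is satisfied. */
        struct perf_branch_entry entries[MAX_LBR_ENTRIES] = {};
        long total_entries = 0;

        SEC("fexit/__x64_sys_nanosleep")        /* hypothetical attach point */
        int BPF_PROG(snapshot_branches)
        {
                long written;

                /* flags must be 0 for now, per the unlikely(flags) check above */
                written = bpf_get_branch_snapshot(entries, sizeof(entries), 0);
                if (written < 0)
                        return 0;

                /* the helper reports how many bytes it filled in */
                total_entries = written / sizeof(struct perf_branch_entry);
                return 0;
        }

        char LICENSE[] SEC("license") = "GPL";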

    >
    > Acked-by: John Fastabend <john.fastabend@gmail.com>

    Thanks for the review!
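
    For completeness, a rough user-space sketch of the per-CPU perf_event setup
    the commit message describes (Intel VLBR raw event 0x1b00 with branch-stack
    sampling). The exact attr fields here are an assumption based on that
    description, not something taken from this patch set:

        /* Sketch: arm the LBRs by opening one branch-stack event per CPU. */
        #include <linux/perf_event.h>
        #include <sys/syscall.h>
        #include <unistd.h>

        static int open_vlbr_event(int cpu)
        {
                struct perf_event_attr attr = {
                        .type = PERF_TYPE_RAW,
                        .config = 0x1b00,       /* Intel VLBR raw event */
                        .size = sizeof(attr),
                        .sample_type = PERF_SAMPLE_BRANCH_STACK,
                        .branch_sample_type = PERF_SAMPLE_BRANCH_KERNEL |
                                              PERF_SAMPLE_BRANCH_USER |
                                              PERF_SAMPLE_BRANCH_ANY,
                };

                /* pid = -1 (all tasks), one event per CPU, no group, no flags */
                return syscall(__NR_perf_event_open, &attr, -1, cpu, -1, 0);
        }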
