    Subject: [PATCH 6.0 257/862] bpf: Use this_cpu_{inc_return|dec} for prog->active
    From: Hou Tao <houtao1@huawei.com>

    [ Upstream commit c89e843a11f1075d27684f6b42256213e4592383 ]

    Both __this_cpu_inc_return() and __this_cpu_dec() are not preemption
    safe, and migrate_disable() no longer disables preemption, so the
    update of prog->active is not atomic and, in theory, recursion
    prevention may not work under a fully preemptible kernel.

    Fix this by using the preemption-safe and IRQ-safe variants
    this_cpu_inc_return() and this_cpu_dec() instead.
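    For illustration only, not part of the patch: a minimal userspace
    analogue of the lost update that an unprotected read-modify-write can
    suffer. The plain counter below plays the role of
    __this_cpu_inc_return() with preemption enabled; the C11 atomic plays
    the role of the preempt/IRQ-safe this_cpu_inc_return(). The pthreads
    scaffolding and all names here are illustrative assumptions, not
    kernel code.

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    static long plain;           /* racy RMW, like __this_cpu_inc_return() here */
    static atomic_long safe;     /* indivisible RMW, like this_cpu_inc_return() */

    static void *worker(void *arg)
    {
    	for (int i = 0; i < 1000000; i++) {
    		plain = plain + 1;            /* read, add, write: can interleave */
    		atomic_fetch_add(&safe, 1);   /* single atomic read-modify-write */
    	}
    	return NULL;
    }

    int main(void)
    {
    	pthread_t a, b;

    	pthread_create(&a, NULL, worker, NULL);
    	pthread_create(&b, NULL, worker, NULL);
    	pthread_join(a, NULL);
    	pthread_join(b, NULL);

    	/* "plain" typically ends up below 2000000; "safe" never does */
    	printf("plain: %ld, safe: %ld\n", plain, atomic_load(&safe));
    	return 0;
    }

    Built with e.g. gcc -O2 -pthread, the plain counter usually falls short
    of 2000000 while the atomic one is exact; a torn update of prog->active
    is the same class of bug, and it is what could fool the "!= 1"
    recursion check below.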

    Fixes: ca06f55b9002 ("bpf: Add per-program recursion prevention mechanism")
    Signed-off-by: Hou Tao <houtao1@huawei.com>
    Acked-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/r/20220901061938.3789460-3-houtao@huaweicloud.com
    Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
    Signed-off-by: Sasha Levin <sashal@kernel.org>
    ---
    kernel/bpf/trampoline.c | 8 ++++----
    1 file changed, 4 insertions(+), 4 deletions(-)

    diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
    index ff87e38af8a7..ad76940b02cc 100644
    --- a/kernel/bpf/trampoline.c
    +++ b/kernel/bpf/trampoline.c
    @@ -895,7 +895,7 @@ u64 notrace __bpf_prog_enter(struct bpf_prog *prog, struct bpf_tramp_run_ctx *ru
     
     	run_ctx->saved_run_ctx = bpf_set_run_ctx(&run_ctx->run_ctx);
     
    -	if (unlikely(__this_cpu_inc_return(*(prog->active)) != 1)) {
    +	if (unlikely(this_cpu_inc_return(*(prog->active)) != 1)) {
     		inc_misses_counter(prog);
     		return 0;
     	}
    @@ -930,7 +930,7 @@ void notrace __bpf_prog_exit(struct bpf_prog *prog, u64 start, struct bpf_tramp_
     	bpf_reset_run_ctx(run_ctx->saved_run_ctx);
     
     	update_prog_stats(prog, start);
    -	__this_cpu_dec(*(prog->active));
    +	this_cpu_dec(*(prog->active));
     	migrate_enable();
     	rcu_read_unlock();
     }
    @@ -966,7 +966,7 @@ u64 notrace __bpf_prog_enter_sleepable(struct bpf_prog *prog, struct bpf_tramp_r
     	migrate_disable();
     	might_fault();
     
    -	if (unlikely(__this_cpu_inc_return(*(prog->active)) != 1)) {
    +	if (unlikely(this_cpu_inc_return(*(prog->active)) != 1)) {
     		inc_misses_counter(prog);
     		return 0;
     	}
    @@ -982,7 +982,7 @@ void notrace __bpf_prog_exit_sleepable(struct bpf_prog *prog, u64 start,
     	bpf_reset_run_ctx(run_ctx->saved_run_ctx);
     
     	update_prog_stats(prog, start);
    -	__this_cpu_dec(*(prog->active));
    +	this_cpu_dec(*(prog->active));
     	migrate_enable();
     	rcu_read_unlock_trace();
     }
    --
    2.35.1
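
    For context, a simplified model (illustrative only, not the kernel
    source) of the recursion guard these hunks touch: enter bumps a
    per-CPU active counter and signals a miss if the program is already
    running on this CPU; exit drops the counter. A C11 atomic stands in
    for the preempt/IRQ-safe per-CPU operation:

    #include <stdatomic.h>

    static _Atomic int active;    /* stand-in for *(prog->active) on one CPU */

    /* Mirrors __bpf_prog_enter(): the first activation sees the counter
     * at 1; a nested or preempting entry sees > 1 and counts as a miss. */
    static int prog_enter(void)
    {
    	if (atomic_fetch_add(&active, 1) + 1 != 1) {
    		/* inc_misses_counter(prog) in the kernel */
    		return 0;	/* caller skips running the program */
    	}
    	return 1;
    }

    /* Mirrors __bpf_prog_exit(): drop the active count. */
    static void prog_exit(void)
    {
    	atomic_fetch_sub(&active, 1);
    }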

