Subject: Re: [PATCH] libbpf: add GCC support for bpf_tail_call_static

Hi James.

> The bpf_tail_call_static function is currently not defined unless
> using clang >= 8.
>
> To support bpf_tail_call_static on GCC, we can enable it whenever
> __clang__ is not defined.
>
> We also need to check for the GCC-style __BPF__ in addition to __bpf__
> for this to work, as GCC does not define __bpf__.

No need for that complication. I just pushed a patch to GCC that makes
it define __bpf__ as a target macro, in addition to __BPF__, like LLVM
does.

https://gcc.gnu.org/pipermail/gcc-patches/2022-August/600534.html
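
With that change in, the preprocessor guard shouldn't need to know about
__BPF__ at all; a minimal sketch of what the condition could then look
like (assuming a GCC new enough to carry the patch above):

  #if (!defined(__clang__) || __clang_major__ >= 8) && defined(__bpf__)

i.e. keep testing only __bpf__, and apply the clang version check only
when __clang__ is actually defined.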

> We need to use GCC assembly syntax when the compiler does not define
> __clang__, as LLVM-style BPF inline assembly is not fully compatible
> with GCC.
>
> Signed-off-by: James Hilliard <james.hilliard1@gmail.com>
> ---
> tools/lib/bpf/bpf_helpers.h | 19 +++++++++++++------
> 1 file changed, 13 insertions(+), 6 deletions(-)
>
> diff --git a/tools/lib/bpf/bpf_helpers.h b/tools/lib/bpf/bpf_helpers.h
> index 7349b16b8e2f..a0650b840cda 100644
> --- a/tools/lib/bpf/bpf_helpers.h
> +++ b/tools/lib/bpf/bpf_helpers.h
> @@ -131,7 +131,7 @@
> /*
> * Helper function to perform a tail call with a constant/immediate map slot.
> */
> -#if __clang_major__ >= 8 && defined(__bpf__)
> +#if (!defined(__clang__) || __clang_major__ >= 8) && (defined(__bpf__) || defined(__BPF__))
> static __always_inline void
> bpf_tail_call_static(void *ctx, const void *map, const __u32 slot)
> {
> @@ -139,8 +139,8 @@ bpf_tail_call_static(void *ctx, const void *map, const __u32 slot)
> __bpf_unreachable();
>
> /*
> - * Provide a hard guarantee that LLVM won't optimize setting r2 (map
> - * pointer) and r3 (constant map index) from _different paths_ ending
> + * Provide a hard guarantee that the compiler won't optimize setting r2
> + * (map pointer) and r3 (constant map index) from _different paths_ ending
> * up at the _same_ call insn as otherwise we won't be able to use the
> * jmpq/nopl retpoline-free patching by the x86-64 JIT in the kernel
> * given they mismatch. See also d2e4c1e6c294 ("bpf: Constant map key
> @@ -148,12 +148,19 @@ bpf_tail_call_static(void *ctx, const void *map, const __u32 slot)
> *
> * Note on clobber list: we need to stay in-line with BPF calling
> * convention, so even if we don't end up using r0, r4, r5, we need
> - * to mark them as clobber so that LLVM doesn't end up using them
> - * before / after the call.
> + * to mark them as clobber so that the compiler doesn't end up using
> + * them before / after the call.
> */
> - asm volatile("r1 = %[ctx]\n\t"
> + asm volatile(
> +#ifdef __clang__
> + "r1 = %[ctx]\n\t"
> "r2 = %[map]\n\t"
> "r3 = %[slot]\n\t"
> +#else
> + "mov %%r1,%[ctx]\n\t"
> + "mov %%r2,%[map]\n\t"
> + "mov %%r3,%[slot]\n\t"
> +#endif
> "call 12"
> :: [ctx]"r"(ctx), [map]"r"(map), [slot]"i"(slot)
> : "r0", "r1", "r2", "r3", "r4", "r5");
