Subject: Re: [PATCH bpf] powerpc/bpf: enforce full ordering for ATOMIC operations with BPF_FETCH
Michael Ellerman <mpe@ellerman.id.au> writes:

> Puranjay Mohan <puranjay@kernel.org> writes:
>> The Linux Kernel Memory Model [1][2] requires RMW operations that have a
>> return value to be fully ordered.
>>
>> BPF atomic operations with BPF_FETCH (including BPF_XCHG and
>> BPF_CMPXCHG) return a value back so they need to be JITed to fully
>> ordered operations. POWERPC currently emits relaxed operations for
>> these.
>
> Thanks for catching this.
>
>> diff --git a/arch/powerpc/net/bpf_jit_comp32.c b/arch/powerpc/net/bpf_jit_comp32.c
>> index 2f39c50ca729..b635e5344e8a 100644
>> --- a/arch/powerpc/net/bpf_jit_comp32.c
>> +++ b/arch/powerpc/net/bpf_jit_comp32.c
>> @@ -853,6 +853,15 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, u32 *fimage, struct code
>> /* Get offset into TMP_REG */
>> EMIT(PPC_RAW_LI(tmp_reg, off));
>> tmp_idx = ctx->idx * 4;
>> + /*
>> + * Enforce full ordering for operations with BPF_FETCH by emitting a 'sync'
>> + * before and after the operation.
>> + *
>> + * This is a requirement in the Linux Kernel Memory Model.
>> + * See __cmpxchg_u64() in asm/cmpxchg.h as an example.
>> + */
>> + if (imm & BPF_FETCH)
>> + EMIT(PPC_RAW_SYNC());
>> /* load value from memory into r0 */
>> EMIT(PPC_RAW_LWARX(_R0, tmp_reg, dst_reg, 0));
>>
>> @@ -905,6 +914,8 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, u32 *fimage, struct code
>>
>> /* For the BPF_FETCH variant, get old data into src_reg */
>> if (imm & BPF_FETCH) {
>> + /* Emit 'sync' to enforce full ordering */
>> + EMIT(PPC_RAW_SYNC());
>> EMIT(PPC_RAW_MR(ret_reg, ax_reg));
>> if (!fp->aux->verifier_zext)
>> EMIT(PPC_RAW_LI(ret_reg - 1, 0)); /* higher 32-bit */
>
> On 32-bit there are non-SMP systems where those syncs will probably be expensive.
>
> I think just adding an IS_ENABLED(CONFIG_SMP) around the syncs is
> probably sufficient. Christophe?

Yes, and I should do it for both 32-bit and 64-bit, because the kernel does
the same:

On powerpc, __atomic_pre/post_full_fence resolve to 'sync' with CONFIG_SMP
and to barrier() with !CONFIG_SMP.
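
For reference, the kernel-side chain is roughly the following (a simplified
sketch, not the literal definitions):

	/* include/linux/atomic.h: value-returning atomics are bracketed by */
	#define __atomic_pre_full_fence	smp_mb
	#define __atomic_post_full_fence	smp_mb

	/* and smp_mb() is a hardware fence only on SMP builds: */
	#ifdef CONFIG_SMP
	#define smp_mb()	mb()		/* 'sync' on powerpc */
	#else
	#define smp_mb()	barrier()	/* compiler-only, no instruction emitted */
	#endif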

barrier() is not relevant for the JIT, as it is a compile-time construct and
emits no instruction in the generated code.

So, I will use:

	if (IS_ENABLED(CONFIG_SMP))
		EMIT(PPC_RAW_SYNC());

in the next version.
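
With that guard in place, the fetch path of the 32-bit JIT quoted above would
read roughly as below (a sketch of the planned v2, reusing the register names
from the hunk; not the final patch):

	/* full 'sync' before the ll/sc sequence, SMP kernels only */
	if ((imm & BPF_FETCH) && IS_ENABLED(CONFIG_SMP))
		EMIT(PPC_RAW_SYNC());
	/* load value from memory into r0 */
	EMIT(PPC_RAW_LWARX(_R0, tmp_reg, dst_reg, 0));

and, after the store-conditional loop:

	/* For the BPF_FETCH variant, get old data into src_reg */
	if (imm & BPF_FETCH) {
		/* full 'sync' after the operation, SMP kernels only */
		if (IS_ENABLED(CONFIG_SMP))
			EMIT(PPC_RAW_SYNC());
		EMIT(PPC_RAW_MR(ret_reg, ax_reg));
	}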


Thanks,
Puranjay
