From: Puranjay Mohan <puranjay@kernel.org>
Subject: Re: [PATCH bpf-next v3 1/2] arm64, bpf: add internal-only MOV instruction to resolve per-CPU addrs
Date: Fri, 26 Apr 2024
Andrii Nakryiko <andrii.nakryiko@gmail.com> writes:

> On Fri, Apr 26, 2024 at 5:14 AM Puranjay Mohan <puranjay@kernel.org> wrote:
>>
>> From: Puranjay Mohan <puranjay12@gmail.com>
>>
>> Support an instruction for resolving absolute addresses of per-CPU
>> data from their per-CPU offsets. This instruction is internal-only;
>> users are not allowed to use it directly. For now, it will only be
>> used for internal inlining optimizations between the BPF verifier
>> and the BPF JITs.
>>
>> Since commit 7158627686f0 ("arm64: percpu: implement optimised pcpu
>> access using tpidr_el1"), the per-cpu offset for the CPU is stored in
>> the tpidr_el1/2 register of that CPU.
>>
>> To support this BPF instruction in the ARM64 JIT, the following ARM64
>> instructions are emitted:
>>
>> mov dst, src // Move src to dst, if src != dst
>> mrs tmp, tpidr_el1/2 // Move the per-CPU offset of the current CPU into tmp.
>> add dst, dst, tmp // Add the per-CPU offset to dst.
>>
>> To measure the performance improvement provided by this change, the
>> benchmark in [1] was used:
>>
>> Before:
>> glob-arr-inc : 23.597 ± 0.012M/s
>> arr-inc : 23.173 ± 0.019M/s
>> hash-inc : 12.186 ± 0.028M/s
>>
>> After:
>> glob-arr-inc : 23.819 ± 0.034M/s
>> arr-inc : 23.285 ± 0.017M/s
>
> I still expected a better improvement (global-arr-inc's results
> improved more than arr-inc, which is completely different from
> x86-64), but it's still a good thing to support this for arm64, of
> course.
>
> ack for generic parts I can understand:
>
> Acked-by: Andrii Nakryiko <andrii@kernel.org>
>

I will have to do more research to find out why we don't see a bigger
improvement.

But this is what is happening here:

This was the complete picture before inlining:

int cpu = bpf_get_smp_processor_id();
mov x10, #0xffffffffffffd4a8
movk x10, #0x802c, lsl #16
movk x10, #0x8000, lsl #32
blr x10 ---------------------------------------> nop
nop
adrp x0, 0xffff800082128000
mrs x1, tpidr_el1
add x0, x0, #0x8
ldrsw x0, [x0, x1]
<----------------------------------------ret
add x7, x0, #0x0


Now we have:

int cpu = bpf_get_smp_processor_id();
mov x7, #0xffff8000ffffffff
movk x7, #0x8212, lsl #16
movk x7, #0x8008
mrs x10, tpidr_el1
add x7, x7, x10
ldr w7, [x7]
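
At the BPF level, the verifier replaces the helper call with roughly
the sequence below before the JIT runs (a hand-written sketch, not
the exact verifier code: BPF_MOV64_PERCPU_REG is the new
internal-only instruction, and cpu_number is the per-CPU variable
that raw_smp_processor_id() reads on arm64):

/* sketch: rewrite of the bpf_get_smp_processor_id() call site */
struct bpf_insn ld_addr[2] = {
	BPF_LD_IMM64(BPF_REG_0, (u64)&cpu_number)
};

insn_buf[0] = ld_addr[0];        /* r0 = per-CPU offset of cpu_number */
insn_buf[1] = ld_addr[1];        /* (second half of the ld_imm64)     */
insn_buf[2] = BPF_MOV64_PERCPU_REG(BPF_REG_0, BPF_REG_0); /* r0 += tpidr_el1/2 */
insn_buf[3] = BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0); /* r0 = *(u32 *)r0 */
cnt = 4;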


So we have removed several instructions, including a branch and a
return, but I was expecting to see more improvement. These benchmark
results were taken in a KVM-based virtual machine; maybe I would see
a bigger improvement if I ran it on bare metal?
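
For reference, the JIT-side change boils down to roughly the
following (a simplified sketch of the new handling in the arm64 JIT's
build_insn(); the special MOV is recognized by its insn->off ==
BPF_ADDR_PERCPU marker, tmp is one of the JIT's scratch registers,
and the TPIDR_EL2 variant is used when the kernel runs at EL2):

case BPF_ALU64 | BPF_MOV | BPF_X:
	if (insn->off == BPF_ADDR_PERCPU) {
		/* mov dst, src (skipped when dst == src) */
		if (dst != src)
			emit(A64_MOV(1, dst, src), ctx);
		/* mrs tmp, tpidr_el1 or tpidr_el2 (VHE) */
		if (cpus_have_cap(ARM64_HAS_VIRT_HOST_EXTN))
			emit(A64_MRS_TPIDR_EL2(tmp), ctx);
		else
			emit(A64_MRS_TPIDR_EL1(tmp), ctx);
		/* add dst, dst, tmp */
		emit(A64_ADD(1, dst, dst, tmp), ctx);
		break;
	}
	/* ... regular register move handling otherwise ... */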

Thanks,
Puranjay
