Subject: Re: [PATCH bpf-next v3 0/3] bpf, riscv: use BPF prog pack allocator in BPF JIT

On Wed, 30 Aug 2023 06:59:13 PDT (-0700), bjorn@kernel.org wrote:
> Daniel Borkmann <daniel@iogearbox.net> writes:
>
>> On 8/29/23 12:06 PM, Björn Töpel wrote:
>>> Puranjay Mohan <puranjay12@gmail.com> writes:
>>>
>>>> Changes in v2 -> v3:
>>>> 1. Change the maximum code width in the patches from 80 to 100 columns. [All patches]
>>>> 2. Add checks for ctx->ro_insns == NULL. [Patch 3]
>>>> 3. Fix the check for the edge case where the amount of text to set is > 2 * page size.
>>>> [Patches 1 and 2]
>>>> 4. Add Reviewed-by tags to the patches.
>>>> 5. Add the selftest results here:
>>>> Using the command: ./test_progs on qemu
>>>> Without the series: Summary: 336/3162 PASSED, 56 SKIPPED, 90 FAILED
>>>> With this series: Summary: 336/3162 PASSED, 56 SKIPPED, 90 FAILED
>>>>
>>>> Changes in v1 -> v2:
>>>> 1. Implement a new function patch_text_set_nosync() to be used in bpf_arch_text_invalidate().
>>>> The v1 implementation called patch_text_nosync() in a loop, which was slow because it
>>>> called flush_icache_range() for every word. This was found by running the test_tag
>>>> selftest, which took forever to complete.
>>>>
>>>> Here is some data to show that v2 fixes the problem:
>>>>
>>>> Without this series:
>>>> root@rv-selftester:~/src/kselftest/bpf# time ./test_tag
>>>> test_tag: OK (40945 tests)
>>>>
>>>> real 7m47.562s
>>>> user 0m24.145s
>>>> sys 6m37.064s
>>>>
>>>> With this series applied:
>>>> root@rv-selftester:~/src/selftest/bpf# time ./test_tag
>>>> test_tag: OK (40945 tests)
>>>>
>>>> real 7m29.472s
>>>> user 0m25.865s
>>>> sys 6m18.401s
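>>>>
>>>> For illustration only, here is a rough sketch of the idea behind such a text
>>>> "memset" helper. It is hypothetical and heavily simplified
>>>> (map_text_writable()/unmap_text_writable() are made-up helpers, and the actual
>>>> patch handles page crossings, locking and errors differently); the point is that
>>>> the icache is flushed once for the whole range rather than once per word:
>>>>
>>>> static int text_set_nosync_sketch(void *addr, u8 c, size_t len)
>>>> {
>>>>         size_t done = 0;
>>>>
>>>>         while (done < len) {
>>>>                 /* Stay within the current page of the target text. */
>>>>                 size_t chunk = min_t(size_t, len - done,
>>>>                                      PAGE_SIZE - offset_in_page(addr + done));
>>>>                 /* Hypothetical helper: temporary writable alias of the text page. */
>>>>                 void *waddr = map_text_writable(addr + done);
>>>>
>>>>                 memset(waddr, c, chunk);
>>>>                 unmap_text_writable(waddr);     /* hypothetical helper */
>>>>                 done += chunk;
>>>>         }
>>>>
>>>>         /* One icache flush for the whole range, not one per word. */
>>>>         flush_icache_range((unsigned long)addr, (unsigned long)addr + len);
>>>>         return 0;
>>>> }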
>>>>
>>>> BPF programs currently consume a full page each on RISC-V. On systems with many BPF
>>>> programs, this adds significant pressure on the instruction TLB. High iTLB pressure
>>>> usually slows down the whole system.
>>>>
>>>> Song Liu introduced the BPF prog pack allocator[1] to mitigate the above issue.
>>>> It packs multiple BPF programs into a single huge page. It is currently only
>>>> enabled for the x86_64 BPF JIT.
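>>>>
>>>> For context, this is roughly how an arch JIT drives the pack allocator
>>>> (modelled on the existing x86_64 usage; a simplified sketch with most error
>>>> handling omitted, where emit_program() and jit_fill_hole() stand in for the
>>>> arch-specific pieces):
>>>>
>>>> static int jit_with_prog_pack(struct bpf_prog *prog, unsigned int prog_size)
>>>> {
>>>>         struct bpf_binary_header *ro_header, *rw_header;
>>>>         u8 *ro_image, *rw_image;
>>>>
>>>>         /* Reserve space in a shared huge-page pack (ro_image) and get a
>>>>          * writable scratch copy (rw_image) to emit instructions into.
>>>>          */
>>>>         ro_header = bpf_jit_binary_pack_alloc(prog_size, &ro_image, sizeof(u32),
>>>>                                               &rw_header, &rw_image,
>>>>                                               jit_fill_hole);
>>>>         if (!ro_header)
>>>>                 return -ENOMEM;
>>>>
>>>>         emit_program(rw_image);         /* arch-specific code generation */
>>>>
>>>>         /* Copy the RW image into the read-only pack and free the RW copy. */
>>>>         if (bpf_jit_binary_pack_finalize(prog, ro_header, rw_header))
>>>>                 return -EFAULT;
>>>>
>>>>         prog->bpf_func = (void *)ro_image;
>>>>         prog->jited = 1;
>>>>         return 0;
>>>> }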
>>>>
>>>> I enabled this allocator on the ARM64 BPF JIT[2]. It is being reviewed now.
>>>>
>>>> This patch series enables the BPF prog pack allocator for the RISCV BPF JIT.
>>>> This series needs a patch[3] from the ARM64 series to work.
>>>>
>>>> ======================================================
>>>> Performance Analysis of prog pack allocator on RISCV64
>>>> ======================================================
>>>>
>>>> Test setup:
>>>> ===========
>>>>
>>>> Host machine: Debian GNU/Linux 11 (bullseye)
>>>> Qemu Version: QEMU emulator version 8.0.3 (Debian 1:8.0.3+dfsg-1)
>>>> u-boot-qemu Version: 2023.07+dfsg-1
>>>> opensbi Version: 1.3-1
>>>>
>>>> To test the performance of the BPF prog pack allocator on RISC-V, the stresser
>>>> tool[4] linked below was built. This tool loads 8 BPF programs on the system and
>>>> triggers 5 of them in an infinite loop by doing system calls.
>>>>
>>>> The runner script starts 20 instances of the above, which loads 8*20=160 BPF
>>>> programs on the system, 5*20=100 of which are constantly triggered.
>>>> The script is passed a command that is then run in this environment.
>>>>
>>>> The script was run with the following perf command:
>>>> ./run.sh "perf stat -a \
>>>> -e iTLB-load-misses \
>>>> -e dTLB-load-misses \
>>>> -e dTLB-store-misses \
>>>> -e instructions \
>>>> --timeout 60000"
>>>>
>>>> The output of the above command, before and after enabling the BPF prog pack
>>>> allocator, is discussed below.
>>>>
>>>> The tests were run on qemu-system-riscv64 with 8 CPUs and 16G of memory. The rootfs
>>>> was created using Björn's riscv-cross-builder[5] docker container linked below.
>>>>
>>>> Results
>>>> =======
>>>>
>>>> Before enabling prog pack allocator:
>>>> ------------------------------------
>>>>
>>>> Performance counter stats for 'system wide':
>>>>
>>>> 4939048 iTLB-load-misses
>>>> 5468689 dTLB-load-misses
>>>> 465234 dTLB-store-misses
>>>> 1441082097998 instructions
>>>>
>>>> 60.045791200 seconds time elapsed
>>>>
>>>> After enabling prog pack allocator:
>>>> -----------------------------------
>>>>
>>>> Performance counter stats for 'system wide':
>>>>
>>>> 3430035 iTLB-load-misses
>>>> 5008745 dTLB-load-misses
>>>> 409944 dTLB-store-misses
>>>> 1441535637988 instructions
>>>>
>>>> 60.046296600 seconds time elapsed
>>>>
>>>> Improvements in metrics
>>>> =======================
>>>>
>>>> The iTLB-load-misses were expected to decrease, since a single huge page now holds
>>>> all the BPF programs instead of one page per program as before.
>>>>
>>>> --------------------------------------------
>>>> The improvement in iTLB-load-misses: -30.5 %
>>>> --------------------------------------------
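>>>>
>>>> (For reference, this figure follows directly from the counters above:
>>>> (4939048 - 3430035) / 4939048 ~= 0.305, i.e. roughly 30.5 % fewer
>>>> iTLB-load-misses.)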
>>>>
>>>> I repeated this experiment more than 100 times in different setups, and the
>>>> improvement was always greater than 30%.
>>>>
>>>> This patch series was boot tested on the StarFive VisionFive 2 board[6].
>>>> The performance analysis was not done on the board because it does not expose
>>>> iTLB-load-misses, etc. The stresser program was run on the board to test the
>>>> loading and unloading of BPF programs.
>>>>
>>>> [1] https://lore.kernel.org/bpf/20220204185742.271030-1-song@kernel.org/
>>>> [2] https://lore.kernel.org/all/20230626085811.3192402-1-puranjay12@gmail.com/
>>>> [3] https://lore.kernel.org/all/20230626085811.3192402-2-puranjay12@gmail.com/
>>>> [4] https://github.com/puranjaymohan/BPF-Allocator-Bench
>>>> [5] https://github.com/bjoto/riscv-cross-builder
>>>> [6] https://www.starfivetech.com/en/site/boards
>>>>
>>>> Puranjay Mohan (3):
>>>> riscv: extend patch_text_nosync() for multiple pages
>>>> riscv: implement a memset like function for text
>>>> bpf, riscv: use prog pack allocator in the BPF JIT
>>>
>>> Thank you! For the series:
>>>
>>> Acked-by: Björn Töpel <bjorn@kernel.org>
>>> Tested-by: Björn Töpel <bjorn@rivosinc.com>
>>>
>>> @Alexei @Daniel This series depends on a core BPF patch from the Arm
>>> series [3].
>
> [snip]
>> If not yet, perhaps you could ship this series along with your PR to Linus
>> during this merge window, given that the big net PR (incl. bpf) was already merged
>> yesterday. So from our side, only fixes will ship to Linus.
>
> Are you OK with this patch going through the riscv tree as well?

I'm generally fine taking almost anything, as long as whoever usually
takes them acks it.
