Subject: [PATCH v3 0/1] riscv: improving uaccess with logs from network bench
Optimizing copy_to_user and copy_from_user.

I rewrote the functions, heavily influenced by Gary's memcpy
function [1]. They must be written in assembler so that page faults
can be handled manually inside the function, unlike other memcpy
functions.
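
For context, a minimal sketch of the caller-side contract that forces
this (the wrapper below is hypothetical; copy_to_user() itself returns
the number of bytes left uncopied, per the usual kernel semantics):

#include <linux/uaccess.h>
#include <linux/errno.h>

/*
 * Hypothetical caller, for illustration only: copy_to_user() returns
 * the number of bytes it could NOT copy.  A page fault in the middle
 * of the copy therefore has to be caught inside the copy routine
 * itself (via exception-table fixups on each access) so that this
 * leftover count can be reported -- which is why uaccess.S is
 * hand-written assembler rather than a call to memcpy().
 */
static long copy_result_to_user(void __user *to, const void *from, size_t n)
{
	unsigned long left = copy_to_user(to, from, n);

	return left ? -EFAULT : 0;
}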

This patch reduces CPU usage in kernel space dramatically, especially
for applications that make system calls with large buffers, such as
network applications. The main reason is that every unaligned memory
access raises an exception and triggers a switch between S-mode and
M-mode, causing large overhead.
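
As a rough illustration of the approach the patch takes, here is a
userspace C sketch (not the kernel code; the function name is made up)
of a copy that aligns the destination and, when the source is
misaligned, builds each word from two aligned loads combined with
shifts, so the hardware never sees a misaligned access:

#include <stddef.h>
#include <stdint.h>

/*
 * Sketch of a word-plus-shift copy for a little-endian machine such
 * as RISC-V.  The destination is aligned byte by byte first; a
 * misaligned source is then read with aligned word loads only, and
 * neighbouring words are combined with shifts.  Reading the aligned
 * word containing the first/last source byte may touch a few bytes
 * outside the buffer, but never crosses into another page.
 */
static void *shift_copy(void *dst, const void *src, size_t n)
{
	unsigned char *d = dst;
	const unsigned char *s = src;
	const size_t wsz = sizeof(uintptr_t);

	/* Copy byte by byte until the destination is word aligned. */
	while (n && ((uintptr_t)d & (wsz - 1))) {
		*d++ = *s++;
		n--;
	}

	size_t ofs = (uintptr_t)s & (wsz - 1);

	if (ofs == 0) {
		/* Source is aligned too: plain word copy. */
		for (; n >= wsz; n -= wsz, d += wsz, s += wsz)
			*(uintptr_t *)d = *(const uintptr_t *)s;
	} else if (n >= wsz) {
		/* Misaligned source: aligned loads combined with shifts. */
		const uintptr_t *ws = (const uintptr_t *)(s - ofs);
		uintptr_t lo = *ws++;

		for (; n >= wsz; n -= wsz, d += wsz, s += wsz) {
			uintptr_t hi = *ws++;

			*(uintptr_t *)d = (lo >> (8 * ofs)) |
					  (hi << (8 * (wsz - ofs)));
			lo = hi;
		}
	}

	/* Copy the remaining tail bytes. */
	while (n--)
		*d++ = *s++;
	return dst;
}

The actual uaccess.S implements this in assembler, together with the
exception-table fixups described above.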

The motivation for this patch was to improve network performance on
the BeagleV beta board. Profiling with perf showed that memcpy and
__asm_copy_to_user had heavy CPU usage, and network speed was limited
to around 680 Mbps on a 1 Gbps LAN. Matteo is working on patches that
improve memcpy in C, and this patch is meant to be used together with
them.

Typical network applications already pass large buffers to send()/recv()
and sendto()/recvfrom() as an optimization, so they benefit directly
(see the sketch below).
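
For illustration only, a hypothetical sender showing that call pattern
(the buffer size and helper name are made up):

#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>

/*
 * Hypothetical sender: the user buffer passed to send() is copied
 * into kernel memory by the uaccess routines (__asm_copy_from_user
 * on riscv), so with a large per-call buffer the per-byte cost of
 * that copy dominates the syscall.
 */
static void send_stream(int sock, size_t total)
{
	const size_t bufsz = 128 * 1024;	/* large per-call buffer */
	char *buf = malloc(bufsz);
	size_t sent = 0;

	if (!buf)
		return;
	memset(buf, 0xa5, bufsz);
	while (sent < total) {
		ssize_t n = send(sock, buf, bufsz, 0);

		if (n <= 0)
			break;
		sent += n;
	}
	free(buf);
}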

The benchmark results below were taken with only the copy_user patch
applied. memcpy is still the version without Matteo's patches, but
both are listed since they are the two largest sources of overhead.

All results are from the same base kernel, same rootfs and same BeagleV
beta board.

The iperf3 results show a speedup on UDP with the copy_user patch
alone (left: before, right: with the patch).

--- UDP send ---
306 Mbits/sec 362 Mbits/sec
305 Mbits/sec 362 Mbits/sec

--- UDP recv ---
772 Mbits/sec 787 Mbits/sec
773 Mbits/sec 784 Mbits/sec

Comparison using "perf top -Ue task-clock" while running iperf3.

--- TCP recv ---
* Before
40.40% [kernel] [k] memcpy
33.09% [kernel] [k] __asm_copy_to_user
* With patch
50.35% [kernel] [k] memcpy
13.76% [kernel] [k] __asm_copy_to_user

--- TCP send ---
* Before
19.96% [kernel] [k] memcpy
9.84% [kernel] [k] __asm_copy_to_user
* With patch
14.27% [kernel] [k] memcpy
7.37% [kernel] [k] __asm_copy_to_user

--- UDP recv ---
* Before
44.45% [kernel] [k] memcpy
31.04% [kernel] [k] __asm_copy_to_user
* With patch
55.62% [kernel] [k] memcpy
11.22% [kernel] [k] __asm_copy_to_user

--- UDP send ---
* Before
25.18% [kernel] [k] memcpy
22.50% [kernel] [k] __asm_copy_to_user
* With patch
28.90% [kernel] [k] memcpy
9.49% [kernel] [k] __asm_copy_to_user

---
v2 -> v3:
- Merged all patches

v1 -> v2:
- Added shift copy
- Separated patches for readability of changes in assembler
- Using perf results

[1] https://lkml.org/lkml/2021/2/16/778

Akira Tsukamoto (1):
riscv: __asm_copy_to-from_user: Optimize unaligned memory access and
pipeline stall

arch/riscv/lib/uaccess.S | 181 +++++++++++++++++++++++++++++++--------
1 file changed, 146 insertions(+), 35 deletions(-)

--
2.17.1
