From: Noah Goldstein <>
Subject: [PATCH v4] arch/x86: Improve 'rep movs{b|q}' usage in memmove_64.S
Date: Wed, 17 Nov 2021 15:02:45 -0600
Add a check for "short distance movsb" on the forward FSRM path and entirely remove the backward 'rep movsq' path. Both of these usages hit "slow modes" that are an order of magnitude slower than usual.
'rep movsb' has some noticeable, very slow modes that the current implementation either 1) does not check for or 2) uses intentionally.
All times are in cycles and measure the throughput of copying 1024 bytes.
1. For FSRM, when 'dst - src' is in (1, 63] or (4GB, 4GB + 63], it is an order of magnitude slower than normal and much slower than a 4x 'movq' loop.
FSRM forward (dst - src == 32) -> 1113.156
FSRM forward (dst - src == 64) ->  120.669

ERMS forward (dst - src == 32) ->  209.326
ERMS forward (dst - src == 64) ->  118.22
2. For both FSRM and ERMS, backwards 'rep movsb' is always slow. Both of the times below are with dst % 256 == src % 256, which mirrors the usage of the previous implementation.
FSRM backward -> 1196.039
ERMS backward -> 1191.873
As a reference, this is how a 4x 'movq' loop performs:
4x Forward (dst - src == 32) -> 128.273
4x Backward                  -> 130.183
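For context, here is a rough user-space sketch of the two operations being timed. This is not the kernel code itself: the function names are mine, the forward loop is a C analogue of the kernel's 4x 'movq' loop, and the backward copy shows what a descending 'rep movsb' looks like (GCC/Clang inline asm, x86-64 only).

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/*
 * C analogue of the kernel's 4x 'movq' forward loop (sketch only; the
 * real memmove_64.S also handles the unaligned head/tail). Assumes n is
 * a nonzero multiple of 32. An 8-byte memcpy() compiles to a single
 * 64-bit load or store without aliasing problems.
 */
static void forward_4x_movq(char *dst, const char *src, size_t n)
{
	for (size_t i = 0; i < n; i += 32) {
		uint64_t a, b, c, d;

		memcpy(&a, src + i, 8);
		memcpy(&b, src + i + 8, 8);
		memcpy(&c, src + i + 16, 8);
		memcpy(&d, src + i + 24, 8);
		memcpy(dst + i, &a, 8);
		memcpy(dst + i + 8, &b, 8);
		memcpy(dst + i + 16, &c, 8);
		memcpy(dst + i + 24, &d, 8);
	}
}

/*
 * Backward 'rep movsb': set DF, copy from the last byte downward, clear
 * DF. This is the operation the backward numbers above measure, and it
 * is slow on both FSRM and ERMS parts. Assumes n > 0.
 */
static void backward_rep_movsb(char *dst, const char *src, size_t n)
{
	dst += n - 1;
	src += n - 1;
	asm volatile("std; rep movsb; cld"
		     : "+D"(dst), "+S"(src), "+c"(n)
		     :
		     : "memory");
}

The forward loop is also roughly what the patch falls back to (via the 'jbe 3f' in the new check below) when the distance check detects a slow window.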
Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
---
 arch/x86/lib/memmove_64.S | 38 +++++++++++++-------------------------
 1 file changed, 13 insertions(+), 25 deletions(-)
diff --git a/arch/x86/lib/memmove_64.S b/arch/x86/lib/memmove_64.S
index 64801010d312..910b963388b1 100644
--- a/arch/x86/lib/memmove_64.S
+++ b/arch/x86/lib/memmove_64.S
@@ -28,6 +28,8 @@ SYM_FUNC_START_WEAK(memmove)
 SYM_FUNC_START(__memmove)
 
 	mov %rdi, %rax
+	cmp $0x20, %rdx
+	jb 1f
 
 	/* Decide forward/backward copy mode */
 	cmp %rdi, %rsi
@@ -39,7 +41,17 @@ SYM_FUNC_START(__memmove)
 	/* FSRM implies ERMS => no length checks, do the copy directly */
 .Lmemmove_begin_forward:
-	ALTERNATIVE "cmp $0x20, %rdx; jb 1f", "", X86_FEATURE_FSRM
+	/*
+	 * Don't use FSRM 'rep movsb' if 'dst - src' in (0, 63] or (4GB, 4GB +
+	 * 63]. It hits a slow case which is an order of magnitude slower.
+	 */
+	ALTERNATIVE " \
+	mov %edi, %ecx; \
+	sub %esi, %ecx; \
+	cmp $63, %ecx; \
+	jbe 3f \
+	", "", X86_FEATURE_FSRM
+
 	ALTERNATIVE "", "movq %rdx, %rcx; rep movsb; retq", X86_FEATURE_ERMS
 
 	/*
@@ -89,35 +101,11 @@ SYM_FUNC_START(__memmove)
 	jmp 13f
 .Lmemmove_end_forward:
-
-	/*
-	 * Handle data backward by movsq.
-	 */
-	.p2align 4
-7:
-	movq %rdx, %rcx
-	movq (%rsi), %r11
-	movq %rdi, %r10
-	leaq -8(%rsi, %rdx), %rsi
-	leaq -8(%rdi, %rdx), %rdi
-	shrq $3, %rcx
-	std
-	rep movsq
-	cld
-	movq %r11, (%r10)
-	jmp 13f
 
 	/*
 	 * Start to prepare for backward copy.
 	 */
 	.p2align 4
 2:
-	cmp $0x20, %rdx
-	jb 1f
-	cmp $680, %rdx
-	jb 6f
-	cmp %dil, %sil
-	je 7b
-6:
 	/*
 	 * Calculate copy position to tail.
 	 */
-- 
2.25.1
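As a side note on the new FSRM guard: the reason a single 32-bit subtract-and-compare covers both slow windows is that truncating 'dst - src' to 32 bits maps a distance of 4GB + k onto k. A minimal C restatement (the function name is mine, not part of the patch):

#include <stdbool.h>
#include <stdint.h>

/*
 * C restatement of:  mov %edi, %ecx; sub %esi, %ecx; cmp $63, %ecx; jbe 3f
 * The low 32 bits of (dst - src) are <= 63 exactly when the distance is
 * in [0, 63] or [4GB, 4GB + 63] (and so on modulo 2^32), so one
 * unsigned compare rejects both slow windows. Distances of 0 or exactly
 * 4GB are also caught, which is harmless: those copies simply take the
 * forward 'movq' loop instead of 'rep movsb'.
 */
static bool fsrm_distance_is_slow(uintptr_t dst, uintptr_t src)
{
	return (uint32_t)(dst - src) <= 63;
}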