Subject: [tip: x86/core] x86/csum: Fix compilation error for UM
The following commit has been merged into the x86/core branch of tip:

Commit-ID: 6b2ecb61bb106d3688b315178831ff40d1008591
Gitweb: https://git.kernel.org/tip/6b2ecb61bb106d3688b315178831ff40d1008591
Author: Eric Dumazet <edumazet@google.com>
AuthorDate: Thu, 18 Nov 2021 09:52:39 -08:00
Committer: Peter Zijlstra <peterz@infradead.org>
CommitterDate: Tue, 23 Nov 2021 09:45:32 +01:00

x86/csum: Fix compilation error for UM

load_unaligned_zeropad() is not yet universal.

ARCH=um SUBARCH=x86_64 builds do not have it.

When CONFIG_DCACHE_WORD_ACCESS is not set, simply continue
the bisection with 4, 2 and 1 byte steps.

Fixes: df4554cebdaa ("x86/csum: Rewrite/optimize csum_partial()")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20211118175239.1525650-1-eric.dumazet@gmail.com
---
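For reference, a minimal user-space C sketch of what the #else fallback added below computes; the names add64_with_carry() and csum_tail_fold() are illustrative (not the kernel's), and a little-endian 64-bit GCC/Clang build is assumed, matching the x86 code:

#include <stdint.h>
#include <string.h>

/* 64-bit add that folds the carry back in, like "addq; adcq $0". */
static uint64_t add64_with_carry(uint64_t sum, uint64_t val)
{
	unsigned __int128 t = (unsigned __int128)sum + val;

	return (uint64_t)t + (uint64_t)(t >> 64);
}

/* Fold the trailing len & 7 bytes into the running 64-bit sum in
 * 4-, 2- and 1-byte steps, mirroring the #else branch in the patch below.
 */
static uint64_t csum_tail_fold(uint64_t sum, const unsigned char *buff, int len)
{
	uint32_t v32;
	uint16_t v16;

	if (len & 4) {
		memcpy(&v32, buff, 4);	/* unaligned-safe load, like *(u32 *)buff on x86 */
		sum = add64_with_carry(sum, v32);
		buff += 4;
	}
	if (len & 2) {
		memcpy(&v16, buff, 2);
		sum = add64_with_carry(sum, v16);
		buff += 2;
	}
	if (len & 1)
		sum = add64_with_carry(sum, *buff);

	return sum;
}

When CONFIG_DCACHE_WORD_ACCESS is set, the same tail is instead handled with a single load_unaligned_zeropad() plus a shift-based mask, as in the existing branch.
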
arch/x86/lib/csum-partial_64.c | 26 ++++++++++++++++++++++++++
1 file changed, 26 insertions(+)

diff --git a/arch/x86/lib/csum-partial_64.c b/arch/x86/lib/csum-partial_64.c
index 5ec3562..1eb8f2d 100644
--- a/arch/x86/lib/csum-partial_64.c
+++ b/arch/x86/lib/csum-partial_64.c
@@ -92,6 +92,7 @@ __wsum csum_partial(const void *buff, int len, __wsum sum)
 		buff += 8;
 	}
 	if (len & 7) {
+#ifdef CONFIG_DCACHE_WORD_ACCESS
 		unsigned int shift = (8 - (len & 7)) * 8;
 		unsigned long trail;
 
@@ -101,6 +102,31 @@ __wsum csum_partial(const void *buff, int len, __wsum sum)
 		    "adcq $0,%[res]"
 			: [res] "+r" (temp64)
 			: [trail] "r" (trail));
+#else
+		if (len & 4) {
+			asm("addq %[val],%[res]\n\t"
+			    "adcq $0,%[res]"
+				: [res] "+r" (temp64)
+				: [val] "r" ((u64)*(u32 *)buff)
+				: "memory");
+			buff += 4;
+		}
+		if (len & 2) {
+			asm("addq %[val],%[res]\n\t"
+			    "adcq $0,%[res]"
+				: [res] "+r" (temp64)
+				: [val] "r" ((u64)*(u16 *)buff)
+				: "memory");
+			buff += 2;
+		}
+		if (len & 1) {
+			asm("addq %[val],%[res]\n\t"
+			    "adcq $0,%[res]"
+				: [res] "+r" (temp64)
+				: [val] "r" ((u64)*(u8 *)buff)
+				: "memory");
+		}
+#endif
 	}
 	result = add32_with_carry(temp64 >> 32, temp64 & 0xffffffff);
 	if (unlikely(odd)) {