Subject: Re: [PATCH v2 0/3] lib/string: optimized mem* functions
On Fri,  2 Jul 2021 14:31:50 +0200 Matteo Croce <mcroce@linux.microsoft.com> wrote:

> From: Matteo Croce <mcroce@microsoft.com>
>
> Rewrite the generic mem{cpy,move,set} so that memory is accessed with
> the widest size possible, but without doing unaligned accesses.
>
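
For context, the "widest size possible without unaligned accesses" scheme
typically looks something like the sketch below (a rough illustration only;
the function name and word_t type are made up here, not taken from the patch):
copy bytes until the destination is word-aligned, copy word-sized chunks while
source and destination share the same alignment, then finish the tail byte by
byte.

#include <stddef.h>
#include <stdint.h>

typedef unsigned long word_t;

void *sketch_memcpy(void *dest, const void *src, size_t count)
{
	unsigned char *d = dest;
	const unsigned char *s = src;

	/*
	 * The word-at-a-time path only applies when src and dest share
	 * the same alignment modulo the word size.
	 */
	if (((uintptr_t)d & (sizeof(word_t) - 1)) ==
	    ((uintptr_t)s & (sizeof(word_t) - 1))) {
		/* Byte-copy the unaligned head. */
		while (count && ((uintptr_t)d & (sizeof(word_t) - 1))) {
			*d++ = *s++;
			count--;
		}
		/* Copy aligned word-sized chunks. */
		while (count >= sizeof(word_t)) {
			*(word_t *)d = *(const word_t *)s;
			d += sizeof(word_t);
			s += sizeof(word_t);
			count -= sizeof(word_t);
		}
	}
	/* Byte-copy the tail (or everything, if the alignments differ). */
	while (count--)
		*d++ = *s++;

	return dest;
}
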
> This was originally posted as C string functions for RISC-V[1], but as
> there was no specific RISC-V code, it was proposed for the generic
> lib/string.c implementation.
>
> Tested on RISC-V and on x86_64 by undefining __HAVE_ARCH_MEM{CPY,SET,MOVE}
> and HAVE_EFFICIENT_UNALIGNED_ACCESS.
>
> These are the memcpy() and memset() throughput figures measured on a
> RISC-V machine with a 32 MB buffer:
>
> memcpy:
> original aligned: 75 Mb/s
> original unaligned: 75 Mb/s
> new aligned: 114 Mb/s
> new unaligned: 107 Mb/s
>
> memset:
> original aligned: 140 Mb/s
> original unaligned: 140 Mb/s
> new aligned: 241 Mb/s
> new unaligned: 241 Mb/s
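
The harness behind these numbers isn't shown; purely as a rough userspace
stand-in (it times the libc memcpy(), not the lib/string.c one, so it only
illustrates the measurement, not the result), throughput on a 32 MB buffer
can be estimated along these lines:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define BUF_SIZE (32UL * 1024 * 1024)	/* 32 MB, as in the test above */
#define ITERS 16

int main(void)
{
	char *src = malloc(BUF_SIZE + 1);
	char *dst = malloc(BUF_SIZE + 1);
	struct timespec t0, t1;
	double secs;

	if (!src || !dst)
		return 1;
	memset(src, 0xa5, BUF_SIZE + 1);

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (int i = 0; i < ITERS; i++)
		memcpy(dst, src + 1, BUF_SIZE);	/* src + 1: the "unaligned" case */
	clock_gettime(CLOCK_MONOTONIC, &t1);

	secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
	printf("memcpy: %.1f MB/s\n", ITERS * (BUF_SIZE / 1e6) / secs);
	return 0;
}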

Did you record the x86_64 performance?


Which other architectures are affected by this change?
