From: Linus Torvalds
Date: 2020-04-18
Subject: Re: [PATCH] x86/memcpy: Introduce memcpy_mcsafe_fast
On Fri, Apr 17, 2020 at 5:12 PM Dan Williams <dan.j.williams@intel.com> wrote:
>
> > @@ -106,12 +108,10 @@ static __always_inline __must_check unsigned long
> > memcpy_mcsafe(void *dst, const void *src, size_t cnt)
> > {
> > #ifdef CONFIG_X86_MCE
> > - if (static_branch_unlikely(&mcsafe_key))
> > - return __memcpy_mcsafe(dst, src, cnt);
> > - else
> > + if (static_branch_unlikely(&mcsafe_slow_key))
> > + return memcpy_mcsafe_slow(dst, src, cnt);
> > #endif
> > - memcpy(dst, src, cnt);
> > - return 0;
> > + return memcpy_mcsafe_fast(dst, src, cnt);
> > }

It strikes me that I see no advantages to making this an inline function at all.

Even for the good case - where it turns into just a memcpy because MCE
is entirely disabled - it doesn't seem to matter.
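
For concreteness, here is a rough sketch of what the out-of-line
version could look like (illustrative only, reusing the names from
the patch above rather than anything that exists today): keep the
header declaration trivial and do the static-branch dispatch in a
normal exported function, e.g. somewhere under arch/x86/lib/.

    __must_check unsigned long
    memcpy_mcsafe(void *dst, const void *src, size_t cnt)
    {
    #ifdef CONFIG_X86_MCE
            /* same dispatch as the inline version in the patch */
            if (static_branch_unlikely(&mcsafe_slow_key))
                    return memcpy_mcsafe_slow(dst, src, cnt);
    #endif
            return memcpy_mcsafe_fast(dst, src, cnt);
    }
    EXPORT_SYMBOL_GPL(memcpy_mcsafe);

Even when this collapses to the plain fast copy, nothing about it
needs to be visible at the call site.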

The only case that really helps is when the memcpy can be turned into
a single access. Which - and I checked - does exist, with people doing

r = memcpy_mcsafe(&sb_seq_count, &sb(wc)->seq_count, sizeof(uint64_t));

to read a single 64-bit field which looks aligned to me.

But that code is incredible garbage anyway, since even on a broken
machine, there's no actual reason to use the slow variant for that
whole access that I can tell. The mcsafe copy routines do not do
anything worthwhile for a single access.
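
To illustrate that point (this is a made-up sketch, not kernel code,
and load_mc() below is a hypothetical per-word recoverable load): the
only thing the careful "slow" copy buys you is that a large range is
walked in small steps, so a poisoned cache line only costs you the
tail of the copy. With cnt being a single word, the loop runs exactly
once and degenerates to the same single access the fast path would
have issued anyway.

    /* Hypothetical sketch, not kernel code: load_mc() is invented here */
    static unsigned long mcsafe_copy_sketch(u64 *dst, const u64 *src, size_t cnt)
    {
            size_t i, words = cnt / sizeof(u64);

            /* assumes cnt is a multiple of 8 for simplicity */
            for (i = 0; i < words; i++) {
                    /* one recoverable load+store per word */
                    if (load_mc(&dst[i], &src[i]))
                            return cnt - i * sizeof(u64);   /* bytes not copied */
            }
            return 0;
    }

For the sb_seq_count read above, words is 1, so there is exactly one
load either way.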

So my reaction remains that a lot of this is just completely wrong and
incredibly mis-designed.

Yes, the hardware was buggy garbage. But why should we make things
worse by making the software incomprehensibly bad too?

Linus
