Subject: Re: [PATCH] x86/asm: use memory clobber in bitops that touch arbitrary memory


On Mon, Apr 01, 2019 at 06:24:08PM +0200, Alexander Potapenko wrote:
> diff --git a/arch/x86/include/asm/bitops.h b/arch/x86/include/asm/bitops.h
> index d153d570bb04..20e4950827d9 100644
> --- a/arch/x86/include/asm/bitops.h
> +++ b/arch/x86/include/asm/bitops.h
> @@ -111,7 +111,7 @@ clear_bit(long nr, volatile unsigned long *addr)
> } else {
> asm volatile(LOCK_PREFIX __ASM_SIZE(btr) " %1,%0"
> : BITOP_ADDR(addr)
> - : "Ir" (nr));
> + : "Ir" (nr) : "memory");
> }
> }

clear_bit() doesn't have a return value, so why are we still using the
"+m" output operand?

AFAICT the only reason we did that was to clobber the variable, which
you've (afaiu correctly) argued to be incorrect.

So should we not write this as:

asm volatile (LOCK_PREFIX __ASM_SIZE(btr) " %[nr], %[addr]"
: : [addr] "m" (*addr), [nr] "Ir" (nr)
: "memory");

?

And the very same for _all_ other sites touched in this patch.
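
FWIW, a minimal user-space sketch of the difference between the two
forms (hypothetical helper names, plain lock btr instead of the
LOCK_PREFIX/__ASM_SIZE macros, so not the actual kernel code):

	#include <stdio.h>

	/*
	 * Form suggested above: *addr is a plain "m" input and the
	 * "memory" clobber tells the compiler the asm may touch memory
	 * beyond the named operand (btr with a memory operand can
	 * address bits outside that word).
	 */
	static inline void clear_bit_clobber(long nr, volatile unsigned long *addr)
	{
		asm volatile("lock btrq %[nr], %[addr]"
			     : : [addr] "m" (*addr), [nr] "Ir" (nr)
			     : "memory");
	}

	/*
	 * Form in the patch as posted: keeps the "+m" output operand in
	 * addition to the clobber, which is redundant here since
	 * clear_bit() returns nothing.
	 */
	static inline void clear_bit_output(long nr, volatile unsigned long *addr)
	{
		asm volatile("lock btrq %1, %0"
			     : "+m" (*addr)
			     : "Ir" (nr)
			     : "memory");
	}

	int main(void)
	{
		unsigned long word = ~0UL;

		clear_bit_clobber(3, &word);
		clear_bit_output(5, &word);
		printf("word = %#lx\n", word);	/* bits 3 and 5 cleared */
		return 0;
	}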
