Date:    Tue, 2 Apr 2019 09:26:59 +0200
From:    Peter Zijlstra <>
Subject: Re: [PATCH] x86/asm: use memory clobber in bitops that touch arbitrary memory
On Mon, Apr 01, 2019 at 06:24:08PM +0200, Alexander Potapenko wrote:
> diff --git a/arch/x86/include/asm/bitops.h b/arch/x86/include/asm/bitops.h
> index d153d570bb04..20e4950827d9 100644
> --- a/arch/x86/include/asm/bitops.h
> +++ b/arch/x86/include/asm/bitops.h
> @@ -111,7 +111,7 @@ clear_bit(long nr, volatile unsigned long *addr)
>  	} else {
>  		asm volatile(LOCK_PREFIX __ASM_SIZE(btr) " %1,%0"
>  			: BITOP_ADDR(addr)
> -			: "Ir" (nr));
> +			: "Ir" (nr) : "memory");
>  	}
>  }
clear_bit() doesn't have a return value, so why are we now still using a "+m" output operand?
AFAICT the only reason we did that was to clobber the variable, which you've (afaiu correctly) argued to be incorrect.
So should we not write this as:
	asm volatile (LOCK_PREFIX __ASM_SIZE(btr) " %[nr], %[addr]"
		      : : [addr] "m" (*addr), [nr] "Ir" (nr)
		      : "memory");
?
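The underlying issue, in a minimal user-space sketch (the function names and standalone setup here are made up for illustration, not from the kernel tree): bts/btr with a register bit offset can address words beyond *addr, so a "+m" (*addr) output operand only tells the compiler the first word may change, while a "memory" clobber covers everything the instruction can actually reach.

	#include <stdio.h>

	/*
	 * Sketch: btr with only a "+m" output on *addr.  The compiler is told
	 * that addr[0] may be modified, but nothing about addr[1] and beyond,
	 * even though the instruction reaches them once nr >= 64.
	 */
	static inline void clear_bit_output_operand(long nr, volatile unsigned long *addr)
	{
		asm volatile("btr %1, %0" : "+m" (*addr) : "Ir" (nr));
	}

	/*
	 * Sketch of the form proposed above: the bitmap is a plain "m" input
	 * and the "memory" clobber tells the compiler that arbitrary memory
	 * may have been written.
	 */
	static inline void clear_bit_mem_clobber(long nr, volatile unsigned long *addr)
	{
		asm volatile("btr %1, %0" : : "m" (*addr), "Ir" (nr) : "memory");
	}

	int main(void)
	{
		unsigned long bits[2] = { ~0UL, ~0UL };

		clear_bit_mem_clobber(65, bits);	/* clears bit 1 of bits[1] */
		printf("%lx %lx\n", bits[0], bits[1]);
		return 0;
	}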
And the very same for _all_ other sites touched in this patch.
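For instance, the bts-based branch of set_bit() would then read along the same lines (a sketch of the intent, reusing the operand names from above, not a tested patch):

	asm volatile(LOCK_PREFIX __ASM_SIZE(bts) " %[nr], %[addr]"
		     : : [addr] "m" (*addr), [nr] "Ir" (nr)
		     : "memory");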