Date: Tue, 5 May 2020 18:14:38 +0300
From: Andy Shevchenko <>
Subject: Re: [PATCH v6 1/2] x86: fix bitops.h warning with a moved cast
On Mon, May 04, 2020 at 06:14:43PM -0700, Jesse Brandeburg wrote:
> On Mon, 4 May 2020 12:51:12 -0700
> Nick Desaulniers <ndesaulniers@google.com> wrote:
>
> > Sorry for the very late report. It turns out that if your config
> > tickles __builtin_constant_p just right, this now produces invalid
> > assembly:
> >
> > $ cat foo.c
> > long a(long b, long c) {
> > 	asm("orb\t%1, %0" : "+q"(c): "r"(b));
> > 	return c;
> > }
> > $ gcc foo.c
> > foo.c: Assembler messages:
> > foo.c:2: Error: `%rax' not allowed with `orb'
> >
> > The "q" constraint only has meaning on -m32, otherwise it is
> > treated as "r".
> >
> > Since we have the mask (& 0xff), can we drop the `b` suffix from the
> > instruction? Or is a revert more appropriate? Or maybe another way to
> > skin this cat?
>
> Figures that such a small change can create problems :-( Sorry for the
> thrash!
>
> The patches in the link below basically add back the cast, but I'm
> interested to see if any others can come up with a better fix that
> a) passes the above code generation test
> b) still keeps sparse happy
> c) passes the test module and the code inspection
>
> If need be I'm OK with a revert of the original patch to fix the issue
> in the short term, but it seems to me there must be a way to satisfy
> both tools. We went through several iterations on the way to the final
> patch that we might be able to pluck something useful from.
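The underlying issue is that GCC derives the printed register *name*
from the C type of the operand, while the 'b' mnemonic suffix demands
a byte register: a long operand under "q" gets printed as %rax on
x86-64, which the assembler rejects for orb. A standalone sketch of
the working pattern (my illustration only, not kernel code; set_bit3
is a made-up name):

	#include <stdio.h>

	static unsigned char set_bit3(unsigned char byte)
	{
		/* byte-sized operand, so "q" resolves to %al/%bl/...
		 * and the register name matches the "orb" suffix */
		asm("orb %1, %0"
		    : "+q" (byte)
		    : "iq" ((unsigned char)(1 << 3)));
		return byte;
	}

	int main(void)
	{
		printf("0x%02x\n", set_bit3(0x01));	/* prints 0x09 */
		return 0;
	}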
For me the below seems to work:
diff --git a/arch/x86/include/asm/bitops.h b/arch/x86/include/asm/bitops.h
index b392571c1f1d1..139122e5b25b1 100644
--- a/arch/x86/include/asm/bitops.h
+++ b/arch/x86/include/asm/bitops.h
@@ -54,7 +54,7 @@ arch_set_bit(long nr, volatile unsigned long *addr)
 	if (__builtin_constant_p(nr)) {
 		asm volatile(LOCK_PREFIX "orb %1,%0"
 			: CONST_MASK_ADDR(nr, addr)
-			: "iq" (CONST_MASK(nr) & 0xff)
+			: "iq" ((u8)(CONST_MASK(nr) & 0xff))
 			: "memory");
 	} else {
 		asm volatile(LOCK_PREFIX __ASM_SIZE(bts) " %1,%0"
@@ -74,7 +74,7 @@ arch_clear_bit(long nr, volatile unsigned long *addr)
 	if (__builtin_constant_p(nr)) {
 		asm volatile(LOCK_PREFIX "andb %1,%0"
 			: CONST_MASK_ADDR(nr, addr)
-			: "iq" (CONST_MASK(nr) ^ 0xff));
+			: "iq" ((u8)(CONST_MASK(nr) ^ 0xff)));
 	} else {
 		asm volatile(LOCK_PREFIX __ASM_SIZE(btr) " %1,%0"
 			: : RLONG_ADDR(addr), "Ir" (nr) : "memory");
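For reference, CONST_MASK(nr) is (1 << ((nr) & 7)) in bitops.h, so the
value already fits in a byte; the (u8) cast only narrows the operand's
C type so that GCC prints a byte register when "iq" falls back to "q",
while sparse stays quiet about truncation. A quick userspace smoke
test of the same pattern (a sketch; set_bit_asm and the scaffolding
around it are mine, and LOCK_PREFIX is dropped):

	#include <stdio.h>

	/* same mask macro as in arch/x86/include/asm/bitops.h */
	#define CONST_MASK(nr)	(1 << ((nr) & 7))

	static void set_bit_asm(long nr, volatile unsigned char *addr)
	{
		/* mirrors the constant-nr arm of arch_set_bit(): the
		 * (u8)-style cast keeps the "iq" operand byte-sized */
		asm volatile("orb %1,%0"
			     : "+m" (addr[nr >> 3])
			     : "iq" ((unsigned char)(CONST_MASK(nr) & 0xff))
			     : "memory");
	}

	int main(void)
	{
		unsigned char bits[2] = { 0, 0 };

		set_bit_asm(3, bits);
		set_bit_asm(9, bits);
		printf("%02x %02x\n", bits[0], bits[1]);	/* 08 02 */
		return 0;
	}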
--
With Best Regards,
Andy Shevchenko