Subject: Re: [PATCH] asm-generic: atomic64: handle ARCH_ATOMIC builds (was "Re: [PATCH v3 1/2] locking/atomics: Fixup GENERIC_ATOMIC64 conflict with atomic-arch-fallback.h")
From: Mark Rutland <mark.rutland@arm.com>
Date: Thu, 22 Apr 2021

On Thu, Apr 22, 2021 at 09:12:19PM +0800, Guo Ren wrote:
> On Thu, Apr 22, 2021 at 6:59 PM Mark Rutland <mark.rutland@arm.com> wrote:
> >
> > Hi Guo,
> >
> > On Wed, Apr 21, 2021 at 12:53:45PM +0000, guoren@kernel.org wrote:
> > > From: Guo Ren <guoren@linux.alibaba.com>
> > >
> > > The current GENERIC_ATOMIC64 code in atomic-arch-fallback.h is broken:
> > > when a 32-bit arch uses atomic-arch-fallback.h, the build fails with a
> > > compile error.
> > >
> > > In file included from include/linux/atomic.h:81,
> > > from include/linux/rcupdate.h:25,
> > > from include/linux/rculist.h:11,
> > > from include/linux/pid.h:5,
> > > from include/linux/sched.h:14,
> > > from arch/riscv/kernel/asm-offsets.c:10:
> > > include/linux/atomic-arch-fallback.h: In function 'arch_atomic64_inc':
> > > >> include/linux/atomic-arch-fallback.h:1447:2: error: implicit declaration of function 'arch_atomic64_add'; did you mean 'arch_atomic_add'? [-Werror=implicit-function-declaration]
> > > 1447 | arch_atomic64_add(1, v);
> > > | ^~~~~~~~~~~~~~~~~
> > > | arch_atomic_add
> >
> > This is expected; GENERIC_ATOMIC64 doesn't implement arch_atomic64_*(),
> > and thus violates the expectations of the fallback code.
> >
> > To make GENERIC_ATOMIC64 compatible with ARCH_ATOMIC, the
> > GENERIC_ATOMIC64 implementation *must* provide arch_atomic64_*()
> > functions.
> How do you let a "static __always_inline" function in
> "asm-generic/atomic-instrumented.h" call a real (out-of-line) function?
> See lib/atomic64.c.

Can you rephrase the question? I don't understand what you're asking
here.

If you're asking about how the calls are directed to
generic_atomic64_*(), the atomic-instrumented atomic64_<foo>() function
will try to call arch_atomic64_<foo>(), and the pre-processor
definitions in asm-generic/atomic64.h will direct that to
generic_atomic64_<foo>().
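
To make that chain concrete, here's a condensed sketch of the indirection
with the patch below applied (reduced from the real headers to the bare
mechanism; the types and helpers are the kernel's own):

/* asm-generic/atomic64.h: the generic implementation lives out of line
 * in lib/atomic64.c, and the preprocessor exposes it under the arch_
 * prefix that the instrumented wrappers expect. */
extern void generic_atomic64_add(s64 a, atomic64_t *v);
#define arch_atomic64_add generic_atomic64_add

/* asm-generic/atomic-instrumented.h (generated): the wrapper callers
 * actually use. It instruments the access for KASAN and friends, then
 * calls arch_atomic64_add(), which the #define above redirects to the
 * out-of-line generic_atomic64_add(). */
static __always_inline void
atomic64_add(s64 a, atomic64_t *v)
{
	instrument_atomic_read_write(v, sizeof(*v));
	arch_atomic64_add(a, v);	/* -> generic_atomic64_add() */
}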

If you're asking about inlining specifically, I'm afraid I don't
understand. It's legitimate for a static __always_inline function A to
call a non-inlined function B, and this works just fine. In that case, A
will get inlined into its caller, and B will not, but nothing stops A
from calling B.
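
As a stand-alone illustration (hypothetical names, not kernel code; the
__always_inline definition mirrors the kernel's), the following compiles
and behaves exactly as described:

#define __always_inline inline __attribute__((__always_inline__))

/* B: an ordinary out-of-line function; one real definition, and a
 * call instruction at each call site. */
void do_work(int x)
{
	(void)x;	/* stand-in for real work */
}

/* A: always inlined into its callers. Its body, including the call to
 * do_work(), is pasted at each call site; that places no constraint on
 * whether do_work() itself gets inlined. */
static __always_inline void do_work_checked(int x)
{
	if (x > 0)
		do_work(x);
}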

> > > atomic-arch-fallback.h, atomic-fallback.h, and atomic-instrumented.h
> > > are generated by gen-atomic-fallback.sh and gen-atomic-instrumented.sh,
> > > so just take care of those scripts.
> > >
> > > Remove the dependency of atomic-*-fallback.h in atomic64.h.
> >
> > Please don't duplicate the fallbacks; this'll make it harder to move
> > other architectures over and eventually to remove the non-ARCH_ATOMIC
> > implementations.
> >
> > Does the patch below make things work for you, or have I missed
> > something?
> RISC-V supports both 32-bit and 64-bit, just like x86. The current
> ARCH_ATOMIC works fine for RV64, but not for RV32.
>
> RV32 could still use ARCH_ATOMIC to improve KASAN checking.

I understand that (and I want riscv to use ARCH_ATOMIC), but that
doesn't answer my question.

I went and built this locally, starting with v5.12-rc8, applying my
patch, then applying your second patch atop. Both defconfig and
rv32_defconfig build just fine, though I wasn't able to check KASAN with
GCC 10.1.0.

Is there a problem that I've missed, or does my patch work?

Thanks,
Mark.

> > I've given this a basic build test on an arm config using
> > GENERIC_ATOMIC64 (but not ARCH_ATOMIC).
> >
> > Thanks,
> > Mark.
> > ---->8----
> > From 7f0389c8a1f41ecb5b2700f6ba38ff2ba093eb33 Mon Sep 17 00:00:00 2001
> > From: Mark Rutland <mark.rutland@arm.com>
> > Date: Thu, 22 Apr 2021 11:26:04 +0100
> > Subject: [PATCH] asm-generic: atomic64: handle ARCH_ATOMIC builds
> >
> > We'd like all architectures to convert to ARCH_ATOMIC, as this will
> > enable additional functionality, and once all architectures are converted it will
> > be possible to make significant cleanups to the atomic headers.
> >
> > A number of architectures use GENERIC_ATOMIC64, and it's impractical to
> > convert them all in one go. To make it possible to convert them
> > one-by-one, let's make the GENERIC_ATOMIC64 implementation function as
> > either atomic64_*() or arch_atomic64_*() depending on whether
> > ARCH_ATOMIC is selected. To do this, the C implementations are prefixed
> > as generic_atomic64_*(), and the asm-generic/atomic64.h header maps
> > atomic64_*()/arch_atomic64_*() onto these as appropriate via the
> > preprocessor.
> >
> > Once all users are moved over to ARCH_ATOMIC the ifdeffery in the header
> > can be simplified and/or removed entirely.
> >
> > For existing users (none of which select ARCH_ATOMIC), there should be
> > no functional change as a result of this patch.
> >
> > Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> > Cc: Arnd Bergmann <arnd@arndb.de>
> > Cc: Guo Ren <guoren@linux.alibaba.com>
> > Cc: Peter Zijlstra <peterz@infradead.org>
> > ---
> > include/asm-generic/atomic64.h | 74 ++++++++++++++++++++++++++++++++++--------
> > lib/atomic64.c | 36 ++++++++++----------
> > 2 files changed, 79 insertions(+), 31 deletions(-)
> >
> > diff --git a/include/asm-generic/atomic64.h b/include/asm-generic/atomic64.h
> > index 370f01d4450f..45c7ff8c9477 100644
> > --- a/include/asm-generic/atomic64.h
> > +++ b/include/asm-generic/atomic64.h
> > @@ -15,19 +15,17 @@ typedef struct {
> >
> > #define ATOMIC64_INIT(i) { (i) }
> >
> > -extern s64 atomic64_read(const atomic64_t *v);
> > -extern void atomic64_set(atomic64_t *v, s64 i);
> > -
> > -#define atomic64_set_release(v, i) atomic64_set((v), (i))
> > +extern s64 generic_atomic64_read(const atomic64_t *v);
> > +extern void generic_atomic64_set(atomic64_t *v, s64 i);
> >
> > #define ATOMIC64_OP(op) \
> > -extern void atomic64_##op(s64 a, atomic64_t *v);
> > +extern void generic_atomic64_##op(s64 a, atomic64_t *v);
> >
> > #define ATOMIC64_OP_RETURN(op) \
> > -extern s64 atomic64_##op##_return(s64 a, atomic64_t *v);
> > +extern s64 generic_atomic64_##op##_return(s64 a, atomic64_t *v);
> >
> > #define ATOMIC64_FETCH_OP(op) \
> > -extern s64 atomic64_fetch_##op(s64 a, atomic64_t *v);
> > +extern s64 generic_atomic64_fetch_##op(s64 a, atomic64_t *v);
> >
> > #define ATOMIC64_OPS(op) ATOMIC64_OP(op) ATOMIC64_OP_RETURN(op) ATOMIC64_FETCH_OP(op)
> >
> > @@ -46,11 +44,61 @@ ATOMIC64_OPS(xor)
> > #undef ATOMIC64_OP_RETURN
> > #undef ATOMIC64_OP
> >
> > -extern s64 atomic64_dec_if_positive(atomic64_t *v);
> > -#define atomic64_dec_if_positive atomic64_dec_if_positive
> > -extern s64 atomic64_cmpxchg(atomic64_t *v, s64 o, s64 n);
> > -extern s64 atomic64_xchg(atomic64_t *v, s64 new);
> > -extern s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u);
> > -#define atomic64_fetch_add_unless atomic64_fetch_add_unless
> > +extern s64 generic_atomic64_dec_if_positive(atomic64_t *v);
> > +extern s64 generic_atomic64_cmpxchg(atomic64_t *v, s64 o, s64 n);
> > +extern s64 generic_atomic64_xchg(atomic64_t *v, s64 new);
> > +extern s64 generic_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u);
> > +
> > +#ifdef ARCH_ATOMIC
> > +
> > +#define arch_atomic64_read generic_atomic64_read
> > +#define arch_atomic64_set generic_atomic64_set
> > +#define arch_atomic64_set_release generic_atomic64_set
> > +
> > +#define arch_atomic64_add generic_atomic64_add
> > +#define arch_atomic64_add_return generic_atomic64_add_return
> > +#define arch_atomic64_fetch_add generic_atomic64_fetch_add
> > +#define arch_atomic64_sub generic_atomic64_sub
> > +#define arch_atomic64_sub_return generic_atomic64_sub_return
> > +#define arch_atomic64_fetch_sub generic_atomic64_fetch_sub
> > +
> > +#define arch_atomic64_and generic_atomic64_and
> > +#define arch_atomic64_fetch_and generic_atomic64_fetch_and
> > +#define arch_atomic64_or generic_atomic64_or
> > +#define arch_atomic64_fetch_or generic_atomic64_fetch_or
> > +#define arch_atomic64_xor generic_atomic64_xor
> > +#define arch_atomic64_fetch_xor generic_atomic64_fetch_xor
> > +
> > +#define arch_atomic64_dec_if_positive generic_atomic64_dec_if_positive
> > +#define arch_atomic64_cmpxchg generic_atomic64_cmpxchg
> > +#define arch_atomic64_xchg generic_atomic64_xchg
> > +#define arch_atomic64_fetch_add_unless generic_atomic64_fetch_add_unless
> > +
> > +#else /* ARCH_ATOMIC */
> > +
> > +#define atomic64_read generic_atomic64_read
> > +#define atomic64_set generic_atomic64_set
> > +#define atomic64_set_release generic_atomic64_set
> > +
> > +#define atomic64_add generic_atomic64_add
> > +#define atomic64_add_return generic_atomic64_add_return
> > +#define atomic64_fetch_add generic_atomic64_fetch_add
> > +#define atomic64_sub generic_atomic64_sub
> > +#define atomic64_sub_return generic_atomic64_sub_return
> > +#define atomic64_fetch_sub generic_atomic64_fetch_sub
> > +
> > +#define atomic64_and generic_atomic64_and
> > +#define atomic64_fetch_and generic_atomic64_fetch_and
> > +#define atomic64_or generic_atomic64_or
> > +#define atomic64_fetch_or generic_atomic64_fetch_or
> > +#define atomic64_xor generic_atomic64_xor
> > +#define atomic64_fetch_xor generic_atomic64_fetch_xor
> > +
> > +#define atomic64_dec_if_positive generic_atomic64_dec_if_positive
> > +#define atomic64_cmpxchg generic_atomic64_cmpxchg
> > +#define atomic64_xchg generic_atomic64_xchg
> > +#define atomic64_fetch_add_unless generic_atomic64_fetch_add_unless
> > +
> > +#endif /* ARCH_ATOMIC */
> >
> > #endif /* _ASM_GENERIC_ATOMIC64_H */
> > diff --git a/lib/atomic64.c b/lib/atomic64.c
> > index e98c85a99787..3df653994177 100644
> > --- a/lib/atomic64.c
> > +++ b/lib/atomic64.c
> > @@ -42,7 +42,7 @@ static inline raw_spinlock_t *lock_addr(const atomic64_t *v)
> > return &atomic64_lock[addr & (NR_LOCKS - 1)].lock;
> > }
> >
> > -s64 atomic64_read(const atomic64_t *v)
> > +s64 generic_atomic64_read(const atomic64_t *v)
> > {
> > unsigned long flags;
> > raw_spinlock_t *lock = lock_addr(v);
> > @@ -53,9 +53,9 @@ s64 atomic64_read(const atomic64_t *v)
> > raw_spin_unlock_irqrestore(lock, flags);
> > return val;
> > }
> > -EXPORT_SYMBOL(atomic64_read);
> > +EXPORT_SYMBOL(generic_atomic64_read);
> >
> > -void atomic64_set(atomic64_t *v, s64 i)
> > +void generic_atomic64_set(atomic64_t *v, s64 i)
> > {
> > unsigned long flags;
> > raw_spinlock_t *lock = lock_addr(v);
> > @@ -64,10 +64,10 @@ void atomic64_set(atomic64_t *v, s64 i)
> > v->counter = i;
> > raw_spin_unlock_irqrestore(lock, flags);
> > }
> > -EXPORT_SYMBOL(atomic64_set);
> > +EXPORT_SYMBOL(generic_atomic64_set);
> >
> > #define ATOMIC64_OP(op, c_op) \
> > -void atomic64_##op(s64 a, atomic64_t *v) \
> > +void generic_atomic64_##op(s64 a, atomic64_t *v) \
> > { \
> > unsigned long flags; \
> > raw_spinlock_t *lock = lock_addr(v); \
> > @@ -76,10 +76,10 @@ void atomic64_##op(s64 a, atomic64_t *v) \
> > v->counter c_op a; \
> > raw_spin_unlock_irqrestore(lock, flags); \
> > } \
> > -EXPORT_SYMBOL(atomic64_##op);
> > +EXPORT_SYMBOL(generic_atomic64_##op);
> >
> > #define ATOMIC64_OP_RETURN(op, c_op) \
> > -s64 atomic64_##op##_return(s64 a, atomic64_t *v) \
> > +s64 generic_atomic64_##op##_return(s64 a, atomic64_t *v) \
> > { \
> > unsigned long flags; \
> > raw_spinlock_t *lock = lock_addr(v); \
> > @@ -90,10 +90,10 @@ s64 atomic64_##op##_return(s64 a, atomic64_t *v) \
> > raw_spin_unlock_irqrestore(lock, flags); \
> > return val; \
> > } \
> > -EXPORT_SYMBOL(atomic64_##op##_return);
> > +EXPORT_SYMBOL(generic_atomic64_##op##_return);
> >
> > #define ATOMIC64_FETCH_OP(op, c_op) \
> > -s64 atomic64_fetch_##op(s64 a, atomic64_t *v) \
> > +s64 generic_atomic64_fetch_##op(s64 a, atomic64_t *v) \
> > { \
> > unsigned long flags; \
> > raw_spinlock_t *lock = lock_addr(v); \
> > @@ -105,7 +105,7 @@ s64 atomic64_fetch_##op(s64 a, atomic64_t *v) \
> > raw_spin_unlock_irqrestore(lock, flags); \
> > return val; \
> > } \
> > -EXPORT_SYMBOL(atomic64_fetch_##op);
> > +EXPORT_SYMBOL(generic_atomic64_fetch_##op);
> >
> > #define ATOMIC64_OPS(op, c_op) \
> > ATOMIC64_OP(op, c_op) \
> > @@ -130,7 +130,7 @@ ATOMIC64_OPS(xor, ^=)
> > #undef ATOMIC64_OP_RETURN
> > #undef ATOMIC64_OP
> >
> > -s64 atomic64_dec_if_positive(atomic64_t *v)
> > +s64 generic_atomic64_dec_if_positive(atomic64_t *v)
> > {
> > unsigned long flags;
> > raw_spinlock_t *lock = lock_addr(v);
> > @@ -143,9 +143,9 @@ s64 atomic64_dec_if_positive(atomic64_t *v)
> > raw_spin_unlock_irqrestore(lock, flags);
> > return val;
> > }
> > -EXPORT_SYMBOL(atomic64_dec_if_positive);
> > +EXPORT_SYMBOL(generic_atomic64_dec_if_positive);
> >
> > -s64 atomic64_cmpxchg(atomic64_t *v, s64 o, s64 n)
> > +s64 generic_atomic64_cmpxchg(atomic64_t *v, s64 o, s64 n)
> > {
> > unsigned long flags;
> > raw_spinlock_t *lock = lock_addr(v);
> > @@ -158,9 +158,9 @@ s64 atomic64_cmpxchg(atomic64_t *v, s64 o, s64 n)
> > raw_spin_unlock_irqrestore(lock, flags);
> > return val;
> > }
> > -EXPORT_SYMBOL(atomic64_cmpxchg);
> > +EXPORT_SYMBOL(generic_atomic64_cmpxchg);
> >
> > -s64 atomic64_xchg(atomic64_t *v, s64 new)
> > +s64 generic_atomic64_xchg(atomic64_t *v, s64 new)
> > {
> > unsigned long flags;
> > raw_spinlock_t *lock = lock_addr(v);
> > @@ -172,9 +172,9 @@ s64 atomic64_xchg(atomic64_t *v, s64 new)
> > raw_spin_unlock_irqrestore(lock, flags);
> > return val;
> > }
> > -EXPORT_SYMBOL(atomic64_xchg);
> > +EXPORT_SYMBOL(generic_atomic64_xchg);
> >
> > -s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
> > +s64 generic_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
> > {
> > unsigned long flags;
> > raw_spinlock_t *lock = lock_addr(v);
> > @@ -188,4 +188,4 @@ s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
> >
> > return val;
> > }
> > -EXPORT_SYMBOL(atomic64_fetch_add_unless);
> > +EXPORT_SYMBOL(generic_atomic64_fetch_add_unless);
> > --
> > 2.11.0
> >
>
>
> --
> Best Regards
> Guo Ren
>
> ML: https://lore.kernel.org/linux-csky/
