Subject: Re: [PATCHv2 03/16] atomics/treewide: make atomic64_inc_not_zero() optional
From: Palmer Dabbelt <palmer@sifive.com>
On Tue, 29 May 2018 08:43:33 PDT (-0700), mark.rutland@arm.com wrote:
> We define a trivial fallback for atomic_inc_not_zero(), but don't do
> the same for atmic64_inc_not_zero(), leading most architectures to
> define the same boilerplate.

s/atmic64/atomic64/ in the commit message.

> Let's add a fallback in <linux/atomic.h>, and remove the redundant
> implementations. Note that atomic64_add_unless() is always defined in
> <linux/atomic.h>, and promotes its arguments to the requisite types, so
> we need not do this explicitly.
>
> There should be no functional change as a result of this patch.
>
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> Cc: Boqun Feng <boqun.feng@gmail.com>
> Cc: Will Deacon <will.deacon@arm.com>
> ---
> arch/alpha/include/asm/atomic.h           |  2 --
> arch/arc/include/asm/atomic.h             |  1 -
> arch/arm/include/asm/atomic.h             |  1 -
> arch/arm64/include/asm/atomic.h           |  2 --
> arch/ia64/include/asm/atomic.h            |  2 --
> arch/mips/include/asm/atomic.h            |  2 --
> arch/parisc/include/asm/atomic.h          |  2 --
> arch/powerpc/include/asm/atomic.h         |  1 +
> arch/riscv/include/asm/atomic.h           |  7 -------
> arch/s390/include/asm/atomic.h            |  1 -
> arch/sparc/include/asm/atomic_64.h        |  2 --
> arch/x86/include/asm/atomic64_32.h        |  2 +-
> arch/x86/include/asm/atomic64_64.h        |  2 --
> include/asm-generic/atomic-instrumented.h |  3 +++
> include/asm-generic/atomic64.h            |  1 -
> include/linux/atomic.h                    | 11 +++++++++++
> 16 files changed, 16 insertions(+), 26 deletions(-)
> [...]
> diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h
> index 0e27e050ba14..18259e90f57e 100644
> --- a/arch/riscv/include/asm/atomic.h
> +++ b/arch/riscv/include/asm/atomic.h
> @@ -375,13 +375,6 @@ static __always_inline int atomic64_add_unless(atomic64_t *v, long a, long u)
>  }
>  #endif
>
> -#ifndef CONFIG_GENERIC_ATOMIC64
> -static __always_inline long atomic64_inc_not_zero(atomic64_t *v)
> -{
> -	return atomic64_add_unless(v, 1, 0);
> -}
> -#endif
> -
>  /*
>   * atomic_{cmp,}xchg is required to have exactly the same ordering semantics as
>   * {cmp,}xchg and the operations that return, so they need a full barrier.
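
For reference, the RISC-V definition removed above is exactly the boilerplate the new generic fallback absorbs. The <linux/atomic.h> hunk itself is elided above, but mirroring the existing atomic_inc_not_zero() fallback the commit message mentions, it presumably amounts to something like this sketch (the macro form and exact spelling are my guess, not the actual hunk):

  /*
   * Generic fallback sketch: increment @v unless it is zero;
   * evaluates true iff the increment happened (i.e. *v was non-zero).
   */
  #ifndef atomic64_inc_not_zero
  #define atomic64_inc_not_zero(v)	atomic64_add_unless((v), 1, 0)
  #endif

Since atomic64_add_unless() is always defined in <linux/atomic.h>, any architecture that drops its own atomic64_inc_not_zero() picks this up for free.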

Acked-by: Palmer Dabbelt <palmer@sifive.com>

Thanks!
