    Subject: Re: [PATCH] asm-generic: ticket-lock: Optimize arch_spin_value_unlocked
    On Sat, Jul 22, 2023 at 10:07:19PM -0400, Waiman Long wrote:
    > On 7/19/23 03:00, guoren@kernel.org wrote:
    > > From: Guo Ren <guoren@linux.alibaba.com>
    > >
    > > Using arch_spinlock_is_locked would cause another unnecessary memory
    > > access to the contended value. Although it won't cause a significant
    > > performance gap in most architectures, the arch_spin_value_unlocked
    > > argument contains enough information. Thus, remove unnecessary
    > > atomic_read in arch_spin_value_unlocked().
    >
    > AFAICS, only one memory access is needed for the current
    > arch_spinlock_is_locked(). So your description isn't quite right. OTOH,
    Okay, I will improve it. What I meant is that "arch_spin_value_unlocked using
    arch_spinlock_is_locked" would cause "an" unnecessary memory access: the lock
    value is already passed to arch_spin_value_unlocked() by value, so it should
    not need to be re-read from memory.

    > caller of arch_spin_value_unlocked() could benefit from this change.
    > Currently, the only caller is lockref.
    Thanks for the comment; I will mention that in the commit message.
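
    For reference, the lockref fast path reads the whole lock+count pair as a
    single 64-bit value and then tests the lock half of that snapshot, so the
    check really wants to look at the copy it already holds in a register rather
    than re-read memory. A simplified sketch of that path (the real code is the
    CMPXCHG_LOOP() macro in lib/lockref.c with a bounded retry count;
    lockref_get_fastpath() is just an illustrative name):

    #include <linux/lockref.h>
    #include <linux/spinlock.h>

    static bool lockref_get_fastpath(struct lockref *lockref)
    {
            struct lockref old;

            /* One 64-bit load covers both the spinlock and the count. */
            old.lock_count = READ_ONCE(lockref->lock_count);
            while (likely(arch_spin_value_unlocked(old.lock.rlock.raw_lock))) {
                    struct lockref new = old, prev = old;

                    new.count++;
                    old.lock_count = cmpxchg64_relaxed(&lockref->lock_count,
                                                       old.lock_count,
                                                       new.lock_count);
                    if (likely(old.lock_count == prev.lock_count))
                            return true;    /* got the reference locklessly */
            }
            return false;                   /* lock held: take the slow path */
    }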

    New version is here:
    https://lore.kernel.org/linux-riscv/20230731023308.3748432-1-guoren@kernel.org/

    >
    > Other than that, the patch looks good to me.
    >
    > Cheers,
    > Longman
    >
    > >
    > > Signed-off-by: Guo Ren <guoren@kernel.org>
    > > Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
    > > Cc: David Laight <David.Laight@ACULAB.COM>
    > > Cc: Peter Zijlstra <peterz@infradead.org>
    > > ---
    > > Changelog:
    > > This patch is separate from:
    > > https://lore.kernel.org/linux-riscv/20220808071318.3335746-1-guoren@kernel.org/
    > >
    > > Peter & David have commented on it:
    > > https://lore.kernel.org/linux-riscv/YsK4Z9w0tFtgkni8@hirez.programming.kicks-ass.net/
    > > ---
    > > include/asm-generic/spinlock.h | 16 +++++++++-------
    > > 1 file changed, 9 insertions(+), 7 deletions(-)
    > >
    > > diff --git a/include/asm-generic/spinlock.h b/include/asm-generic/spinlock.h
    > > index fdfebcb050f4..90803a826ba0 100644
    > > --- a/include/asm-generic/spinlock.h
    > > +++ b/include/asm-generic/spinlock.h
    > > @@ -68,11 +68,18 @@ static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
    > >          smp_store_release(ptr, (u16)val + 1);
    > >  }
    > >  
    > > +static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
    > > +{
    > > +        u32 val = lock.counter;
    > > +
    > > +        return ((val >> 16) == (val & 0xffff));
    > > +}
    > > +
    > >  static __always_inline int arch_spin_is_locked(arch_spinlock_t *lock)
    > >  {
    > > -        u32 val = atomic_read(lock);
    > > +        arch_spinlock_t val = READ_ONCE(*lock);
    > >  
    > > -        return ((val >> 16) != (val & 0xffff));
    > > +        return !arch_spin_value_unlocked(val);
    > >  }
    > >  
    > >  static __always_inline int arch_spin_is_contended(arch_spinlock_t *lock)
    > > @@ -82,11 +89,6 @@ static __always_inline int arch_spin_is_contended(arch_spinlock_t *lock)
    > >          return (s16)((val >> 16) - (val & 0xffff)) > 1;
    > >  }
    > >  
    > > -static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
    > > -{
    > > -        return !arch_spin_is_locked(&lock);
    > > -}
    > > -
    > >  #include <asm/qrwlock.h>
    > >  
    > >  #endif /* __ASM_GENERIC_SPINLOCK_H */
    >
    >