Subject: Re: [PATCH V7 4/5] asm-generic: spinlock: Add combo spinlock (ticket & queued)
On Wed, Jun 29, 2022 at 9:34 AM Waiman Long <longman@redhat.com> wrote:
>
> On 6/28/22 21:17, Guo Ren wrote:
> > On Wed, Jun 29, 2022 at 2:13 AM Waiman Long <longman@redhat.com> wrote:
> >> On 6/28/22 04:17, guoren@kernel.org wrote:
> >>> From: Guo Ren <guoren@linux.alibaba.com>
> >>>
> >>> Some architectures have flexible requirements on the type of spinlock.
> >>> Some LL/SC ISAs don't force the micro-architecture to provide a strong
> >>> forward-progress guarantee, so micro-architectures with different memory
> >>> models can implement the same ISA. The ticket lock suits LL/SC
> >>> micro-architectures designed around an exclusive monitor, with a limited
> >>> number of cores and !NUMA. The queued spinlock handles NUMA/large-scale
> >>> scenarios on LL/SC micro-architectures designed with a strong
> >>> forward-progress guarantee.
> >>>
> >>> So, make the spinlock a combo of the two.
> >>>
> >>> Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
> >>> Signed-off-by: Guo Ren <guoren@kernel.org>
> >>> Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
> >>> Cc: Arnd Bergmann <arnd@arndb.de>
> >>> Cc: Palmer Dabbelt <palmer@rivosinc.com>
> >>> ---
> >>> include/asm-generic/spinlock.h | 43 ++++++++++++++++++++++++++++++++--
> >>> kernel/locking/qspinlock.c | 2 ++
> >>> 2 files changed, 43 insertions(+), 2 deletions(-)
> >>>
> >>> diff --git a/include/asm-generic/spinlock.h b/include/asm-generic/spinlock.h
> >>> index f41dc7c2b900..a9b43089bf99 100644
> >>> --- a/include/asm-generic/spinlock.h
> >>> +++ b/include/asm-generic/spinlock.h
> >>> @@ -28,34 +28,73 @@
> >>> #define __ASM_GENERIC_SPINLOCK_H
> >>>
> >>> #include <asm-generic/ticket_spinlock.h>
> >>> +#ifdef CONFIG_ARCH_USE_QUEUED_SPINLOCKS
> >>> +#include <linux/jump_label.h>
> >>> +#include <asm-generic/qspinlock.h>
> >>> +
> >>> +DECLARE_STATIC_KEY_TRUE(use_qspinlock_key);
> >>> +#endif
> >>> +
> >>> +#undef arch_spin_is_locked
> >>> +#undef arch_spin_is_contended
> >>> +#undef arch_spin_value_unlocked
> >>> +#undef arch_spin_lock
> >>> +#undef arch_spin_trylock
> >>> +#undef arch_spin_unlock
> >>>
> >>> static __always_inline void arch_spin_lock(arch_spinlock_t *lock)
> >>> {
> >>> - ticket_spin_lock(lock);
> >>> +#ifdef CONFIG_ARCH_USE_QUEUED_SPINLOCKS
> >>> + if (static_branch_likely(&use_qspinlock_key))
> >>> + queued_spin_lock(lock);
> >>> + else
> >>> +#endif
> >>> + ticket_spin_lock(lock);
> >>> }
> >> Why do you use a static key to control whether to use qspinlock or
> >> ticket lock? In the next patch, you have
> >>
> >> +#if !defined(CONFIG_NUMA) && defined(CONFIG_QUEUED_SPINLOCKS)
> >> + static_branch_disable(&use_qspinlock_key);
> >> +#endif
> >>
> >> So the current config setting determines whether qspinlock will be used,
> >> not some boot-time parameter that the user needs to specify. This patch
> >> just adds useless code to the lock/unlock sites. I don't see any benefit
> >> in doing that.
> > This is a starting-point patch for riscv. Next, we could let vendors make
> > the choice; I'm not sure whether they would prefer a cmdline option or a
> > vendor-specific errata style.
> >
> > Eventually, we want one riscv Image to support all machines, with some
> > using the ticket lock and some using qspinlock.
>
> OK. Maybe you can postpone this combo spinlock until there is a good use
> case for it. Upstream usually doesn't accept patches that don't yet have a
> good use case.
>
> Cheers,
> Longman
>

I will add a cmdline option to control the choice between qspinlock and the
ticket lock in the next version:

diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
index b9b234157a66..5ade490c2f27 100644
--- a/arch/riscv/kernel/setup.c
+++ b/arch/riscv/kernel/setup.c
@@ -270,6 +270,10 @@ void __init setup_arch(char **cmdline_p)

early_ioremap_setup();
jump_label_init();
+
+#if !defined(CONFIG_NUMA) && defined(CONFIG_QUEUED_SPINLOCKS)
+ static_branch_disable(&use_qspinlock_key);
+#endif
parse_early_param();

efi_init();
@@ -295,10 +299,6 @@ void __init setup_arch(char **cmdline_p)
setup_smp();
#endif

-#if !defined(CONFIG_NUMA) && defined(CONFIG_QUEUED_SPINLOCKS)
- static_branch_disable(&use_qspinlock_key);
-#endif
-
riscv_fill_hwcap();
apply_boot_alternatives();
}
@@ -330,3 +330,13 @@ void free_initmem(void)

free_initmem_default(POISON_FREE_INITMEM);
}
+
+#ifdef CONFIG_QUEUED_SPINLOCKS
+static int __init disable_qspinlock(char *p)
+{
+ static_branch_disable(&use_qspinlock_key);
+ return 0;
+}
+
+early_param("disable_qspinlock", disable_qspinlock);
+#endif
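
For example, assuming the parameter name above is kept, a machine that is
better served by the ticket lock could append it to its kernel command line
(the other parameters here are only placeholders):

    root=/dev/vda1 rw disable_qspinlock

parse_early_param() then runs the handler, which disables use_qspinlock_key
before SMP bring-up, so every arch_spin_*() call falls back to the
ticket-lock paths; machines that omit the parameter keep the qspinlock
default.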

--
Best Regards
Guo Ren

ML: https://lore.kernel.org/linux-csky/
