Subject: Re: [PATCH] MIPS: Change definition of cpu_relax() for Loongson-3
On Tue, Jul 17, 2018 at 10:52:32AM -0700, Paul Burton wrote:
> On Fri, Jul 13, 2018 at 03:37:57PM +0800, Huacai Chen wrote:
> > Linux expects that if a CPU modifies a memory location, then that
> > modification will eventually become visible to other CPUs in the system.
> >
> > On Loongson-3 processor with SFB (Store Fill Buffer), loads may be
> > prioritised over stores so it is possible for a store operation to be
> > postponed if a polling loop immediately follows it. If the variable
> > being polled indirectly depends on the outstanding store [for example,
> > another CPU may be polling the variable that is pending modification]
> > then there is the potential for deadlock if interrupts are disabled.
> > This deadlock occurs in qspinlock code.
> >
> > This patch changes the definition of cpu_relax() to smp_mb() for
> > Loongson-3, forcing a flushing of the SFB on SMP systems before the
> > next load takes place. If the Kernel is not compiled for SMP support,
> > this will expand to a barrier() as before.
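
To make the failure mode concrete, here is a rough sketch of the kind of
polling loop being described (hypothetical variable names, not lifted from
the actual qspinlock code):

	/* CPU 0, interrupts disabled */
	WRITE_ONCE(flag_a, 1);		/* store may linger in the SFB...      */
	while (!READ_ONCE(flag_b))	/* ...while the loads keep winning      */
		cpu_relax();		/* must eventually drain the SFB       */

	/* CPU 1, interrupts disabled */
	while (!READ_ONCE(flag_a))	/* never observes flag_a == 1 if the   */
		cpu_relax();		/* store above stays postponed         */
	WRITE_ONCE(flag_b, 1);

With cpu_relax() expanding to smp_mb() on Loongson-3, the pending store to
flag_a is pushed out on every iteration of CPU 0's loop, so CPU 1 eventually
observes it, sets flag_b, and both loops terminate.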
> >
> > References: 534be1d5a2da940 (ARM: 6194/1: change definition of cpu_relax() for ARM11MPCore)
> > Cc: stable@vger.kernel.org
> > Signed-off-by: Huacai Chen <chenhc@lemote.com>
> > ---
> > arch/mips/include/asm/processor.h | 10 ++++++++++
> > 1 file changed, 10 insertions(+)
> >
> > diff --git a/arch/mips/include/asm/processor.h b/arch/mips/include/asm/processor.h
> > index af34afb..a8c4a3a 100644
> > --- a/arch/mips/include/asm/processor.h
> > +++ b/arch/mips/include/asm/processor.h
> > @@ -386,7 +386,17 @@ unsigned long get_wchan(struct task_struct *p);
> > #define KSTK_ESP(tsk) (task_pt_regs(tsk)->regs[29])
> > #define KSTK_STATUS(tsk) (task_pt_regs(tsk)->cp0_status)
> >
> > +#ifdef CONFIG_CPU_LOONGSON3
> > +/*
> > + * Loongson-3's SFB (Store-Fill-Buffer) may get starved when stuck in a read
> > + * loop. Since spin loops of any kind should have a cpu_relax() in them, force
> > + * a Store-Fill-Buffer flush from cpu_relax() such that any pending writes will
> > + * become available as expected.
> > + */
>
> I think "may starve writes" or "may queue writes indefinitely" would be
> clearer than "may get starved".

Agreed.

> > +#define cpu_relax() smp_mb()
> > +#else
> > #define cpu_relax() barrier()
> > +#endif
> >
> > /*
> > * Return_address is a replacement for __builtin_return_address(count)
>
> Apart from the comment above though this looks better to me.
>
> Re-copying the LKMM maintainers - are you happy(ish) with this?

Right, thanks for adding us back on :-)

Yes, this is much better, although I myself would also prefer an explicit
mention that this is a work-around for a hardware bug.

But aside from the actual comment bike-shedding, this looks entirely
acceptable (also because ARM is already doing this -- and the Changelog
might want to refer to that particular patch).
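
For comparison, the ARM change referenced in the changelog (534be1d5a2da940,
"ARM: 6194/1: change definition of cpu_relax() for ARM11MPCore") took roughly
this shape in arch/arm/include/asm/processor.h -- paraphrased from memory, so
check the actual commit for the exact condition it tests:

	#if __LINUX_ARM_ARCH__ == 6
	#define cpu_relax()	smp_mb()	/* force pending writes out    */
	#else
	#define cpu_relax()	barrier()	/* compiler barrier only       */
	#endif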
