Subject: Re: [PATCH v2] RISC-V: Break load reservations during switch_to
On 19-06-07 15:22:22, Palmer Dabbelt wrote:
> The comment describes why in detail. This was found because QEMU never
> gives up load reservations; the issue is unlikely to manifest on real
> hardware.

Makes sense; however, it obviously will not help until QEMU actually
clears load reservations on SC (or otherwise handles the interleaved
SC case).
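
For reference, the sort of user-level sequence that depends on this
behaviour is the usual LR/SC retry loop. A minimal, purely
illustrative sketch (not from the patch; assumes a0 = address,
a1 = expected value, a2 = new value):

    cas:
    1:  lr.w    t0, (a0)        # take a load reservation on *a0
        bne     t0, a1, 2f      # value does not match, give up
        sc.w    t1, a2, (a0)    # attempt the conditional store
        bnez    t1, 1b          # reservation was lost, retry
    2:  mv      a0, t0          # return the value observed by lr.w
        ret

If a reservation taken by lr.w survives a context switch (as it does
under current QEMU, which never gives one up), an sc.w executed later
by a different task on the same hart can pair with the stale
reservation and succeed, even though the Linux ABI allows programs to
assume it will fail. The dummy store-conditional in __switch_to is
meant to break that reservation, which is why QEMU also needs to
actually clear reservations on SC for the fix to have any effect.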

See comment inline.

> Thanks to Carlos Eduardo for finding the bug!
>
> Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
> ---
> Changes since v1 <20190605231735.26581-1-palmer@sifive.com>:
>
> * REG_SC is now defined as a helper macro, for any code that wants to SC
> a register-sized value.
> * The explicit #ifdef to check that TASK_THREAD_RA_RA is 0 has been
> removed. Instead we rely on the assembler to catch non-zero SC
> offsets. I've tested this does actually work.
>
> arch/riscv/include/asm/asm.h | 1 +
> arch/riscv/kernel/entry.S | 11 +++++++++++
> 2 files changed, 12 insertions(+)
>
> diff --git a/arch/riscv/include/asm/asm.h b/arch/riscv/include/asm/asm.h
> index 5ad4cb622bed..946b671f996c 100644
> --- a/arch/riscv/include/asm/asm.h
> +++ b/arch/riscv/include/asm/asm.h
> @@ -30,6 +30,7 @@
>
> #define REG_L __REG_SEL(ld, lw)
> #define REG_S __REG_SEL(sd, sw)
> +#define REG_SC __REG_SEL(sc.w, sc.d)

The instructions appear to be inverted here, i.e. this should read "sc.d, sc.w".
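
That is, matching the REG_L/REG_S pattern above, where __REG_SEL takes
the RV64 mnemonic first, presumably this wants to be:

    #define REG_SC __REG_SEL(sc.d, sc.w)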

> #define SZREG __REG_SEL(8, 4)
> #define LGREG __REG_SEL(3, 2)
>
> diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
> index 1c1ecc238cfa..af685782af17 100644
> --- a/arch/riscv/kernel/entry.S
> +++ b/arch/riscv/kernel/entry.S
> @@ -330,6 +330,17 @@ ENTRY(__switch_to)
> add a3, a0, a4
> add a4, a1, a4
> REG_S ra, TASK_THREAD_RA_RA(a3)
> + /*
> + * The Linux ABI allows programs to depend on load reservations being
> + * broken on context switches, but the ISA doesn't require that the
> + * hardware ever breaks a load reservation. The only way to break a
> + * load reservation is with a store conditional, so we emit one here.
> + * Since nothing ever takes a load reservation on TASK_THREAD_RA_RA we
> + * know this will always fail, but just to be on the safe side this
> + * writes the same value that was unconditionally written by the
> + * previous instruction.
> + */
> + REG_SC x0, ra, TASK_THREAD_RA_RA(a3)
> REG_S sp, TASK_THREAD_SP_RA(a3)
> REG_S s0, TASK_THREAD_S0_RA(a3)
> REG_S s1, TASK_THREAD_S1_RA(a3)
> --
> 2.21.0
>
