From: Alistair Francis <>
Subject: [PATCH v3] riscv: Ensure only ASIDLEN is used for sfence.vma
Date: Thu, 31 Mar 2022 15:59:06 +1000
From: Alistair Francis <alistair.francis@wdc.com>
When we set the value of context.id in __new_context() we set both the asid and the current_version via its return statement:
return asid | ver;
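(ver is the allocator's generation counter and lives in the bits above ASIDLEN, so the combined id looks roughly like the sketch below; asid_bits and asid_mask are the names used in arch/riscv/mm/context.c:)

  /*
   * context.id layout (sketch):
   *
   *   63          asid_bits         0
   *   +--------------+-------------+
   *   |   version    |    asid     |
   *   +--------------+-------------+
   *
   * asid_mask = (1UL << asid_bits) - 1
   */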
This means that when local_flush_tlb_all_asid() is called with an asid taken from context.id, we can write an incorrect value that still contains the version bits.
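local_flush_tlb_all_asid() feeds that value straight into the rs2 operand of sfence.vma; a rough paraphrase of the helper in arch/riscv/mm/tlbflush.c:

  static inline void local_flush_tlb_all_asid(unsigned long asid)
  {
          /* any version bits left in asid end up in rs2 here */
          __asm__ __volatile__ ("sfence.vma x0, %0"
                          :
                          : "r" (asid)
                          : "memory");
  }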
We get away with this because hardware ignores the extra bits; the RISC-V specification states:
"bits SXLEN-1:ASIDMAX of the value held in rs2 are reserved for future standard use. Until their use is defined by a standard extension, they should be zeroed by software and ignored by current implementations."
but it is still a bug worth addressing, as software is expected to zero those bits.
This patch applies asid_mask before issuing sfence.vma to ensure the asid is always the correct length (ASIDLEN), similar to what we already do in arch/riscv/mm/context.c.
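For comparison, the satp write on the switch_mm() path in arch/riscv/mm/context.c already applies the mask before programming the ASID field; paraphrased (exact constant names vary between kernel versions):

  /* paraphrased: mask context.id down to ASIDLEN before writing satp */
  csr_write(CSR_SATP, virt_to_pfn(mm->pgd) |
            ((cntx & asid_mask) << SATP_ASID_SHIFT) |
            SATP_MODE);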
Fixes: 3f1e782998cd ("riscv: add ASID-based tlbflushing methods")
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
v3:
 - Use helper function
v2:
 - Pass in pre-masked value
 arch/riscv/include/asm/mmu_context.h | 2 ++
 arch/riscv/mm/context.c              | 5 +++++
 arch/riscv/mm/tlbflush.c             | 2 +-
 3 files changed, 8 insertions(+), 1 deletion(-)
diff --git a/arch/riscv/include/asm/mmu_context.h b/arch/riscv/include/asm/mmu_context.h
index 7030837adc1a..94e82c9e17eb 100644
--- a/arch/riscv/include/asm/mmu_context.h
+++ b/arch/riscv/include/asm/mmu_context.h
@@ -16,6 +16,8 @@
 void switch_mm(struct mm_struct *prev, struct mm_struct *next,
 	struct task_struct *task);
 
+unsigned long get_mm_asid(struct mm_struct *mm);
+
 #define activate_mm activate_mm
 static inline void activate_mm(struct mm_struct *prev,
 			       struct mm_struct *next)
diff --git a/arch/riscv/mm/context.c b/arch/riscv/mm/context.c
index 7acbfbd14557..14aec5bacbc1 100644
--- a/arch/riscv/mm/context.c
+++ b/arch/riscv/mm/context.c
@@ -302,6 +302,11 @@ static inline void flush_icache_deferred(struct mm_struct *mm, unsigned int cpu)
 #endif
 }
 
+unsigned long get_mm_asid(struct mm_struct *mm)
+{
+	return atomic_long_read(&mm->context.id) & asid_mask;
+}
+
 void switch_mm(struct mm_struct *prev, struct mm_struct *next,
 	struct task_struct *task)
 {
diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
index 37ed760d007c..9c89c4951bee 100644
--- a/arch/riscv/mm/tlbflush.c
+++ b/arch/riscv/mm/tlbflush.c
@@ -42,7 +42,7 @@ static void __sbi_tlb_flush_range(struct mm_struct *mm, unsigned long start,
 	/* check if the tlbflush needs to be sent to other CPUs */
 	broadcast = cpumask_any_but(cmask, cpuid) < nr_cpu_ids;
 	if (static_branch_unlikely(&use_asid_allocator)) {
-		unsigned long asid = atomic_long_read(&mm->context.id);
+		unsigned long asid = get_mm_asid(mm);
 
 		if (broadcast) {
 			sbi_remote_sfence_vma_asid(cmask, start, size, asid);
-- 
2.35.1