Subject: Re: [PATCH] Insert SFENCE.VMA in function set_pte_at for RISCV
Date: 2021-03-17
On Tue, 09 Mar 2021 22:22:46 PST (-0800), liu@jiuyang.me wrote:
> From: Jiuyang Liu <liu@jiuyang.me>
>
> This patch inserts SFENCE.VMA after modifying a PTE, as required by the
> RISC-V specification.
>
> arch/riscv/include/asm/pgtable.h:
> 1. Implement pte_user, pte_global and pte_leaf to check the
> corresponding attributes of a pte_t.
>
> 2. Insert SFENCE.VMA in set_pte_at, per RISC-V Volume 2, Privileged
> Spec v. 20190608, pages 66 and 67:
> If software modifies a non-leaf PTE, it should execute SFENCE.VMA with
> rs1=x0. If any PTE along the traversal path had its G bit set, rs2 must
> be x0; otherwise, rs2 should be set to the ASID for which the
> translation is being modified.
> If software modifies a leaf PTE, it should execute SFENCE.VMA with rs1
> set to a virtual address within the page. If any PTE along the traversal
> path had its G bit set, rs2 must be x0; otherwise, rs2 should be set to
> the ASID for which the translation is being modified.
>
> arch/riscv/include/asm/tlbflush.h:
> 1. Implement local_flush_tlb_asid to flush the TLB for the current ASID.
>
> Signed-off-by: Jiuyang Liu <liu@jiuyang.me>
> ---
> arch/riscv/include/asm/pgtable.h | 28 ++++++++++++++++++++++++++++
> arch/riscv/include/asm/tlbflush.h | 8 ++++++++
> 2 files changed, 36 insertions(+)
>
> diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
> index ebf817c1bdf4..95f6546ddb5b 100644
> --- a/arch/riscv/include/asm/pgtable.h
> +++ b/arch/riscv/include/asm/pgtable.h
> @@ -222,6 +222,16 @@ static inline int pte_write(pte_t pte)
>  	return pte_val(pte) & _PAGE_WRITE;
>  }
>
> +static inline int pte_user(pte_t pte)
> +{
> +	return pte_val(pte) & _PAGE_USER;
> +}
> +
> +static inline int pte_global(pte_t pte)
> +{
> +	return pte_val(pte) & _PAGE_GLOBAL;
> +}
> +
>  static inline int pte_exec(pte_t pte)
>  {
>  	return pte_val(pte) & _PAGE_EXEC;
> @@ -248,6 +258,11 @@ static inline int pte_special(pte_t pte)
>  	return pte_val(pte) & _PAGE_SPECIAL;
>  }
>
> +static inline int pte_leaf(pte_t pte)
> +{
> +	return pte_val(pte) & (_PAGE_READ | _PAGE_WRITE | _PAGE_EXEC);
> +}
> +
>  /* static inline pte_t pte_rdprotect(pte_t pte) */
>
>  static inline pte_t pte_wrprotect(pte_t pte)
> @@ -358,6 +370,18 @@ static inline void set_pte_at(struct mm_struct *mm,
>  		flush_icache_pte(pteval);
>
>  	set_pte(ptep, pteval);
> +
> +	if (pte_present(pteval)) {
> +		if (pte_leaf(pteval)) {
> +			local_flush_tlb_page(addr);
> +		} else {
> +			if (pte_global(pteval))
> +				local_flush_tlb_all();
> +			else
> +				local_flush_tlb_asid();
> +
> +		}
> +	}
>  }
>
>  static inline void pte_clear(struct mm_struct *mm,
> diff --git a/arch/riscv/include/asm/tlbflush.h b/arch/riscv/include/asm/tlbflush.h
> index 394cfbccdcd9..4b25f51f163d 100644
> --- a/arch/riscv/include/asm/tlbflush.h
> +++ b/arch/riscv/include/asm/tlbflush.h
> @@ -21,6 +21,14 @@ static inline void local_flush_tlb_page(unsigned long addr)
>  {
>  	__asm__ __volatile__ ("sfence.vma %0" : : "r" (addr) : "memory");
>  }
> +
> +static inline void local_flush_tlb_asid(void)
> +{
> +	/* sfence.vma takes the bare ASID, so shift it down out of satp. */
> +	unsigned long asid = (csr_read(CSR_SATP) >> SATP_ASID_SHIFT) & SATP_ASID_MASK;
> +	__asm__ __volatile__ ("sfence.vma x0, %0" : : "r" (asid) : "memory");
> +}
> +
>  #else /* CONFIG_MMU */
>  #define local_flush_tlb_all()			do { } while (0)
>  #define local_flush_tlb_page(addr)		do { } while (0)
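
For reference, the four SFENCE.VMA forms that the spec text quoted above
distinguishes map onto code roughly as follows. This is only a sketch to make
the rs1/rs2 cases concrete; the helper names are made up, not kernel API:

static inline void sfence_vma_all(void)			/* non-leaf PTE, G set */
{
	__asm__ __volatile__ ("sfence.vma" : : : "memory");
}

static inline void sfence_vma_asid(unsigned long asid)	/* non-leaf PTE, G clear */
{
	__asm__ __volatile__ ("sfence.vma x0, %0" : : "r" (asid) : "memory");
}

static inline void sfence_vma_addr(unsigned long addr)	/* leaf PTE, G set */
{
	__asm__ __volatile__ ("sfence.vma %0" : : "r" (addr) : "memory");
}

static inline void sfence_vma_addr_asid(unsigned long addr,
					unsigned long asid)	/* leaf PTE, G clear */
{
	__asm__ __volatile__ ("sfence.vma %0, %1" : : "r" (addr), "r" (asid) : "memory");
}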

We're trying to avoid this sort of thing, instead relying on the generic kernel
functionality to batch up page table modifications before we issue the fences.
If you're seeing some specific issue then I'd be happy to try and sort out a
fix for it, but this is a bit heavy-handed to use as anything but a last
resort.
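
To make the contrast concrete, the batched model looks something like the
sketch below. remap_region() is a hypothetical caller, while set_pte_at(),
pfn_pte() and flush_tlb_range() are the existing interfaces the generic code
goes through:

#include <linux/mm.h>
#include <asm/tlbflush.h>

/* Update a run of PTEs with no per-PTE fence, then flush the range once. */
static void remap_region(struct vm_area_struct *vma, pte_t *ptep,
			 unsigned long start, unsigned long end,
			 unsigned long pfn, pgprot_t prot)
{
	unsigned long addr;

	for (addr = start; addr < end; addr += PAGE_SIZE, ptep++, pfn++)
		set_pte_at(vma->vm_mm, addr, ptep, pfn_pte(pfn, prot));

	/* One fence (or one IPI round) covers every PTE written above. */
	flush_tlb_range(vma, start, end);
}

With a fence inside set_pte_at(), the loop above would execute one SFENCE.VMA
per page; with batching it executes exactly one flush for the whole range.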
