Date: Wed, 21 Jun 2023 14:15:26 +0800
Subject: Re: [PATCH] riscv: mm: try VMA lock-based page fault handling first
From: Kefeng Wang <>
On 2023/5/24 0:59, Jisheng Zhang wrote:
> Attempt VMA lock-based page fault handling first, and fall back to the
> existing mmap_lock-based handling if that fails.
>
> A simple run of the ebizzy benchmark on Lichee Pi 4A shows that
> PER_VMA_LOCK can improve the ebizzy benchmark by about 32.68%. In
> theory, the more CPUs, the bigger the improvement, but I don't have any
> HW platform which has more than 4 CPUs.
>
> This is the riscv variant of "x86/mm: try VMA lock-based page fault
> handling first".
>
> Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
> ---
> Any performance numbers are welcome! Especially numbers on HW
> platforms with 8 or more CPUs.
>
>  arch/riscv/Kconfig    |  1 +
>  arch/riscv/mm/fault.c | 33 +++++++++++++++++++++++++++++++++
>  2 files changed, 34 insertions(+)
>
> diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
> index 62e84fee2cfd..b958f67f9a12 100644
> --- a/arch/riscv/Kconfig
> +++ b/arch/riscv/Kconfig
> @@ -42,6 +42,7 @@ config RISCV
>  	select ARCH_SUPPORTS_DEBUG_PAGEALLOC if MMU
>  	select ARCH_SUPPORTS_HUGETLBFS if MMU
>  	select ARCH_SUPPORTS_PAGE_TABLE_CHECK if MMU
> +	select ARCH_SUPPORTS_PER_VMA_LOCK if MMU
No need for the "if MMU" here, see PER_VMA_LOCK:
config PER_VMA_LOCK
	def_bool y
	depends on ARCH_SUPPORTS_PER_VMA_LOCK && MMU && SMP
	help
	  Allow VMA lock-based page fault handling.
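i.e. since PER_VMA_LOCK already depends on MMU, the select can simply be:

	select ARCH_SUPPORTS_PER_VMA_LOCK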
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
>  	select ARCH_USE_MEMTEST
>  	select ARCH_USE_QUEUED_RWLOCKS
>  	select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT if MMU
> diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
> index 8685f85a7474..eccdddf26f4b 100644
> --- a/arch/riscv/mm/fault.c
> +++ b/arch/riscv/mm/fault.c
> @@ -286,6 +286,36 @@ void handle_page_fault(struct pt_regs *regs)
>  		flags |= FAULT_FLAG_WRITE;
>  	else if (cause == EXC_INST_PAGE_FAULT)
>  		flags |= FAULT_FLAG_INSTRUCTION;
> +#ifdef CONFIG_PER_VMA_LOCK
> +	if (!(flags & FAULT_FLAG_USER))
> +		goto lock_mmap;
> +
> +	vma = lock_vma_under_rcu(mm, addr);
> +	if (!vma)
> +		goto lock_mmap;
> +
> +	if (unlikely(access_error(cause, vma))) {
> +		vma_end_read(vma);
> +		goto lock_mmap;
> +	}
> +
> +	fault = handle_mm_fault(vma, addr, flags | FAULT_FLAG_VMA_LOCK, regs);
> +	vma_end_read(vma);
> +
> +	if (!(fault & VM_FAULT_RETRY)) {
> +		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
> +		goto done;
> +	}
> +	count_vm_vma_lock_event(VMA_LOCK_RETRY);
> +
> +	if (fault_signal_pending(fault, regs)) {
> +		if (!user_mode(regs))
> +			no_context(regs, addr);
> +		return;
> +	}
> +lock_mmap:
> +#endif /* CONFIG_PER_VMA_LOCK */
> +
>  retry:
>  	mmap_read_lock(mm);
>  	vma = find_vma(mm, addr);
> @@ -355,6 +385,9 @@ void handle_page_fault(struct pt_regs *regs)
>  
>  	mmap_read_unlock(mm);
>  
> +#ifdef CONFIG_PER_VMA_LOCK
> +done:
> +#endif
>  	if (unlikely(fault & VM_FAULT_ERROR)) {
>  		tsk->thread.bad_cause = cause;
>  		mm_fault_error(regs, addr, fault);
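For anyone collecting the requested numbers, a rough recipe (the ebizzy
flags below are a guess, the original posting doesn't give them; the
vmstat counters need CONFIG_PER_VMA_LOCK_STATS=y):

	# run on kernels built with and without CONFIG_PER_VMA_LOCK
	ebizzy -mTt $(nproc)
	# check how often the VMA-lock fast path succeeded vs. retried
	grep vma_lock /proc/vmstat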