Subject: Re: [PATCH v6 2/2] arm64: support batched/deferred tlb shootdown during page reclamation
On Nov 14, 2022, at 7:14 PM, Yicong Yang <yangyicong@huawei.com> wrote:

> diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
> index 8a497d902c16..5bd78ae55cd4 100644
> --- a/arch/x86/include/asm/tlbflush.h
> +++ b/arch/x86/include/asm/tlbflush.h
> @@ -264,7 +264,8 @@ static inline u64 inc_mm_tlb_gen(struct mm_struct *mm)
> }
>
> static inline void arch_tlbbatch_add_mm(struct arch_tlbflush_unmap_batch *batch,
> - struct mm_struct *mm)
> + struct mm_struct *mm,
> + unsigned long uaddr)

Logic-wise it looks fine. I notice this is at "v6", so this should not be
blocking, but I would note that the name "arch_tlbbatch_add_mm()" no longer
makes much sense once the function also takes an address.
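For context (this is my reading of the arm64 side of this series, so take
the exact body as a sketch), the address is what arm64 actually consumes,
which is what makes the "_add_mm" part of the name misleading:

	static inline void arch_tlbbatch_add_mm(struct arch_tlbflush_unmap_batch *batch,
						struct mm_struct *mm,
						unsigned long uaddr)
	{
		/* arm64 can queue a by-VA invalidation, so uaddr is used directly */
		__flush_tlb_page_nosync(mm, uaddr);
	}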

It could've been something like arch_set_tlb_ubc_flush_pending(), but that's
too long. I'm not very good with naming, but the current name is not great.
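For the sake of illustration, a hypothetical alternative such as
arch_tlbbatch_add_pending() would keep the "tlbbatch" prefix and say that
the call marks a flush as pending, without claiming to add an mm. Roughly,
keeping the current x86 body (a sketch only, not a concrete request):

	static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
						     struct mm_struct *mm,
						     unsigned long uaddr)
	{
		inc_mm_tlb_gen(mm);
		cpumask_or(&batch->cpumask, &batch->cpumask, mm_cpumask(mm));
		/* x86 batches per-mm, so uaddr is unused here; arm64 would use it */
	}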

