Subject: Re: [PATCH v6 2/2] arm64: support batched/deferred tlb shootdown during page reclamation
On 2022/11/16 7:38, Nadav Amit wrote:
> On Nov 14, 2022, at 7:14 PM, Yicong Yang <yangyicong@huawei.com> wrote:
>
>> diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
>> index 8a497d902c16..5bd78ae55cd4 100644
>> --- a/arch/x86/include/asm/tlbflush.h
>> +++ b/arch/x86/include/asm/tlbflush.h
>> @@ -264,7 +264,8 @@ static inline u64 inc_mm_tlb_gen(struct mm_struct *mm)
>> }
>>
>> static inline void arch_tlbbatch_add_mm(struct arch_tlbflush_unmap_batch *batch,
>> - struct mm_struct *mm)
>> + struct mm_struct *mm,
>> + unsigned long uaddr)
>
> Logic-wise it looks fine. I notice the "v6", and it should not be blocking,
> but I would note that the name "arch_tlbbatch_add_mm()" does not make much
> sense once the function also takes an address.
>

OK. The add_mm naming still fits x86, since the address is not used there, but it does not fit arm64.
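
For reference, a rough sketch of how the x86 side might look with the extra parameter simply ignored (assuming the existing inc_mm_tlb_gen()/cpumask_or() body stays as it is today; the body is not part of the quoted hunk):

static inline void arch_tlbbatch_add_mm(struct arch_tlbflush_unmap_batch *batch,
					struct mm_struct *mm,
					unsigned long uaddr)
{
	/* uaddr is unused on x86; the flush stays batched per-mm */
	inc_mm_tlb_gen(mm);
	cpumask_or(&batch->cpumask, &batch->cpumask, mm_cpumask(mm));
}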

> It could’ve been something like arch_set_tlb_ubc_flush_pending() but that’s
> too long. I’m not very good with naming, but the current name is not great.
>

What about arch_tlbbatch_add_pending()? Since x86 is pending the flush operation
while arm64 is pending the synchronization operation, arch_tlbbatch_add_pending()
should make sense for both.
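
If it helps, a rough sketch of what the renamed helper could look like on the arm64
side (purely illustrative; __flush_tlb_page_nosync() here stands in for whatever the
patch uses to issue the per-page nosync TLBI):

static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
					     struct mm_struct *mm,
					     unsigned long uaddr)
{
	/* arm64: issue the TLBI now, defer only the DSB synchronization */
	__flush_tlb_page_nosync(mm, uaddr);
}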

Thanks.
