Date: 2022-06-27
Subject: Re: [PATCH] mm: hugetlb: kill set_huge_swap_pte_at()
From: Qi Zheng <zhengqi.arch@bytedance.com>


On 2022/6/27 14:18, Anshuman Khandual wrote:
>
>
> On 6/26/22 20:27, Qi Zheng wrote:
>> Commit e5251fd43007 ("mm/hugetlb: introduce set_huge_swap_pte_at()
>> helper") added set_huge_swap_pte_at() to handle swap entries on
>> architectures that support hugepages consisting of contiguous ptes.
>> Currently, set_huge_swap_pte_at() is only overridden by arm64.
>>
>> set_huge_swap_pte_at() takes a sz parameter to help determine
>> the number of entries to be updated. But in fact, all hugetlb swap
>> entries contain pfn information, so we can find the corresponding
>> folio through the pfn recorded in the swap entry, and then
>> folio_size() gives us the size from which the number of entries
>> to be updated can be derived.
>>
>> Moreover, users can easily introduce bugs by ignoring the
>> difference between set_huge_swap_pte_at() and set_huge_pte_at().
>> Let's handle swap entries in set_huge_pte_at() and remove
>> set_huge_swap_pte_at(); then we can call set_huge_pte_at()
>> anywhere, which simplifies our coding.
>>
>> Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
>> ---
>>  arch/arm64/include/asm/hugetlb.h |  3 ---
>>  arch/arm64/mm/hugetlbpage.c      | 34 ++++++++++++++++----------------
>>  include/linux/hugetlb.h          | 13 ------------
>>  mm/hugetlb.c                     |  8 +++-----
>>  mm/rmap.c                        | 11 +++--------
>>  5 files changed, 23 insertions(+), 46 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/hugetlb.h b/arch/arm64/include/asm/hugetlb.h
>> index 1fd2846dbefe..d20f5da2d76f 100644
>> --- a/arch/arm64/include/asm/hugetlb.h
>> +++ b/arch/arm64/include/asm/hugetlb.h
>> @@ -46,9 +46,6 @@ extern void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
>>  			   pte_t *ptep, unsigned long sz);
>>  #define __HAVE_ARCH_HUGE_PTEP_GET
>>  extern pte_t huge_ptep_get(pte_t *ptep);
>> -extern void set_huge_swap_pte_at(struct mm_struct *mm, unsigned long addr,
>> -				 pte_t *ptep, pte_t pte, unsigned long sz);
>> -#define set_huge_swap_pte_at set_huge_swap_pte_at
>>
>>  void __init arm64_hugetlb_cma_reserve(void);
>>
>> diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
>> index c9e076683e5d..58b89b9d13e0 100644
>> --- a/arch/arm64/mm/hugetlbpage.c
>> +++ b/arch/arm64/mm/hugetlbpage.c
>> @@ -238,6 +238,13 @@ static void clear_flush(struct mm_struct *mm,
>>  	flush_tlb_range(&vma, saddr, addr);
>>  }
>>
>> +static inline struct folio *hugetlb_swap_entry_to_folio(swp_entry_t entry)
>> +{
>> +	VM_BUG_ON(!is_migration_entry(entry) && !is_hwpoison_entry(entry));
>> +
>> +	return page_folio(pfn_to_page(swp_offset(entry)));
>> +}
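
With this helper in place, the arm64 set_huge_pte_at() can derive the
number of contiguous entries on its own. As context for the discussion
below, a rough sketch of the swap entry path (not the exact hunk from
this patch; num_contig_ptes() is the existing arm64 helper that maps a
size to a contiguous pte count and page size):

	void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
			     pte_t *ptep, pte_t pte)
	{
		size_t pgsize;
		int i, ncontig;

		if (!pte_present(pte)) {
			struct folio *folio;

			/* Find the folio via the pfn stored in the swap entry. */
			folio = hugetlb_swap_entry_to_folio(pte_to_swp_entry(pte));

			/* folio_size() determines how many ptes to fill. */
			ncontig = num_contig_ptes(folio_size(folio), &pgsize);

			for (i = 0; i < ncontig; i++, ptep++)
				set_pte(ptep, pte);
			return;
		}

		/* ... the existing path for present ptes stays unchanged ... */
	}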
>
> Extracting this huge page size from the swap entry is an additional operation
> which will increase the overall cost of set_huge_swap_pte_at(). At present the
> size value is readily available near the set_huge_swap_pte_at() call sites.

Hmm, I think this cost is very small. And replacing
set_huge_swap_pte_at() with transparent swap entry handling in
set_huge_pte_at() helps reduce possible bugs, which is worthwhile.
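
To illustrate the simplification at the call sites: instead of having
to pick the right variant (and pass a size) depending on whether the
pte being installed is a swap entry, a caller such as mm/rmap.c can
always use set_huge_pte_at(). A sketch (variable names are
illustrative, not necessarily the exact ones in the patch):

	/* Before: the caller must know it is installing a swap entry. */
	set_huge_swap_pte_at(mm, address, pvmw.pte, pteval, hsz);

	/* After: one call works for both present ptes and swap entries. */
	set_huge_pte_at(mm, address, pvmw.pte, pteval);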

--
Thanks,
Qi
