Subject: Re: [RESEND PATCH v3] arm64: enable THP_SWAP for arm64


On 7/19/22 06:53, Barry Song wrote:
> On Tue, Jul 19, 2022 at 12:44 PM Huang, Ying <ying.huang@intel.com> wrote:
>>
>> Barry Song <21cnbao@gmail.com> writes:
>>
>>> From: Barry Song <v-songbaohua@oppo.com>
>>>
>>> THP_SWAP has been proven to improve the swap throughput significantly
>>> on x86_64 according to commit bd4c82c22c367e ("mm, THP, swap: delay
>>> splitting THP after swapped out").
>>> As long as arm64 uses a 4K page size, it is quite similar to x86_64
>>> in having 2MB PMD THPs. THP_SWAP is architecture-independent, so
>>> enabling it will benefit arm64 as well.
>>> A corner case is that MTE has an assumption that only base pages
>>> can be swapped. We won't enable THP_SWAP for ARM64 hardware with
>>> MTE support until MTE is reworked to coexist with THP_SWAP.
>>>
>>> A micro-benchmark was written to measure THP swapout throughput,
>>> as below:
>>>
>>> #include <stdio.h>
>>> #include <stdlib.h>
>>> #include <string.h>
>>> #include <sys/mman.h>
>>> #include <sys/time.h>
>>>
>>> unsigned long long tv_to_ms(struct timeval tv)
>>> {
>>>         return tv.tv_sec * 1000 + tv.tv_usec / 1000;
>>> }
>>>
>>> int main(void)
>>> {
>>>         struct timeval tv_b, tv_e;
>>> #define SIZE (400UL * 1024 * 1024)
>>>         void *p = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
>>>                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
>>>         if (p == MAP_FAILED) {
>>>                 perror("fail to get memory");
>>>                 exit(-1);
>>>         }
>>>
>>>         madvise(p, SIZE, MADV_HUGEPAGE);
>>>         memset(p, 0x11, SIZE); /* write to fault the memory in */
>>>
>>>         gettimeofday(&tv_b, NULL);
>>>         madvise(p, SIZE, MADV_PAGEOUT); /* swap the whole range out */
>>>         gettimeofday(&tv_e, NULL);
>>>
>>>         printf("swp out bandwidth: %llu bytes/ms\n",
>>>                SIZE / (tv_to_ms(tv_e) - tv_to_ms(tv_b)));
>>>         return 0;
>>> }
>>>
>>> Testing was done on an RK3568 (64-bit quad-core Cortex-A55) platform,
>>> the ROCK 3A.
>>> thp swp throughput w/o patch: 2734 bytes/ms (mean of 10 tests)
>>> thp swp throughput w/  patch: 3331 bytes/ms (mean of 10 tests)
>>>
>>> Cc: "Huang, Ying" <ying.huang@intel.com>
>>> Cc: Minchan Kim <minchan@kernel.org>
>>> Cc: Johannes Weiner <hannes@cmpxchg.org>
>>> Cc: Hugh Dickins <hughd@google.com>
>>> Cc: Andrea Arcangeli <aarcange@redhat.com>
>>> Cc: Anshuman Khandual <anshuman.khandual@arm.com>
>>> Cc: Steven Price <steven.price@arm.com>
>>> Cc: Yang Shi <shy828301@gmail.com>
>>> Signed-off-by: Barry Song <v-songbaohua@oppo.com>
>>> ---
>>> -v3:
>>> * refine the commit log;
>>> * add a benchmark result;
>>> * refine the macro of arch_thp_swp_supported
>>> Thanks to the comments from Anshuman, Andrew and Steven
>>>
>>> arch/arm64/Kconfig | 1 +
>>> arch/arm64/include/asm/pgtable.h | 6 ++++++
>>> include/linux/huge_mm.h | 12 ++++++++++++
>>> mm/swap_slots.c | 2 +-
>>> 4 files changed, 20 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
>>> index 1652a9800ebe..e1c540e80eec 100644
>>> --- a/arch/arm64/Kconfig
>>> +++ b/arch/arm64/Kconfig
>>> @@ -101,6 +101,7 @@ config ARM64
>>> select ARCH_WANT_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
>>> select ARCH_WANT_LD_ORPHAN_WARN
>>> select ARCH_WANTS_NO_INSTR
>>> + select ARCH_WANTS_THP_SWAP if ARM64_4K_PAGES
>>> select ARCH_HAS_UBSAN_SANITIZE_ALL
>>> select ARM_AMBA
>>> select ARM_ARCH_TIMER
>>> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
>>> index 0b6632f18364..78d6f6014bfb 100644
>>> --- a/arch/arm64/include/asm/pgtable.h
>>> +++ b/arch/arm64/include/asm/pgtable.h
>>> @@ -45,6 +45,12 @@
>>> __flush_tlb_range(vma, addr, end, PUD_SIZE, false, 1)
>>> #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>>>
>>> +static inline bool arch_thp_swp_supported(void)
>>> +{
>>> + return !system_supports_mte();
>>> +}
>>> +#define arch_thp_swp_supported arch_thp_swp_supported
>>> +
>>> /*
>>> * Outside of a few very special situations (e.g. hibernation), we always
>>> * use broadcast TLB invalidation instructions, therefore a spurious page
>>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
>>> index de29821231c9..4ddaf6ad73ef 100644
>>> --- a/include/linux/huge_mm.h
>>> +++ b/include/linux/huge_mm.h
>>> @@ -461,4 +461,16 @@ static inline int split_folio_to_list(struct folio *folio,
>>> return split_huge_page_to_list(&folio->page, list);
>>> }
>>>
>>> +/*
>>> + * archs that select ARCH_WANTS_THP_SWAP but don't support THP_SWP due to
>>> + * limitations in the implementation like arm64 MTE can override this to
>>> + * false
>>> + */
>>> +#ifndef arch_thp_swp_supported
>>> +static inline bool arch_thp_swp_supported(void)
>>> +{
>>> + return true;
>>> +}
>>
>> How about the following?
>>
>> static inline bool arch_wants_thp_swap(void)
>> {
>> return IS_ENABLED(ARCH_WANTS_THP_SWAP);
>> }
>
> This looks good. Then I'll need to change arm64 to:
>
> +static inline bool arch_thp_swp_supported(void)
> +{
> + return IS_ENABLED(ARCH_WANTS_THP_SWAP) && !system_supports_mte();
> +}

Why? CONFIG_THP_SWAP depends on ARCH_WANTS_THP_SWAP, so in
folio_alloc_swap() a true IS_ENABLED(CONFIG_THP_SWAP) already implies
that ARCH_WANTS_THP_SWAP is enabled as well. Hence checking
ARCH_WANTS_THP_SWAP again makes no sense, either in the generic
fallback stub or in the arm64 platform override: without
ARCH_WANTS_THP_SWAP enabled, arch_thp_swp_supported() should never
be called in the first place.
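
For reference, the Kconfig wiring looks roughly like this (a paraphrased
sketch of the dependency chain, not the exact upstream text):

        # mm/Kconfig (sketch): THP_SWAP can only become =y when the
        # architecture has opted in via ARCH_WANTS_THP_SWAP.
        config THP_SWAP
                def_bool y
                depends on TRANSPARENT_HUGEPAGE && ARCH_WANTS_THP_SWAP

        # arch/arm64/Kconfig (this patch): the arm64 opt-in for 4K pages.
        select ARCH_WANTS_THP_SWAP if ARM64_4K_PAGES

So once the IS_ENABLED(CONFIG_THP_SWAP) check in folio_alloc_swap() has
passed, ARCH_WANTS_THP_SWAP is necessarily set, and
arch_thp_swp_supported() only needs to report the MTE restriction.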

>
>>
>> Best Regards,
>> Huang, Ying
>>
>>> +#endif
>>> +
>>> #endif /* _LINUX_HUGE_MM_H */
>>> diff --git a/mm/swap_slots.c b/mm/swap_slots.c
>>> index 2a65a89b5b4d..10b94d64cc25 100644
>>> --- a/mm/swap_slots.c
>>> +++ b/mm/swap_slots.c
>>> @@ -307,7 +307,7 @@ swp_entry_t folio_alloc_swap(struct folio *folio)
>>> entry.val = 0;
>>>
>>> if (folio_test_large(folio)) {
>>> - if (IS_ENABLED(CONFIG_THP_SWAP))
>>> + if (IS_ENABLED(CONFIG_THP_SWAP) && arch_thp_swp_supported())
>>> get_swap_pages(1, &entry, folio_nr_pages(folio));
>>> goto out;
>>> }
>
> Thanks
> Barry
>
