Subject: Re: [mm-unstable PATCH v4 3/9] mm/hugetlb: make pud_huge() and follow_huge_pud() aware of non-present pud entry
From: Miaohe Lin <linmiaohe@huawei.com>
On 2022/7/5 17:04, HORIGUCHI NAOYA(堀口 直也) wrote:
> On Tue, Jul 05, 2022 at 10:46:09AM +0800, Miaohe Lin wrote:
>> On 2022/7/4 9:33, Naoya Horiguchi wrote:
>>> From: Naoya Horiguchi <naoya.horiguchi@nec.com>
>>>
>>> follow_pud_mask() does not support non-present pud entries now. As far as
>>> I tested on an x86_64 server, follow_pud_mask() still simply returns
>>> no_page_table() for a non-present pud entry due to pud_bad(), so no severe
>>> user-visible effect should happen. But generally we should call
>>> follow_huge_pud() for non-present pud entries of 1GB hugetlb pages.
>>>
>>> Update pud_huge() and follow_huge_pud() to handle non-present pud entries.
>>> The changes are similar to previous work on pmd entries: commit e66f17ff7177
>>> ("mm/hugetlb: take page table lock in follow_huge_pmd()") and commit
>>> cbef8478bee5 ("mm/hugetlb: pmd_huge() returns true for non-present hugepage").
>>>
>>> Signed-off-by: Naoya Horiguchi <naoya.horiguchi@nec.com>
>>> ---
>>> v2 -> v3:
>>> - fixed typos in subject and description,
>>> - added comment on pud_huge(),
>>> - added comment about fallback for hwpoisoned entry,
>>> - updated initial check about FOLL_{PIN,GET} flags.
>>> ---
>>>  arch/x86/mm/hugetlbpage.c |  8 +++++++-
>>>  mm/hugetlb.c              | 32 ++++++++++++++++++++++++++++++--
>>>  2 files changed, 37 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/arch/x86/mm/hugetlbpage.c b/arch/x86/mm/hugetlbpage.c
>>> index 509408da0da1..6b3033845c6d 100644
>>> --- a/arch/x86/mm/hugetlbpage.c
>>> +++ b/arch/x86/mm/hugetlbpage.c
>>> @@ -30,9 +30,15 @@ int pmd_huge(pmd_t pmd)
>>>  		(pmd_val(pmd) & (_PAGE_PRESENT|_PAGE_PSE)) != _PAGE_PRESENT;
>>>  }
>>>
>>> +/*
>>> + * pud_huge() returns 1 if @pud is a hugetlb-related entry, that is, a normal
>>> + * hugetlb entry or a non-present (migration or hwpoisoned) hugetlb entry.
>>> + * Otherwise, it returns 0.
>>> + */
>>>  int pud_huge(pud_t pud)
>>>  {
>>> -	return !!(pud_val(pud) & _PAGE_PSE);
>>> +	return !pud_none(pud) &&
>>> +		(pud_val(pud) & (_PAGE_PRESENT|_PAGE_PSE)) != _PAGE_PRESENT;
>>>  }
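
As a side note, the patched check boils down to a small truth table: a none
entry and a present table pointer return 0, while a present PSE mapping and
any non-present, non-none entry (migration or hwpoisoned) return 1. Here is
a standalone userspace sketch of my own, not kernel code: pud_none() is
simplified to a zero test and the swap-style value is made up; only the
_PAGE_PRESENT (bit 0) and _PAGE_PSE (bit 7) positions match x86.

#include <stdio.h>
#include <stdint.h>

#define _PAGE_PRESENT (1ULL << 0)
#define _PAGE_PSE     (1ULL << 7)

/* Mirrors the patched x86 pud_huge(): any non-none entry that is not
 * a plain present table pointer counts as hugetlb-related. */
static int pud_huge_new(uint64_t pud)
{
	return pud != 0 &&	/* simplified pud_none() check */
	       (pud & (_PAGE_PRESENT | _PAGE_PSE)) != _PAGE_PRESENT;
}

int main(void)
{
	struct { const char *desc; uint64_t val; } cases[] = {
		{ "none (empty) entry",             0 },
		{ "present table entry",            _PAGE_PRESENT },
		{ "present 1GB hugepage (PSE set)", _PAGE_PRESENT | _PAGE_PSE },
		{ "non-present swap-style entry",   0xabc000ULL }, /* made up */
	};
	unsigned int i;

	for (i = 0; i < sizeof(cases) / sizeof(cases[0]); i++)
		printf("%-32s -> pud_huge() = %d\n",
		       cases[i].desc, pud_huge_new(cases[i].val));
	return 0;
}

This prints 0, 0, 1, 1 for the four cases, matching the intended semantics.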
>>
>> Question: Is aarch64 supported too? It seems to me that the aarch64 version
>> of pud_huge() naturally matches the requirement.
>
> I think that if pmd_huge() and pud_huge() return true for non-present
> pmd/pud entries, that's OK. Otherwise we need an update to support the
> new feature.
>
> On aarch64, the bits in pte/pmd/pud used by {pmd,pud}_present() and
> {pmd,pud}_huge() do not seem to overlap with the bit range for swap type
> and swap offset, so maybe that's fine. But I recommend testing on
> arm64 if you have access to aarch64 servers.
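
For reference, the arm64 pud_huge() I was looking at (in
arch/arm64/mm/hugetlbpage.c; quoted from memory, so please double-check
against the actual tree) is:

int pud_huge(pud_t pud)
{
#ifndef __PAGETABLE_PMD_FOLDED
	return pud_val(pud) && !(pud_val(pud) & PUD_TABLE_BIT);
#else
	return 0;
#endif
}

If I read the arm64 swap encoding right (bits 0-1 must be zero, with swap
type and offset stored in higher bits), a non-none migration or hwpoisoned
entry has PUD_TABLE_BIT (bit 1) clear, so pud_huge() already returns 1 for
it. That is consistent with the reasoning above, though a run on real arm64
hardware would still be the safe confirmation.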

I see. This series is intended to enable 1GB hugepage support on x86, and if
someone wants to use it on other arches, it's better to test there first. ;)

Thanks.

>
>>
>> Anyway, this patch looks good to me.
>>
>> Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
>
> Thank you for reviewing.
>
> - Naoya Horiguchi
>
