Subject: Re: [PATCH v1 5/5] mm, hwpoison: enable memory error handling on 1GB hugepage
On 2022/6/9 16:45, HORIGUCHI NAOYA(堀口 直也) wrote:
> On Wed, Jun 08, 2022 at 08:57:24PM +0800, Miaohe Lin wrote:
> ...
>>>
>>> I think that most page table walkers for the user address space should
>>> first check is_vm_hugetlb_page() and call the hugetlb-specific walking
>>> code for vmas with VM_HUGETLB.
>>> copy_page_range() is a good example: it calls copy_hugetlb_page_range()
>>> for vmas with VM_HUGETLB, and that function should support hwpoison
>>> entries. But I feel that I need to do some testing for confirmation.
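
(Just to make the dispatch pattern above concrete, a rough sketch; the
signatures are simplified and the normal walk is elided, so this is not
the exact upstream code:)

	int copy_page_range(struct vm_area_struct *dst_vma,
			    struct vm_area_struct *src_vma)
	{
		if (is_vm_hugetlb_page(src_vma))
			/* hugetlb-specific walker: this is where non-present
			 * (e.g. hwpoison) entries have to be understood */
			return copy_hugetlb_page_range(dst_vma->vm_mm,
						       src_vma->vm_mm,
						       dst_vma, src_vma);

		/* ... normal pgd/p4d/pud/pmd/pte copy loop ... */
		return 0;
	}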
>>
>> Sorry, I missed that it should be called from the hugetlb variants.
>>
>>>
>>> And I'm not sure that all the others are prepared for non-present
>>> pud mappings, so I'll need some code inspection and testing for each.
>>
>> I browsed the code again, and there still might be some problematic code paths:
>>
>> 1. In follow_pud_mask(), a !pud_present() pud will mostly fall through
>> to follow_pmd_mask(). This can be reached for a hugetlb page. (Note that
>> gup_pud_range() was fixed in 15494520b776 ("mm: fix gup_pud_range").)
>>
>> 2. Even in huge_pte_alloc(), pud_offset() will be called via pud_alloc(),
>> so pudp will be an invalid pointer, and it will be dereferenced later.
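
(The common theme in both paths is a non-present pud entry: a hwpoisoned
1GB hugepage leaves a swap entry at pud level, which is neither pud_none()
nor pud_present(), but the walkers treat it as a pointer to a pmd table.
A check along these lines would be needed before descending; the helper
below is made up for illustration and is not an existing kernel API:)

	/*
	 * Hypothetical helper, for illustration only: true for a
	 * non-present but non-empty pud, i.e. a migration or hwpoison
	 * swap entry left behind by a 1GB hugepage.
	 */
	static inline bool pud_is_swap_entry(pud_t pud)
	{
		return !pud_none(pud) && !pud_present(pud);
	}

A walker like follow_pud_mask() would then bail out (e.g. via
no_page_table()) for such an entry instead of calling follow_pmd_mask()
on it.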
>
> Yes, these paths need to support non-present pud entries, so I'll
> update/add the patches. It seems that I did similar work for pmd a few
> years ago (cbef8478bee5 ("mm/hugetlb: pmd_huge() returns true for
> non-present hugepage")).

Yes, that should be similar work. Thanks for your hard work. :)
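
For reference, as far as I can see that pmd-side commit made x86's
pmd_huge() treat a non-present (but non-none) entry as huge, so that
generic code keeps routing such entries to the hugetlb paths; roughly:

	/* arch/x86/mm/hugetlbpage.c, simplified recollection */
	int pmd_huge(pmd_t pmd)
	{
		return !pmd_none(pmd) &&
		       (pmd_val(pmd) & (_PAGE_PRESENT | _PAGE_PSE)) != _PAGE_PRESENT;
	}

so I guess the pud side will want an analogous pud_huge() change (plus
the walker updates above) so that a hwpoisoned 1GB mapping, whose entry
has _PAGE_PRESENT clear, is still recognized as hugetlb. (Just my
reading, to be confirmed against the actual patches.)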

>
> Thanks,
> Naoya Horiguchi
>
