Date: Mon, 5 Dec 2022 15:52:51 -0800
Subject: Re: [PATCH 08/10] mm/hugetlb: Make walk_hugetlb_range() safe to pmd unshare
From: John Hubbard <>
On 12/5/22 15:33, Mike Kravetz wrote:
> On 11/29/22 14:35, Peter Xu wrote:
>> Since walk_hugetlb_range() walks the pgtable, it needs the vma lock
>> to make sure the pgtable page will not be freed concurrently.
>>
>> Signed-off-by: Peter Xu <peterx@redhat.com>
>> ---
>>  mm/pagewalk.c | 2 ++
>>  1 file changed, 2 insertions(+)
>>
>> diff --git a/mm/pagewalk.c b/mm/pagewalk.c
>> index 7f1c9b274906..d98564a7be57 100644
>> --- a/mm/pagewalk.c
>> +++ b/mm/pagewalk.c
>> @@ -302,6 +302,7 @@ static int walk_hugetlb_range(unsigned long addr, unsigned long end,
>>  	const struct mm_walk_ops *ops = walk->ops;
>>  	int err = 0;
>>
>> +	hugetlb_vma_lock_read(vma);
>>  	do {
>>  		next = hugetlb_entry_end(h, addr, end);
>>  		pte = huge_pte_offset(walk->mm, addr & hmask, sz);
>
> For each found pte, we will be calling mm_walk_ops->hugetlb_entry() with
> the vma_lock held.  I looked into the various hugetlb_entry routines, and
> I am not sure about hmm_vma_walk_hugetlb_entry.  It seems like it could
> possibly call hmm_vma_fault -> handle_mm_fault -> hugetlb_fault.  If this
> can happen, then we may have an issue as hugetlb_fault will also need to
> acquire the vma_lock in read mode.
>
> I do not know the hmm code well enough to know if this may be an actual
> issue?
Oh, this sounds like a serious concern. If we add a new lock and hold it across callbacks that also need to take it, that's not going to work out, right?
And yes, hmm_range_fault() and related things do a good job of revealing this kind of deadlock. :)
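
To make the pattern concrete, here is a minimal userspace sketch, purely an
analogy rather than kernel code: vma_lock, walk_range() and fault_handler()
below are made-up stand-ins, and the real vma lock is a rwsem where a nested
reader can deadlock behind a queued writer. An error-checking pthread mutex
just makes the re-acquisition visible deterministically instead of hanging:

/*
 * Hypothetical userspace analogy: a "walker" holds a lock across a
 * callback, and the callback tries to take the same lock, mimicking
 * walk_hugetlb_range() -> hugetlb_entry() -> hugetlb_fault().
 */
#include <pthread.h>
#include <stdio.h>
#include <string.h>

static pthread_mutex_t vma_lock;        /* stand-in for the hugetlb vma lock */

static void fault_handler(void)         /* stand-in for hugetlb_fault() */
{
	int ret = pthread_mutex_lock(&vma_lock);

	if (ret)
		printf("fault path: lock failed: %s\n", strerror(ret));
	else
		pthread_mutex_unlock(&vma_lock);
}

static void walk_range(void)            /* stand-in for walk_hugetlb_range() */
{
	pthread_mutex_lock(&vma_lock);  /* like hugetlb_vma_lock_read() */
	fault_handler();                /* callback needs the same lock */
	pthread_mutex_unlock(&vma_lock);
}

int main(void)
{
	pthread_mutexattr_t attr;

	pthread_mutexattr_init(&attr);
	pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
	pthread_mutex_init(&vma_lock, &attr);

	walk_range();
	return 0;
}

Built with cc -pthread, the inner lock attempt fails with EDEADLK ("Resource
deadlock avoided") rather than hanging, which is essentially the re-entry
Mike is worried about on the hmm_vma_fault -> hugetlb_fault path.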
thanks,
-- 
John Hubbard
NVIDIA