Date: Wed, 7 Dec 2022 15:21:56 -0800
Subject: Re: [PATCH v2 06/10] mm/hugetlb: Make hugetlb_follow_page_mask() safe to pmd unshare
From: John Hubbard <>
On 12/7/22 12:30, Peter Xu wrote:
> Since hugetlb_follow_page_mask() walks the pgtable, it needs the vma lock
> to make sure the pgtable page will not be freed concurrently.
>
> Acked-by: David Hildenbrand <david@redhat.com>
> Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>  mm/hugetlb.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
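The locking pattern looks right to me: the vma lock is taken before the
huge_pte_offset() walk and released on every return path, so a concurrent
pmd unshare can no longer free the pgtable page mid-walk. For anyone
following along, the resulting shape of the function is roughly this
(just a sketch with the pte-examination details elided, not the verbatim
code):

struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
				      unsigned long haddr, unsigned int flags)
{
	struct hstate *h = hstate_vma(vma);
	struct mm_struct *mm = vma->vm_mm;
	struct page *page = NULL;
	spinlock_t *ptl;
	pte_t *pte;

	/*
	 * Hold the vma lock across the whole pgtable walk, so that a
	 * concurrent pmd unshare cannot free the pgtable page under us.
	 */
	hugetlb_vma_lock_read(vma);
	pte = huge_pte_offset(mm, haddr, huge_page_size(h));
	if (!pte)
		goto out_unlock;

	ptl = huge_pte_lock(h, mm, pte);
	/* ... inspect huge_ptep_get(pte) and grab the page if present ... */
	spin_unlock(ptl);
out_unlock:
	/* Both exit paths drop the vma lock. */
	hugetlb_vma_unlock_read(vma);
	return page;
}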
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
thanks,
--
John Hubbard
NVIDIA
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 49f73677a418..3fbbd599d015 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -6226,9 +6226,10 @@ struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
>  	if (WARN_ON_ONCE(flags & FOLL_PIN))
>  		return NULL;
>  
> +	hugetlb_vma_lock_read(vma);
>  	pte = huge_pte_offset(mm, haddr, huge_page_size(h));
>  	if (!pte)
> -		return NULL;
> +		goto out_unlock;
>  
>  	ptl = huge_pte_lock(h, mm, pte);
>  	entry = huge_ptep_get(pte);
> @@ -6251,6 +6252,8 @@ struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
>  	}
>  out:
>  	spin_unlock(ptl);
> +out_unlock:
> +	hugetlb_vma_unlock_read(vma);
>  	return page;
>  }
>