Date: Wed, 7 Dec 2022 16:16:11 -0800
Subject: Re: [PATCH v2 10/10] mm/hugetlb: Document why page_vma_mapped_walk() is safe to walk
From: John Hubbard <>
On 12/7/22 12:31, Peter Xu wrote:
> Taking vma lock here is not needed for now because all potential hugetlb
> walkers here should have i_mmap_rwsem held.  Document the fact.
>
> Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>  mm/page_vma_mapped.c | 10 ++++++++--
>  1 file changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
> index e97b2e23bd28..2e59a0419d22 100644
> --- a/mm/page_vma_mapped.c
> +++ b/mm/page_vma_mapped.c
> @@ -168,8 +168,14 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
>  		/* The only possible mapping was handled on last iteration */
>  		if (pvmw->pte)
>  			return not_found(pvmw);
> -
> -		/* when pud is not present, pte will be NULL */
> +		/*
> +		 * NOTE: we don't need explicit lock here to walk the
> +		 * hugetlb pgtable because either (1) potential callers of
> +		 * hugetlb pvmw currently holds i_mmap_rwsem, or (2) the
> +		 * caller will not walk a hugetlb vma (e.g. ksm or uprobe).
> +		 * When one day this rule breaks, one will get a warning
> +		 * in hugetlb_walk(), and then we'll figure out what to do.
> +		 */

Confused. Is this documentation actually intended to refer to hugetlb_walk() itself, or just this call site? If the former, then let's move it over to be right before hugetlb_walk().
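
(For context, my rough reading of hugetlb_walk() from earlier in this series
is the sketch below. The exact names, config guards, and lockdep checks may
differ from what finally lands, so treat it as approximate rather than the
actual helper:)

static inline pte_t *
hugetlb_walk(struct vm_area_struct *vma, unsigned long addr, unsigned long sz)
{
#if defined(CONFIG_HUGETLB_PAGE) && \
	defined(CONFIG_ARCH_WANT_HUGE_PMD_SHARE) && defined(CONFIG_LOCKDEP)
	struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;

	/*
	 * Walking a shareable hugetlb vma with neither the hugetlb vma
	 * lock nor i_mmap_rwsem held triggers a one-time warning; that is
	 * the "warning in hugetlb_walk()" the new comment refers to.
	 */
	if (__vma_shareable_lock(vma))
		WARN_ON_ONCE(!lockdep_is_held(&vma_lock->rw_sema) &&
			     !lockdep_is_held(
				&vma->vm_file->f_mapping->i_mmap_rwsem));
#endif
	return huge_pte_offset(vma->vm_mm, addr, sz);
}
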
>  		pvmw->pte = hugetlb_walk(vma, pvmw->address, size);
>  		if (!pvmw->pte)
>  			return false;

thanks,
--
John Hubbard
NVIDIA