From: Peter Xu <peterx@redhat.com>
Subject: [PATCH v2 10/10] mm/hugetlb: Document why page_vma_mapped_walk() is safe to walk
Date: Wed, 7 Dec 2022
Taking the vma lock here is not needed for now, because all potential
hugetlb walkers here should already hold i_mmap_rwsem. Document that fact.
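
For context, the warning mentioned in the new comment below comes from the
lockdep assertion in hugetlb_walk(), introduced earlier in this series. A
simplified sketch of that assertion, reconstructed from the series rather
than quoted verbatim (the exact config guards and pmd-sharing checks may
differ):

  static inline pte_t *
  hugetlb_walk(struct vm_area_struct *vma, unsigned long addr,
               unsigned long sz)
  {
  #if defined(CONFIG_HUGETLB_PAGE) && defined(CONFIG_LOCKDEP)
          struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;

          /*
           * Complain if neither the hugetlb vma lock nor i_mmap_rwsem
           * is held while walking the hugetlb pgtable.
           */
          WARN_ON_ONCE(!lockdep_is_held(&vma_lock->rw_sema) &&
                       !lockdep_is_held(
                           &vma->vm_file->f_mapping->i_mmap_rwsem));
  #endif
          return huge_pte_offset(vma->vm_mm, addr, sz);
  }

Either lock being held satisfies the check; this patch documents why the
pvmw path relies on the i_mmap_rwsem side.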

Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
mm/page_vma_mapped.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index e97b2e23bd28..2e59a0419d22 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -168,8 +168,14 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 		/* The only possible mapping was handled on last iteration */
 		if (pvmw->pte)
 			return not_found(pvmw);
-
-		/* when pud is not present, pte will be NULL */
+		/*
+		 * NOTE: we don't need an explicit lock here to walk the
+		 * hugetlb pgtable because either (1) potential callers of
+		 * hugetlb pvmw currently hold i_mmap_rwsem, or (2) the
+		 * caller will not walk a hugetlb vma (e.g. ksm or uprobe).
+		 * If one day this rule breaks, one will get a warning
+		 * in hugetlb_walk(), and then we'll figure out what to do.
+		 */
 		pvmw->pte = hugetlb_walk(vma, pvmw->address, size);
 		if (!pvmw->pte)
 			return false;
--
2.37.3
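
For reference on point (1) of the new comment: hugetlb vmas are
file-backed, so rmap-based callers reach page_vma_mapped_walk() through
the file rmap walk, which holds i_mmap_rwsem for read across the whole
walk. A trimmed sketch of that path, modeled on rmap_walk_file() in
mm/rmap.c (error paths and the done/locked callbacks dropped; the address
computation here is illustrative):

  static void rmap_walk_file_sketch(struct folio *folio,
                                    struct rmap_walk_control *rwc)
  {
          struct address_space *mapping = folio_mapping(folio);
          pgoff_t pgoff_start = folio_pgoff(folio);
          pgoff_t pgoff_end = pgoff_start + folio_nr_pages(folio) - 1;
          struct vm_area_struct *vma;

          /* i_mmap_rwsem taken for read; held across every rmap_one(). */
          i_mmap_lock_read(mapping);
          vma_interval_tree_foreach(vma, &mapping->i_mmap,
                                    pgoff_start, pgoff_end) {
                  /* Illustrative: linear address of the folio in this vma. */
                  unsigned long address = vma->vm_start +
                          ((pgoff_start - vma->vm_pgoff) << PAGE_SHIFT);

                  /*
                   * rmap_one() (e.g. try_to_unmap_one) ends up in
                   * page_vma_mapped_walk(), so the hugetlb pgtable walk
                   * runs with i_mmap_rwsem held.
                   */
                  if (!rwc->rmap_one(folio, vma, address, rwc->arg))
                          break;
          }
          i_mmap_unlock_read(mapping);
  }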