Date:    Wed, 30 Nov 2022 17:39:50 +0100
Subject: Re: [PATCH 09/10] mm/hugetlb: Make page_vma_mapped_walk() safe to pmd unshare
From:    David Hildenbrand <>
On 30.11.22 17:32, Peter Xu wrote:
> On Wed, Nov 30, 2022 at 05:18:45PM +0100, David Hildenbrand wrote:
>> On 29.11.22 20:35, Peter Xu wrote:
>>> Since page_vma_mapped_walk() walks the pgtable, it needs the vma lock
>>> to make sure the pgtable page will not be freed concurrently.
>>>
>>> Signed-off-by: Peter Xu <peterx@redhat.com>
>>> ---
>>>   include/linux/rmap.h | 4 ++++
>>>   mm/page_vma_mapped.c | 5 ++++-
>>>   2 files changed, 8 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
>>> index bd3504d11b15..a50d18bb86aa 100644
>>> --- a/include/linux/rmap.h
>>> +++ b/include/linux/rmap.h
>>> @@ -13,6 +13,7 @@
>>>   #include <linux/highmem.h>
>>>   #include <linux/pagemap.h>
>>>   #include <linux/memremap.h>
>>> +#include <linux/hugetlb.h>
>>>
>>>   /*
>>>    * The anon_vma heads a list of private "related" vmas, to scan if
>>> @@ -408,6 +409,9 @@ static inline void page_vma_mapped_walk_done(struct page_vma_mapped_walk *pvmw)
>>>   		pte_unmap(pvmw->pte);
>>>   	if (pvmw->ptl)
>>>   		spin_unlock(pvmw->ptl);
>>> +	/* This needs to be after unlock of the spinlock */
>>> +	if (is_vm_hugetlb_page(pvmw->vma))
>>> +		hugetlb_vma_unlock_read(pvmw->vma);
>>>   }
>>>
>>>   bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw);
>>> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
>>> index 93e13fc17d3c..f94ec78b54ff 100644
>>> --- a/mm/page_vma_mapped.c
>>> +++ b/mm/page_vma_mapped.c
>>> @@ -169,10 +169,13 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
>>>   		if (pvmw->pte)
>>>   			return not_found(pvmw);
>>>
>>> +		hugetlb_vma_lock_read(vma);
>>>   		/* when pud is not present, pte will be NULL */
>>>   		pvmw->pte = huge_pte_offset(mm, pvmw->address, size);
>>> -		if (!pvmw->pte)
>>> +		if (!pvmw->pte) {
>>> +			hugetlb_vma_unlock_read(vma);
>>>   			return false;
>>> +		}
>>>
>>>   		pvmw->ptl = huge_pte_lock(hstate, mm, pvmw->pte);
>>>   		if (!check_pte(pvmw))
>>
>> Looking at code like mm/damon/paddr.c:__damon_pa_mkold() and reading the
>> doc of page_vma_mapped_walk(), this might be broken.
>>
>> Can't we get page_vma_mapped_walk() called multiple times?
>
> Yes it normally can, but not for hugetlbfs?  Feel free to check:
>
> 	if (unlikely(is_vm_hugetlb_page(vma))) {
> 		...
> 		/* The only possible mapping was handled on last iteration */
> 		if (pvmw->pte)
> 			return not_found(pvmw);
> 	}
Ah, I see, thanks.
Acked-by: David Hildenbrand <david@redhat.com>
--
Thanks,

David / dhildenb
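For readers following the thread: the caller pattern David refers to looks
roughly like the sketch below. It is condensed from
mm/damon/paddr.c:__damon_pa_mkold() as it stood around this thread (v6.1-rc);
the kernel identifiers (DEFINE_FOLIO_VMA_WALK, page_vma_mapped_walk(),
damon_ptep_mkold(), damon_pmdp_mkold()) are real, but the function name and
body here are simplified for illustration and are not the verbatim source.

	/*
	 * Condensed sketch of a page_vma_mapped_walk() caller, modeled on
	 * mm/damon/paddr.c:__damon_pa_mkold().  Illustrative only.
	 */
	#include <linux/mm.h>
	#include <linux/rmap.h>

	static bool damon_pa_mkold_one(struct folio *folio,
				       struct vm_area_struct *vma,
				       unsigned long addr, void *arg)
	{
		DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, addr, 0);

		/*
		 * Each successful call reports one mapping of the folio in
		 * this VMA.  For hugetlb VMAs only one mapping is possible,
		 * so the second call hits the "if (pvmw->pte) return
		 * not_found(pvmw);" branch quoted above, which unlocks via
		 * the internal page_vma_mapped_walk_done() and ends the
		 * loop after a single iteration.
		 */
		while (page_vma_mapped_walk(&pvmw)) {
			addr = pvmw.address;
			if (pvmw.pte)
				damon_ptep_mkold(pvmw.pte, vma->vm_mm, addr);
			else
				damon_pmdp_mkold(pvmw.pmd, vma->vm_mm, addr);
		}
		return true;
	}

Because of that single-iteration guarantee, the hugetlb vma read lock taken
inside the first page_vma_mapped_walk() call is always released, either on
the !pte early-return path inside the walk or by page_vma_mapped_walk_done()
once the walk reports no further mappings.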