From:    Ryan Roberts <>
Subject: [PATCH v1] mm: Fix use-after-free for MMU_GATHER_NO_GATHER
Date:    Thu, 27 Jul 2023 12:02:24 +0100
The recent change to batch-zap anonymous ptes did not take into account that for platforms where MMU_GATHER_NO_GATHER is enabled (e.g. s390), __tlb_remove_page() drops a reference to the page. This means that the folio reference count can drop to zero while still in use (i.e. before folio_remove_rmap_range() is called). This does not happen on other platforms because the actual page freeing is deferred.
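For illustration, here is a condensed sketch of the pre-fix batched loop; it is simplified from the series (surrounding details elided), but the names match the diff below:

    for (i = 0; i < nr_pages;) {
            ptent = ptep_get_and_clear_full(mm, addr, pte, tlb->fullmm);
            tlb_remove_tlb_entry(tlb, pte, addr);
            ...
            /*
             * With MMU_GATHER_NO_GATHER, this drops a folio reference
             * immediately instead of deferring the free to
             * tlb_finish_mmu(), so the last ref can vanish here.
             */
            full = __tlb_remove_page(tlb, page, 0);
            page++;
            i++;
            if (unlikely(full))
                    break;
    }
    /* Potential use-after-free: the folio may already be gone. */
    folio_remove_rmap_range(folio, page - i, i, vma);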
Solve this by appropriately getting/putting the folio to guarantee it does not get freed early.
Given the new need to get/put the folio in the batch path, let's stick to the non-batched path if the folio is not large. In that case, batching is not helpful, since the batch size is 1.
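In sketch form, the fix pins the folio across the batch and gates entry to the batched path on folio size (condensed from the diff below, with unrelated details elided):

    /* try_zap_anon_pte_range(): */
    folio_get(folio);       /* hold a ref across the batch */
    for (i = 0; i < nr_pages;) {
            ...             /* zap ptes; __tlb_remove_page() may drop page refs */
    }
    folio_remove_rmap_range(folio, page - i, i, vma);
    folio_put(folio);       /* the folio may be freed from this point on */

    /* zap_pte_range() caller: */
    if (folio_test_large(folio)) {
            /* batched path as above */
    }
    /* small folios fall through to the existing per-pte path */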
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
Fixes: 904d9713b3b0 ("mm: batch-zap large anonymous folio PTE mappings")
Reported-by: Nathan Chancellor <nathan@kernel.org>
Link: https://lore.kernel.org/linux-mm/20230726161942.GA1123863@dev-arch.thelio-3990X/
---
Hi Andrew,
This fixes patch 3 in the series at [1], which is currently in mm-unstable. I'm not sure whether you want to take the fix or whether I should re-post the entire series?
Thanks, Ryan
 mm/memory.c | 42 +++++++++++++++++++++++++++---------------
 1 file changed, 27 insertions(+), 15 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index 2130bad76eb1..808f6408a570 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1456,6 +1456,9 @@ static unsigned long try_zap_anon_pte_range(struct mmu_gather *tlb,
 	bool full;
 	int i;
 
+	/* __tlb_remove_page drops a ref; prevent it going to 0 while in use. */
+	folio_get(folio);
+
 	for (i = 0; i < nr_pages;) {
 		ptent = ptep_get_and_clear_full(mm, addr, pte, tlb->fullmm);
 		tlb_remove_tlb_entry(tlb, pte, addr);
@@ -1476,6 +1479,8 @@ static unsigned long try_zap_anon_pte_range(struct mmu_gather *tlb,
 
 	folio_remove_rmap_range(folio, page - i, i, vma);
 
+	folio_put(folio);
+
 	return i;
 }
 
@@ -1526,26 +1531,33 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 			 */
 			if (page && PageAnon(page)) {
 				struct folio *folio = page_folio(page);
-				int nr_pages_req, nr_pages;
 
-				nr_pages_req = folio_nr_pages_cont_mapped(
-						folio, page, pte, addr, end);
+				if (folio_test_large(folio)) {
+					int nr_pages_req, nr_pages;
+					int counter = mm_counter(page);
 
-				nr_pages = try_zap_anon_pte_range(tlb, vma,
-						folio, page, pte, addr,
-						nr_pages_req, details);
+					nr_pages_req = folio_nr_pages_cont_mapped(
+							folio, page, pte, addr,
+							end);
 
-				rss[mm_counter(page)] -= nr_pages;
-				nr_pages--;
-				pte += nr_pages;
-				addr += nr_pages << PAGE_SHIFT;
+					/* folio may be freed on return. */
+					nr_pages = try_zap_anon_pte_range(
+							tlb, vma, folio, page,
+							pte, addr, nr_pages_req,
+							details);
 
-				if (unlikely(nr_pages < nr_pages_req)) {
-					force_flush = 1;
-					addr += PAGE_SIZE;
-					break;
+					rss[counter] -= nr_pages;
+					nr_pages--;
+					pte += nr_pages;
+					addr += nr_pages << PAGE_SHIFT;
+
+					if (unlikely(nr_pages < nr_pages_req)) {
+						force_flush = 1;
+						addr += PAGE_SIZE;
+						break;
+					}
+					continue;
 				}
-				continue;
 			}
 
 			ptent = ptep_get_and_clear_full(mm, addr, pte,
-- 
2.25.1