From: Peter Xu <peterx@redhat.com>
Subject: Re: [PATCH 3/5] userfaultfd: Replace lru_cache functions with folio_add functions
On Tue, Nov 01, 2022 at 06:31:26PM +0000, Matthew Wilcox wrote:
> On Tue, Nov 01, 2022 at 10:53:24AM -0700, Vishal Moola (Oracle) wrote:
> > Replaces lru_cache_add() and lru_cache_add_inactive_or_unevictable()
> > with folio_add_lru() and folio_add_lru_vma(). This is in preparation for
> > the removal of lru_cache_add().
>
> Ummmmm. Reviewing this patch reveals a bug (not introduced by your
> patch). Look:
>
> mfill_atomic_install_pte:
> 	bool page_in_cache = page->mapping;
>
> mcontinue_atomic_pte:
> 	ret = shmem_get_folio(inode, pgoff, &folio, SGP_NOALLOC);
> 	...
> 	page = folio_file_page(folio, pgoff);
> 	ret = mfill_atomic_install_pte(dst_mm, dst_pmd, dst_vma, dst_addr,
> 				       page, false, wp_copy);
>
> That says pretty plainly that mfill_atomic_install_pte() can be passed
> a tail page from shmem, and if it is ...
>
> 	if (page_in_cache) {
> 		...
> 	} else {
> 		page_add_new_anon_rmap(page, dst_vma, dst_addr);
> 		lru_cache_add_inactive_or_unevictable(page, dst_vma);
> 	}
>
> it'll get put on the rmap as an anon page!

Hmm yeah.. thanks Matthew!

Does the patch attached look reasonable to you?

Copying Axel too.
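
For reference, here is roughly why page_mapping() is the safer check -- this
is a paraphrase of the page_folio()/folio_mapping() helpers from around this
time, not the exact upstream code.  It first resolves a possible tail page to
its folio, then returns NULL for slab and anon pages (and the swap address
space for swap cache pages), so a shmem THP tail page is still reported as
page cache while a mapped anonymous page is not:

	/* Paraphrased sketch, not verbatim kernel code. */
	struct address_space *page_mapping(struct page *page)
	{
		/* page_folio() resolves a tail page to its head/folio */
		return folio_mapping(page_folio(page));
	}

	struct address_space *folio_mapping(struct folio *folio)
	{
		struct address_space *mapping;

		if (unlikely(folio_test_slab(folio)))
			return NULL;

		if (unlikely(folio_test_swapcache(folio)))
			return swap_address_space(folio_swap_entry(folio));

		mapping = folio->mapping;
		if ((unsigned long)mapping & PAGE_MAPPING_ANON)
			return NULL;	/* anon: ->mapping holds the anon_vma */

		return (void *)((unsigned long)mapping & ~PAGE_MAPPING_FLAGS);
	}

A bare page->mapping read, by contrast, looks at the tail page's own struct
page rather than the head's, so it does not reliably say whether the compound
page is in the page cache.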

>
> > Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
> > ---
> > mm/userfaultfd.c | 6 ++++--
> > 1 file changed, 4 insertions(+), 2 deletions(-)
> >
> > diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
> > index e24e8a47ce8a..2560973b00d8 100644
> > --- a/mm/userfaultfd.c
> > +++ b/mm/userfaultfd.c
> > @@ -66,6 +66,7 @@ int mfill_atomic_install_pte(struct mm_struct *dst_mm, pmd_t *dst_pmd,
> >  	bool vm_shared = dst_vma->vm_flags & VM_SHARED;
> >  	bool page_in_cache = page->mapping;
> >  	spinlock_t *ptl;
> > +	struct folio *folio;
> >  	struct inode *inode;
> >  	pgoff_t offset, max_off;
> >  
> > @@ -113,14 +114,15 @@ int mfill_atomic_install_pte(struct mm_struct *dst_mm, pmd_t *dst_pmd,
> >  	if (!pte_none_mostly(*dst_pte))
> >  		goto out_unlock;
> >  
> > +	folio = page_folio(page);
> >  	if (page_in_cache) {
> >  		/* Usually, cache pages are already added to LRU */
> >  		if (newly_allocated)
> > -			lru_cache_add(page);
> > +			folio_add_lru(folio);
> >  		page_add_file_rmap(page, dst_vma, false);
> >  	} else {
> >  		page_add_new_anon_rmap(page, dst_vma, dst_addr);
> > -		lru_cache_add_inactive_or_unevictable(page, dst_vma);
> > +		folio_add_lru_vma(folio, dst_vma);
> >  	}
> >  
> >  	/*
> > --
> > 2.38.1
> >
> >
>

--
Peter Xu
From 4eea0908b4890745bedd931283c1af91f509d039 Mon Sep 17 00:00:00 2001
From: Peter Xu <peterx@redhat.com>
Date: Wed, 2 Nov 2022 14:41:52 -0400
Subject: [PATCH] mm/shmem: Use page_mapping() to detect page cache for uffd continue
Content-type: text/plain

mfill_atomic_install_pte() checks page->mapping to detect whether the page
is in the page cache. However, as pointed out by Matthew, in the case of
uffd minor mode with UFFDIO_CONTINUE the page can be a tail page rather
than always the head, which means we could wrongly install a pte for a
shmem THP tail page while assuming it is an anonymous page.

The check is not all that clear even for anonymous pages, since anonymous
pages normally also have page->mapping set up, pointing at the anon_vma.
It is safe here only because the one caller of mfill_atomic_install_pte()
that passes an anonymous page, mcopy_atomic_pte(), always passes in a
newly allocated page whose page->mapping is not yet set up. However, that
is not extremely obvious either.

For either of the above, use page_mapping() instead.

This should be stable material.

Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Axel Rasmussen <axelrasmussen@google.com>
Cc: stable@vger.kernel.org
Reported-by: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
mm/userfaultfd.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 3d0fef3980b3..650ab6cfd5f4 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -64,7 +64,7 @@ int mfill_atomic_install_pte(struct mm_struct *dst_mm, pmd_t *dst_pmd,
 	pte_t _dst_pte, *dst_pte;
 	bool writable = dst_vma->vm_flags & VM_WRITE;
 	bool vm_shared = dst_vma->vm_flags & VM_SHARED;
-	bool page_in_cache = page->mapping;
+	bool page_in_cache = page_mapping(page);
 	spinlock_t *ptl;
 	struct inode *inode;
 	pgoff_t offset, max_off;
--
2.37.3
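
As a footnote on the second paragraph of the commit message: the reason a
mapped anonymous page also has a non-NULL page->mapping is that the anon
rmap code stores the anon_vma pointer there, tagged with PAGE_MAPPING_ANON.
Below is a rough paraphrase (not verbatim, with the exclusive/compound
handling dropped) of __page_set_anon_rmap() in mm/rmap.c from around this
time, which page_add_new_anon_rmap() ends up calling:

	/*
	 * Paraphrased sketch: an anon page's ->mapping is the anon_vma pointer
	 * with PAGE_MAPPING_ANON or'ed in, so "page->mapping != NULL" alone
	 * cannot distinguish anon pages from page cache pages, while
	 * page_mapping() masks the flag off and returns NULL for anon.
	 */
	static void __page_set_anon_rmap(struct page *page,
			struct vm_area_struct *vma, unsigned long address)
	{
		struct anon_vma *anon_vma = vma->anon_vma;

		anon_vma = (void *)anon_vma + PAGE_MAPPING_ANON;
		WRITE_ONCE(page->mapping, (struct address_space *)anon_vma);
		page->index = linear_page_index(vma, address);
	}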