Subject: Re: [PATCH] hugetlbfs: don't delete error page from pagecache
On Wed, Oct 19, 2022 at 11:31 AM Yang Shi <shy828301@gmail.com> wrote:
>
> On Tue, Oct 18, 2022 at 1:01 PM James Houghton <jthoughton@google.com> wrote:
> >
> > This change is very similar to the change that was made for shmem [1],
> > and it solves the same problem but for HugeTLBFS instead.
> >
> > Currently, when poison is found in a HugeTLB page, the page is removed
> > from the page cache. That means that attempting to map or read that
> > hugepage in the future will result in a new hugepage being allocated
> > instead of notifying the user that the page was poisoned. As [1] states,
> > this is effectively memory corruption.
> >
> > The fix is to leave the page in the page cache. If the user attempts to
> > use a poisoned HugeTLB page with a syscall, the syscall will fail with
> > EIO, the same error code that shmem uses. For attempts to map the page,
> > the thread will get a BUS_MCEERR_AR SIGBUS.
> >
> > [1]: commit a76054266661 ("mm: shmem: don't truncate page if memory failure happens")
> >
> > Signed-off-by: James Houghton <jthoughton@google.com>
>
> Thanks for the patch. Yes, we should do the same thing for hugetlbfs.
> When I was working on shmem I did look into hugetlbfs too. But the
> problem is that we actually make the whole hugetlb page unavailable
> even though just one 4K subpage is hwpoisoned. That may be fine for a
> 2M hugetlb page, but it can waste a lot of memory for a 1G hugetlb
> page, particularly on the page fault path.

Right -- it is wasted until a hole is punched or the file is
truncated. Although we're wasting the rest of the hugepage for a
little longer with this patch, I think it's worth it to have correct
behavior.
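
For anyone who wants to reclaim that memory sooner, here's a rough
userspace sketch (untested; the mount point, offset, and 2M hugepage
size are placeholder assumptions) that punches a hole over the
poisoned hugepage:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* Hypothetical hugetlbfs file; adjust the path/offset/size. */
	int fd = open("/mnt/huge/file", O_RDWR);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/*
	 * Punch out the hugepage containing the poisoned range so the
	 * rest of it stops being wasted. Offset and length must be
	 * hugepage-aligned (2M assumed here).
	 */
	if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
		      0, 2UL << 20) < 0)
		perror("fallocate");

	close(fd);
	return 0;
}

IIRC hugetlbfs has supported hole punching via fallocate() since
~v4.3, so this should work on any recent kernel.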

>
> So I discussed this with Mike offline last year, and I was told Google
> was working on PTE-mapped hugetlb pages. That should be able to solve
> the problem. And we'd like to have the high-granularity hugetlb
> mapping support in place first as a prerequisite.
>
> There were some other details, but I can't remember all of them; I'll
> have to refresh my memory by rereading the email discussions...

Yes! I am working on this. :) I will send out a series in the coming
weeks that implements basic support for high-granularity mapping
(HGM). This patch is required for hwpoison semantics to work properly
with high-granularity mapping (and, as the patch states, for shared
HugeTLB mappings generally). With HGM, if we partially map a hugepage
and then find poison, faulting on the still-unmapped bits of it would
otherwise allocate a new hugepage. By keeping the poisoned page in the
page cache, we correctly give userspace a SIGBUS instead. I didn't
mention this in the commit description because I think this patch is
correct on its own.
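
To make the semantics concrete, here's a rough userspace sketch
(untested; the mount point and 2M hugepage size are placeholder
assumptions) of both failure modes: read() over the poisoned range
fails with EIO, and touching a mapping of it raises a BUS_MCEERR_AR
SIGBUS:

#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

static void sigbus_handler(int sig, siginfo_t *info, void *ctx)
{
	(void)sig;
	(void)ctx;
	/* BUS_MCEERR_AR: hardware memory error, action required. */
	if (info->si_code == BUS_MCEERR_AR)
		write(STDERR_FILENO, "got BUS_MCEERR_AR\n", 18);
	_exit(1);
}

int main(void)
{
	struct sigaction sa = { 0 };
	char buf[4096];
	size_t huge_sz = 2UL << 20;	/* assumed hugepage size */
	char *p;
	int fd;

	sa.sa_sigaction = sigbus_handler;
	sa.sa_flags = SA_SIGINFO;
	sigaction(SIGBUS, &sa, NULL);

	/* Hypothetical hugetlbfs file with a poisoned page at offset 0. */
	fd = open("/mnt/huge/file", O_RDONLY);
	if (fd < 0)
		return 1;

	/* With the poisoned page kept in the page cache, read() fails
	 * with EIO (same as shmem). */
	if (read(fd, buf, sizeof(buf)) < 0 && errno == EIO)
		fprintf(stderr, "read: EIO as expected\n");

	/* Faulting on a mapping of the poisoned page delivers SIGBUS. */
	p = mmap(NULL, huge_sz, PROT_READ, MAP_SHARED, fd, 0);
	if (p != MAP_FAILED)
		fprintf(stderr, "first byte: %d\n", p[0]);

	close(fd);
	return 0;
}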

I haven't implemented PAGE_SIZE poisoning of HugeTLB pages yet, but
high-granularity mapping unblocks this work. Hopefully that will be
ready in the coming months. :)

- James Houghton

>
> > ---
> >  fs/hugetlbfs/inode.c | 13 ++++++-------
> >  mm/hugetlb.c         |  4 ++++
> >  mm/memory-failure.c  |  5 ++++-
> > 3 files changed, 14 insertions(+), 8 deletions(-)
> >
> > diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
> > index fef5165b73a5..7f836f8f9db1 100644
> > --- a/fs/hugetlbfs/inode.c
> > +++ b/fs/hugetlbfs/inode.c
> > @@ -328,6 +328,12 @@ static ssize_t hugetlbfs_read_iter(struct kiocb *iocb, struct iov_iter *to)
> >  		} else {
> >  			unlock_page(page);
> >
> > +			if (PageHWPoison(page)) {
> > +				put_page(page);
> > +				retval = -EIO;
> > +				break;
> > +			}
> > +
> >  			/*
> >  			 * We have the page, copy it to user space buffer.
> >  			 */
> > @@ -1111,13 +1117,6 @@ static int hugetlbfs_migrate_folio(struct address_space *mapping,
> >  static int hugetlbfs_error_remove_page(struct address_space *mapping,
> >  				struct page *page)
> >  {
> > -	struct inode *inode = mapping->host;
> > -	pgoff_t index = page->index;
> > -
> > -	hugetlb_delete_from_page_cache(page_folio(page));
> > -	if (unlikely(hugetlb_unreserve_pages(inode, index, index + 1, 1)))
> > -		hugetlb_fix_reserve_counts(inode);
> > -
> >  	return 0;
> >  }
> >
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index 97896165fd3f..5120a9ccbf5b 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -6101,6 +6101,10 @@ int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm,
> >
> >  	ptl = huge_pte_lock(h, dst_mm, dst_pte);
> >
> > +	ret = -EIO;
> > +	if (PageHWPoison(page))
> > +		goto out_release_unlock;
> > +
> >  	/*
> >  	 * We allow to overwrite a pte marker: consider when both MISSING|WP
> >  	 * registered, we firstly wr-protect a none pte which has no page cache
> > diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> > index 145bb561ddb3..bead6bccc7f2 100644
> > --- a/mm/memory-failure.c
> > +++ b/mm/memory-failure.c
> > @@ -1080,6 +1080,7 @@ static int me_huge_page(struct page_state *ps, struct page *p)
> >  	int res;
> >  	struct page *hpage = compound_head(p);
> >  	struct address_space *mapping;
> > +	bool extra_pins = false;
> >
> >  	if (!PageHuge(hpage))
> >  		return MF_DELAYED;
> > @@ -1087,6 +1088,8 @@ static int me_huge_page(struct page_state *ps, struct page *p)
> >  	mapping = page_mapping(hpage);
> >  	if (mapping) {
> >  		res = truncate_error_page(hpage, page_to_pfn(p), mapping);
> > +		/* The page is kept in page cache. */
> > +		extra_pins = true;
> >  		unlock_page(hpage);
> >  	} else {
> >  		unlock_page(hpage);
> > @@ -1104,7 +1107,7 @@ static int me_huge_page(struct page_state *ps, struct page *p)
> >  		}
> >  	}
> >
> > -	if (has_extra_refcount(ps, p, false))
> > +	if (has_extra_refcount(ps, p, extra_pins))
> >  		res = MF_FAILED;
> >
> >  	return res;
> > --
> > 2.38.0.413.g74048e4d9e-goog
> >
