Subject: Re: [mm 4.15-rc8] Random oopses under memory pressure.
On Thu, Jan 18, 2018 at 06:45:00AM -0800, Dave Hansen wrote:
> On 01/18/2018 04:25 AM, Kirill A. Shutemov wrote:
> > [ 10.084024] diff: -858690919
> > [ 10.084258] hpage_nr_pages: 1
> > [ 10.084386] check1: 0
> > [ 10.084478] check2: 0
> ...
> > diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
> > index d22b84310f6d..57b4397f1ea5 100644
> > --- a/mm/page_vma_mapped.c
> > +++ b/mm/page_vma_mapped.c
> > @@ -70,6 +70,14 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw)
> > }
> > if (pte_page(*pvmw->pte) < pvmw->page)
> > return false;
> > +
> > + if (pte_page(*pvmw->pte) - pvmw->page) {
> > + printk("diff: %d\n", pte_page(*pvmw->pte) - pvmw->page);
> > + printk("hpage_nr_pages: %d\n", hpage_nr_pages(pvmw->page));
> > + printk("check1: %d\n", pte_page(*pvmw->pte) - pvmw->page < 0);
> > + printk("check2: %d\n", pte_page(*pvmw->pte) - pvmw->page >= hpage_nr_pages(pvmw->page));
> > + BUG();
> > + }
>
> This says that pte_page(*pvmw->pte) and pvmw->page are roughly 4GB away
> from each other (858690919*4 = 0xccba559c). That's not the compiler
> being wonky; it just means that the virtual addresses of the memory
> sections are that far apart.
>
> This won't happen when you have vmemmap or flatmem because the mem_map[]
> is virtually contiguous and pointer arithmetic just works against all
> 'struct page' pointers. But with classic sparsemem, it doesn't.
>
> You need to make sure that the PFNs are in the same section before you
> can do the math that you want to do here.
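
To illustrate, a minimal userspace sketch (the PAGES_PER_SECTION value
and the allocation layout are made up, and this is not kernel code) of
why the pointer math goes wrong across sections:

	#include <stdio.h>
	#include <stdlib.h>

	/* Stand-in for the kernel's struct page; layout is irrelevant here. */
	struct page { unsigned long flags; };

	#define PAGES_PER_SECTION 512	/* hypothetical value, for illustration */

	int main(void)
	{
		/*
		 * With classic sparsemem each section gets its own mem_map
		 * chunk, allocated independently -- model that with two
		 * separate calloc()s.
		 */
		struct page *sec0 = calloc(PAGES_PER_SECTION, sizeof(struct page));
		struct page *sec1 = calloc(PAGES_PER_SECTION, sizeof(struct page));

		/* pfn 10 and pfn PAGES_PER_SECTION + 10, one section apart. */
		struct page *a = &sec0[10];
		struct page *b = &sec1[10];

		/*
		 * Undefined behaviour in C, and meaningless in the kernel too:
		 * the result reflects where the allocator put the two chunks,
		 * not the pfn distance (which is exactly PAGES_PER_SECTION).
		 */
		printf("cross-section diff: %td\n", b - a);

		/* Within one section the arithmetic is fine. */
		printf("in-section diff:    %td\n", &sec0[10] - &sec0[3]);

		free(sec0);
		free(sec1);
		return 0;
	}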

Something like this?


From 251e124630da82482e8b320c73162ce89af04d5d Mon Sep 17 00:00:00 2001
From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Date: Thu, 18 Jan 2018 18:24:07 +0300
Subject: [PATCH] mm, page_vma_mapped: Fix pointer arithmetics in check_pte()

Tetsuo reported random crashes under memory pressure on a 32-bit x86
system and tracked them down to the change that introduced
page_vma_mapped_walk().

The root cause of the issue is the faulty pointer math in check_pte().
As ->pte may point to an arbitrary page, we have to check that the two
'struct page' pointers belong to the same memory section before doing
arithmetic on them. Otherwise the result of the subtraction is
meaningless.

It wasn't noticed until now because mem_map[] is virtually contiguous
on flatmem and vmemmap sparsemem: pointer arithmetic just works for all
'struct page' pointers there. With classic sparsemem it doesn't.

Let's restructure the code a bit and add the necessary check.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reported-by: Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>
Fixes: ace71a19cec5 ("mm: introduce page_vma_mapped_walk()")
Cc: stable@vger.kernel.org
---
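
[ Side note, not part of the commit message: an equivalent way to do
  the bounds check with no 'struct page' pointer arithmetic at all
  would be to compare pfns. A sketch only -- the diff below keeps the
  pointer checks and adds a section check instead: ]

	unsigned long pfn = page_to_pfn(page);
	unsigned long base = page_to_pfn(pvmw->page);

	/* pfns are globally meaningful, unlike mem_map pointers. */
	if (pfn < base)
		return false;
	/* THP can be referenced by any subpage. */
	if (pfn - base >= hpage_nr_pages(pvmw->page))
		return false;
	return true;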
mm/page_vma_mapped.c | 66 +++++++++++++++++++++++++++++++++++-----------------
1 file changed, 45 insertions(+), 21 deletions(-)

diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index d22b84310f6d..de195dcdfbd8 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -30,8 +30,28 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw)
return true;
}

+/**
+ * check_pte - check if @pvmw->page is mapped at the @pvmw->pte
+ *
+ * page_vma_mapped_walk() found a place where @pvmw->page is *potentially*
+ * mapped. check_pte() has to validate this.
+ *
+ * @pvmw->pte may point to empty PTE, swap PTE or PTE pointing to arbitrary
+ * page.
+ *
+ * If PVMW_MIGRATION flag is set, returns true if @pvmw->pte contains migration
+ * entry that points to @pvmw->page or any subpage in case of THP.
+ *
+ * If PVMW_MIGRATION flag is not set, returns true if @pvmw->pte points to
+ * @pvmw->page or any subpage in case of THP.
+ *
+ * Otherwise, return false.
+ *
+ */
static bool check_pte(struct page_vma_mapped_walk *pvmw)
{
+ struct page *page;
+
if (pvmw->flags & PVMW_MIGRATION) {
#ifdef CONFIG_MIGRATION
swp_entry_t entry;
@@ -41,37 +61,41 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw)

if (!is_migration_entry(entry))
return false;
- if (migration_entry_to_page(entry) - pvmw->page >=
- hpage_nr_pages(pvmw->page)) {
- return false;
- }
- if (migration_entry_to_page(entry) < pvmw->page)
- return false;
+
+ page = migration_entry_to_page(entry);
#else
WARN_ON_ONCE(1);
#endif
- } else {
- if (is_swap_pte(*pvmw->pte)) {
- swp_entry_t entry;
+ } else if (is_swap_pte(*pvmw->pte)) {
+ swp_entry_t entry;

- entry = pte_to_swp_entry(*pvmw->pte);
- if (is_device_private_entry(entry) &&
- device_private_entry_to_page(entry) == pvmw->page)
- return true;
- }
+ /* Handle un-addressable ZONE_DEVICE memory */
+ entry = pte_to_swp_entry(*pvmw->pte);
+ if (!is_device_private_entry(entry))
+ return false;

+ page = device_private_entry_to_page(entry);
+ } else {
if (!pte_present(*pvmw->pte))
return false;

- /* THP can be referenced by any subpage */
- if (pte_page(*pvmw->pte) - pvmw->page >=
- hpage_nr_pages(pvmw->page)) {
- return false;
- }
- if (pte_page(*pvmw->pte) < pvmw->page)
- return false;
+ page = pte_page(*pvmw->pte);
}

+ /*
+ * Make sure that pages are in the same section before doing pointer
+ * arithmetics.
+ */
+ if (page_to_section(pvmw->page) != page_to_section(page))
+ return false;
+
+ if (page < pvmw->page)
+ return false;
+
+ /* THP can be referenced by any subpage */
+ if (page - pvmw->page >= hpage_nr_pages(pvmw->page))
+ return false;
+
return true;
}

--
Kirill A. Shutemov