Date: Mon, 28 Dec 2020
From: Kirill A. Shutemov <kirill@shutemov.name>
Subject: Re: [PATCH 1/2] mm: Allow architectures to request 'old' entries when prefaulting
On Sun, Dec 27, 2020 at 10:43:44PM -0800, Hugh Dickins wrote:
> On Sun, 27 Dec 2020, Linus Torvalds wrote:
> > On Sun, Dec 27, 2020 at 3:48 PM Kirill A. Shutemov <kirill@shutemov.name> wrote:
> > >
> > > I did what Hugh proposed and it now looks much clearer to my eyes. It
> > > got somewhat larger, but take a look.
> >
> > Ok, it's not that much bigger, and the end result is certainly much
> > clearer wrt locking.
> >
> > So that last version of yours with the fix for the uninitialized 'ret'
> > variable looks good to me.
> >
> > Of course, I've said that before, and have been wrong. So ...
>
> And guess what... it's broken.
>
> I folded it into testing rc1: segfault on cc1, systemd
> "Journal file corrupted, rotating", seen on more than one machine.
>
> I've backed it out, rc1 itself seems fine, I'll leave rc1 under
> load overnight, then come back to the faultaround patch tomorrow;
> won't glance at it tonight, but maybe Kirill will guess what's wrong.

So far I've only found one more pin leak and an always-true check. I don't
see how either of them could lead to a crash or corruption, but I'll keep
looking.
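
To illustrate the pin leak: the caller only drops the page reference when
it sees an error-ish return code, so finish_fault() returning 0 without
mapping the page leaks that reference. Roughly, paraphrasing the caller
side in do_read_fault() in mm/memory.c (inessential details trimmed):

	ret = __do_fault(vmf);
	if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE | VM_FAULT_RETRY)))
		return ret;

	ret |= finish_fault(vmf);
	unlock_page(vmf->page);
	if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE | VM_FAULT_RETRY)))
		put_page(vmf->page);	/* drops the ref taken by __do_fault() */
	return ret;

Returning VM_FAULT_NOPAGE when the PTE turned out to be populated makes
the caller take the put_page() path and clean up.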

diff --git a/mm/filemap.c b/mm/filemap.c
index c5da09f3f363..87671284de62 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2966,8 +2966,7 @@ vm_fault_t filemap_map_pages(struct vm_fault *vmf, unsigned long address,
 			mmap_miss--;
 
 		vmf->address += (xas.xa_index - last_pgoff) << PAGE_SHIFT;
-		if (vmf->pte)
-			vmf->pte += xas.xa_index - last_pgoff;
+		vmf->pte += xas.xa_index - last_pgoff;
 		last_pgoff = xas.xa_index;
 
 		if (!pte_none(*vmf->pte))
diff --git a/mm/memory.c b/mm/memory.c
index e51638b92e7c..829f5056dd1c 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3785,13 +3785,16 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
 
 	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
 				       vmf->address, &vmf->ptl);
+	ret = 0;
 	/* Re-check under ptl */
 	if (likely(pte_none(*vmf->pte)))
 		do_set_pte(vmf, page);
+	else
+		ret = VM_FAULT_NOPAGE;
 
 	update_mmu_tlb(vma, vmf->address, vmf->pte);
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
-	return 0;
+	return ret;
 }
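
As for the always-true check: after the rework, filemap_map_pages() maps
the PTE page once, before walking the range, so vmf->pte is never NULL
inside the loop. A simplified sketch of the loop (condensed from the
patched mm/filemap.c; next_page() is just a stand-in for the real
iteration helper):

	vmf->pte = pte_offset_map_lock(mm, vmf->pmd, addr, &vmf->ptl);
	do {
		/*
		 * vmf->pte was mapped above and stays set for the whole
		 * walk, so the removed "if (vmf->pte)" could never be
		 * false.
		 */
		vmf->pte += xas.xa_index - last_pgoff;
		last_pgoff = xas.xa_index;
		if (!pte_none(*vmf->pte))
			continue;
		do_set_pte(vmf, page);
	} while ((page = next_page(&xas)) != NULL);	/* stand-in helper */
	pte_unmap_unlock(vmf->pte, vmf->ptl);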
--
Kirill A. Shutemov