    From: Michal Hocko <mhocko@suse.com>
    Subject: Re: [PATCH v4 1/3] mm: protect free_pgtables with mmap_lock write lock in exit_mmap
    Date: Thu, 9 Dec 2021

    On Wed 08-12-21 13:22:09, Suren Baghdasaryan wrote:
    > The oom-reaper and the process_mrelease system call should protect
    > against races with exit_mmap, which can destroy page tables while
    > they walk the VMA tree. The oom-reaper protects against that race by
    > setting MMF_OOM_VICTIM and by relying on exit_mmap to set
    > MMF_OOM_SKIP before taking and releasing mmap_write_lock.
    > process_mrelease has to elevate mm->mm_users to prevent such a race.
    > Both the oom-reaper and process_mrelease hold mmap_read_lock when
    > walking the VMA tree. The locking rules and mechanisms could be
    > simpler if exit_mmap took mmap_write_lock while executing destructive
    > operations such as free_pgtables.
    > Change exit_mmap to hold the mmap_write_lock when calling
    > free_pgtables and remove_vma. Operations like unmap_vmas and
    > unlock_range are not destructive and could run under mmap_read_lock,
    > but for simplicity we take one mmap_write_lock during almost the
    > entire operation.

    unlock_range is not safe to be called under the read lock. See
    27ae357fa82b ("mm, oom: fix concurrent munlock and oom reaper unmap,
    v3").

    > Note also that because the oom-reaper checks the VM_LOCKED flag,
    > unlock_range() should not be allowed to race with it.
    > Before this patch, remove_vma used to be called with no locks held;
    > however, since fput is executed asynchronously and vm_ops->close is
    > not allowed to take mmap_lock (it is called from __split_vma with
    > mmap_sem held for write), changing that should be fine.
    > In most cases this lock should be uncontended. Previously, Kirill
    > reported a ~4% regression caused by a similar change [1]. We reran
    > the same test, and although the individual results are quite noisy,
    > the percentiles show a lower regression, with 1.6% being the worst
    > case [2].
    > The change allows the oom-reaper and process_mrelease to execute
    > safely under mmap_read_lock without worrying that exit_mmap might
    > destroy page tables out from under them.
    >
    > [1] https://lore.kernel.org/all/20170725141723.ivukwhddk2voyhuc@node.shutemov.name/
    > [2] https://lore.kernel.org/all/CAJuCfpGC9-c9P40x7oy=jy5SphMcd0o0G_6U1-+JAziGKG6dGA@mail.gmail.com/
    >
    > Signed-off-by: Suren Baghdasaryan <surenb@google.com>
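
    The remove_vma() path referred to above is short; paraphrased from
    mm/mmap.c of that era (not the verbatim source), it is essentially:

	static struct vm_area_struct *remove_vma(struct vm_area_struct *vma)
	{
		struct vm_area_struct *next = vma->vm_next;

		might_sleep();
		if (vma->vm_ops && vma->vm_ops->close)
			vma->vm_ops->close(vma);	/* must not take mmap_lock */
		if (vma->vm_file)
			fput(vma->vm_file);		/* final __fput is deferred */
		mpol_put(vma_policy(vma));
		vm_area_free(vma);
		return next;
	}

    Since fput() only schedules the real file teardown and ->close() already
    runs under the write lock elsewhere (__split_vma), calling remove_vma()
    with mmap_write_lock held introduces no new lock dependency.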

    The patch looks good otherwise. Btw., when I was trying to do something
    similar in the past, Hugh noted that we could get rid of the same
    lock&&unlock trick in ksm as well. Maybe you want to have a look at
    that too ;)
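
    The ksm counterpart of that trick sits at the tail of __ksm_exit() in
    mm/ksm.c; paraphrased, with the mm_slot bookkeeping done under
    ksm_mmlist_lock trimmed, it reads:

	void __ksm_exit(struct mm_struct *mm)
	{
		/* ... mm_slot lookup/unlink under ksm_mmlist_lock ... */

		if (easy_to_free) {
			/* ksmd was never going to scan this mm again */
			free_mm_slot(mm_slot);
			clear_bit(MMF_VM_MERGEABLE, &mm->flags);
			mmdrop(mm);
		} else if (mm_slot) {
			/*
			 * ksmd may still be scanning this mm under
			 * mmap_read_lock: take and immediately release
			 * mmap_write_lock to flush it out before the mm
			 * is torn down.
			 */
			mmap_write_lock(mm);
			mmap_write_unlock(mm);
		}
	}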

    Acked-by: Michal Hocko <mhocko@suse.com>

    Thanks!

    > ---
    > Changes in v4:
    > - Separated comments describing vm_operations_struct::close locking
    >   requirements into a separate patch, per Matthew Wilcox
    >
    > mm/mmap.c | 16 ++++++++--------
    > 1 file changed, 8 insertions(+), 8 deletions(-)
    >
    > diff --git a/mm/mmap.c b/mm/mmap.c
    > index bfb0ea164a90..f4e09d390a07 100644
    > --- a/mm/mmap.c
    > +++ b/mm/mmap.c
    > @@ -3142,25 +3142,27 @@ void exit_mmap(struct mm_struct *mm)
    >  		 * to mmu_notifier_release(mm) ensures mmu notifier callbacks in
    >  		 * __oom_reap_task_mm() will not block.
    >  		 *
    > -		 * This needs to be done before calling munlock_vma_pages_all(),
    > +		 * This needs to be done before calling unlock_range(),
    >  		 * which clears VM_LOCKED, otherwise the oom reaper cannot
    >  		 * reliably test it.
    >  		 */
    >  		(void)__oom_reap_task_mm(mm);
    > 
    >  		set_bit(MMF_OOM_SKIP, &mm->flags);
    > -		mmap_write_lock(mm);
    > -		mmap_write_unlock(mm);
    >  	}
    > 
    > +	mmap_write_lock(mm);
    >  	if (mm->locked_vm)
    >  		unlock_range(mm->mmap, ULONG_MAX);
    > 
    >  	arch_exit_mmap(mm);
    > 
    >  	vma = mm->mmap;
    > -	if (!vma)	/* Can happen if dup_mmap() received an OOM */
    > +	if (!vma) {
    > +		/* Can happen if dup_mmap() received an OOM */
    > +		mmap_write_unlock(mm);
    >  		return;
    > +	}
    > 
    >  	lru_add_drain();
    >  	flush_cache_mm(mm);
    > @@ -3171,16 +3173,14 @@ void exit_mmap(struct mm_struct *mm)
    >  	free_pgtables(&tlb, vma, FIRST_USER_ADDRESS, USER_PGTABLES_CEILING);
    >  	tlb_finish_mmu(&tlb);
    > 
    > -	/*
    > -	 * Walk the list again, actually closing and freeing it,
    > -	 * with preemption enabled, without holding any MM locks.
    > -	 */
    > +	/* Walk the list again, actually closing and freeing it. */
    >  	while (vma) {
    >  		if (vma->vm_flags & VM_ACCOUNT)
    >  			nr_accounted += vma_pages(vma);
    >  		vma = remove_vma(vma);
    >  		cond_resched();
    >  	}
    > +	mmap_write_unlock(mm);
    >  	vm_unacct_memory(nr_accounted);
    >  }
    >
    > --
    > 2.34.1.400.ga245620fadb-goog

    --
    Michal Hocko
    SUSE Labs
