From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Subject: [PATCH v12 52/69] mm/khugepaged: stop using vma linked list

Use the VMA iterator and find_vma() instead of the vma linked list.
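
A minimal sketch of the conversion pattern, shown here for reference only
(not part of this patch). It assumes the VMA iterator API introduced
earlier in this series (struct vma_iterator, vma_iter_init(),
for_each_vma()); the walk_from() helper name is made up for illustration:

	/* Caller is assumed to hold mmap_read_lock(mm). */
	static void walk_from(struct mm_struct *mm, unsigned long address)
	{
		struct vma_iterator vmi;
		struct vm_area_struct *vma;

		/*
		 * Old style: locate the first VMA with find_vma(), then
		 * follow the ->vm_next linked list:
		 *
		 *	for (vma = find_vma(mm, address); vma;
		 *	     vma = vma->vm_next)
		 *		...
		 */

		/* New style: walk the maple tree from 'address' upwards. */
		vma_iter_init(&vmi, mm, address);
		for_each_vma(vmi, vma) {
			/* Visits each VMA ending above 'address', in order. */
		}
	}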

    Link: https://lkml.kernel.org/r/20220504011345.662299-37-Liam.Howlett@oracle.com
    Link: https://lkml.kernel.org/r/20220621204632.3370049-53-Liam.Howlett@oracle.com
    Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
    Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: David Howells <dhowells@redhat.com>
    Cc: SeongJae Park <sj@kernel.org>
    Cc: Vlastimil Babka <vbabka@suse.cz>
    Cc: Will Deacon <will@kernel.org>
    Cc: Davidlohr Bueso <dave@stgolabs.net>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    ---
 mm/huge_memory.c |  4 ++--
 mm/khugepaged.c  | 11 ++++++++---
 2 files changed, 10 insertions(+), 5 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index f7248002dad9..f44ffd3bbfae 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2266,11 +2266,11 @@ void vma_adjust_trans_huge(struct vm_area_struct *vma,
 	split_huge_pmd_if_needed(vma, end);
 
 	/*
-	 * If we're also updating the vma->vm_next->vm_start,
+	 * If we're also updating the next vma vm_start,
 	 * check if we need to split it.
 	 */
 	if (adjust_next > 0) {
-		struct vm_area_struct *next = vma->vm_next;
+		struct vm_area_struct *next = find_vma(vma->vm_mm, vma->vm_end);
 		unsigned long nstart = next->vm_start;
 		nstart += adjust_next;
 		split_huge_pmd_if_needed(next, nstart);
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 8dbd68c414d9..637bfecd6bf5 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -2092,10 +2092,12 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages,
 	__releases(&khugepaged_mm_lock)
 	__acquires(&khugepaged_mm_lock)
 {
+	struct vma_iterator vmi;
 	struct mm_slot *mm_slot;
 	struct mm_struct *mm;
 	struct vm_area_struct *vma;
 	int progress = 0;
+	unsigned long address;
 
 	VM_BUG_ON(!pages);
 	lockdep_assert_held(&khugepaged_mm_lock);
@@ -2119,11 +2121,14 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages,
 	vma = NULL;
 	if (unlikely(!mmap_read_trylock(mm)))
 		goto breakouterloop_mmap_lock;
-	if (likely(!khugepaged_test_exit(mm)))
-		vma = find_vma(mm, khugepaged_scan.address);
 
 	progress++;
-	for (; vma; vma = vma->vm_next) {
+	if (unlikely(khugepaged_test_exit(mm)))
+		goto breakouterloop;
+
+	address = khugepaged_scan.address;
+	vma_iter_init(&vmi, mm, address);
+	for_each_vma(vmi, vma) {
 		unsigned long hstart, hend;
 
 		cond_resched();
    --
    2.35.1