From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Subject: [PATCH v12 41/69] exec: use VMA iterator instead of linked list

Remove a use of the vm_next list by doing the initial lookup with the VMA
iterator and then using it to find the next entry.
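
For readers new to the API, the pattern this patch adopts looks roughly
like the sketch below. This is an illustration only, not part of the
patch: vma_iter_sketch() is a hypothetical helper, and it assumes the
VMA_ITERATOR()/vma_next() interface from the maple tree series, where
each call to vma_next() returns the VMA at or after the iterator's
current position and then advances past it.

#include <linux/mm.h>

/*
 * Hypothetical helper, for illustration only: find the VMA at or
 * after @addr and peek at its successor without touching vm_next.
 * The caller is assumed to hold mmap_lock.
 */
static void vma_iter_sketch(struct mm_struct *mm, unsigned long addr)
{
	VMA_ITERATOR(vmi, mm, addr);	/* iterator positioned at addr */
	struct vm_area_struct *vma, *next;

	vma = vma_next(&vmi);	/* first VMA at or after addr */
	next = vma_next(&vmi);	/* its successor, or NULL at the end */

	if (vma)
		pr_info("vma %lx-%lx, ceiling %lx\n",
			vma->vm_start, vma->vm_end,
			next ? next->vm_start : USER_PGTABLES_CEILING);
}

The second vma_next() call is what makes the vm_next dereferences in the
diff below unnecessary: the iterator already knows where the following
VMA starts, which free_pgd_range() then uses as its ceiling.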

Link: https://lkml.kernel.org/r/20220504011345.662299-26-Liam.Howlett@oracle.com
Link: https://lkml.kernel.org/r/20220621204632.3370049-42-Liam.Howlett@oracle.com
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: David Howells <dhowells@redhat.com>
Cc: SeongJae Park <sj@kernel.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Will Deacon <will@kernel.org>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
 fs/exec.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/fs/exec.c b/fs/exec.c
index b97afa682ffe..9843cecd031a 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -686,6 +686,8 @@ static int shift_arg_pages(struct vm_area_struct *vma, unsigned long shift)
 	unsigned long length = old_end - old_start;
 	unsigned long new_start = old_start - shift;
 	unsigned long new_end = old_end - shift;
+	VMA_ITERATOR(vmi, mm, new_start);
+	struct vm_area_struct *next;
 	struct mmu_gather tlb;
 
 	BUG_ON(new_start > new_end);
@@ -694,7 +696,7 @@ static int shift_arg_pages(struct vm_area_struct *vma, unsigned long shift)
 	 * ensure there are no vmas between where we want to go
 	 * and where we are
 	 */
-	if (vma != find_vma(mm, new_start))
+	if (vma != vma_next(&vmi))
 		return -EFAULT;
 
 	/*
@@ -713,12 +715,13 @@ static int shift_arg_pages(struct vm_area_struct *vma, unsigned long shift)
 
 	lru_add_drain();
 	tlb_gather_mmu(&tlb, mm);
+	next = vma_next(&vmi);
 	if (new_end > old_start) {
 		/*
 		 * when the old and new regions overlap clear from new_end.
 		 */
 		free_pgd_range(&tlb, new_end, old_end, new_end,
-			vma->vm_next ? vma->vm_next->vm_start : USER_PGTABLES_CEILING);
+			next ? next->vm_start : USER_PGTABLES_CEILING);
 	} else {
 		/*
 		 * otherwise, clean from old_start; this is done to not touch
@@ -727,7 +730,7 @@ static int shift_arg_pages(struct vm_area_struct *vma, unsigned long shift)
 		 * for the others its just a little faster.
 		 */
 		free_pgd_range(&tlb, old_start, old_end, new_end,
-			vma->vm_next ? vma->vm_next->vm_start : USER_PGTABLES_CEILING);
+			next ? next->vm_start : USER_PGTABLES_CEILING);
 	}
 	tlb_finish_mmu(&tlb);
 
--
2.35.1