From: Minchan Kim <minchan@kernel.org>
Subject: [PATCH] ANDROID: binder: change down_write to down_read
binder_update_page_range needs down_write of mmap_sem because
vm_insert_page needs to set VM_MIXEDMAP in vma->vm_flags unless it is
already set. However, profiling binder shows that every binder buffer
is mapped in advance by binder_mmap. That means we can set VM_MIXEDMAP
at binder_mmap time, which already holds mmap_sem as down_write, so
binder_update_page_range no longer needs to hold mmap_sem as
down_write.
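
For reference, the write lock is only needed for that vm_flags update.
A condensed sketch of vm_insert_page (paraphrased from mm/memory.c
around v4.16, insert_page() internals elided) shows the check that
forces callers to hold mmap_sem for write:

/*
 * Condensed sketch, not the verbatim kernel source: if VM_MIXEDMAP is
 * not already set on the vma, vm_insert_page() modifies vm_flags,
 * which is only legal with mmap_sem held for write.
 */
int vm_insert_page(struct vm_area_struct *vma, unsigned long addr,
		   struct page *page)
{
	if (addr < vma->vm_start || addr >= vma->vm_end)
		return -EFAULT;
	if (!page_count(page))
		return -EINVAL;
	if (!(vma->vm_flags & VM_MIXEDMAP)) {
		/* asserts that the caller holds mmap_sem for write */
		BUG_ON(down_read_trylock(&vma->vm_mm->mmap_sem));
		BUG_ON(vma->vm_flags & VM_PFNMAP);
		vma->vm_flags |= VM_MIXEDMAP;
	}
	return insert_page(vma, addr, page, vma->vm_page_prot);
}

Once binder_mmap sets VM_MIXEDMAP up front, that branch is never taken
for binder vmas, so holding mmap_sem for read is enough.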

Android suffers from mmap_sem contention, so let's reduce down_write
usage of mmap_sem.

Cc: Martijn Coenen <maco@android.com>
Cc: Todd Kjos <tkjos@google.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 drivers/android/binder.c       | 2 +-
 drivers/android/binder_alloc.c | 8 +++++---
 2 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index 764b63a5aade..9a14c6dd60c4 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -4722,7 +4722,7 @@ static int binder_mmap(struct file *filp, struct vm_area_struct *vma)
 		failure_string = "bad vm_flags";
 		goto err_bad_arg;
 	}
-	vma->vm_flags = (vma->vm_flags | VM_DONTCOPY) & ~VM_MAYWRITE;
+	vma->vm_flags |= (VM_DONTCOPY | VM_MIXEDMAP) & ~VM_MAYWRITE;
 	vma->vm_ops = &binder_vm_ops;
 	vma->vm_private_data = proc;
 
diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
index 5a426c877dfb..a184bf12eb15 100644
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -219,7 +219,7 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate,
 		mm = alloc->vma_vm_mm;
 
 	if (mm) {
-		down_write(&mm->mmap_sem);
+		down_read(&mm->mmap_sem);
 		vma = alloc->vma;
 	}
 
@@ -229,6 +229,8 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate,
 		goto err_no_vma;
 	}
 
+	WARN_ON_ONCE(vma && !(vma->vm_flags & VM_MIXEDMAP));
+
 	for (page_addr = start; page_addr < end; page_addr += PAGE_SIZE) {
 		int ret;
 		bool on_lru;
@@ -288,7 +290,7 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate,
 		/* vm_insert_page does not seem to increment the refcount */
 	}
 	if (mm) {
-		up_write(&mm->mmap_sem);
+		up_read(&mm->mmap_sem);
 		mmput(mm);
 	}
 	return 0;
@@ -321,7 +323,7 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate,
 	}
 err_no_vma:
 	if (mm) {
-		up_write(&mm->mmap_sem);
+		up_read(&mm->mmap_sem);
 		mmput(mm);
 	}
 	return vma ? -ENOMEM : -ESRCH;
--
2.17.0.rc1.321.gba9d0f2565-goog