From: Uladzislau Rezki (Sony) <urezki@gmail.com>
Subject: [RFC PATCH 2/2] mm: add priority threshold to __purge_vmap_area_lazy()
Date: 2018-10-19
commit 763b218ddfaf ("mm: add preempt points into
__purge_vmap_area_lazy()")

introduced some preemption points, one of which gives an
allocation priority over the lazy-free path.

Prioritizing an allocation over freeing does not always work
well; it should rather be a compromise, for two reasons:

1) The number of lazy pages directly influences the busy list
length and therefore operations like allocation, lookup, unmap,
remove, etc.

2) Under heavy simultaneous allocations/releases memory usage
may grow too fast, hitting out_of_memory -> panic.

Establish a threshold beyond which the freeing path is
prioritized over allocation, creating a balance between the two.

    Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
    ---
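A minimal user-space sketch of the threshold idea (illustration only,
not part of the patch). lazy_max_pages_model(), nr_cpus and the
4096-byte page size are assumptions standing in for the kernel's
lazy_max_pages(), num_online_cpus() and PAGE_SIZE:

#include <stdio.h>

static unsigned long lazy_max_pages_model(unsigned int nr_cpus)
{
	unsigned int log = 0;

	/* Crude fls() analogue: position of the highest set bit. */
	while (nr_cpus) {
		log++;
		nr_cpus >>= 1;
	}

	/* 32 MB worth of 4096-byte pages per "log" step. */
	return log * (32UL * 1024 * 1024 / 4096);
}

int main(void)
{
	unsigned int nr_cpus = 4;	/* hypothetical machine */
	unsigned long resched_threshold = lazy_max_pages_model(nr_cpus) << 1;
	unsigned long vmap_lazy_nr;

	for (vmap_lazy_nr = 20000; vmap_lazy_nr <= 60000; vmap_lazy_nr += 20000) {
		/*
		 * Below the threshold the purge loop may give the CPU away
		 * (cond_resched_lock in the kernel); above it, freeing keeps
		 * going so the lazy-page backlog cannot grow unbounded.
		 */
		printf("lazy pages %lu -> %s\n", vmap_lazy_nr,
		       vmap_lazy_nr < resched_threshold ?
		       "may reschedule" : "keep freeing");
	}

	return 0;
}

With the assumed 4 CPUs the threshold evaluates to 49152 lazily freed
pages (about 192 MB), so rescheduling is only skipped once a fairly
large backlog has built up.
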
    mm/vmalloc.c | 14 ++++++++------
    1 file changed, 8 insertions(+), 6 deletions(-)

    diff --git a/mm/vmalloc.c b/mm/vmalloc.c
    index a7f257540a05..bbafcff6632b 100644
    --- a/mm/vmalloc.c
    +++ b/mm/vmalloc.c
@@ -1124,23 +1124,23 @@ static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end)
 	struct llist_node *valist;
 	struct vmap_area *va;
 	struct vmap_area *n_va;
-	bool do_free = false;
+	int resched_threshold;
 
 	lockdep_assert_held(&vmap_purge_lock);
 
 	valist = llist_del_all(&vmap_purge_list);
+	if (unlikely(valist == NULL))
+		return false;
+
 	llist_for_each_entry(va, valist, purge_list) {
 		if (va->va_start < start)
 			start = va->va_start;
 		if (va->va_end > end)
 			end = va->va_end;
-		do_free = true;
 	}
 
-	if (!do_free)
-		return false;
-
 	flush_tlb_kernel_range(start, end);
+	resched_threshold = (int) lazy_max_pages() << 1;
 
 	spin_lock(&vmap_area_lock);
 	llist_for_each_entry_safe(va, n_va, valist, purge_list) {
@@ -1148,7 +1148,9 @@ static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end)
 
 		__free_vmap_area(va);
 		atomic_sub(nr, &vmap_lazy_nr);
-		cond_resched_lock(&vmap_area_lock);
+
+		if (atomic_read(&vmap_lazy_nr) < resched_threshold)
+			cond_resched_lock(&vmap_area_lock);
 	}
 	spin_unlock(&vmap_area_lock);
 	return true;
    --
    2.11.0