Subject: Re: [PATCH next] mm/swap.c: reduce lock contention in lru_cache_add

On Fri, 20 Nov 2020 16:27:27 +0800 Alex Shi <alex.shi@linux.alibaba.com> wrote:

> The current relock logic switches lru_lock whenever a new lruvec is
> found, so if 2 memcgs are reading files or allocating pages at the same
> time, they can end up holding the lru_lock alternately, each waiting on
> the other because of the fairness attribute of the ticket spinlock.
>
> This patch sorts the pages by lruvec so that each lru_lock is taken only
> once in the above scenario, which reduces the fairness-induced waiting
> from re-acquiring the lock. With that, vm-scalability/case-lru-file-readtwice
> can get a ~5% performance gain on my 2P*20core*HT machine.

But what happens when all or most of the pages belong to the same
lruvec? This sounds like the common case - won't it suffer?
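
To make the batching concrete: below is a minimal user-space sketch of the
pattern described in the quoted text, assuming hypothetical names
(struct fake_page, NR_LRUVECS, add_batch_sorted) and using pthread mutexes
as stand-ins for the per-lruvec lru_lock. It is only an illustration, not
the kernel patch itself: pages are sorted by lruvec so each lock is taken
once per run of same-lruvec pages, instead of being dropped and re-taken
at every lruvec switch.

/*
 * Sketch only: batch items by "lruvec" and take each per-lruvec lock
 * once per run of same-lruvec items.  All names here are made up.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define NR_LRUVECS 4

static pthread_mutex_t lru_lock[NR_LRUVECS];   /* stand-in for per-lruvec lru_lock */

struct fake_page {
    int lruvec;     /* which lruvec (memcg/node) this page maps to */
    int id;
};

static int cmp_by_lruvec(const void *a, const void *b)
{
    const struct fake_page *pa = a, *pb = b;

    return pa->lruvec - pb->lruvec;
}

/* Add a batch: sort by lruvec, then lock each lruvec once per run. */
static void add_batch_sorted(struct fake_page *pages, int n)
{
    int i = 0;

    qsort(pages, n, sizeof(*pages), cmp_by_lruvec);

    while (i < n) {
        int vec = pages[i].lruvec;

        pthread_mutex_lock(&lru_lock[vec]);
        /* all pages of the same lruvec are handled under one lock hold */
        for (; i < n && pages[i].lruvec == vec; i++)
            printf("page %d -> lruvec %d\n", pages[i].id, vec);
        pthread_mutex_unlock(&lru_lock[vec]);
    }
}

int main(void)
{
    struct fake_page pages[] = {
        { 2, 0 }, { 0, 1 }, { 2, 2 }, { 1, 3 }, { 0, 4 },
    };
    int i;

    for (i = 0; i < NR_LRUVECS; i++)
        pthread_mutex_init(&lru_lock[i], NULL);

    add_batch_sorted(pages, 5);
    return 0;
}

Note that when every page in the batch maps to the same lruvec, the existing
relock logic already takes the lock only once per batch (it only switches
locks when a new lruvec is found), so the sort is pure overhead there - which
is what the question about the common case is pointing at.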
