    From:    Vlastimil Babka
    Subject: Re: [PATCH v21 17/19] mm/lru: replace pgdat lru_lock with lruvec lock
    Date:    2020-11-12
    On 11/5/20 9:55 AM, Alex Shi wrote:
    > This patch moves the per-node lru_lock into lruvec, thus providing a
    > lru_lock for each memcg on each node. On a large machine, memcgs no longer
    > have to suffer from contention on the per-node pgdat->lru_lock; each can
    > proceed with its own lru_lock.
    >
    > After moving the memcg charge before lru insertion, page isolation can
    > serialize the page's memcg, so the per-memcg lruvec lock is stable and can
    > replace the per-node lru lock.
    >
    > In isolate_migratepages_block(), compact_unlock_should_abort() and
    > lock_page_lruvec_irqsave() are open coded to work with compact_control.
    > Also add a debug function to the locking, which may give some clues if
    > something goes wrong.
    >
    > Daniel Jordan's testing showed a 62% improvement on a modified readtwice
    > case on his 2P * 10 core * 2 HT Broadwell box.
    > https://lore.kernel.org/lkml/20200915165807.kpp7uhiw7l3loofu@ca-dmjordan1.us.oracle.com/
    >
    > On a large machine with memcg enabled but not used, looking up the page's
    > lruvec chases a few extra pointers, which may increase lru_lock hold time
    > and cause a slight regression.
    >
    > Hugh Dickins helped polish the patch, thanks!
    >
    > Signed-off-by: Alex Shi <alex.shi@linux.alibaba.com>
    > Acked-by: Hugh Dickins <hughd@google.com>
    > Cc: Rong Chen <rong.a.chen@intel.com>
    > Cc: Hugh Dickins <hughd@google.com>
    > Cc: Andrew Morton <akpm@linux-foundation.org>
    > Cc: Johannes Weiner <hannes@cmpxchg.org>
    > Cc: Michal Hocko <mhocko@kernel.org>
    > Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
    > Cc: Yang Shi <yang.shi@linux.alibaba.com>
    > Cc: Matthew Wilcox <willy@infradead.org>
    > Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
    > Cc: Tejun Heo <tj@kernel.org>
    > Cc: linux-kernel@vger.kernel.org
    > Cc: linux-mm@kvack.org
    > Cc: cgroups@vger.kernel.org

    I think I need some explanation about the rcu_read_lock() usage in
    lock_page_lruvec*() (and places effectively opencoding it).
    Preferably in the form of a code comment, but that can also be added as an
    additional patch later; I don't want to block the series.

    mem_cgroup_page_lruvec() comment says

    * This function relies on page->mem_cgroup being stable - see the
    * access rules in commit_charge().

    commit_charge() comment:

    * Any of the following ensures page->mem_cgroup stability:
    *
    * - the page lock
    * - LRU isolation
    * - lock_page_memcg()
    * - exclusive reference

    "LRU isolation" used to be quite clear, but now is it after
    TestClearPageLRU(page) or after deleting from the lru list as well?
    Also it doesn't mention rcu_read_lock(), should it?
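
    For illustration, the isolation pattern in question looks roughly like this
    (a simplified sketch, not the exact code from the series):

        if (TestClearPageLRU(page)) {
                /* is page->mem_cgroup already stable at this point... */
                lruvec = lock_page_lruvec_irq(page);
                /* ...or only once the page has left the lru list? */
                del_page_from_lru_list(page, lruvec, page_lru(page));
                unlock_page_lruvec_irq(lruvec);
        }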

    So what exactly are we protecting with rcu_read_lock() in e.g. lock_page_lruvec()?

    rcu_read_lock();
    lruvec = mem_cgroup_page_lruvec(page, pgdat);
    spin_lock(&lruvec->lru_lock);
    rcu_read_unlock();

    Looks like we are protecting the lruvec from going away, and it can't go away
    anymore once we have taken the lru_lock?
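
    (For reference, the full helper in the patch is, as far as I can tell,
    roughly the following; lruvec_memcg_debug() is the debug hook mentioned in
    the changelog:)

        struct lruvec *lock_page_lruvec(struct page *page)
        {
                struct lruvec *lruvec;
                struct pglist_data *pgdat = page_pgdat(page);

                rcu_read_lock();
                lruvec = mem_cgroup_page_lruvec(page, pgdat);
                spin_lock(&lruvec->lru_lock);
                rcu_read_unlock();

                /* sanity-check page->mem_cgroup against the locked lruvec */
                lruvec_memcg_debug(lruvec, page);

                return lruvec;
        }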

    But then e.g. in __munlock_pagevec() we are doing this without an rcu_read_lock():

    new_lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));

    where new_lruvec is potentially not the one that we have locked.
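
    (The surrounding relock pattern there is, if I read the series correctly,
    roughly:)

        new_lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));
        if (new_lruvec != lruvec) {
                if (lruvec)
                        unlock_page_lruvec_irq(lruvec);
                lruvec = lock_page_lruvec_irq(page);
        }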

    And the last thing mem_cgroup_page_lruvec() is doing is:

    if (unlikely(lruvec->pgdat != pgdat))
            lruvec->pgdat = pgdat;
    return lruvec;

    So without the rcu_read_lock(), is this potentially accessing the pgdat field
    of a lruvec that might have just gone away?
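
    (For reference, mem_cgroup_page_lruvec() as a whole is roughly the
    following, paraphrased from mm/memcontrol.c, so details may differ:)

        struct lruvec *mem_cgroup_page_lruvec(struct page *page,
                                              struct pglist_data *pgdat)
        {
                struct mem_cgroup_per_node *mz;
                struct mem_cgroup *memcg;
                struct lruvec *lruvec;

                if (mem_cgroup_disabled()) {
                        lruvec = &pgdat->__lruvec;
                        goto out;
                }

                /* relies on page->mem_cgroup being stable */
                memcg = page->mem_cgroup;
                if (!memcg)
                        memcg = root_mem_cgroup;

                mz = mem_cgroup_page_nodeinfo(memcg, page);
                lruvec = &mz->lruvec;
        out:
                /* a node can be onlined after the memcg was created */
                if (unlikely(lruvec->pgdat != pgdat))
                        lruvec->pgdat = pgdat;
                return lruvec;
        }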

    Thanks,
    Vlastimil
