From: Alex Shi <alex.shi@linux.alibaba.com>
Subject: [PATCH next] mm/swap.c: reduce lock contention in lru_cache_add
Date: Fri, 20 Nov 2020
The current relock logic changes the lru_lock whenever a new lruvec is
found, so if two memcgs are reading files or allocating pages at the
same time, they take the lru_lock alternately and repeatedly wait on
each other because of the fairness property of the ticket spinlock.

This patch sorts the pages of a pagevec by lruvec and then holds each
lru_lock only once in the above scenario, which removes the fairness
waiting caused by repeated lock reacquisition. With this change,
vm-scalability/case-lru-file-readtwice gains ~5% performance on my
2-socket, 20-cores-per-socket HT machine.
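
To illustrate the idea outside the kernel, here is a minimal userspace
sketch of the same two-pass scheme (illustrative only: the bucket
struct, item_bucket[] and the pthread mutexes are stand-ins for lruvecs
and lru_lock, not anything in mm/). Items are first grouped by owner
with no lock held, then each owner's lock is taken exactly once for its
whole batch instead of being retaken on every owner change:

#include <pthread.h>
#include <stdio.h>

#define NR_BUCKETS	2
#define NR_ITEMS	8

struct bucket {
	pthread_mutex_t lock;
	int count;
};

static struct bucket buckets[NR_BUCKETS] = {
	{ PTHREAD_MUTEX_INITIALIZER, 0 },
	{ PTHREAD_MUTEX_INITIALIZER, 0 },
};

/* Items interleaved across owners, like pages of two memcgs in a pagevec. */
static const int item_bucket[NR_ITEMS] = { 0, 1, 0, 1, 0, 1, 0, 1 };

static void add_batched(void)
{
	int sorted[NR_BUCKETS][NR_ITEMS];
	int nr[NR_BUCKETS] = { 0 };
	int b, i;

	/* Pass 1: group items by bucket, no lock held. */
	for (i = 0; i < NR_ITEMS; i++) {
		b = item_bucket[i];
		sorted[b][nr[b]++] = i;
	}

	/*
	 * Pass 2: take each bucket's lock once for its whole batch,
	 * instead of relocking on every owner change as the old
	 * per-page loop effectively did.
	 */
	for (b = 0; b < NR_BUCKETS; b++) {
		if (!nr[b])
			continue;
		pthread_mutex_lock(&buckets[b].lock);
		for (i = 0; i < nr[b]; i++)
			buckets[b].count++;
		pthread_mutex_unlock(&buckets[b].lock);
	}
}

int main(void)
{
	add_batched();
	printf("bucket counts: %d %d\n", buckets[0].count, buckets[1].count);
	return 0;
}

Built with "gcc -pthread", this prints "bucket counts: 4 4"; the point
is that each mutex is locked once rather than four times.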

Suggested-by: Konstantin Khlebnikov <koct9i@gmail.com>
Signed-off-by: Alex Shi <alex.shi@linux.alibaba.com>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Hugh Dickins <hughd@google.com>
Cc: Yu Zhao <yuzhao@google.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---
mm/swap.c | 57 +++++++++++++++++++++++++++++++++++++++++++++++--------
1 file changed, 49 insertions(+), 8 deletions(-)

diff --git a/mm/swap.c b/mm/swap.c
index 490553f3f9ef..c787b38bf9c0 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -1009,24 +1009,65 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec)
 	trace_mm_lru_insertion(page, lru);
 }
 
+struct lruvecs {
+	struct list_head lists[PAGEVEC_SIZE];
+	struct lruvec *vecs[PAGEVEC_SIZE];
+};
+
+/* Sort the pagevec's pages onto per-lruvec lists */
+static int sort_page_lruvec(struct lruvecs *lruvecs, struct pagevec *pvec)
+{
+	int i, j, nr_lruvec;
+	struct page *page;
+	struct lruvec *lruvec = NULL;
+
+	lruvecs->vecs[0] = NULL;
+	for (i = nr_lruvec = 0; i < pagevec_count(pvec); i++) {
+		page = pvec->pages[i];
+		lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));
+
+		/* Try to find this lruvec among those already seen */
+		for (j = 0; j <= nr_lruvec; j++)
+			if (lruvec == lruvecs->vecs[j])
+				break;
+
+		/* A new lruvec: open a list for it, keep a NULL sentinel */
+		if (j > nr_lruvec) {
+			INIT_LIST_HEAD(&lruvecs->lists[nr_lruvec]);
+			lruvecs->vecs[nr_lruvec] = lruvec;
+			j = nr_lruvec++;
+			lruvecs->vecs[nr_lruvec] = NULL;
+		}
+
+		list_add_tail(&page->lru, &lruvecs->lists[j]);
+	}
+
+	return nr_lruvec;
+}
+
+
/*
* Add the passed pages to the LRU, then drop the caller's refcount
* on them. Reinitialises the caller's pagevec.
*/
void __pagevec_lru_add(struct pagevec *pvec)
{
- int i;
- struct lruvec *lruvec = NULL;
+ int i, nr_lruvec;
unsigned long flags = 0;
+ struct page *page;
+ struct lruvecs lruvecs;

- for (i = 0; i < pagevec_count(pvec); i++) {
- struct page *page = pvec->pages[i];
+ nr_lruvec = sort_page_lruvec(&lruvecs, pvec);

- lruvec = relock_page_lruvec_irqsave(page, lruvec, &flags);
- __pagevec_lru_add_fn(page, lruvec);
+ for (i = 0; i < nr_lruvec; i++) {
+ spin_lock_irqsave(&lruvecs.vecs[i]->lru_lock, flags);
+ while (!list_empty(&lruvecs.lists[i])) {
+ page = lru_to_page(&lruvecs.lists[i]);
+ list_del(&page->lru);
+ __pagevec_lru_add_fn(page, lruvecs.vecs[i]);
+ }
+ spin_unlock_irqrestore(&lruvecs.vecs[i]->lru_lock, flags);
}
- if (lruvec)
- unlock_page_lruvec_irqrestore(lruvec, flags);
+
release_pages(pvec->pages, pvec->nr);
pagevec_reinit(pvec);
}
--
2.29.GIT