Subject: [PATCH/RFC v2 3/3] tlb: mmu_gather: use batched table free if possible
When __tlb_remove_table() is implemented via
free_page_and_swap_cache(), use free_pages_and_swap_cache_nolru() to
free the whole batch of table pages at once.

This allows a single release_pages() call instead of a loop calling
put_page(), which should perform better, especially when memcg
accounting is enabled.
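
To illustrate the intent (a rough sketch, not the actual mm/ code;
pages, nr and i stand in for batch->tables, batch->nr and a loop
counter):

	/* Per-page path: each free_page_and_swap_cache() call ends in
	 * put_page(), so the refcount drop and memcg uncharge happen
	 * once per table page.
	 */
	for (i = 0; i < nr; i++)
		free_page_and_swap_cache(pages[i]);

	/* Batched path: one call for the whole array, which can use a
	 * single release_pages(pages, nr) instead of nr put_page() calls.
	 */
	free_pages_and_swap_cache_nolru(pages, nr);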

Signed-off-by: Nikita Yushchenko <nikita.yushchenko@virtuozzo.com>
---
mm/mmu_gather.c | 17 +++++++++++++----
1 file changed, 13 insertions(+), 4 deletions(-)

diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index eb2f30a92462..2e75d396bbad 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -98,15 +98,24 @@ static inline void __tlb_remove_table(void *table)
{
free_page_and_swap_cache((struct page *)table);
}
-#endif

-static void __tlb_remove_table_free(struct mmu_table_batch *batch)
+static inline void __tlb_remove_tables(void **tables, int nr)
+{
+ free_pages_and_swap_cache_nolru((struct page **)tables, nr);
+}
+#else
+static inline void __tlb_remove_tables(void **tables, int nr)
{
int i;

- for (i = 0; i < batch->nr; i++)
- __tlb_remove_table(batch->tables[i]);
+ for (i = 0; i < nr; i++)
+ __tlb_remove_table(tables[i]);
+}
+#endif

+static void __tlb_remove_table_free(struct mmu_table_batch *batch)
+{
+ __tlb_remove_tables(batch->tables, batch->nr);
free_page((unsigned long)batch);
}

--
2.30.2