Subject: mm: mmu_gather: do not expose delayed_rmap flag
The delayed_rmap flag of 'struct mmu_gather' is rather a private
member, yet zap_pte_range() still accesses it directly. Instead,
let the TLB gather code itself check the flag.

Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
---
mm/memory.c | 3 +--
mm/mmu_gather.c | 3 +++
2 files changed, 4 insertions(+), 2 deletions(-)
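
As an aside, the change follows a common encapsulation pattern: move the
"is there anything to do?" check from the caller into the callee, so the
flag stays private to the gather code. Below is a minimal standalone
sketch of that pattern, not kernel code; the structures and the
zap_range_like_caller() helper are simplified stand-ins introduced here
for illustration only.

#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-ins, just enough for the sketch. */
struct mmu_gather {
	bool delayed_rmap;
	/* ... batches and other private gather state ... */
};

struct vm_area_struct {
	int dummy;
};

/* After the change: the callee checks its own flag and bails out early. */
static void tlb_flush_rmaps(struct mmu_gather *tlb, struct vm_area_struct *vma)
{
	(void)vma;	/* unused in this sketch */

	if (!tlb->delayed_rmap)
		return;

	/* ... walk the active batch and do the deferred rmap removals ... */
	printf("flushing delayed rmaps\n");
	tlb->delayed_rmap = false;
}

/* Callers no longer peek at tlb->delayed_rmap before calling. */
static void zap_range_like_caller(struct mmu_gather *tlb, struct vm_area_struct *vma)
{
	tlb_flush_rmaps(tlb, vma);
}

int main(void)
{
	struct mmu_gather tlb = { .delayed_rmap = true };
	struct vm_area_struct vma = { 0 };

	zap_range_like_caller(&tlb, &vma);	/* flushes once */
	zap_range_like_caller(&tlb, &vma);	/* no-op: flag already cleared */
	return 0;
}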

diff --git a/mm/memory.c b/mm/memory.c
index 42f10cc1de58..38b58cd07b52 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1465,8 +1465,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 	/* Do the actual TLB flush before dropping ptl */
 	if (force_flush) {
 		tlb_flush_mmu_tlbonly(tlb);
-		if (tlb->delayed_rmap)
-			tlb_flush_rmaps(tlb, vma);
+		tlb_flush_rmaps(tlb, vma);
 	}
 	pte_unmap_unlock(start_pte, ptl);

diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index 79de59136cd2..9f22309affee 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -60,6 +60,9 @@ void tlb_flush_rmaps(struct mmu_gather *tlb, struct vm_area_struct *vma)
 {
 	struct mmu_gather_batch *batch;
 
+	if (!tlb->delayed_rmap)
+		return;
+
 	batch = tlb->active;
 	for (int i = 0; i < batch->nr; i++) {
 		struct encoded_page *enc = batch->encoded_pages[i];
--
2.31.1