From: Will Deacon <will@kernel.org>
Subject: [PATCH 6/6] mm: proc: Avoid fullmm flush for young/dirty bit toggling
Date: Fri, 20 Nov 2020

clear_refs_write() uses the 'fullmm' API for invalidating TLBs after
updating the page-tables for the current mm. However, since the mm is not
being freed, this can result in stale TLB entries on architectures which
elide 'fullmm' invalidation.

Ensure that TLB invalidation is performed after updating soft-dirty
entries via clear_refs_write() by using the non-fullmm MMU gather API.
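
For illustration only (not part of this patch): a minimal sketch of the
range-based gather/finish pattern relied on above, assuming the
tlb_gather_mmu(tlb, mm, start, end) and tlb_finish_mmu(tlb, start, end)
signatures this series targets. The helper name and the elided
page-table walk are hypothetical, not the actual clear_refs_write()
code:

#include <linux/mm.h>
#include <asm/tlb.h>

/*
 * Hypothetical helper: gather over an explicit [0, TASK_SIZE) range so
 * that tlb_finish_mmu() must issue the invalidation itself, rather than
 * treating the mm as being torn down and (on some architectures)
 * eliding the flush entirely.
 */
static void sketch_clear_range(struct mm_struct *mm)
{
	struct mmu_gather tlb;

	tlb_gather_mmu(&tlb, mm, 0, TASK_SIZE);	/* range-based, not fullmm */

	/*
	 * ... walk the page tables, clearing young/soft-dirty bits and
	 * noting each modified PTE with tlb_remove_tlb_entry() ...
	 */

	tlb_finish_mmu(&tlb, 0, TASK_SIZE);	/* performs the TLB flush */
}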

Signed-off-by: Will Deacon <will@kernel.org>
---
fs/proc/task_mmu.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index a76d339b5754..316af047f1aa 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -1238,7 +1238,7 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
 		count = -EINTR;
 		goto out_mm;
 	}
-	tlb_gather_mmu_fullmm(&tlb, mm);
+	tlb_gather_mmu(&tlb, mm, 0, TASK_SIZE);
 	if (type == CLEAR_REFS_SOFT_DIRTY) {
 		for (vma = mm->mmap; vma; vma = vma->vm_next) {
 			if (!(vma->vm_flags & VM_SOFTDIRTY))
--
2.29.2.454.gaff20da3a2-goog