Subject: Re: [PATCH 6/6] mm: proc: Avoid fullmm flush for young/dirty bit toggling
On Fri, Nov 20, 2020 at 02:35:57PM +0000, Will Deacon wrote:
> clear_refs_write() uses the 'fullmm' API for invalidating TLBs after
> updating the page-tables for the current mm. However, since the mm is not
> being freed, this can result in stale TLB entries on architectures which
> elide 'fullmm' invalidation.
>
> Ensure that TLB invalidation is performed after updating soft-dirty
> entries via clear_refs_write() by using the non-fullmm API to MMU gather.
>
> Signed-off-by: Will Deacon <will@kernel.org>
> ---
> fs/proc/task_mmu.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> index a76d339b5754..316af047f1aa 100644
> --- a/fs/proc/task_mmu.c
> +++ b/fs/proc/task_mmu.c
> @@ -1238,7 +1238,7 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
>  		count = -EINTR;
>  		goto out_mm;
>  	}
> -	tlb_gather_mmu_fullmm(&tlb, mm);
> +	tlb_gather_mmu(&tlb, mm, 0, TASK_SIZE);

Let's assume my reply to patch 4 is wrong, and therefore we still need
tlb_gather/finish_mmu() here. But then wouldn't this change deprive
architectures other than ARM of the opportunity to optimize based on
the fact that it's a full-mm flush?

It seems to me ARM's interpretation of tlb->fullmm is a special case,
not the other way around.
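To make the asymmetry concrete, here is a rough sketch from memory
(simplified and approximate, not verbatim kernel code) of how
tlb_flush() tends to treat tlb->fullmm on the generic side versus
arm64:

/*
 * Illustrative sketch only: how tlb->fullmm is typically interpreted.
 */

/* Generic-style tlb_flush(): fullmm is an optimisation hint --
 * "don't bother with range tracking, one whole-mm flush is enough". */
static void generic_style_tlb_flush(struct mmu_gather *tlb)
{
	if (tlb->fullmm || tlb->need_flush_all) {
		flush_tlb_mm(tlb->mm);				/* one big flush */
	} else if (tlb->end) {
		struct vm_area_struct vma = { .vm_mm = tlb->mm };

		flush_tlb_range(&vma, tlb->start, tlb->end);	/* ranged */
	}
}

/* arm64-style tlb_flush(): fullmm means "the mm is being torn down",
 * so the TLBI is elided entirely -- stale entries die when the ASID
 * is retired; only freed page-table pages need the walk cache zapped. */
static void arm64_style_tlb_flush(struct mmu_gather *tlb)
{
	if (tlb->fullmm) {
		if (tlb->freed_tables)
			flush_tlb_mm(tlb->mm);	/* walk cache only */
		return;				/* otherwise no flush */
	}
	/* ... ranged invalidation for the non-fullmm case ... */
}

With tlb_gather_mmu(&tlb, mm, 0, TASK_SIZE) the fullmm hint is gone,
so a generic-style implementation ends up doing ranged invalidation
even though the entire mm has been walked.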
