Subject: Re: [PATCH 3/3] arm64: tlb: skip tlbi broadcast
Hello Mark,

On Tue, Mar 31, 2020 at 10:45:11AM +0100, Mark Rutland wrote:
> Hi Andrea,
>
> On Mon, Mar 16, 2020 at 02:09:07PM +0000, Mark Rutland wrote:
> > AFAICT, this series relies on:
> >
> > * An ISB completing prior page table walks when updating TTBR. I don't
> > believe this is necessarily the case, given how things work for an
> > EL1->EL2 transition where there can be ongoing EL1 walks.
>
> I've had confirmation that a DSB is necessary (after the MSR and ISB) to
> complete any ongoing translation table walks for the stale context.
>
> Without a DSB, those walks can observe subsequent stores and encounter
> the usual set of CONSTRAINED UNPREDICTABLE behaviours (e.g. walking into
> MMIO with side-effects, continuing from amalgamated entries, etc.). Those
> issues are purely to do with the walk, and apply regardless of whether
> the resulting translations are architecturally consumed.

Ok, sorry I didn't get it earlier... I attempted a quick fix below.
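To restate the required ordering as I understand it, switching away
from an mm has to look like this (just a sketch to illustrate the
ordering, using the existing write_sysreg()/isb()/dsb() helpers; not
the actual kernel code):

#include <asm/barrier.h>
#include <asm/sysreg.h>

static inline void switch_ttbr0_sketch(unsigned long ttbr0)
{
	write_sysreg(ttbr0, ttbr0_el1);	/* MSR: install the new context */
	isb();				/* synchronize the new context */
	dsb(ish);			/* complete in-flight walks on the old asid */
}

The patch below adds that dsb(ish) at the two places where
nr_active_mm is decreased.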

From ab30d8082be62fe24a97eceec5dbfeea8e278511 Mon Sep 17 00:00:00 2001
From: Andrea Arcangeli <aarcange@redhat.com>
Date: Tue, 31 Mar 2020 20:03:43 -0400
Subject: [PATCH 1/1] arm64: tlb: skip tlbi broadcast, fix speculative tlb
lookups

Without a DSB in between "MSR; ISB" and "atomic_dec(&nr_active_mm)"
there's the risk that a speculative tlb lookup may still be walking
the pagetables of the unloaded asid after nr_active_mm has been
decreased. In that case the remote CPU could free the pagetables and
reuse the memory without first issuing a tlbi broadcast, while the
speculative tlb lookup still runs on the unloaded asid. For this
reason the speculative pagetable walks need to be flushed before
nr_active_mm is decreased.

Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
---
arch/arm64/include/asm/mmu_context.h | 20 +++++++++++++++++++-
1 file changed, 19 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index 9c66fe317e2f..d821ea3ce839 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -210,8 +210,18 @@ enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
 	if (per_cpu(cpu_not_lazy_tlb, cpu) &&
 	    is_idle_task(tsk)) {
 		per_cpu(cpu_not_lazy_tlb, cpu) = false;
-		if (!system_uses_ttbr0_pan())
+		if (!system_uses_ttbr0_pan()) {
 			cpu_set_reserved_ttbr0();
+			/*
+			 * DSB will flush the speculative pagetable
+			 * walks on the old asid. It's required before
+			 * decreasing nr_active_mm because after
+			 * decreasing nr_active_mm the tlbi broadcast
+			 * may not happen on the unloaded asid before
+			 * the pagetables are freed.
+			 */
+			dsb(ish);
+		}
 		atomic_dec(&mm->context.nr_active_mm);
 	}
 	VM_WARN_ON(atomic_read(&mm->context.nr_active_mm) < 0);
@@ -249,6 +259,14 @@ switch_mm(struct mm_struct *prev, struct mm_struct *next,
 	} else if (prev != next) {
 		atomic_inc(&next->context.nr_active_mm);
 		__switch_mm(next, cpu);
+		/*
+		 * DSB will flush the speculative pagetable walks on the old
+		 * asid. It's required before decreasing nr_active_mm because
+		 * after decreasing nr_active_mm the tlbi broadcast may not
+		 * happen on the unloaded asid before the pagetables are
+		 * freed.
+		 */
+		dsb(ish);
 		atomic_dec(&prev->context.nr_active_mm);
 	}
 	VM_WARN_ON(!atomic_read(&next->context.nr_active_mm));

I didn't test it yet: this being a theoretical issue, it is better
caught by review of the source than by testing anyway.
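To make the window concrete, this is the interleaving the dsb(ish) is
meant to close (a hypothetical trace; the tlbi-skip logic on the
remote CPU is paraphrased from patch 3/3, not quoted):

	CPU0 (switching away from mm)     CPU1 (zapping mm)
	-----------------------------     -----------------------------
	MSR ttbr0_el1, <reserved>; ISB
	  (a speculative walk on the
	   old asid can still be in
	   flight here)
	atomic_dec(&mm->context.nr_active_mm);
	                                  reads nr_active_mm == 0, so it
	                                  skips the tlbi broadcast and
	                                  frees and reuses the pagetables
	  (the in-flight walk now reads
	   freed, reused memory as if it
	   were a pagetable)

The dsb(ish) placed before the atomic_dec() forces the walk to
complete before the remote CPU can observe the decreased count.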

> > * Walks never being initiated for `inactive` contexts within the current
> > translation regime. e.g. while ASID x is installed, never starting a
> > walk for ASID y. I can imagine that the architecture may permit a form
> > of this starting with intermediate walk entries in the TLBs.
>
> I'm still chasing this point.

Appreciated! I'll keep my fingers crossed that you don't find that
speculative lookups can randomly start on an unloaded ASID. That
would also imply that it would be impossible on arm64 to use
different ASIDs on different CPUs, as is normally done on other
arches.

Thanks,
Andrea
