From: Greg Kroah-Hartman <>
Subject: [PATCH 4.9 26/66] arm64: Move BP hardening to check_and_switch_context
Date: Tue, 17 Apr 2018 17:58:59 +0200
4.9-stable review patch. If anyone has any objections, please let me know.
------------------
From: Mark Rutland <mark.rutland@arm.com>
From: Marc Zyngier <marc.zyngier@arm.com>
commit a8e4c0a919ae310944ed2c9ace11cf3ccd8a609b upstream.
We call arm64_apply_bp_hardening() from post_ttbr_update_workaround, which has the unexpected consequence of being triggered on every exception return to userspace when ARM64_SW_TTBR0_PAN is selected, even if no context switch actually occurred.
This is a bit suboptimal, and it would be more logical to only invalidate the branch predictor when we actually switch to a different mm.
In order to solve this, move the call to arm64_apply_bp_hardening() into check_and_switch_context(), where we're guaranteed to pick a different mm context.
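For reviewers following along, a rough sketch of the caller side may help (condensed from v4.9's arch/arm64/include/asm/mmu_context.h, with switch_mm() and __switch_mm() merged and the saved-TTBR0 bookkeeping elided, so treat it as illustrative rather than the verbatim source): check_and_switch_context() is only reached when the scheduler actually moves the CPU to a different user mm, whereas post_ttbr_update_workaround() also runs on every exception return to userspace under ARM64_SW_TTBR0_PAN.

/* Sketch only -- condensed from the v4.9 switch_mm() path. */
static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
			     struct task_struct *tsk)
{
	unsigned int cpu = smp_processor_id();

	if (prev == next)
		return;		/* same mm: no switch, no BP hardening needed */

	/*
	 * init_mm.pgd contains no user mappings; kernel threads just get
	 * the reserved TTBR0 and never reach the hardening call.
	 */
	if (next == &init_mm) {
		cpu_set_reserved_ttbr0();
		return;
	}

	/*
	 * Different user mm: with this patch, arm64_apply_bp_hardening()
	 * is now invoked from here, via check_and_switch_context().
	 */
	check_and_switch_context(next, cpu);
}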
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
Tested-by: Greg Hackmann <ghackmann@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/mm/context.c |    5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -230,6 +230,9 @@ void check_and_switch_context(struct mm_
 	raw_spin_unlock_irqrestore(&cpu_asid_lock, flags);
 
 switch_mm_fastpath:
+
+	arm64_apply_bp_hardening();
+
 	cpu_switch_mm(mm->pgd, mm);
 }
 
@@ -240,8 +243,6 @@ asmlinkage void post_ttbr_update_workaro
 			"ic iallu; dsb nsh; isb",
 			ARM64_WORKAROUND_CAVIUM_27456,
 			CONFIG_CAVIUM_ERRATUM_27456));
-
-	arm64_apply_bp_hardening();
 }
 
 static int asids_init(void)