Subject: Re: [PATCH 1/8] perf/x86/intel: Fix memory corruption
On Wed, Mar 20, 2019 at 11:22:20PM +0100, Peter Zijlstra wrote:
> On Wed, Mar 20, 2019 at 01:47:28PM -0700, Stephane Eranian wrote:
>
> > Right now, if I do:
> >
> > echo 0 > /sys/bus/event_source/devices/cpu/allow_tsx_force_abort
> >
> > Then I have no guarantee of when the aborts actually stop after the
> > echo returns. The MSR is accessed only on PMU scheduling. I
> > would expect a sysadmin to want some guarantee if this is to be
> > switched on/off at runtime. If not, then having a boot time option is
> > better in my opinion.
>
> Something like cycling the nmi watchdog or:
>
> perf stat -a -e cycles sleep 1
>
> should be enough to force reschedule the events on every CPU.
>
> Again, I'm not averse to 'fixing' this if it can be done with limited
> LoC. But I don't really see this as critical.
>
> > The other bit I noticed is that cpuc->tfa_shadow is used to avoid the
> > wrmsr(), but I don't see the code that makes sure the init value (0)
> > matches the value of the MSR. Is this MSR guaranteed to be zero on
> > reset?
>
> That was my understanding.
>
> > How about on kexec()?
>
> Good point, we might want to fix that.
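
For the record, the tfa_shadow logic Stephane refers to looks roughly like
this (paraphrased from arch/x86/events/intel/core.c of that era, so take
the details with a grain of salt):

	static void intel_set_tfa(struct cpu_hw_events *cpuc, bool on)
	{
		u64 val = on ? MSR_TFA_RTM_FORCE_ABORT : 0;

		/* Skip the wrmsr() when the shadow already matches @val. */
		if (cpuc->tfa_shadow != val) {
			cpuc->tfa_shadow = val;
			wrmsrl(MSR_TSX_FORCE_ABORT, val);
		}
	}

With tfa_shadow starting out as 0, the first write is elided on the
assumption that the MSR is 0 too.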

Something like the below perhaps?

---
Subject: perf/x86/intel: Initialize TFA MSR

Stephane reported that we don't initialize the TFA MSR, which could lead
to trouble if its RESET value is not 0, or after a kexec.

Reported-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
arch/x86/events/intel/core.c | 6 ++++++
1 file changed, 6 insertions(+)

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 8baa441d8000..2d3caf2d1384 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -3575,6 +3575,12 @@ static void intel_pmu_cpu_starting(int cpu)
 
 	cpuc->lbr_sel = NULL;
 
+	if (x86_pmu.flags & PMU_FL_TFA) {
+		WARN_ON_ONCE(cpuc->tfa_shadow);
+		cpuc->tfa_shadow = ~0ULL;
+		intel_set_tfa(cpuc, false);
+	}
+
 	if (x86_pmu.version > 1)
 		flip_smm_bit(&x86_pmu.attr_freeze_on_smi);
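
(Note the ~0ULL: it is an impossible shadow value, so it can never match
the 0 that intel_set_tfa(cpuc, false) computes, and the first call is
guaranteed to issue the wrmsr() and put the MSR into a known state,
whatever the firmware or a previous kernel left behind.)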
