Subject: Re: [tip:perf/urgent] perf/x86/mbm: Implement RMID recycling
On Mon, 21 Mar, at 02:53:04AM, tip-bot for Vikas Shivappa wrote:
> @@ -489,6 +496,22 @@ static u32 intel_cqm_xchg_rmid(struct perf_event *group, u32 rmid)
>
> raw_spin_unlock_irq(&cache_lock);
>
> + /*
> + * If the allocation is for mbm, init the mbm stats.
> + * Need to check whether each event in the group is an mbm event
> + * because there could be multiple types of events in the same group.
> + */
> + if (__rmid_valid(rmid)) {
> + event = group;
> + if (is_mbm_event(event->attr.config))
> + init_mbm_sample(rmid, event->attr.config);
> +
> + list_for_each_entry(event, head, hw.cqm_group_entry) {
> + if (is_mbm_event(event->attr.config))
> + init_mbm_sample(rmid, event->attr.config);
> + }
> + }
> +
> return old_rmid;
> }
>

You're calling init_mbm_sample() without holding cache_lock. Won't
this potentially trash the existing value in MSR_IA32_QM_EVTSEL if,
say, we're reading the counter at the same time as the recycling
worker is running?
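
To spell out what I mean, the counter read is a two-MSR sequence,
roughly like the sketch below (paraphrasing __rmid_read() in
arch/x86/events/intel/cqm.c; the function name is made up for
illustration, only the MSR sequence matters):

#include <linux/types.h>
#include <asm/msr.h>

/*
 * Sketch of the per-CPU counter read, modelled on __rmid_read().
 * Not the actual kernel code.
 */
static u64 rmid_read_sketch(u32 rmid, u32 evt_id)
{
	u64 val;

	/* Program which RMID/event the counter MSR should report. */
	wrmsr(MSR_IA32_QM_EVTSEL, evt_id, rmid);

	/*
	 * If the write done on behalf of init_mbm_sample() reprograms
	 * MSR_IA32_QM_EVTSEL at this point, for a different RMID or
	 * event, the read below returns a value for the wrong
	 * selection.
	 */
	rdmsrl(MSR_IA32_QM_CTR, val);

	return val;
}

If I'm reading it right, the EVTSEL write landing between those two
MSR accesses means the reader gets back a count for whatever was
programmed last, not for the RMID/event it asked for, which is why I
was expecting the init to happen under cache_lock.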
