Subject: Re: [PATCH v4 10/11] perf/x86/intel: Perform rotation on Intel CQM RMIDs
On Fri, Nov 14, 2014 at 09:15:11PM +0000, Matt Fleming wrote:
> @@ -417,17 +857,38 @@ static u64 intel_cqm_event_count(struct perf_event *event)
> if (!cqm_group_leader(event))
> return 0;
>
> - on_each_cpu_mask(&cqm_cpumask, __intel_cqm_event_count, &rr, 1);
> + /*
> + * Notice that we don't perform the reading of an RMID
> + * atomically, because we can't hold a spin lock across the
> + * IPIs.
> + *
> + * Speculatively perform the read, since @event might be
> + * assigned a different (possibly invalid) RMID while we're
> + * busy performing the IPI calls. It's therefore necessary to
> + * check @event's RMID afterwards, and if it has changed,
> + * discard the result of the read.
> + */
> + raw_spin_lock_irqsave(&cache_lock, flags);
> + rr.rmid = event->hw.cqm_rmid;
> + raw_spin_unlock_irqrestore(&cache_lock, flags);

You don't actually have to hold the lock here; an ACCESS_ONCE() (or
whatever newfangled thing replaced that) is enough for a single read.
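
For the record, a minimal sketch of that suggestion, assuming
READ_ONCE() as the thing that replaced ACCESS_ONCE() (field names taken
from the quoted patch):

	/*
	 * A single naturally-aligned load cannot tear, so a volatile
	 * read is enough to snapshot the RMID here; no need to take
	 * cache_lock for it. READ_ONCE() is the newer spelling of
	 * ACCESS_ONCE().
	 */
	rr.rmid = READ_ONCE(event->hw.cqm_rmid);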

> +
> + if (!__rmid_valid(rr.rmid))
> + goto out;
>
> - local64_set(&event->count, atomic64_read(&rr.value));
> + on_each_cpu_mask(&cqm_cpumask, __intel_cqm_event_count, &rr, 1);
>
> + raw_spin_lock_irqsave(&cache_lock, flags);
> + if (event->hw.cqm_rmid == rr.rmid)
> + local64_set(&event->count, atomic64_read(&rr.value));
> + raw_spin_unlock_irqrestore(&cache_lock, flags);

Here you do indeed need the lock, as it's more than a single op :-)
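
To spell that out, a sketch of the interleaving the lock rules out,
assuming the rotation code reassigns ->cqm_rmid under cache_lock:

	raw_spin_lock_irqsave(&cache_lock, flags);
	/*
	 * The comparison and the store must form one critical section.
	 * Without the lock, rotation could swap the RMID in between:
	 *
	 *   reader (count)                      rotation worker
	 *   if (event->hw.cqm_rmid == rr.rmid)  // true, old RMID
	 *                                       event->hw.cqm_rmid = new;
	 *   local64_set(&event->count, ...);    // stale count published
	 *
	 * Holding cache_lock across both makes the pair atomic with
	 * respect to RMID reassignment.
	 */
	if (event->hw.cqm_rmid == rr.rmid)
		local64_set(&event->count, atomic64_read(&rr.value));
	raw_spin_unlock_irqrestore(&cache_lock, flags);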

> +out:
> return __perf_event_count(event);
> }

