 
    Subject: Re: [PATCH v3 10/11] perf/x86/intel: Perform rotation on Intel CQM RMIDs
    On Thu, Nov 06, 2014 at 12:23:21PM +0000, Matt Fleming wrote:
    > +/*
    > + * Exchange the RMID of a group of events.
    > + */
    > +static unsigned int
    > +intel_cqm_xchg_rmid(struct perf_event *group, unsigned int rmid)
    > +{
    > +	struct perf_event *event;
    > +	unsigned int old_rmid = group->hw.cqm_rmid;
    > +	struct list_head *head = &group->hw.cqm_group_entry;
    > +
    > +	lockdep_assert_held(&cache_mutex);
    > +
    > +	/*
    > +	 * If our RMID is being deallocated, perform a read now.
    > +	 */
    > +	if (__rmid_valid(old_rmid) && !__rmid_valid(rmid)) {
    > +		struct intel_cqm_count_info info;
    > +
    > +		local64_set(&group->count, 0);
    > +		info.event = group;
    > +
    > +		preempt_disable();
    > +		smp_call_function_many(&cqm_cpumask, __intel_cqm_event_count,
    > +				       &info, 1);
    > +		preempt_enable();
    > +	}

    This suffers the same issue as before: why not call that one function
    instead of reimplementing it here?

    Also, I don't think we'd ever swap an rmid for another valid one, right?
    So we could do this read/update unconditionally.
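
    A minimal sketch of that simplification is below. It assumes the read
    helper introduced earlier in the series is named intel_cqm_event_count()
    and takes the group event; that name and signature are an assumption
    here, as is the claim that reading an about-to-be-freed RMID is harmless
    so the read can simply be done unconditionally:

	static unsigned int
	intel_cqm_xchg_rmid(struct perf_event *group, unsigned int rmid)
	{
		struct perf_event *event;
		unsigned int old_rmid = group->hw.cqm_rmid;
		struct list_head *head = &group->hw.cqm_group_entry;

		lockdep_assert_held(&cache_mutex);

		/*
		 * Read the old RMID unconditionally, reusing the existing
		 * read helper rather than open-coding the cross-CPU read.
		 * (Helper name/signature assumed, see above.)
		 */
		intel_cqm_event_count(group);

		raw_spin_lock_irq(&cache_lock);

		group->hw.cqm_rmid = rmid;
		list_for_each_entry(event, head, hw.cqm_group_entry)
			event->hw.cqm_rmid = rmid;

		raw_spin_unlock_irq(&cache_lock);

		return old_rmid;
	}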

    > +
    > +	raw_spin_lock_irq(&cache_lock);
    > +
    > +	group->hw.cqm_rmid = rmid;
    > +	list_for_each_entry(event, head, hw.cqm_group_entry)
    > +		event->hw.cqm_rmid = rmid;
    > +
    > +	raw_spin_unlock_irq(&cache_lock);
    > +
    > +	return old_rmid;
    > +}

