    Subject: Re: [PATCH v2 5/7] ARM: perf_event: Fully support Krait CPU PMU events
    Hi Stephen,

    Thanks for the updates. A few more comments inline.

    On Wed, Jan 15, 2014 at 05:55:33PM +0000, Stephen Boyd wrote:
    > Krait supports a set of performance monitor region event
    > selection registers (PMRESR) sitting behind a cp15 based
    > interface that extend the architected PMU events to include Krait
    > CPU and Venum VFP specific events. To use these events the user
    > is expected to program the region register (PMRESRn) with the
    > event code shifted into the group they care about and then point
    > the PMNx event at that region+group combo by writing a
    > PMRESRn_GROUPx event. Add support for this hardware.
    >
    > Note: the raw event number is a pure software construct that
    > allows us to map the multi-dimensional number space of regions,
    > groups, and event codes into a flat event number space suitable
    > for use by the perf framework.

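    Just to check I've understood the encoding: I'm assuming the raw event
    number ends up packed along these lines (the field shifts and names
    below are mine, purely for illustration, not taken from your patch):

	/* Illustrative packing of (region, group, code) into one raw event */
	#define KRAIT_EVT_CODE_SHIFT	0
	#define KRAIT_EVT_GROUP_SHIFT	8
	#define KRAIT_EVT_REGION_SHIFT	12

	static inline u32 krait_raw_event(u32 region, u32 group, u32 code)
	{
		return (region << KRAIT_EVT_REGION_SHIFT) |
		       (group  << KRAIT_EVT_GROUP_SHIFT)  |
		       (code   << KRAIT_EVT_CODE_SHIFT);
	}
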
    [...]

    > +static u32 krait_read_pmresrn(int n)
    > +{
    > +	u32 val;
    > +
    > +	switch (n) {
    > +	case 0:
    > +		asm volatile("mrc p15, 1, %0, c9, c15, 0" : "=r" (val));
    > +		break;
    > +	case 1:
    > +		asm volatile("mrc p15, 1, %0, c9, c15, 1" : "=r" (val));
    > +		break;
    > +	case 2:
    > +		asm volatile("mrc p15, 1, %0, c9, c15, 2" : "=r" (val));
    > +		break;
    > +	default:
    > +		BUG(); /* Should be validated in krait_pmu_get_event_idx() */
    > +	}
    > +
    > +	return val;
    > +}
    > +
    > +static void krait_write_pmresrn(int n, u32 val)
    > +{
    > +	switch (n) {
    > +	case 0:
    > +		asm volatile("mcr p15, 1, %0, c9, c15, 0" : : "r" (val));
    > +		break;
    > +	case 1:
    > +		asm volatile("mcr p15, 1, %0, c9, c15, 1" : : "r" (val));
    > +		break;
    > +	case 2:
    > +		asm volatile("mcr p15, 1, %0, c9, c15, 2" : : "r" (val));
    > +		break;
    > +	default:
    > +		BUG(); /* Should be validated in krait_pmu_get_event_idx() */
    > +	}
    > +}

    Do you need isbs to ensure the pmresrn side-effects have happened, or are
    the registers self-synchronising? Similarly for your other IMP DEF
    registers.
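
    If they're not self-synchronising, I'd expect something along these
    lines to be needed after each write (untested sketch, the wrapper is
    purely illustrative):

	/*
	 * Sketch, assuming PMRESRn writes are not self-synchronising:
	 * make the new selection visible before subsequent PMU accesses.
	 */
	static void krait_write_pmresrn_sync(int n, u32 val)
	{
		krait_write_pmresrn(n, val);
		isb();
	}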

    > +static void krait_pre_vpmresr0(u32 *venum_orig_val, u32 *fp_orig_val)
    > +{
    > +	u32 venum_new_val;
    > +	u32 fp_new_val;
    > +
    > +	/* CPACR Enable CP10 and CP11 access */
    > +	*venum_orig_val = get_copro_access();
    > +	venum_new_val = *venum_orig_val | CPACC_SVC(10) | CPACC_SVC(11);
    > +	set_copro_access(venum_new_val);
    > +
    > +	/* Enable FPEXC */
    > +	*fp_orig_val = fmrx(FPEXC);
    > +	fp_new_val = *fp_orig_val | FPEXC_EN;
    > +	fmxr(FPEXC, fp_new_val);

    Messing around with the VFP/Neon state like this (especially with
    kernel-mode neon now in mainline) does scare me. I'd like to see a
    BUG_ON(preemptible()) in here, and you could consider using
    kernel_neon_{begin,end}, but they're a lot heavier than you need (due to
    non-lazy switching).
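
    Roughly what I have in mind (just a sketch on top of your
    krait_pre_vpmresr0(); the comment wording is mine):

	static void krait_pre_vpmresr0(u32 *venum_orig_val, u32 *fp_orig_val)
	{
		u32 venum_new_val;
		u32 fp_new_val;

		/*
		 * Sketch: we're about to override CPACR and FPEXC, so make
		 * sure we can't be preempted (and migrated) while the
		 * temporary values are live.
		 */
		BUG_ON(preemptible());

		/* CPACR Enable CP10 and CP11 access */
		*venum_orig_val = get_copro_access();
		venum_new_val = *venum_orig_val | CPACC_SVC(10) | CPACC_SVC(11);
		set_copro_access(venum_new_val);

		/* Enable FPEXC */
		*fp_orig_val = fmrx(FPEXC);
		fp_new_val = *fp_orig_val | FPEXC_EN;
		fmxr(FPEXC, fp_new_val);
	}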

    Finally, I'd really like to see this get some test coverage, but I don't
    want to try running mainline on my phone :) Could you give your patches a
    spin with Vince's perf fuzzer please?

    https://github.com/deater/perf_event_tests.git

    (then build the contents of the fuzzer directory and run it for as long as
    you can).

    Cheers,

    Will

