Subject: Re: [RFC 2/3] perf/x86: Control RDPMC access from .enable() hook
On Thu, Aug 12, 2021 at 11:50 AM Andy Lutomirski <luto@kernel.org> wrote:
>
> On 7/28/21 4:02 PM, Rob Herring wrote:
> > Rather than controlling RDPMC access behind the scenes from switch_mm(),
> > move RDPMC access controls to the PMU .enable() hook. The .enable() hook
> > is called whenever the perf CPU or task context changes which is when
> > the RDPMC access may need to change.
> >
> > This is the first step in moving the RDPMC state tracking out of the mm
> > context to the perf context.
>
> Is this series supposed to be a user-visible change or not? I'm confused.

It should not be user-visible, or at least not in any way a user would
notice. If an event is not part of the perf context on another thread
sharing the mm, does that thread need rdpmc access? Removing that
access would be a user-visible change, but I struggle to see how that's
a useful scenario.
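
To make that concrete, here's a minimal userspace sketch (a
hypothetical test written for illustration, not something from the
series). The main thread maps a self-monitoring event; a sibling
thread shares the mm but has no event in its own context and executes
RDPMC anyway. Whether the sibling's RDPMC faults is exactly the
difference between per-mm and per-context access:

#include <linux/perf_event.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <pthread.h>
#include <stdio.h>

static struct perf_event_mmap_page *pc;

static void *sibling_thread(void *arg)
{
	/*
	 * Same mm, but no event in this thread's perf context.  The value
	 * read is meaningless here; the only question is whether RDPMC
	 * executes (CR4.PCE set) or takes #GP -> SIGSEGV.
	 */
	unsigned long long v = __builtin_ia32_rdpmc(0);

	printf("sibling rdpmc did not fault: %llu\n", v);
	return NULL;
}

int main(void)
{
	struct perf_event_attr attr = {
		.type = PERF_TYPE_HARDWARE,
		.size = sizeof(attr),
		.config = PERF_COUNT_HW_INSTRUCTIONS,
		.exclude_kernel = 1,
	};
	pthread_t t;
	int fd;

	/* Self-monitoring event bound to the main thread only. */
	fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
	pc = mmap(NULL, sysconf(_SC_PAGESIZE), PROT_READ, MAP_SHARED, fd, 0);
	if (fd < 0 || pc == MAP_FAILED || !pc->cap_user_rdpmc)
		return 1;

	if (pc->index)	/* counter is live for *this* thread */
		printf("main rdpmc: %llu\n",
		       (unsigned long long)__builtin_ia32_rdpmc(pc->index - 1));

	pthread_create(&t, NULL, sibling_thread, NULL);
	pthread_join(t, NULL);
	return 0;
}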

> If you intend to have an entire mm have access to RDPMC if an event is
> mapped, then surely access needs to be context switched for the whole
> mm. If you intend to only have the thread to which the event is bound
> have access, then the only reason I see to use IPIs is to revoke access
> on munmap from the wrong thread. But even that latter case could be
> handled with a more targeted approach, not a broadcast to all of mm_cpumask.

Right, that's what patch 3 does. When we mmap/munmap an event, the perf
core invokes the callback only on the active contexts in which the
event resides.
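
Roughly the idea is something like this (sketch only; the helper names
below are made up and the actual patch 3 plumbing may differ). Instead
of the on_each_cpu_mask() over mm_cpumask(), the mmap path pokes only
the CPU where the event is currently scheduled:

struct mapped_ipi_arg {
	struct perf_event *event;
	struct mm_struct *mm;
};

static void __perf_event_mapped_cpu(void *info)
{
	struct mapped_ipi_arg *arg = info;

	arg->event->pmu->event_mapped(arg->event, arg->mm);
}

static void perf_event_mapped(struct perf_event *event, struct mm_struct *mm)
{
	struct mapped_ipi_arg arg = { .event = event, .mm = mm };
	int cpu = READ_ONCE(event->oncpu);

	if (!event->pmu->event_mapped)
		return;

	/*
	 * Only poke the CPU where the event is currently scheduled; if it
	 * isn't running anywhere, the PMU .enable() path sets up RDPMC
	 * access the next time the context is scheduled in.
	 */
	if (cpu >= 0)
		smp_call_function_single(cpu, __perf_event_mapped_cpu, &arg, 1);
}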

> Can you clarify what the overall intent is and what this particular
> patch is trying to do?
>
> >
> > if (atomic_dec_and_test(&mm->context.perf_rdpmc_allowed))
> > - on_each_cpu_mask(mm_cpumask(mm), cr4_update_pce, NULL, 1);
> > + on_each_cpu_mask(mm_cpumask(mm), x86_pmu_set_user_access_ipi, NULL, 1);
>
> Here you do something for the whole mm.

It's just a rename of the callback though.

>
> > - on_each_cpu(cr4_update_pce, NULL, 1);
> > + on_each_cpu(x86_pmu_set_user_access_ipi, NULL, 1);
>
> Here too.

It's not just the whole mm here, as changing the sysfs rdpmc permission
is global state.

> > void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
> > struct task_struct *tsk)
> > {
> > @@ -581,10 +556,8 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
> > this_cpu_write(cpu_tlbstate.loaded_mm, next);
> > this_cpu_write(cpu_tlbstate.loaded_mm_asid, new_asid);
> >
> > - if (next != real_prev) {
> > - cr4_update_pce_mm(next);
> > + if (next != real_prev)
> > switch_ldt(real_prev, next);
> > - }
> > }
>
> But if you remove this, then what handles context switching?

perf does. On a context switch, perf is going to context switch the
events and set the access based on whether an event in the context is
mmapped. Though I guess if rdpmc needs to be enabled without any events
opened, this is not going to work. So maybe I need to keep the
rdpmc_always_available_key and rdpmc_never_available_key cases here.
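
i.e. something along these lines, based on the existing
cr4_update_pce_mm() (a sketch, not a tested patch):

static inline void cr4_update_pce_mm(struct mm_struct *mm)
{
	/* Global sysfs policies still handled at context switch... */
	if (static_branch_unlikely(&rdpmc_always_available_key))
		cr4_set_bits_irqsoff(X86_CR4_PCE);
	else if (static_branch_unlikely(&rdpmc_never_available_key))
		cr4_clear_bits_irqsoff(X86_CR4_PCE);
	/* ...while the per-event case is left to the PMU .enable() path. */
}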

The always available case is something we specifically don't want to
support for arm64. I'm trying to start with access more locked down,
rather than trying to lock it down after the fact as x86 is doing.

Rob
