Subject: Re: [PATCH 1/1] perf: Add CPU hotplug support for events
From: Raghavendra Rao Ananta <rananta@codeaurora.org>
Date: 2018-02-16


On 02/16/2018 12:21 AM, Peter Zijlstra wrote:
> On Thu, Feb 15, 2018 at 03:01:41PM -0800, Raghavendra Rao Ananta wrote:
>> The perf framework doesn't allow preserving CPU events across
>> CPU hotplugs. The events are scheduled out as and when the
>> CPU goes offline. Moreover, the framework also doesn't
>> allow clients to create events on an offline CPU. As
>> a result, clients have to keep monitoring the CPU
>> state until it comes back online.
>>
>> Therefore, introduce support in the perf framework for
>> creating and preserving (CPU) events across offline CPUs.
>> With this, the CPU's online state becomes transparent to the
>> client, and it does not have to worry about monitoring the
>> CPU's state. Success is returned to the client even when
>> creating an event on an offline CPU. If the CPU goes offline
>> during the lifetime of the event, the event is preserved
>> and continues to count as soon as (and if) the CPU comes
>> back online.
>>
>> Signed-off-by: Raghavendra Rao Ananta <rananta@codeaurora.org>
>> ---
>> include/linux/perf_event.h |   7 +++
>> kernel/events/core.c       | 123 +++++++++++++++++++++++++++++++++------------
>> 2 files changed, 97 insertions(+), 33 deletions(-)
>>
>> diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
>> index 7546822..bc07f16 100644
>> --- a/include/linux/perf_event.h
>> +++ b/include/linux/perf_event.h
>> @@ -489,6 +489,7 @@ struct perf_addr_filters_head {
>>   * enum perf_event_state - the states of a event
>>   */
>>  enum perf_event_state {
>> +        PERF_EVENT_STATE_DORMANT        = -5,
>>          PERF_EVENT_STATE_DEAD           = -4,
>>          PERF_EVENT_STATE_EXIT           = -3,
>>          PERF_EVENT_STATE_ERROR          = -2,
>> @@ -687,6 +688,12 @@ struct perf_event {
>> #endif
>>
>>          struct list_head                sb_list;
>> +
>> +        /* Entry into the list that holds the events whose CPUs
>> +         * are offline. These events will be removed from the
>> +         * list and installed once the CPU wakes up.
>> +         */
>> +        struct list_head                dormant_entry;
>
> No, this is absolutely disgusting. You can simply keep the events in the
> dead CPU's context. It's really not that hard.
Keeping the events in the dead CPU's context was also an idea that we
had. However, detaching such an event from the PMU while the CPU is
offline would be a pain. Consider the scenario in which an event is
about to be destroyed while its CPU is offline (yet the event is still
attached to that CPU). During its destruction, a cross-CPU call is made
(from perf_remove_from_context()) to the offlined CPU to detach the
event from the CPU's PMU. As the CPU is offline, that call is not
possible, and separate logic would again have to be written to clean
up events whose CPUs have been offlined. Hence, I thought a separate
dormant list would be a cleaner way to maintain these events.
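To illustrate the idea, here is a rough sketch, not the patch itself:
the two helper names below are illustrative only, while
perf_install_in_context(), PERF_EVENT_STATE_DORMANT and dormant_entry
are from the patch.

/*
 * Sketch: park events whose CPU is offline on a global list, and
 * install them from the hotplug-online path.
 */
static LIST_HEAD(dormant_event_list);
static DEFINE_SPINLOCK(dormant_event_list_lock);

/* Instead of failing event creation on an offline CPU, park the event. */
static void perf_prepare_install_in_context(struct perf_event *event)
{
        spin_lock(&dormant_event_list_lock);
        event->state = PERF_EVENT_STATE_DORMANT;
        list_add_tail(&event->dormant_entry, &dormant_event_list);
        spin_unlock(&dormant_event_list_lock);
}

/* From the CPU hotplug online callback: install the parked events. */
static void perf_deferred_install_in_context(int cpu)
{
        struct perf_event *event, *tmp;
        LIST_HEAD(todo);

        /* Collect this CPU's dormant events under the lock... */
        spin_lock(&dormant_event_list_lock);
        list_for_each_entry_safe(event, tmp, &dormant_event_list,
                                 dormant_entry)
                if (event->cpu == cpu)
                        list_move_tail(&event->dormant_entry, &todo);
        spin_unlock(&dormant_event_list_lock);

        /* ...then install them without holding the spinlock. */
        list_for_each_entry_safe(event, tmp, &todo, dormant_entry) {
                list_del_init(&event->dormant_entry);
                mutex_lock(&event->ctx->mutex);
                event->state = PERF_EVENT_STATE_INACTIVE;
                perf_install_in_context(event->ctx, event, event->cpu);
                mutex_unlock(&event->ctx->mutex);
        }
}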
>
> Also, you _still_ don't explain why you care about dead CPUs.
>

It's not just about dead CPUs; it's the fact that CPUs can go offline
and come back online. The embedded world, specifically Android mobile
SoCs, relies on CPU hotplug to manage power and thermal constraints,
and these hotplugs can happen at a very rapid pace. At the same time,
that management relies on many perf event counters. Therefore, there
is a need to preserve these events across hotplugs.
In such a scenario, a perf client (kernel or user space) can create
events even while the CPU is offline. If the CPU comes online during
the lifetime of the event, the registered event starts counting
spontaneously. As an extension of this, the event's count is also
preserved across CPU hotplugs. This takes the burden of monitoring
the CPU's state off the clients.
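For example (a rough user-space sketch, not part of the patch): a
client could open a CPU-bound cycle counter with perf_event_open()
regardless of the CPU's hotplug state. With this change the open would
succeed even while the CPU is offline, and counting would begin once
the CPU comes back online; today such an open simply fails.

/*
 * Hypothetical usage sketch. perf_event_open() is the existing
 * syscall; only the proposed offline-CPU behavior is new.
 */
#include <linux/perf_event.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

static int open_cycles_on_cpu(int cpu)
{
        struct perf_event_attr attr;

        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.type = PERF_TYPE_HARDWARE;
        attr.config = PERF_COUNT_HW_CPU_CYCLES;

        /* pid == -1, cpu >= 0: count all tasks on that CPU. */
        return syscall(__NR_perf_event_open, &attr, -1, cpu, -1, 0);
}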

-- Raghavendra

--
Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project
