Subject: Re: [RFC 1/2] perf/core: Enable sched_task callbacks if PMU has it
From: Liang, Kan <kan.liang@linux.intel.com>
Date: Thu, 5 Nov 2020


On 11/5/2020 10:45 AM, Namhyung Kim wrote:
> Hello,
>
> On Thu, Nov 5, 2020 at 11:47 PM Liang, Kan <kan.liang@linux.intel.com> wrote:
>>
>>
>>
>> On 11/2/2020 9:52 AM, Namhyung Kim wrote:
>>> If an event is associated with a PMU which has a sched_task callback,
>>> it should be called regardless of cpu/task context. For example,
>>
>>
>> I don't think it's necessary. We should call it when we have to.
>> Otherwise, it just wastes cycles.
>> Shouldn't patch 2 be enough?
>
> I'm not sure, without this patch __perf_event_task_sched_in/out
> cannot be called for per-cpu events (w/o cgroups) IMHO.
> And I could not find any other place to check the
> perf_sched_cb_usages.
>

Yes, it should be a bug for large PEBS, and it should have always been
there since large PEBS was introduced. I just tried some older kernels
(before the recent change). Large PEBS is not flushed with per-cpu events.

But from your description, it looks like the issue was only found after
the recent change. Could you please double-check whether the issue can
also be reproduced before the recent change?


>>> diff --git a/kernel/events/core.c b/kernel/events/core.c
>>> index b458ed3dc81b..aaa0155c4142 100644
>>> --- a/kernel/events/core.c
>>> +++ b/kernel/events/core.c
>>> @@ -4696,6 +4696,8 @@ static void unaccount_event(struct perf_event *event)
>>>                  dec = true;
>>>          if (has_branch_stack(event))
>>>                  dec = true;
>>> +        if (event->pmu->sched_task)
>>> +                dec = true;

I think checking for a sched_task callback is too big a hammer.
__perf_event_task_sched_in/out would be invoked even for non-PEBS per-cpu
events, which is not necessary.

Maybe we can introduce a flag to indicate this case. How about the patch
below?

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index c79748f6921d..953a4bb98330 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -3565,8 +3565,10 @@ static int intel_pmu_hw_config(struct perf_event *event)
                 if (!(event->attr.freq || (event->attr.wakeup_events && !event->attr.watermark))) {
                         event->hw.flags |= PERF_X86_EVENT_AUTO_RELOAD;
                         if (!(event->attr.sample_type &
-                              ~intel_pmu_large_pebs_flags(event)))
+                              ~intel_pmu_large_pebs_flags(event))) {
                                 event->hw.flags |= PERF_X86_EVENT_LARGE_PEBS;
+                                event->attach_state |= PERF_ATTACH_SCHED_DATA;
+                        }
                 }
                 if (x86_pmu.pebs_aliases)
                         x86_pmu.pebs_aliases(event);
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 0defb526cd0c..3eef7142aa11 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -606,6 +606,7 @@ struct swevent_hlist {
 #define PERF_ATTACH_TASK        0x04
 #define PERF_ATTACH_TASK_DATA   0x08
 #define PERF_ATTACH_ITRACE      0x10
+#define PERF_ATTACH_SCHED_DATA  0x20
 
 struct perf_cgroup;
 struct perf_buffer;
diff --git a/kernel/events/core.c b/kernel/events/core.c
index dba4ea4e648b..a38133b5543a 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -4675,7 +4675,7 @@ static void unaccount_event(struct perf_event *event)
         if (event->parent)
                 return;
 
-        if (event->attach_state & PERF_ATTACH_TASK)
+        if (event->attach_state & (PERF_ATTACH_TASK | PERF_ATTACH_SCHED_DATA))
                 dec = true;
         if (event->attr.mmap || event->attr.mmap_data)
                 atomic_dec(&nr_mmap_events);
@@ -11204,7 +11204,7 @@ static void account_event(struct perf_event *event)
         if (event->parent)
                 return;
 
-        if (event->attach_state & PERF_ATTACH_TASK)
+        if (event->attach_state & (PERF_ATTACH_TASK | PERF_ATTACH_SCHED_DATA))
                 inc = true;
         if (event->attr.mmap || event->attr.mmap_data)
                 atomic_inc(&nr_mmap_events);
Thanks,
Kan
