Subject: [PATCH 5/5] arm64: perf: Add cap_user_time_short

This completes the ARM64 cap_user_time support.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
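A minimal sketch of how a userspace reader is expected to consume the
new fields, assuming the perf_event_mmap_page layout from this series
(cap_user_time_short, time_cycle, time_mask); the helper name is made
up, and the mandatory pc->lock seqcount retry loop around the reads is
omitted for brevity:

#include <linux/types.h>
#include <linux/perf_event.h>

/* Convert a raw (possibly narrower than 64bit) cycle value to ns. */
static __u64 mmap_cyc_to_ns(const struct perf_event_mmap_page *pc, __u64 cyc)
{
	__u64 quot, rem, delta;

	/* Extend a short counter around the published cycle base. */
	if (pc->cap_user_time_short)
		cyc = pc->time_cycle +
		      ((cyc - pc->time_cycle) & pc->time_mask);

	/*
	 * delta = (cyc * time_mult) >> time_shift, split so the 64bit
	 * multiplication cannot overflow.
	 */
	quot  = cyc >> pc->time_shift;
	rem   = cyc & (((__u64)1 << pc->time_shift) - 1);
	delta = quot * pc->time_mult +
		((rem * pc->time_mult) >> pc->time_shift);

	return pc->time_zero + delta;
}

Readers that do not know about cap_user_time_short can keep computing
time_zero + ((cyc * time_mult) >> time_shift) and still get the right
answer until the short counter wraps, which is what the adjusted
time_zero below preserves.
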
arch/arm64/kernel/perf_event.c | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)

--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -1173,6 +1173,7 @@ void arch_perf_update_userpage(struct pe

userpg->cap_user_time = 0;
userpg->cap_user_time_zero = 0;
+ userpg->cap_user_time_short = 0;

do {
rd = sched_clock_read_begin(&seq);
@@ -1183,13 +1184,13 @@ void arch_perf_update_userpage(struct pe
userpg->time_mult = rd->mult;
userpg->time_shift = rd->shift;
userpg->time_zero = rd->epoch_ns;
+ userpg->time_cycle = rd->epoch_cyc;
+ userpg->time_mask = rd->sched_clock_mask;

/*
- * This isn't strictly correct, the ARM64 counter can be
- * 'short' and then we get funnies when it wraps. The correct
- * thing would be to extend the perf ABI with a cycle and mask
- * value, but because wrapping on ARM64 is very rare in
- * practise this 'works'.
+ * Subtract the cycle base, such that software that
+ * doesn't know about cap_user_time_short still 'works'
+ * assuming no wraps.
*/
userpg->time_zero -= (rd->epoch_cyc * rd->mult) >> rd->shift;

@@ -1214,4 +1215,5 @@ void arch_perf_update_userpage(struct pe
*/
userpg->cap_user_time = 1;
userpg->cap_user_time_zero = 1;
+ userpg->cap_user_time_short = 1;
}
