Subject: Re: [PATCH v6 1/3] x86/fpu: Measure the Latency of XSAVES and XRSTORS
On 02.09.2023 12:09, Andi Kleen wrote:
>> Instead of adding overhead to the regular FPU context saving/restoring code
>> paths, could you add a helper function that has tracing code included, but
>> which isn't otherwise used - and leave the regular code with no tracing
>> overhead?
>>

>> This puts a bit of a long-term maintenance focus on making sure that the
>> traced functionality won't bitrot, but I'd say that's preferable to adding
>> tracing overhead.
>
>Or just use PT
>
>% sudo perf record --kcore -e intel_pt/cyc=1,cyc_thresh=1/k --filter 'filter save_fpregs_to_fpstate' -a sleep 5
>% sudo perf script --insn-trace --xed -F -comm,-tid,-dso,-sym,-symoff,+ipc
>[000] 677203.751913565: ffffffffa7046230 nopw %ax, (%rax)
>[000] 677203.751913565: ffffffffa7046234 nopl %eax, (%rax,%rax,1)
>[000] 677203.751913565: ffffffffa7046239 mov %rdi, %rcx
>[000] 677203.751913565: ffffffffa704623c nopl %eax, (%rax,%rax,1)
>[000] 677203.751913565: ffffffffa7046241 movq 0x10(%rdi), %rsi
>[000] 677203.751913565: ffffffffa7046245 movq 0x8(%rsi), %rax
>[000] 677203.751913565: ffffffffa7046249 leaq 0x40(%rsi), %rdi
>[000] 677203.751913565: ffffffffa704624d mov %rax, %rdx
>[000] 677203.751913565: ffffffffa7046250 shr $0x20, %rdx
>[000] 677203.751913565: ffffffffa7046254 xsaves64 (%rdi)
>[000] 677203.751913565: ffffffffa7046258 xor %edi, %edi
>[000] 677203.751913565: ffffffffa704625a movq 0x10(%rcx), %rax
>[000] 677203.751913565: ffffffffa704625e testb $0xc0, 0x240(%rax)
>[000] 677203.751913636: ffffffffa7046265 jz 0xffffffffa7046285 IPC: 0.16 (14/85)
>...
>
>
>So it took 85 cycles here.
>
>(it includes a few extra instructions, but I bet they're less than what
>ftrace adds. This example is for XSAVE, but can be similarly extended
>for XRSTOR)
>
Hi Andi,
Thank you for your guidance on Intel PT.

I recall that we have discussed this topic via email before.
I have compared two methods of calculating the latency:
1. Measure with perf-intel-pt using a function filter.
2. Compute the TSC delta explicitly in the kernel and emit it through a
single tracepoint, as this patch does (see the sketch after this list).
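
For context, here is a rough sketch of what method 2 boils down to
(illustrative only, not the actual patch; the wrapper and trace event
names, e.g. trace_x86_fpu_latency_xsave, are placeholders I made up):

/*
 * Illustrative sketch of method 2: wrap the XSAVES path with
 * rdtsc_ordered() and report the delta through one trace event.
 * Kernel-internal context (struct fpstate, os_xsave()) is assumed,
 * and the trace event here is hypothetical.
 */
static void os_xsave_measured(struct fpstate *fpstate)
{
	u64 start = 0;

	if (trace_x86_fpu_latency_xsave_enabled())
		start = rdtsc_ordered();

	os_xsave(fpstate);	/* regular XSAVES path, unchanged */

	if (start)
		trace_x86_fpu_latency_xsave(fpstate, rdtsc_ordered() - start);
}

The enabled() check compiles down to a static-key branch, which is why
the overhead is negligible while the trace point is off.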

My findings are:
1. Intel PT is the most accurate method, but it is likely only a
one-off exercise, because filtering on these functions requires
rebuilding the kernel with 'os_xsave' and 'os_xrstor' changed from
'inline' to 'noinline' (see the illustration after this list).
2. I collected latency data with both methods; the results from this
patch are close to those from Intel PT, and it introduces only a
negligible performance impact while the trace point is disabled, as I
explained to Ingo earlier.
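
To illustrate the rebuild mentioned in finding 1 (a generic example,
not the real FPU code): perf's address filter can only match a function
that exists as its own symbol, so an always-inlined helper has to be
made noinline before it can be filtered on.

/* Generic illustration only, not os_xsave()/os_xrstor() themselves. */
static __always_inline void helper_inlined(void)
{
	/* folded into every caller, leaves no symbol to filter on */
}

static noinline void helper_filterable(void)
{
	/* gets its own symbol, so '--filter' and kallsyms can see it */
}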

Hope this clarifies my approach. We are using this patch set to run
tests on Intel's brand-new chipsets.

Thanks
--Sun, Yi
