Subject: Re: [PATCH v10 3/4] x86, mce: Add __mcsafe_copy()
On Wed, Feb 10, 2016 at 11:39:05AM -0800, Luck, Tony wrote:
> On Wed, Feb 10, 2016 at 11:58:43AM +0100, Borislav Petkov wrote:
> > But one could take out that function do some microbenchmarking with
> > different sizes and once with the current version and once with the
> > pushes and pops of r1[2-5] to see where the breakeven is.
>
> On a 4K page copy from a source address that isn't in the
> cache I see all sorts of answers.
>
> On my desktop (i7-3960X) it is ~50 cycles slower to push and pop the four
> registers.
>
> On my latest Xeon - I can't post benchmarks ... but also a bit slower.
>
> On an older Xeon it is a few cycles faster (but even though I'm
> looking at the median of 10,000 runs, I see more run-to-run variation
> than I see difference between register choices).

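(For reference, a rough userspace approximation of the measurement described
above could look like the sketch below: copy one 4K page from a source that
has been flushed out of the cache, repeat 10,000 times, report the median
cycle count. It is only a sketch of the methodology, not the kernel
__mcsafe_copy test itself; the buffer names and structure are mine.)

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
#include <x86intrin.h>

#define PAGE_SIZE	4096
#define RUNS		10000

static char src[PAGE_SIZE], dst[PAGE_SIZE];
static uint64_t cycles[RUNS];

static int cmp_u64(const void *a, const void *b)
{
	uint64_t x = *(const uint64_t *)a, y = *(const uint64_t *)b;

	return (x > y) - (x < y);
}

int main(void)
{
	unsigned int aux;
	uint64_t t0, t1;
	int i, off;

	memset(src, 0x5a, sizeof(src));

	for (i = 0; i < RUNS; i++) {
		/* Push the source page out of the cache. */
		for (off = 0; off < PAGE_SIZE; off += 64)
			_mm_clflush(src + off);
		_mm_mfence();

		t0 = __rdtscp(&aux);
		memcpy(dst, src, PAGE_SIZE);	/* stand-in for __mcsafe_copy() */
		t1 = __rdtscp(&aux);
		cycles[i] = t1 - t0;
	}

	qsort(cycles, RUNS, sizeof(cycles[0]), cmp_u64);
	printf("median: %llu cycles (dst[0]=%d)\n",
	       (unsigned long long)cycles[RUNS / 2], dst[0]);

	return 0;
}
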
Hmm, strange. Can you check whether perf shows any significant
differences too? Something like:

perf stat --repeat 100 --sync --pre 'echo 3 > /proc/sys/vm/drop_caches' -- ./mcsafe_copy_1

and then

perf stat --repeat 100 --sync --pre 'echo 3 > /proc/sys/vm/drop_caches' -- ./mcsafe_copy_2

That'll be interesting...
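
(In case it helps, a minimal shape for those two test binaries could be the
sketch below: each one just performs the 4K copy a large number of times and
leaves the measuring to perf stat. The two builds would differ only in which
copy variant copy_page_variant() wraps; that name is a placeholder of mine,
not something from the patch.)

#include <string.h>

#define PAGE_SIZE	4096
#define ITERATIONS	(1 << 20)

static char src[PAGE_SIZE], dst[PAGE_SIZE];

/* Placeholder: build one binary per copy variant (push/pop vs. scratch regs). */
static void copy_page_variant(void *to, const void *from, size_t len)
{
	memcpy(to, from, len);
}

int main(void)
{
	long i;

	memset(src, 0x5a, sizeof(src));

	for (i = 0; i < ITERATIONS; i++)
		copy_page_variant(dst, src, PAGE_SIZE);

	return dst[0] == 0x5a ? 0 : 1;
}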

Thanks.

--
Regards/Gruss,
Boris.

ECO tip #101: Trim your mails when you reply.
