Date:	Wed, 10 Feb 2016 11:39:05 -0800
From:	"Luck, Tony" <>
Subject:	Re: [PATCH v10 3/4] x86, mce: Add __mcsafe_copy()
On Wed, Feb 10, 2016 at 11:58:43AM +0100, Borislav Petkov wrote:
> But one could take out that function do some microbenchmarking with
> different sizes and once with the current version and once with the
> pushes and pops of r1[2-5] to see where the breakeven is.
On a 4K page copy from a source address that isn't in the cache I see all sorts of answers.
On my desktop (i7-3960X) it is ~50 cycles slower to push and pop the four registers.
On my latest Xeon - I can't post benchmarks ... but also a bit slower.
On an older Xeon it is a few cycles faster (though even looking at the
median of 10,000 runs I see more run-to-run variation than I see
difference between register choices).
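For reference, here is a minimal user-space sketch of that kind of
measurement, assuming x86-64 with GCC/Clang builtins. This is not the
harness used for the numbers above; the memcpy() stand-in, the clflush
loop, and the run count are illustrative. It flushes the source page so
each copy starts from memory, times one 4K copy with RDTSC, and reports
the median of 10,000 runs:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <x86intrin.h>		/* __rdtsc(), _mm_clflush(), _mm_mfence() */

#define RUNS	10000
#define PAGE	4096

static int cmp_u64(const void *a, const void *b)
{
	uint64_t x = *(const uint64_t *)a, y = *(const uint64_t *)b;

	return (x > y) - (x < y);
}

int main(void)
{
	static char src[PAGE], dst[PAGE];
	static uint64_t samples[RUNS];

	for (int i = 0; i < RUNS; i++) {
		/* Flush the source so each run copies from memory, not cache */
		for (int off = 0; off < PAGE; off += 64)
			_mm_clflush(src + off);
		_mm_mfence();

		uint64_t t0 = __rdtsc();
		memcpy(dst, src, PAGE);	/* stand-in for the asm under test */
		/* keep the compiler from eliding the dead stores to dst */
		asm volatile("" : : "r" (dst) : "memory");
		uint64_t t1 = __rdtsc();

		samples[i] = t1 - t0;
	}

	qsort(samples, RUNS, sizeof(samples[0]), cmp_u64);
	printf("median cycles: %llu\n",
	       (unsigned long long)samples[RUNS / 2]);
	return 0;
}

RDTSC here is unserialized, so individual samples are noisy; the median
over many runs is what makes the comparison meaningful, which matches
the methodology described above.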
Here's what I tested:
	push %r12
	push %r13
	push %r14
	push %r15
	/* Loop copying whole cache lines */
1:	movq (%rsi),%r8
2:	movq 1*8(%rsi),%r9
3:	movq 2*8(%rsi),%r10
4:	movq 3*8(%rsi),%r11
9:	movq 4*8(%rsi),%r12
10:	movq 5*8(%rsi),%r13
11:	movq 6*8(%rsi),%r14
12:	movq 7*8(%rsi),%r15
	movq %r8,(%rdi)
	movq %r9,1*8(%rdi)
	movq %r10,2*8(%rdi)
	movq %r11,3*8(%rdi)
	movq %r12,4*8(%rdi)
	movq %r13,5*8(%rdi)
	movq %r14,6*8(%rdi)
	movq %r15,7*8(%rdi)
	leaq 64(%rsi),%rsi
	leaq 64(%rdi),%rdi
	decl %ecx
	jnz 1b
	pop %r15
	pop %r14
	pop %r13
	pop %r12

-Tony
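[For readers less fluent in AT&T asm, a rough C paraphrase of the loop
above (my paraphrase, not from the patch): each iteration copies one
64-byte cache line through eight 64-bit temporaries, which is why the
extra r12-r15 saves and restores are needed at all. In the full
__mcsafe_copy() patch the numbered labels on the loads are, presumably,
the exception-table fixup points for machine check recovery, which
would be why they are kept even in this stripped-down test.

#include <stdint.h>

static void copy_lines(uint64_t *dst, const uint64_t *src,
		       unsigned int lines)
{
	while (lines--) {
		/* eight loads: one whole cache line into temporaries */
		uint64_t a = src[0], b = src[1], c = src[2], d = src[3];
		uint64_t e = src[4], f = src[5], g = src[6], h = src[7];

		/* eight stores back out */
		dst[0] = a; dst[1] = b; dst[2] = c; dst[3] = d;
		dst[4] = e; dst[5] = f; dst[6] = g; dst[7] = h;

		src += 8;	/* leaq 64(%rsi),%rsi */
		dst += 8;	/* leaq 64(%rdi),%rdi */
	}
}
]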