Subject: Re: [PATCH v2 1/3] x86/entry: Clear extra registers beyond syscall arguments for 64bit kernels
On Mon, Feb 5, 2018 at 9:58 PM, Linus Torvalds
<torvalds@linux-foundation.org> wrote:
> On Mon, Feb 5, 2018 at 1:33 PM, Dan Williams <dan.j.williams@intel.com> wrote:
>>
>> On a suggestion from Arjan it also appears worthwhile to interleave
>> 'mov' with 'xor'. Perf stat says that this test gets 3.45 instructions
>> per cycle:
>
> Ugh.
>
> A "xor %reg/reg" is two bytes (three for the high regs due to REX
> prefix). A "mov $0" is 7 bytes because unlike most of the ALU ops,
> "mov" doesn't have a 8-bit expanding immediate.
>
> So replacing those xors with movq's will add at least four bytes per
> replacement. So you may well end up adding an L1 cache miss.
>
> At which point "3.45 ipc" vs "2.88 ipc" is pretty much a non-issue.
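(For reference, a quick sketch of the sizes being compared -- byte counts
are from the standard x86-64 encodings, and the particular registers are
just illustrative:

xorl  %ebx, %ebx        # 31 db                   2 bytes
xorl  %r11d, %r11d      # 45 31 db                3 bytes (REX for r11)
movl  $0, %ebx          # bb 00 00 00 00          5 bytes (32-bit immediate)
movq  $0, %r11          # 49 c7 c3 00 00 00 00    7 bytes (REX.W + 32-bit immediate)

so swapping a 3-byte xor on a high reg for a 7-byte movq $0 is indeed 4
extra bytes per register.)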
>
> I suspect that a bigger win would be if you try to interleave those
> "xor" instructions with the "pushq" instructions in the entry code.
> Because those push instructions tend to be limited by the LSU store
> bandwidth, so you can probably put in xor instructions almost for free
> in there.
>
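Something along these lines, presumably -- a rough sketch of the
interleaving idea, with the register choice and ordering purely
illustrative rather than the actual entry-code layout:

pushq %r12              # save the caller's value into pt_regs (uses the store port)
xorl  %r12d, %r12d      # then clear it -- zeroing idiom, no store needed
pushq %r13
xorl  %r13d, %r13d
pushq %r14
xorl  %r14d, %r14d

i.e. each xor slots in next to a push that was going to happen anyway, so
the clearing rides along with the stores more or less for free.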

At the risk of over-optimizing a dead horse, what about:

xorl %ebx, %ebx         # zero %rbx via the 32-bit zeroing idiom
movq %rbx, %r10         # copy the zero into %r10
xorl %r11d, %r11d       # zero %r11
movq %rbx, %r12         # copy the zero into %r12

etc.

We'll have a cycle of latency from xor to mov, but I'd be rather
surprised if the CPU can't hide that.
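(Size-wise it should come out about the same as using xor everywhere, if I
have the encodings right -- a reg-to-reg movq is 3 bytes with the REX
prefix, the same as an xor of one of the high registers:

xorl  %ebx, %ebx        # 31 db       2 bytes
movq  %rbx, %r10        # 49 89 da    3 bytes
xorl  %r11d, %r11d      # 45 31 db    3 bytes
movq  %rbx, %r12        # 49 89 dc    3 bytes

so it shouldn't lose anything on the code-size front.)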
