Subject: Re: [PATCH 3/5] x86/kvm: Convert some slow-path static_cpu_has() callers to boot_cpu_has()
On Sun, Mar 31, 2019 at 04:20:11PM +0200, Paolo Bonzini wrote:
> These are not slow path.

Those functions do a *lot* of stuff, like a bunch of MSR reads, each of
which costs tens of cycles at least.
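
To illustrate the cost ratio, here's a made-up sketch (not code from the
patch; the function name and MSR choice are only an example):

#include <asm/cpufeature.h>	/* boot_cpu_has(), X86_FEATURE_XSAVES */
#include <asm/msr.h>		/* rdmsrl(), MSR_IA32_XSS */

/* Hypothetical slow-path helper, for illustration only. */
static u64 example_read_xss(void)
{
	u64 xss = 0;

	/* The feature test is a MOV+BT, a couple of cycles ... */
	if (boot_cpu_has(X86_FEATURE_XSAVES))
		/* ... while the MSR read alone is tens of cycles. */
		rdmsrl(MSR_IA32_XSS, xss);

	return xss;
}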

I don't think a RIP-relative MOV and a BT:

movq boot_cpu_data+20(%rip), %rax # MEM[(const long unsigned int *)&boot_cpu_data + 20B], _45
btq $59, %rax #, _45

are at all noticeable.

On the latest AMD and Intel uarchs those are 2-4 cycles, according to

https://agner.org/optimize/instruction_tables.ods
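
And the conversion itself is purely mechanical. Illustrative only, not a
hunk from the patch (the feature bit and function names are made up), the
two forms side by side:

#include <asm/cpufeature.h>

static bool has_xsaves_patched(void)
{
	/* Patched at boot via alternatives; pays off only on hot paths. */
	return static_cpu_has(X86_FEATURE_XSAVES);
}

static bool has_xsaves_plain(void)
{
	/* Plain test of boot_cpu_data, i.e. the MOV+BT quoted above. */
	return boot_cpu_has(X86_FEATURE_XSAVES);
}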

--
Regards/Gruss,
Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.
