Subject: Re: [PATCH v3] x86/hyper-v: micro-optimize send_ipi_one case
On Sun, Oct 27, 2019 at 04:19:38PM +0100, Vitaly Kuznetsov wrote:
> When sending an IPI to a single CPU there is no need to deal with cpumasks.
> With a 2-CPU guest on WS2019 I'm seeing a minor (~3%, 8043 -> 7761 CPU
> cycles) improvement with the smp_call_function_single() loop benchmark. The
> optimization, however, is tiny and straightforward. Also, send_ipi_one() is
> important for the PV spinlock kick.
>
> I was also wondering if it would make sense to switch to using the regular
> APIC IPI send for the CPU > 64 case, but no, it is twice as expensive
> (12650 CPU cycles for an __send_ipi_mask_ex() call vs. 26000 for
> orig_apic.send_IPI(cpu, vector)).
>
> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
> ---
> Changes since v2:
> - Check VP number instead of CPU number against >= 64 [Michael]
> - Check for VP_INVAL
> ---
> arch/x86/hyperv/hv_apic.c | 16 +++++++++++++---
> arch/x86/include/asm/trace/hyperv.h | 15 +++++++++++++++
> 2 files changed, 28 insertions(+), 3 deletions(-)

Reviewed-by: Roman Kagan <rkagan@virtuozzo.com>

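For reference (the diff itself isn't quoted above), here is a minimal sketch of
what the optimized single-CPU path described in the changelog might look like.
It assumes the existing hv_apic.c helpers (hv_cpu_number_to_vp_number(),
hv_do_fast_hypercall16(), __send_ipi_mask_ex()) and an assumed trace point
name; treat it as an illustration of the approach, not the patch contents:

/*
 * Sketch only: send an IPI to a single CPU without building a cpumask.
 * Falls back to __send_ipi_mask_ex() when the target VP number doesn't
 * fit the 64-bit fast-hypercall mask, and bails out on VP_INVAL.
 * The trace point name is an assumption based on the diffstat.
 */
static bool __send_ipi_one(int cpu, int vector)
{
	int vp = hv_cpu_number_to_vp_number(cpu);

	trace_hyperv_send_ipi_one(cpu, vector);

	if (!hv_hypercall_pg || (vp == VP_INVAL))
		return false;

	if ((vector < HV_IPI_LOW_VECTOR) || (vector > HV_IPI_HIGH_VECTOR))
		return false;

	/* The fast hypercall can only address VPs 0..63 in a single mask. */
	if (vp >= 64)
		return __send_ipi_mask_ex(cpumask_of(cpu), vector);

	/* HV_STATUS_SUCCESS is 0, so !status means the hypercall succeeded. */
	return !hv_do_fast_hypercall16(HVCALL_SEND_IPI, vector, BIT_ULL(vp));
}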