Date: 2022-10-19
Subject: Re: [PATCH v4 5/5] x86/gsseg: use the LKGS instruction if available for load_gs_index()
From: Juergen Gross
On 19.10.22 11:50, Xin Li wrote:
> From: "H. Peter Anvin (Intel)" <hpa@zytor.com>
>
> The LKGS instruction atomically loads a segment descriptor into the
> %gs descriptor registers, *except* that %gs.base is unchanged, and the
> base is instead loaded into MSR_IA32_KERNEL_GS_BASE, which is exactly
> what we want this function to do.
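
For anyone reading along who has not looked at LKGS yet: a rough, simplified
sketch of what the fallback path has to do today without it (illustrative
only; it ignores the bad-selector fixup that the real asm_load_gs_index()
carries; local_irq_save/restore() and native_swapgs() are the kernel's
existing helpers):

static void load_gs_index_no_lkgs(u16 selector)
{
	unsigned long flags;

	local_irq_save(flags);	/* an IRQ here would run on the wrong GS base */
	native_swapgs();	/* %gs.base <-> MSR_IA32_KERNEL_GS_BASE */
	asm volatile("movl %0, %%gs" : : "r" ((unsigned int)selector));
	native_swapgs();	/* swap the bases back */
	local_irq_restore(flags);
}

LKGS collapses that into a single instruction: selector and descriptor are
loaded into %gs, the new base goes straight to MSR_IA32_KERNEL_GS_BASE, and
the live %gs.base is left untouched.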
>
> Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> Signed-off-by: Brian Gerst <brgerst@gmail.com>
> Signed-off-by: Xin Li <xin3.li@intel.com>
> ---
>
> Changes since v3:
> * We want less ASM not more, thus keep local_irq_save/restore() inside
> native_load_gs_index() (Thomas Gleixner).
> * For paravirt enabled kernels, initialize pv_ops.cpu.load_gs_index to
> native_lkgs (Thomas Gleixner).
>
> Changes since V2:
> * Mark DI as input and output (+D) as in V1, since the exception handler
> modifies it (Brian Gerst).
>
> Changes since V1:
> * Use EX_TYPE_ZERO_REG instead of fixup code in the obsolete .fixup code
> section (Peter Zijlstra).
> * Add a comment that states the LKGS_DI macro will be replaced with "lkgs %di"
> once binutils supports the LKGS instruction (Peter Zijlstra).
> ---
> arch/x86/include/asm/gsseg.h | 33 +++++++++++++++++++++++++++++----
> arch/x86/kernel/cpu/common.c | 1 +
> 2 files changed, 30 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/include/asm/gsseg.h b/arch/x86/include/asm/gsseg.h
> index d15577c39e8d..ab6a595cea70 100644
> --- a/arch/x86/include/asm/gsseg.h
> +++ b/arch/x86/include/asm/gsseg.h
> @@ -14,17 +14,42 @@
>
> extern asmlinkage void asm_load_gs_index(u16 selector);
>
> +/* Replace with "lkgs %di" once binutils support LKGS instruction */
> +#define LKGS_DI _ASM_BYTES(0xf2,0x0f,0x00,0xf7)
> +
> +static inline void native_lkgs(unsigned int selector)
> +{
> + u16 sel = selector;
> + asm_inline volatile("1: " LKGS_DI
> + _ASM_EXTABLE_TYPE_REG(1b, 1b, EX_TYPE_ZERO_REG, %k[sel])
> + : [sel] "+D" (sel));
> +}
> +
> static inline void native_load_gs_index(unsigned int selector)
> {
> - unsigned long flags;
> + if (cpu_feature_enabled(X86_FEATURE_LKGS)) {
> + native_lkgs(selector);
> + } else {
> + unsigned long flags;
>
> - local_irq_save(flags);
> - asm_load_gs_index(selector);
> - local_irq_restore(flags);
> + local_irq_save(flags);
> + asm_load_gs_index(selector);
> + local_irq_restore(flags);
> + }
> }
>
> #endif /* CONFIG_X86_64 */
>
> +static inline void __init lkgs_init(void)
> +{
> +#ifdef CONFIG_PARAVIRT_XXL
> +#ifdef CONFIG_X86_64
> + if (cpu_feature_enabled(X86_FEATURE_LKGS))
> + pv_ops.cpu.load_gs_index = native_lkgs;

For this to work correctly when running as a Xen PV guest, you need to add

setup_clear_cpu_cap(X86_FEATURE_LKGS);

to xen_init_capabilities() in arch/x86/xen/enlighten_pv.c, as otherwise the
Xen-specific .load_gs_index vector will be overwritten.
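
I.e. something along these lines (a minimal sketch; the existing body of
xen_init_capabilities() is elided and only the added setup_clear_cpu_cap()
line is the actual suggestion):

static void __init xen_init_capabilities(void)
{
	/* ... existing setup_force_cpu_cap()/setup_clear_cpu_cap() calls ... */

	/*
	 * Xen PV installs its own pv_ops.cpu.load_gs_index; clearing the
	 * feature bit keeps lkgs_init() from replacing it with native_lkgs.
	 */
	setup_clear_cpu_cap(X86_FEATURE_LKGS);
}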


Juergen