Date:    Wed, 16 Nov 2022 12:06:54 +0100
Subject: Re: [PATCH v2] x86/paravirt: use common macro for creating simple asm paravirt functions
From:    Juergen Gross <>
On 16.11.22 12:04, Peter Zijlstra wrote:
> On Wed, Nov 09, 2022 at 02:44:18PM +0100, Juergen Gross wrote:
>> There are some paravirt assembler functions which are sharing a common
>> pattern. Introduce a macro DEFINE_PARAVIRT_ASM() for creating them.
>>
>> Note that this macro is including explicit alignment of the generated
>> functions, leading to __raw_callee_save___kvm_vcpu_is_preempted(),
>> _paravirt_nop() and paravirt_ret0() to be aligned at 4 byte boundaries
>> now.
>>
>> The explicit _paravirt_nop() prototype in paravirt.c isn't needed, as
>> it is included in paravirt_types.h already.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> Reviewed-by: Srivatsa S. Bhat (VMware) <srivatsa@csail.mit.edu>
>> ---
>
> Seems nice; I've made the below little edits, but this is certainly a
> bit large for /urgent at this point in time. So how about I merge
> locking/urgent into x86/paravirt and munge this on top?
Fine with me.
Thanks for looking at the patch,
Juergen
>
> ---
> --- a/arch/x86/include/asm/paravirt.h
> +++ b/arch/x86/include/asm/paravirt.h
> @@ -737,7 +737,7 @@ static __always_inline unsigned long arc
>  	__ALIGN_STR "\n" \
>  	#func ":\n\t" \
>  	ASM_ENDBR \
> -	instr \
> +	instr "\n\t" \
>  	ASM_RET \
>  	".size " #func ", . - " #func "\n\t" \
>  	".popsection")
> --- a/arch/x86/include/asm/qspinlock_paravirt.h
> +++ b/arch/x86/include/asm/qspinlock_paravirt.h
> @@ -54,8 +54,8 @@ __PV_CALLEE_SAVE_REGS_THUNK(__pv_queued_
>  	"pop %rdx\n\t" \
>  	FRAME_END
> 
> -DEFINE_PARAVIRT_ASM(__raw_callee_save___pv_queued_spin_unlock, PV_UNLOCK_ASM,
> -		    .spinlock.text);
> +DEFINE_PARAVIRT_ASM(__raw_callee_save___pv_queued_spin_unlock,
> +		    PV_UNLOCK_ASM, .spinlock.text);
> 
>  #else /* CONFIG_64BIT */
> 
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -802,6 +802,7 @@ extern bool __raw_callee_save___kvm_vcpu
>  	"movq __per_cpu_offset(,%rdi,8), %rax\n\t" \
>  	"cmpb $0, " __stringify(KVM_STEAL_TIME_preempted) "+steal_time(%rax)\n\t" \
>  	"setne %al\n\t"
> +
>  DEFINE_PARAVIRT_ASM(__raw_callee_save___kvm_vcpu_is_preempted,
>  		    PV_VCPU_PREEMPTED_ASM, .text);
>  #endif
> --- a/arch/x86/kernel/paravirt.c
> +++ b/arch/x86/kernel/paravirt.c
> @@ -40,8 +40,7 @@
>  DEFINE_PARAVIRT_ASM(_paravirt_nop, "", .entry.text);
> 
>  /* stub always returning 0. */
> -#define PV_RET0_ASM "xor %" _ASM_AX ", %" _ASM_AX "\n\t"
> -DEFINE_PARAVIRT_ASM(paravirt_ret0, PV_RET0_ASM, .entry.text);
> +DEFINE_PARAVIRT_ASM(paravirt_ret0, "xor %eax,%eax", .entry.text);
> 
>  void __init default_banner(void)
>  {
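
[Editor's note: for readers of the archive, the following is a rough sketch of the
macro being discussed, pieced together from the paravirt.h hunk above. The
.pushsection/.global/.type lines are not visible in the diff and are assumptions;
only the lines shown in the hunk are quoted from the patch.]

    /*
     * Sketch of DEFINE_PARAVIRT_ASM(), reconstructed from the paravirt.h hunk
     * above. It emits a small global asm function named 'func' into section
     * 'sec', aligned, with ENDBR and a return, whose body is 'instr'.
     */
    #define DEFINE_PARAVIRT_ASM(func, instr, sec)			\
    	asm (".pushsection " #sec ", \"ax\"\n"			\
    	     ".global " #func "\n"				\
    	     ".type " #func ", @function\n"			\
    	     __ALIGN_STR "\n"					\
    	     #func ":\n\t"					\
    	     ASM_ENDBR						\
    	     instr "\n\t"					\
    	     ASM_RET						\
    	     ".size " #func ", . - " #func "\n\t"		\
    	     ".popsection")

    /* Example use from the patch: a stub that always returns 0. */
    DEFINE_PARAVIRT_ASM(paravirt_ret0, "xor %eax,%eax", .entry.text);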