Subject: Re: [PATCH] KVM: nSVM: Optimize L12 to L2 vmcb.save copies
From: Cathy Avery
Date: 2021-03-02
On 3/1/21 7:59 PM, Sean Christopherson wrote:
> On Mon, Mar 01, 2021, Cathy Avery wrote:
>> kvm_set_rflags(&svm->vcpu, vmcb12->save.rflags | X86_EFLAGS_FIXED);
>> svm_set_efer(&svm->vcpu, vmcb12->save.efer);
>> svm_set_cr0(&svm->vcpu, vmcb12->save.cr0);
>> svm_set_cr4(&svm->vcpu, vmcb12->save.cr4);
> Why not utilize VMCB_CR?
I was going to tackle CR in a follow-up patch. I should have mentioned
that, but it makes sense to go ahead and do it now.
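
Roughly along these lines, mirroring the DR handling further down in the
patch (untested sketch; it assumes EFER/CR0/CR4 are all covered by the
VMCB_CR clean bit):

    if (unlikely(new_vmcb12 || vmcb_is_dirty(vmcb12, VMCB_CR))) {
        /* Only reload EFER/CR0/CR4 when L1 marked them dirty in vmcb12 */
        svm_set_efer(&svm->vcpu, vmcb12->save.efer);
        svm_set_cr0(&svm->vcpu, vmcb12->save.cr0);
        svm_set_cr4(&svm->vcpu, vmcb12->save.cr4);
    }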
>
>> - svm->vcpu.arch.cr2 = vmcb12->save.cr2;
>> + svm->vmcb->save.cr2 = svm->vcpu.arch.cr2 = vmcb12->save.cr2;
> Same question for VMCB_CR2.
>
> Also, isn't writing svm->vmcb->save.cr2 unnecessary since svm_vcpu_run()
> unconditionally writes it?
>
> Alternatively, it shouldn't be too much work to add proper dirty tracking for
> CR2. VMX has to write the real CR2 every time because there's no VMCS field,
> but I assume SVM can avoid the write and dirty update on the majority of VMRUNs.

I'll take a look at CR2 as well.
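
Maybe something along these lines in svm_vcpu_run() (a rough, untested
sketch), so the VMCB write and the clean-bit clear only happen when CR2
actually changed:

    /* Skip the vmcb02 write and VMCB_CR2 dirtying when CR2 is unchanged */
    if (unlikely(svm->vmcb->save.cr2 != vcpu->arch.cr2)) {
        svm->vmcb->save.cr2 = vcpu->arch.cr2;
        vmcb_mark_dirty(svm->vmcb, VMCB_CR2);
    }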

Thanks for the feedback,

Cathy

>
>> +
>> kvm_rax_write(&svm->vcpu, vmcb12->save.rax);
>> kvm_rsp_write(&svm->vcpu, vmcb12->save.rsp);
>> kvm_rip_write(&svm->vcpu, vmcb12->save.rip);
>>
>> /* In case we don't even reach vcpu_run, the fields are not updated */
>> - svm->vmcb->save.cr2 = svm->vcpu.arch.cr2;
>> svm->vmcb->save.rax = vmcb12->save.rax;
>> svm->vmcb->save.rsp = vmcb12->save.rsp;
>> svm->vmcb->save.rip = vmcb12->save.rip;
>>
>> - svm->vmcb->save.dr7 = vmcb12->save.dr7 | DR7_FIXED_1;
>> - svm->vcpu.arch.dr6 = vmcb12->save.dr6 | DR6_ACTIVE_LOW;
>> - vmcb_mark_dirty(svm->vmcb, VMCB_DR);
>> + /* These bits will be set properly on the first execution when new_vmcb12 is true */
>> + if (unlikely(new_vmcb12 || vmcb_is_dirty(vmcb12, VMCB_DR))) {
>> + svm->vmcb->save.dr7 = vmcb12->save.dr7 | DR7_FIXED_1;
>> + svm->vcpu.arch.dr6 = vmcb12->save.dr6 | DR6_ACTIVE_LOW;
>> + vmcb_mark_dirty(svm->vmcb, VMCB_DR);
>> + }
>> }
>>
>> static void nested_vmcb02_prepare_control(struct vcpu_svm *svm)
>> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
>> index 54610270f66a..9761a7ca8100 100644
>> --- a/arch/x86/kvm/svm/svm.c
>> +++ b/arch/x86/kvm/svm/svm.c
>> @@ -1232,6 +1232,7 @@ static void init_vmcb(struct kvm_vcpu *vcpu)
>> svm->asid = 0;
>>
>> svm->nested.vmcb12_gpa = 0;
>> + svm->nested.last_vmcb12_gpa = 0;
> We should use INVALID_PAGE, '0' is a legal physical address and could
> theoretically get a false negative on the "new_vmcb12" check.
>
>> vcpu->arch.hflags = 0;
>>
>> if (!kvm_pause_in_guest(vcpu->kvm)) {
>> diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
>> index fbbb26dd0f73..911868d4584c 100644
>> --- a/arch/x86/kvm/svm/svm.h
>> +++ b/arch/x86/kvm/svm/svm.h
>> @@ -93,6 +93,7 @@ struct svm_nested_state {
>> u64 hsave_msr;
>> u64 vm_cr_msr;
>> u64 vmcb12_gpa;
>> + u64 last_vmcb12_gpa;
>>
>> /* These are the merged vectors */
>> u32 *msrpm;
>> @@ -247,6 +248,11 @@ static inline void vmcb_mark_dirty(struct vmcb *vmcb, int bit)
>> vmcb->control.clean &= ~(1 << bit);
>> }
>>
>> +static inline bool vmcb_is_dirty(struct vmcb *vmcb, int bit)
>> +{
>> + return !test_bit(bit, (unsigned long *)&vmcb->control.clean);
>> +}
>> +
>> static inline struct vcpu_svm *to_svm(struct kvm_vcpu *vcpu)
>> {
>> return container_of(vcpu, struct vcpu_svm, vcpu);
>> --
>> 2.26.2
>>
