Subject: Re: "KVM: x86/mmu: Overhaul TDP MMU zapping and flushing" breaks SVM on Hyper-V
On 2/13/23 19:05, Jeremi Piotrowski wrote:
> So I looked at the ftrace output (all kvm & kvmmmu events +
> hyperv_nested_* events) and I see the following.
>
> With tdp_mmu=0:
>   kvm_exit
>   a sequence of kvm_mmu_prepare_zap_page events
>   hyperv_nested_flush_guest_mapping (always follows every sequence of
>   kvm_mmu_prepare_zap_page)
>   kvm_entry
>
> With tdp_mmu=1 I see: kvm_mmu_prepare_zap_page and
> kvm_tdp_mmu_spte_changed events from a kworker context, but they are
> not followed by hyperv_nested_flush_guest_mapping. The only
> hyperv_nested_flush_guest_mapping events I see happen from the qemu
> process context.
>
> Also the number of flush hypercalls is significantly lower: a 7-second
> sequence through OVMF with tdp_mmu=0 produces ~270 flush hypercalls.
> In the traces with tdp_mmu=1 I now see at most 3.
>
> So this might be easier to diagnose than I thought: the
> HvCallFlushGuestPhysicalAddressSpace calls are missing now.

Can you check if KVM is reusing an nCR3 value?

If so, perhaps you can just add a
hyperv_flush_guest_mapping(__pa(root->spt)) call after
kvm_tdp_mmu_get_vcpu_root_hpa()'s call to tdp_mmu_alloc_sp()?
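
Roughly something like the following -- only an untested sketch, with the
surrounding code paraphrased from kvm_tdp_mmu_get_vcpu_root_hpa() in
arch/x86/kvm/mmu/tdp_mmu.c, so details may differ; a real patch would also
have to make the call conditional on actually running nested on Hyper-V
with the enlightened TLB flush, and pull in the right header:

hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu)
{
        union kvm_mmu_page_role role = vcpu->arch.mmu->root_role;
        struct kvm_mmu_page *root;

        /* ... existing code that tries to reuse a matching root ... */

        root = tdp_mmu_alloc_sp(vcpu);
        tdp_mmu_init_sp(root, NULL, 0, role);

        /*
         * Ask the L0 hypervisor to drop whatever translations it still
         * caches for this address space, in case the nCR3 value (i.e.
         * __pa(root->spt)) is being reused without an intervening
         * HvCallFlushGuestPhysicalAddressSpace.
         */
        hyperv_flush_guest_mapping(__pa(root->spt));

        /* ... existing code that links the new root into the list ... */

        return __pa(root->spt);
}

If that brings back the missing HvCallFlushGuestPhysicalAddressSpace calls
and fixes the hang, we can then figure out the proper place and condition
for the flush.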

Paolo
