    Subject: [PATCH 5.8 070/633] KVM: nVMX: Reset the segment cache when stuffing guest segs
    From: Sean Christopherson <sean.j.christopherson@intel.com>

    commit fc387d8daf3960c5e1bc18fa353768056f4fd394 upstream.

    Explicitly reset the segment cache after stuffing guest segment regs in
    prepare_vmcs02_rare(). Although the cache is reset when switching to
    vmcs02, there is nothing that prevents KVM from re-populating the cache
    prior to writing vmcs02 with vmcs12's values. E.g. if the vCPU is
    preempted after switching to vmcs02 but before prepare_vmcs02_rare(),
    kvm_arch_vcpu_put() will dereference GUEST_SS_AR_BYTES via .get_cpl()
    and cache the stale vmcs02 value. While the current code base only
    caches stale data in the preemption case, it's theoretically possible
    future code could read a segment register during the nested flow itself,
    i.e. this isn't technically illegal behavior in kvm_arch_vcpu_put(),
    although it did introduce the bug.
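
    For context, the segment cache at issue is a small software cache
    guarded by a per-field valid bitmask: a read populates an entry on a
    miss, and zeroing ->bitmask invalidates everything. A minimal sketch
    of that pattern (simplified from arch/x86/kvm/vmx/vmx.c, with
    vmcs_read32() stubbed out; not the actual kernel code):

	#include <stdint.h>

	#define SEG_FIELD_AR	3	/* access-rights field index */
	#define SEG_FIELD_NR	4	/* fields tracked per segment */

	struct segment_cache {
		uint32_t bitmask;	/* one valid bit per (seg, field) */
		struct { uint32_t sel, ar; } seg[8];
	};

	/* stand-in for vmcs_read32(); in KVM this is a VMREAD */
	static uint32_t vmcs_read32_stub(unsigned int field)
	{
		(void)field;
		return 0x93;	/* arbitrary access-rights-style value */
	}

	/* return the old valid bit for (seg, field) and mark it valid */
	static int seg_cache_test_set(struct segment_cache *c,
				      unsigned int seg, unsigned int field)
	{
		uint32_t mask = 1u << (seg * SEG_FIELD_NR + field);
		int hit = !!(c->bitmask & mask);

		c->bitmask |= mask;
		return hit;
	}

	static uint32_t read_guest_seg_ar(struct segment_cache *c,
					  unsigned int seg)
	{
		if (!seg_cache_test_set(c, seg, SEG_FIELD_AR))
			c->seg[seg].ar = vmcs_read32_stub(seg);	/* miss */
		return c->seg[seg].ar;	/* stale if the VMCS changed since */
	}

    The bug above is exactly a stale hit in this pattern: the cache was
    populated from vmcs02 before vmcs12's values were written, so the
    valid bit is set but the cached value no longer matches the VMCS.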

    This manifests as an unexpected nested VM-Enter failure when running
    with unrestricted guest disabled if the above preemption case coincides
    with L1 switching L2's CPL, e.g. when switching from an L2 vCPU at CPL3
    to an L2 vCPU at CPL0. stack_segment_valid() will see the new SS_SEL
    but the old SS_AR_BYTES and incorrectly mark the guest state as invalid
    due to SS.dpl != SS.rpl.
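
    (The failing check is straightforward to model: per the SDM, DPL lives
    in bits 6:5 of the segment's access-rights bytes and RPL in bits 1:0
    of the selector. A hypothetical condensation of the
    stack_segment_valid() comparison, not KVM's actual code:)

	/* false => guest state is reported invalid at VM-Enter */
	static int ss_dpl_matches_rpl(uint16_t ss_sel, uint32_t ss_ar_bytes)
	{
		unsigned int rpl = ss_sel & 0x3;		/* fresh SS_SEL */
		unsigned int dpl = (ss_ar_bytes >> 5) & 0x3;	/* stale AR */

		return dpl == rpl;
	}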

    Don't bother updating the cache even though prepare_vmcs02_rare() writes
    every segment. With unrestricted guest, guest segments are almost never
    read, let alone L2 guest segments. On the other hand, populating the
    cache requires a large number of memory writes, i.e. it's unlikely to be
    a net win. Updating the cache would be a win when unrestricted guest is
    not supported, as guest_state_valid() will immediately cache all segment
    registers. But, nested virtualization without unrestricted guest is
    dirt slow, saving some VMREADs won't change that, and every CPU
    manufactured in the last decade supports unrestricted guest. In other
    words, the extra (minor) complexity isn't worth the trouble.

    Note, kvm_arch_vcpu_put() may see stale data when querying guest CPL
    depending on when preemption occurs. This is "ok" in that the usage is
    imperfect by nature, i.e. it's used heuristically to improve performance
    but doesn't affect functionality. kvm_arch_vcpu_put() could be "fixed"
    by also disabling preemption while loading segments, but that's
    pointless and misleading as reading state from kvm_sched_{in,out}() is
    guaranteed to see stale data in one form or another. E.g. even if all
    the usage of regs_avail is fixed to call kvm_register_mark_available()
    after the associated state is set, the individual state might still be
    stale with respect to the overall vCPU state. I.e. making functional
    decisions in an asynchronous hook is doomed from the get go. Thankfully
    KVM doesn't do that.

    Fixes: de63ad4cf4973 ("KVM: X86: implement the logic for spinlock optimization")
    Cc: stable@vger.kernel.org
    Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
    Message-Id: <20200923184452.980-2-sean.j.christopherson@intel.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

    ---
    arch/x86/kvm/vmx/nested.c | 2 ++
    1 file changed, 2 insertions(+)

    --- a/arch/x86/kvm/vmx/nested.c
    +++ b/arch/x86/kvm/vmx/nested.c
    @@ -2402,6 +2402,8 @@ static void prepare_vmcs02_rare(struct vcpu_vmx *vmx, struct vmcs12 *vmcs12)
     		vmcs_writel(GUEST_TR_BASE, vmcs12->guest_tr_base);
     		vmcs_writel(GUEST_GDTR_BASE, vmcs12->guest_gdtr_base);
     		vmcs_writel(GUEST_IDTR_BASE, vmcs12->guest_idtr_base);
    +
    +		vmx->segment_cache.bitmask = 0;
     	}
     
     	if (!hv_evmcs || !(hv_evmcs->hv_clean_fields &
