Subject: Re: [PATCH v3] x86/speculation, KVM: only IBPB for switch_mm_always_ibpb on vCPU load
On Tue, May 10, 2022, Jon Kohler wrote:
>
> > On May 10, 2022, at 10:44 AM, Sean Christopherson <seanjc@google.com> wrote:
> >
> > On Sat, Apr 30, 2022, Borislav Petkov wrote:
> >> But I'm likely missing a virt aspect here so I'd let Sean explain what
> >> the rules are...
> >
> > I don't think you're missing anything. I think the original 15d45071523d ("KVM/x86:
> > Add IBPB support") was simply wrong.
> >
> > As I see it:
> >
> > 1. If the vCPUs belong to the same VM, they are in the same security domain and
> > do not need an IBPB.
> >
> > 2. If the vCPUs belong to different VMs, and each VM is in its own mm_struct,
> > defer to switch_mm_irqs_off() to handle IBPB as an mm switch is guaranteed to
> > occur prior to loading a vCPU belonging to a different VM.
> >
> > 3. If the vCPUs belong to different VMs, but multiple VMs share an mm_struct,
> > then the security benefits of an IBPB when switching vCPUs are dubious, at best.
> >
> > If we only consider #1 and #2, then KVM doesn't need an IBPB, period.
> >
> > #3 is the only one that's a grey area. I have no objection to omitting IBPB entirely
> > even in that case, because none of us can identify any tangible value in issuing it.
>
> Thanks, Sean. Our messages crossed in flight; I sent a reply to your earlier message
> just a bit ago. This is super helpful to frame this up.
>
> What would you think of framing the patch like this:
>
> x86/speculation, KVM: remove IBPB on vCPU load
>
> Remove the IBPB that is done on KVM vCPU load, as the guest-to-guest
> attack surface is already covered by switch_mm_irqs_off() ->
> cond_mitigation().
>
> The original commit 15d45071523d ("KVM/x86: Add IBPB support") was simply wrong
> in its guest-to-guest design intent. There are three scenarios at play
> here:
>
> 1. If the vCPUs belong to the same VM, they are in the same security
> domain and do not need an IBPB.
> 2. If the vCPUs belong to different VMs, and each VM is in its own mm_struct,
> switch_mm_irqs_off() will handle IBPB as an mm switch is guaranteed to
> occur prior to loading a vCPU belonging to a different VM.
> 3. If the vCPUs belong to different VMs, but multiple VMs share an mm_struct,
> then the security benefits of an IBPB when switching vCPUs are dubious,
> at best.
>
> Issuing IBPB from KVM vCPU load would only cover #3, but there are no

Just to hedge, there are no _known_ use cases.

> real world tangible use cases for such a configuration.

and I would further qualify this with:

but there are no known real world, tangible use cases for running multiple
VMs belonging to different security domains in a shared address space.

Running multiple VMs in a single address space is plausible and sane, _if_ they
are all in the same security domain or security is not a concern. That way the
statement isn't invalidated if someone pops up with a use case for running multiple
VMs but has no security story.

Other than that, LGTM.

> If multiple VMs
> are sharing an mm_struct, prediction attacks are the least of their
> security worries.
>
> Fixes: 15d45071523d ("KVM/x86: Add IBPB support")
> (Reviewed-by / Signed-off-by tags from people here)
> (Code change simply whacks IBPB in KVM vmx/svm and that's it)
>
>
>
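For reference, a rough sketch of the kind of change being described in that last
quoted line (this is not the actual posted patch, and the exact context lines vary
by kernel version) is simply dropping the indirect_branch_prediction_barrier()
calls from the two vCPU load paths:

arch/x86/kvm/svm/svm.c, in svm_vcpu_load() (approximate context):

 	if (sd->current_vmcb != svm->vmcb) {
 		sd->current_vmcb = svm->vmcb;
-		indirect_branch_prediction_barrier();
 	}

arch/x86/kvm/vmx/vmx.c, in vmx_vcpu_load_vmcs() (approximate context):

 	if (prev != vmx->loaded_vmcs->vmcs) {
 		per_cpu(current_vmcs, cpu) = vmx->loaded_vmcs->vmcs;
 		vmcs_load(vmx->loaded_vmcs->vmcs);

-		if (!buddy || WARN_ON_ONCE(buddy->vmcs != prev))
-			indirect_branch_prediction_barrier();
 	}

For context, the mechanism the proposed changelog leans on is cond_mitigation()
in arch/x86/mm/tlb.c, called from switch_mm_irqs_off() whenever the CPU switches
to a different mm. A simplified sketch of its IBPB logic follows; the two
*_differs() helpers are placeholders for the real last_user_mm_spec / TIF_SPEC_*
bookkeeping, not actual kernel functions:

/*
 * Simplified sketch of cond_mitigation() from arch/x86/mm/tlb.c.  The real
 * function folds TIF_SPEC_* bits into the incoming mm pointer and compares
 * it against the per-CPU last_user_mm_spec value; that bookkeeping is
 * replaced here with placeholder helpers.
 */
static void cond_mitigation_sketch(struct task_struct *next)
{
	/*
	 * spectre_v2_user=prctl/seccomp: IBPB only when switching to a
	 * different mm *and* one of the two tasks has opted in to IBPB.
	 */
	if (static_branch_likely(&switch_mm_cond_ibpb) &&
	    next_mm_differs_and_requested_ibpb(next))	/* placeholder */
		indirect_branch_prediction_barrier();

	/*
	 * spectre_v2_user=on (switch_mm_always_ibpb): IBPB on every switch
	 * to a different user mm, no opt-in required.
	 */
	if (static_branch_unlikely(&switch_mm_always_ibpb) &&
	    next_mm_differs(next))			/* placeholder */
		indirect_branch_prediction_barrier();
}

That path is what covers scenario #2 above: vCPUs of different VMs living in
different mm_structs already get an IBPB on the mm switch, so the extra barrier
in KVM's vCPU load path adds nothing there.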
