Subject: Re: [PATCH] x86/cpu/AMD: Adjust x86_phys_bits to account for reduced PA in SEV-* guests
On Wed, Mar 17, 2021, Borislav Petkov wrote:
> On Wed, Mar 17, 2021 at 11:32:43AM -0700, Sean Christopherson wrote:
> > Note, early kernel boot code for SEV-*, e.g. get_sev_encryption_bit(),
> > _requires_ the SEV feature flag to be set in CPUID in order to identify
> > SEV (this requirement comes from the SEV-ES GHCB standard). But, that
> > requirement does not mean the kernel must also "advertise" SEV in its own
> > CPU features array.
>
> Sure it does - /proc/cpuinfo contains feature bits of stuff which has
> been enabled in the kernel. And when it comes to SEV, yeah, that was a
> lot of enablement. :-)

Ha, all I'm saying is that /proc/cpuinfo doesn't have to match the GHCB spec.

> > Fixes: d8aa7eea78a1 ("x86/mm: Add Secure Encrypted Virtualization (SEV) support")
> > Cc: stable@vger.kernel.org
> > Cc: Joerg Roedel <joro@8bytes.org>
> > Cc: Tom Lendacky <thomas.lendacky@amd.com>
> > Cc: Brijesh Singh <brijesh.singh@amd.com>
> > Cc: Peter Gonda <pgonda@google.com>
> > Signed-off-by: Sean Christopherson <seanjc@google.com>
> > ---
> >
> > Regarding clearing SME, SEV, SEV_ES, etc..., it's obviously not required,
> > but to avoid false positives, identifying "SEV guest" within the kernel
> > must be done with sev_active(). And if we want to display support in
> > /proc/cpuinfo, IMO it should be a separate synthetic feature so that
> > userspace sees "sev_guest" instead of "sev".
>
> I'm on the fence here, frankly. We issue capabilities in the guest dmesg
> in print_mem_encrypt_feature_info(). However, if someone wants to query
> SEV* status in the guest, then I don't have a good suggestion where to
> put it. cpuinfo is probably ok-ish, a new /sys/devices/system/cpu/caps/
> or so, should work too, considering the vuln stuff we stuck there so we
> can extend that. We'll see.
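
FWIW, the "separate synthetic feature" idea boils down to adding a new
software-defined bit with a "sev_guest" display name and force-setting it when
the guest is actually running with SEV.  Strawman only, untested, and the
word/bit position below (NN) is deliberately left as a placeholder:

  /* arch/x86/include/asm/cpufeatures.h */
  #define X86_FEATURE_SEV_GUEST	( 8*32+NN) /* "sev_guest" Running as an SEV guest */

  /* arch/x86/kernel/cpu/amd.c */
	if (sev_active())
		setup_force_cpu_cap(X86_FEATURE_SEV_GUEST);

Userspace would then see "sev_guest" in /proc/cpuinfo only when SEV is actually
active in the guest, independent of whatever the VMM stuffs into CPUID.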
>
> >
> > arch/x86/kernel/cpu/amd.c | 32 ++++++++++++++++++++++++++++----
> > 1 file changed, 28 insertions(+), 4 deletions(-)
> >
> > diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
> > index 2d11384dc9ab..0f7f8c905226 100644
> > --- a/arch/x86/kernel/cpu/amd.c
> > +++ b/arch/x86/kernel/cpu/amd.c
> > @@ -15,6 +15,7 @@
> > #include <asm/cpu.h>
> > #include <asm/spec-ctrl.h>
> > #include <asm/smp.h>
> > +#include <asm/mem_encrypt.h>
> > #include <asm/numa.h>
> > #include <asm/pci-direct.h>
> > #include <asm/delay.h>
> > @@ -575,10 +576,33 @@ static void bsp_init_amd(struct cpuinfo_x86 *c)
> > resctrl_cpu_detect(c);
> > }
> >
> > +#define SEV_CBIT_MSG "SEV: C-bit (bit %d), overlaps MAXPHYADDR (%d bits). VMM is buggy or malicious, overriding MAXPHYADDR to %d.\n"
>
> Not sure about that. This will make a lot of users run scared, not
> knowing what's going on and open bugzillas.

Yeah, I'm not too sure about it either. I would not object to dropping it to
a pr_info or pr_warn, and/or removing the "VMM is buggy or malicious" snippet.
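
E.g. something along these lines, keeping the original format arguments but
dropping the accusation (strawman wording, untested):

  #define SEV_CBIT_MSG \
	"SEV: C-bit (bit %d) overlaps MAXPHYADDR (%d bits), overriding MAXPHYADDR to %d.\n"

and emit it with pr_warn() instead.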

> > +
> > static void early_detect_mem_encrypt(struct cpuinfo_x86 *c)
> > {
> > u64 msr;
> >
> > + /*
> > + * When running as an SEV guest of any flavor, update the physical
> > + * address width to account for the C-bit and clear all of the SME/SEV
> > + * feature flags. As far as the kernel is concerned, the SEV flags
> > + * enumerate what features can be used by the kernel/KVM, not what
> > + * features have been activated by the VMM.
> > + */
> > + if (sev_active()) {
> > + int c_bit = ilog2(sme_me_mask);
> > +
> > + BUG_ON(!sme_me_mask);
> > +
> > + c->x86_phys_bits -= (cpuid_ebx(0x8000001f) >> 6) & 0x3f;
>
> Well, if that leaf is intercepted, how do you wanna trust this at all?

That's a good question for the AMD folks. CPUID.0x80000008 and thus the original
x86_phys_bits is also untrusted.
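
For reference, everything in play here comes from VMM-controlled leaves (rough
annotation, not a real patch):

	unsigned int maxphyaddr, c_bit, reduction;

	/* All three values are provided by the (untrusted) VMM: */
	maxphyaddr = cpuid_eax(0x80000008) & 0xff;	   /* EAX[7:0]  = MAXPHYADDR, feeds x86_phys_bits */
	c_bit	   = cpuid_ebx(0x8000001f) & 0x3f;	   /* EBX[5:0]  = C-bit position */
	reduction  = (cpuid_ebx(0x8000001f) >> 6) & 0x3f; /* EBX[11:6] = PhysAddrReduction */

i.e. the EBX[11:6] value the patch consumes is exactly as (un)trustworthy as
the MAXPHYADDR that x86_phys_bits was derived from in the first place.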

> IOW, you have c_bit so your valid address space is [0 .. c_bit-1] no?

I haven't found anything in the GHCB that dictates that MAXPHYADDR == C_BIT-1,
or more specifically that MAXPHYADDR == C_BIT - PhysAddrReduction. E.g. AFAICT,
a VMM could do C_BIT=47, MAXPHYADDR=36, PhysAddrReduction=0, and that would be
allowed by the GHCB.

Forcing "c->x86_phys_bits = c_bit - 1" doesn't seem like it would break anything,
but it's also technically wrong.
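
If we did want to go that route anyway, the sketch would be something like
(untested):

	if (sev_active()) {
		int c_bit = ilog2(sme_me_mask);

		/*
		 * Derive the usable width from the C-bit alone and ignore
		 * PhysAddrReduction.  Probably doesn't break anything, but
		 * as above, the GHCB doesn't guarantee that
		 * MAXPHYADDR == C_BIT - PhysAddrReduction.
		 */
		c->x86_phys_bits = c_bit - 1;
	}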
