Subject: Re: [PATCH v2] KVM: x86: use CPUID to locate host page table reserved bits
From: Paolo Bonzini <pbonzini@redhat.com>
Date: 2019-12-10
On 04/12/19 16:57, Tom Lendacky wrote:
> On 12/4/19 9:40 AM, Paolo Bonzini wrote:
>> The comment in kvm_get_shadow_phys_bits refers to MKTME, but the same is actually
>> true of SME and SEV. Just use CPUID[0x8000_0008].EAX[7:0] unconditionally if
>> available; it is simplest and works even if memory is not encrypted.
>
> This isn't correct for AMD. The reduction in physical addressing is
> correct. You can't set, e.g. bit 45, in the nested page table, because
> that will be considered a reserved bit and generate an NPF. When memory
> encryption is enabled today, bit 47 is the encryption indicator bit and
> bits 46:43 must be zero or else an NPF is generated. The hardware uses
> these bits internally based on whether it is running as the hypervisor or
> based on the ASID of the guest.

kvm_get_shadow_phys_bits() must be conservative in that:

1) if a bit is reserved it _can_ return a value higher than its index

2) if a bit is used by the processor (for physical address or anything
else) it _must_ return a value higher than its index.

In the SEV case we're not obeying (2), because the function returns 43
when the C bit is bit 47. The patch fixes that.
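
As a rough illustration of (2), and not the kernel's actual code (the
helper name here is made up), think of the returned value as defining
which high PTE bits software may assume the processor ignores:

#include <stdint.h>

/* Bits [51:phys_bits]: what software may treat as ignored by the CPU. */
static uint64_t high_bits_mask(uint8_t phys_bits)
{
	return ((1ULL << (52 - phys_bits)) - 1) << phys_bits;
}

high_bits_mask(43) covers bits 51:43 and therefore includes bit 47, the
SEV C bit that the processor does interpret; high_bits_mask(48) does not.
That is why reading CPUID 0x80000008 instead of the reduced
boot_cpu_data.x86_phys_bits is the right thing to do here.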

Paolo

>
> Thanks,
> Tom
>
>>
>> Cc: stable@vger.kernel.org
>> Reported-by: Tom Lendacky <thomas.lendacky@amd.com>
>> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
>> ---
>> arch/x86/kvm/mmu/mmu.c | 20 ++++++++++++--------
>> 1 file changed, 12 insertions(+), 8 deletions(-)
>>
>> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
>> index 6f92b40d798c..1e4ee4f8de5f 100644
>> --- a/arch/x86/kvm/mmu/mmu.c
>> +++ b/arch/x86/kvm/mmu/mmu.c
>> @@ -538,16 +538,20 @@ void kvm_mmu_set_mask_ptes(u64 user_mask, u64 accessed_mask,
>> static u8 kvm_get_shadow_phys_bits(void)
>> {
>> /*
>> - * boot_cpu_data.x86_phys_bits is reduced when MKTME is detected
>> - * in CPU detection code, but MKTME treats those reduced bits as
>> - * 'keyID' thus they are not reserved bits. Therefore for MKTME
>> - * we should still return physical address bits reported by CPUID.
>> + * boot_cpu_data.x86_phys_bits is reduced when MKTME or SME are detected
>> + * in CPU detection code, but the processor treats those reduced bits as
>> + * 'keyID' thus they are not reserved bits. Therefore KVM needs to look at
>> + * the physical address bits reported by CPUID.
>> */
>> - if (!boot_cpu_has(X86_FEATURE_TME) ||
>> - WARN_ON_ONCE(boot_cpu_data.extended_cpuid_level < 0x80000008))
>> - return boot_cpu_data.x86_phys_bits;
>> + if (likely(boot_cpu_data.extended_cpuid_level >= 0x80000008))
>> + return cpuid_eax(0x80000008) & 0xff;
>>
>> - return cpuid_eax(0x80000008) & 0xff;
>> + /*
>> + * Quite weird to have VMX or SVM but not MAXPHYADDR; probably a VM with
>> + * custom CPUID. Proceed with whatever the kernel found since these features
>> + * aren't virtualizable (SME/SEV also require CPUIDs higher than 0x80000008).
>> + */
>> + return boot_cpu_data.x86_phys_bits;
>> }
>>
>> static void kvm_mmu_reset_all_pte_masks(void)
>>
>
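
For reference, a small userspace sketch of the CPUID leaf the patch
reads (an illustration only, not part of the patch):

#include <cpuid.h>
#include <stdio.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	/* Leaf 0x80000008 may be absent on old or synthetic CPUID setups. */
	if (__get_cpuid_max(0x80000000, NULL) < 0x80000008 ||
	    !__get_cpuid(0x80000008, &eax, &ebx, &ecx, &edx)) {
		puts("CPUID leaf 0x80000008 not available");
		return 1;
	}

	printf("physical address bits (EAX[7:0]):  %u\n", eax & 0xff);
	printf("virtual address bits (EAX[15:8]):  %u\n", (eax >> 8) & 0xff);
	return 0;
}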
