Subject: Re: [RFC PATCH 0/6] KVM: SVM: Defer page pinning for SEV guests
On 3/7/2022 1:37 AM, Mingwei Zhang wrote:
> On Tue, Jan 18, 2022, Nikunj A Dadhania wrote:
>> SEV guests require the guest's pages to be pinned in host physical
>> memory, as migration of encrypted pages is not supported. The memory
>> encryption scheme uses the physical address of the memory being
>> encrypted. If guest pages are moved by the host, content decrypted in
>> the guest would be incorrect, thereby corrupting the guest's memory.
>>
>> For SEV/SEV-ES guests, the hypervisor doesn't know which pages are
>> encrypted and when the guest is done using those pages. Hypervisor
>> should treat all the guest pages as encrypted until the guest is
>> destroyed.
> "Hypervisor should treat all the guest pages as encrypted until they are
> deallocated or the guest is destroyed".
>
> Note: in general, the guest VM could ask the user-level VMM to free the
> page by either freeing the memslot or freeing the pages (munmap(2)).
>

Sure, will update

>>
>> Actual pinning management is handled by vendor code via new
>> kvm_x86_ops hooks. The MMU calls into vendor code to pin the page on
>> demand. Pinning metadata is stored in the architecture-specific
>> memslot area. During the memslot freeing path, guest pages are
>> unpinned.
>
> "During the memslot freeing path and deallocation path"

Sure.
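
For illustration, here is a tiny standalone userspace model of the
bookkeeping described above: pin a page the first time the fault path
touches it, record that in per-memslot metadata, and unpin everything when
the memslot is freed. All names are made up for the example; this is not
the actual KVM/SEV interface from the patch set.

/*
 * Standalone model: pin on first fault, track it in per-memslot metadata,
 * unpin on memslot teardown.  Illustrative names only.
 */
#include <inttypes.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

#define BITS_PER_LONG (8 * sizeof(unsigned long))

/* Stand-in for the architecture-specific portion of a memslot. */
struct demo_arch_memslot {
	unsigned long *pinned_bitmap;	/* one bit per page in the slot */
};

struct demo_memslot {
	uint64_t base_gfn;		/* first guest frame number */
	uint64_t npages;
	struct demo_arch_memslot arch;
};

static bool demo_gfn_is_pinned(const struct demo_memslot *slot, uint64_t gfn)
{
	uint64_t idx = gfn - slot->base_gfn;

	return slot->arch.pinned_bitmap[idx / BITS_PER_LONG] &
	       (1UL << (idx % BITS_PER_LONG));
}

static void demo_mark_pinned(struct demo_memslot *slot, uint64_t gfn)
{
	uint64_t idx = gfn - slot->base_gfn;

	slot->arch.pinned_bitmap[idx / BITS_PER_LONG] |=
		1UL << (idx % BITS_PER_LONG);
}

/*
 * Models the vendor hook the MMU would call from the fault path: pin the
 * backing page on first access and record it in the memslot metadata.
 */
static void demo_pin_on_fault(struct demo_memslot *slot, uint64_t gfn)
{
	if (demo_gfn_is_pinned(slot, gfn))
		return;		/* already pinned by an earlier fault */

	/* Real code would take a long-term reference (e.g. pin_user_pages()). */
	printf("pin   gfn 0x%" PRIx64 "\n", gfn);
	demo_mark_pinned(slot, gfn);
}

/* Models the memslot freeing path: drop every pin the slot accumulated. */
static void demo_unpin_memslot(struct demo_memslot *slot)
{
	for (uint64_t i = 0; i < slot->npages; i++) {
		uint64_t gfn = slot->base_gfn + i;

		if (demo_gfn_is_pinned(slot, gfn))
			printf("unpin gfn 0x%" PRIx64 "\n", gfn);
	}
	free(slot->arch.pinned_bitmap);
}

int main(void)
{
	struct demo_memslot slot = { .base_gfn = 0x1000, .npages = 16 };
	uint64_t faults[] = { 0x1000, 0x1003, 0x1000, 0x100f };

	slot.arch.pinned_bitmap =
		calloc((slot.npages + BITS_PER_LONG - 1) / BITS_PER_LONG,
		       sizeof(unsigned long));
	if (!slot.arch.pinned_bitmap)
		return 1;

	/* Only the first fault on a gfn pins it; repeats are a no-op. */
	for (size_t i = 0; i < sizeof(faults) / sizeof(faults[0]); i++)
		demo_pin_on_fault(&slot, faults[i]);

	demo_unpin_memslot(&slot);	/* memslot teardown unpins everything */
	return 0;
}

The point of keeping the bitmap in the (hypothetical) arch memslot data is
that it survives MMU root drops, which is exactly what the SPTE-based
approach discussed below cannot guarantee.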

>
>>
>> This work initially started with [1], where the idea was to store the
>> pinning information using a software bit in the SPTE to track pinned
>> pages. That is not feasible for the following reason:
>>
>> The pinned SPTE information gets stored in the shadow pages (SP). The
>> way the current MMU is designed, the full MMU context gets dropped
>> multiple times, even when the CR0.WP bit is flipped. Due to the
>> dropping of the MMU context (aka roots), there is a huge amount of SP
>> alloc/remove churn. Pinned information stored in the SP gets lost when
>> the root, and subsequently the SPs at the child levels, are dropped.
>> Without this information, making decisions about re-pinning pages or
>> unpinning them during guest shutdown would not be possible.
>>
>> [1] https://patchwork.kernel.org/project/kvm/cover/20200731212323.21746-1-sean.j.christopherson@intel.com/
>>
>
> Some general feedback: I really like this patch set, and I think doing
> memory pinning at the fault path in the kernel and storing the metadata
> in the memslot is the right thing to do.
>
> This basically solves all the problems triggered by the KVM-based API
> that trusts the user-level VMM to do the memory pinning.
>
Thanks for the feedback.

Regards
Nikunj
