Subject: Re: [PATCH Part2 RFC v4 25/40] KVM: SVM: Reclaim the guest pages when SEV-SNP VM terminates

On 7/16/21 3:09 PM, Sean Christopherson wrote:
> On Wed, Jul 07, 2021, Brijesh Singh wrote:
>> The guest pages of an SEV-SNP VM may be added as private pages in the
>> RMP table (assigned bit is set). The guest private pages must be
>> transitioned to the hypervisor state before they are freed.
> Isn't this patch needed much earlier in the series, i.e. when the first RMPUPDATE
> usage goes in?

Yes, the first RMPUPDATE usage is in the LAUNCH_UPDATE patch; this change
should be squashed into that patch.


>> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
>> ---
>> arch/x86/kvm/svm/sev.c | 39 +++++++++++++++++++++++++++++++++++++++
>> 1 file changed, 39 insertions(+)
>>
>> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
>> index 1f0635ac9ff9..4468995dd209 100644
>> --- a/arch/x86/kvm/svm/sev.c
>> +++ b/arch/x86/kvm/svm/sev.c
>> @@ -1940,6 +1940,45 @@ find_enc_region(struct kvm *kvm, struct kvm_enc_region *range)
>>  static void __unregister_enc_region_locked(struct kvm *kvm,
>>  					    struct enc_region *region)
>>  {
>> +	struct rmpupdate val = {};
>> +	unsigned long i, pfn;
>> +	struct rmpentry *e;
>> +	int level, rc;
>> +
>> +	/*
>> +	 * The guest memory pages are assigned in the RMP table. Unassign it
>> +	 * before releasing the memory.
>> +	 */
>> +	if (sev_snp_guest(kvm)) {
>> +		for (i = 0; i < region->npages; i++) {
>> +			pfn = page_to_pfn(region->pages[i]);
>> +
>> +			if (need_resched())
>> +				schedule();
> This can simply be "cond_resched();"

Yes.
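
i.e. something like this folded into the next version (sketch against this
hunk, nothing else in the loop changes):

-			if (need_resched())
-				schedule();
+			cond_resched();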


>
>> +
>> +			e = snp_lookup_page_in_rmptable(region->pages[i], &level);
>> +			if (unlikely(!e))
>> +				continue;
>> +
>> +			/* If its not a guest assigned page then skip it. */
>> +			if (!rmpentry_assigned(e))
>> +				continue;
>> +
>> +			/* Is the page part of a 2MB RMP entry? */
>> +			if (level == PG_LEVEL_2M) {
>> +				val.pagesize = RMP_PG_SIZE_2M;
>> +				pfn &= ~(KVM_PAGES_PER_HPAGE(PG_LEVEL_2M) - 1);
>> +			} else {
>> +				val.pagesize = RMP_PG_SIZE_4K;
> This raises yet more questions (for me) as to the interaction between Page-Size
> and Hypervisor-Owned flags in the RMP. It also raises questions on the correctness
> of zeroing the RMP entry in KVM_SEV_SNP_LAUNCH_START (in the previous patch).

I assume you mean LAUNCH_UPDATE, because that is when we need to perform
the RMPUPDATE. Hypervisor-owned means the RMP entry is all zeroes.
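
i.e. an RMP entry with the assigned bit (and everything else) clear is a
hypervisor-owned page, which is why the reclaim loop below builds its
request from a zero-initialized struct. A sketch, reusing the names from
this series:

	struct rmpupdate val = {};		/* assigned = 0, i.e. hypervisor owned */

	val.pagesize = RMP_PG_SIZE_4K;		/* or RMP_PG_SIZE_2M for a 2MB entry */
	rc = rmpupdate(pfn_to_page(pfn), &val);	/* clears the guest-assigned state */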


>> +			}
>> +
>> +			/* Transition the page to hypervisor owned. */
>> +			rc = rmpupdate(pfn_to_page(pfn), &val);
>> +			if (rc)
>> +				pr_err("Failed to release pfn 0x%lx ret=%d\n", pfn, rc);
> This is not robust, e.g. KVM will unpin the memory and release it back to the
> kernel with a stale RMP entry. Shouldn't this be a WARN+leak situation?

Yes. Maybe we should increase the page refcount to ensure that the page is
not reused after the process terminates?
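
Something along these lines, perhaps (untested sketch; the extra reference
is intentionally never dropped, so the page is leaked rather than returned
to the allocator with a stale RMP entry):

	/* Transition the page to hypervisor owned. */
	rc = rmpupdate(pfn_to_page(pfn), &val);
	if (WARN_ONCE(rc, "Failed to release pfn 0x%lx ret=%d\n", pfn, rc)) {
		/* Pin the page forever so it cannot be reused. */
		get_page(pfn_to_page(pfn));
	}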


>> +		}
>> +	}
>> +
>>  	sev_unpin_memory(kvm, region->pages, region->npages);
>>  	list_del(&region->list);
>>  	kfree(region);
>> --
>> 2.17.1
>>
