Subject: Re: [PATCH Part2 RFC v4 26/40] KVM: SVM: Add KVM_SEV_SNP_LAUNCH_FINISH command

    On 7/16/21 3:18 PM, Sean Christopherson wrote:
    > On Wed, Jul 07, 2021, Brijesh Singh wrote:
    >> + struct kvm_sev_snp_launch_finish {
    >> + __u64 id_block_uaddr;
    >> + __u64 id_auth_uaddr;
    >> + __u8 id_block_en;
    >> + __u8 auth_key_en;
    >> + __u8 host_data[32];
    > Pad this one too?

    Noted.
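
    For reference, a rough sketch of what the padded struct could look like
    (the pad size and placement here are illustrative, not final):

        struct kvm_sev_snp_launch_finish {
                __u64 id_block_uaddr;
                __u64 id_auth_uaddr;
                __u8 id_block_en;
                __u8 auth_key_en;
                __u8 host_data[32];
                __u8 pad[6];    /* illustrative: keeps the size 8-byte aligned */
        };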


    >
    >> + };
    >> +
    >> +
    >> +See SEV-SNP specification for further details on launch finish input parameters.
    > ...
    >
    >> + data->gctx_paddr = __psp_pa(sev->snp_context);
    >> + ret = sev_issue_cmd(kvm, SEV_CMD_SNP_LAUNCH_FINISH, data, &argp->error);
    > Shouldn't KVM unwind everything it did if LAUNCH_FINISH fails? And if that's
    > not possible, take steps to make the VM unusable?

    Well, I am not sure the VM needs to unwind. If the command fails but the
    VMM decides to ignore the error, then VMRUN will probably fail and the
    user will get a KVM shutdown event. The LAUNCH_FINISH command finalizes
    the VM launch process; the firmware will probably not load the memory
    encryption keys until the guest moves to the running state.


    >> +
    >> + kfree(id_auth);
    >> +
    >> +e_free_id_block:
    >> + kfree(id_block);
    >> +
    >> +e_free:
    >> + kfree(data);
    >> +
    >> + return ret;
    >> +}
    >> +
    > ...
    >
    >> @@ -2346,8 +2454,25 @@ void sev_free_vcpu(struct kvm_vcpu *vcpu)
    >>
    >> if (vcpu->arch.guest_state_protected)
    >> sev_flush_guest_memory(svm, svm->vmsa, PAGE_SIZE);
    >> +
    >> + /*
    >> + * If its an SNP guest, then VMSA was added in the RMP entry as a guest owned page.
    >> + * Transition the page to hyperivosr state before releasing it back to the system.
    > "hyperivosr" typo. And please wrap at 80 chars.

    Noted.
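
    Will fix the typo and re-wrap it, roughly like this:

        /*
         * If it's an SNP guest, then the VMSA was added in the RMP entry as
         * a guest-owned page. Transition the page to hypervisor state
         * before releasing it back to the system.
         */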


    >
    >> + */
    >> + if (sev_snp_guest(vcpu->kvm)) {
    >> + struct rmpupdate e = {};
    >> + int rc;
    >> +
    >> + rc = rmpupdate(virt_to_page(svm->vmsa), &e);
    > So why does this not need to go through snp_page_reclaim()?

    As I said in previous comments, by default all the memory is in the
    hypervisor state. If rmpupdate() fails, nothing has changed in the RMP
    and there is no need to reclaim. A reclaim is required only if the pages
    are assigned in the RMP table.


    >
    >> + if (rc) {
    >> + pr_err("Failed to release SNP guest VMSA page (rc %d), leaking it\n", rc);
    > Seems like a WARN would be simpler. But the more I see the rmpupdate(..., {0})
    > pattern, the more I believe that nuking an RMP entry needs a dedicated helper.


    Yes, let me try coming up with a helper for it.
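
    A rough sketch of the idea (the helper name is just for illustration; it
    would wrap the zero-rmpupdate pattern and could also absorb the error
    message):

        /*
         * Illustrative helper: transition a page back to the default
         * hypervisor RMP state by writing an all-zero RMP entry. On
         * failure the RMP is unchanged, so the caller leaks the page
         * instead of reusing it.
         */
        static int snp_make_page_hypervisor(struct page *page)
        {
                struct rmpupdate e = {};

                return rmpupdate(page, &e);
        }

    Callers such as sev_free_vcpu() could then WARN and skip the free when
    the helper fails.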


    >
    >> + goto skip_vmsa_free;
