Subject: Re: [PATCH v2 3/4] KVM: SVM: move sev_bind_asid to psp
From: Brijesh Singh
Date: 2021-09-09


On 9/7/21 6:37 PM, Sean Christopherson wrote:
> On Tue, Sep 07, 2021, Brijesh Singh wrote:
>>
>> On 9/3/21 2:38 PM, Sean Christopherson wrote:
>>> My personal preference is obviously to work towards an abstracted API. And if
>>> we decide to go that route, I think we should be much more aggressive with respect
>>> to what is abstracted. Many of the functions will be rather gross due to the
>>> sheer number of params, but I think the end result will be a net positive in terms
>>> of readability and separation of concerns.
>>>
>>> E.g. get KVM looking like this
>>>
>>> static int sev_receive_start(struct kvm *kvm, struct kvm_sev_cmd *argp)
>>> {
>>>         struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
>>>         struct kvm_sev_receive_start params;
>>>         int ret;
>>>
>>>         if (!sev_guest(kvm))
>>>                 return -ENOTTY;
>>>
>>>         /* Get the parameters from userspace. */
>>>         if (copy_from_user(&params, (void __user *)(uintptr_t)argp->data,
>>>                            sizeof(struct kvm_sev_receive_start)))
>>>                 return -EFAULT;
>>>
>>>         ret = sev_guest_receive_start(argp->sev_fd, &argp->error, sev->asid,
>>>                                       &params.handle, params.policy,
>>>                                       params.pdh_uaddr, params.pdh_len,
>>>                                       params.session_uaddr, params.session_len);
>>>
>>>         /* Copy params back to user even on failure, e.g. for error info. */
>>>         if (copy_to_user((void __user *)(uintptr_t)argp->data,
>>>                          &params, sizeof(struct kvm_sev_receive_start)))
>>>                 return -EFAULT;
>>>
>>>         if (ret)
>>>                 return ret;
>>>
>>>         sev->handle = params.handle;
>>>         sev->fd = argp->sev_fd;
>>>         return 0;
>>> }
>>>
>>
>> I have no strong preference for either of the abstraction approaches. The
>> sheer number of arguments can also make some folks wonder whether such an
>> abstraction makes it easier to read, e.g. send-start may need up to 11.
>
> Yeah, that's brutal, but IMO having a few ugly functions is an acceptable cost if
> it means the rest of the API is cleaner. E.g. KVM is not the right place to
> implement sev_deactivate_lock, as any coincident DEACTIVATE will be problematic.
> The current code "works" because KVM is the only in-tree user, but even that's a
> bit of a grey area because sev_guest_deactivate() is exported.
>
> If large param lists are problematic, one idea would be to reuse the sev_data_*
> structs for the API. I still don't like the idea of exposing those structs
> outside of the PSP driver, and the potential user vs. kernel pointer confusion
> is more than a bit ugly. On the other hand it's not exactly secret info,
> e.g. KVM's UAPI structs are already excruciatingly close to sev_data_* structs.
>
> For future ioctls(), KVM could even define UAPI structs that are bit-for-bit
> compatible with the hardware structs. That would allow KVM to copy userspace's
> data directly into a "struct sev_data_*" and simply require the handle and any
> other KVM-defined params to be zero. KVM could then hand the whole struct over
> to the PSP driver for processing.

Most of the address fields in the "struct sev_data_*" are physical
addresses, which userspace will not be able to populate, so the PSP
driver or KVM will still need to assist in filling the final hardware
structure. Some fields in the hardware structure must be zero, so we
need to add checks for them.
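
As a rough illustration (the struct name, field layout, and helper below
are hypothetical, not the actual firmware ABI): a bit-for-bit UAPI struct
along the lines you describe could be validated like this before KVM/PSP
fill in the kernel-owned fields:

/*
 * Hypothetical sketch only -- not the real firmware layout.  The idea is
 * a UAPI struct laid out bit-for-bit like the corresponding sev_data_*
 * struct, where userspace must leave the kernel-owned fields (handle,
 * physical addresses, reserved bits) as zero and KVM/PSP fill them in
 * before issuing the command.
 */
#include <errno.h>
#include <stdint.h>

struct kvm_sev_receive_start_v2 {       /* hypothetical UAPI struct */
        uint32_t handle;                /* must be zero from userspace */
        uint32_t policy;
        uint64_t pdh_cert_paddr;        /* must be zero; filled by KVM/PSP */
        uint32_t pdh_cert_len;
        uint32_t reserved;              /* must be zero */
        uint64_t session_paddr;         /* must be zero; filled by KVM/PSP */
        uint32_t session_len;
} __attribute__((packed));

/* Reject anything userspace is not allowed to set. */
static int check_user_zero_fields(const struct kvm_sev_receive_start_v2 *p)
{
        if (p->handle || p->reserved || p->pdh_cert_paddr || p->session_paddr)
                return -EINVAL;
        return 0;
}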

I can try posting an RFC after the SNP series and we can see how it all looks.

>
> We can even do a direct copy to sev_data* with KVM's current UAPI by swapping
> fields as necessary, e.g. swap policy<->handle before and after send-start, but
> that's all kinds of gross and probably not a net positive.
>
