Subject: Re: [PATCH v9 22/27] virt: gunyah: Add resource tickets
    * Elliot Berman <quic_eberman@quicinc.com> [2023-01-20 14:46:21]:

    > +int ghvm_add_resource_ticket(struct gunyah_vm *ghvm, struct gunyah_vm_resource_ticket *ticket)
    > +{
    > +	struct gunyah_vm_resource_ticket *iter;
    > +	struct gunyah_resource *ghrsc;
    > +	int ret = 0;
    > +
    > +	mutex_lock(&ghvm->resources_lock);
    > +	list_for_each_entry(iter, &ghvm->resource_tickets, list) {
    > +		if (iter->resource_type == ticket->resource_type && iter->label == ticket->label) {
    > +			ret = -EEXIST;
    > +			goto out;
    > +		}
    > +	}
    > +
    > +	if (!try_module_get(ticket->owner)) {
    > +		ret = -ENODEV;
    > +		goto out;
    > +	}
    > +
    > +	list_add(&ticket->list, &ghvm->resource_tickets);
    > +	INIT_LIST_HEAD(&ticket->resources);
    > +
    > +	list_for_each_entry(ghrsc, &ghvm->resources, list) {
    > +		if (ghrsc->type == ticket->resource_type && ghrsc->rm_label == ticket->label) {
    > +			if (!ticket->populate(ticket, ghrsc))
    > +				list_move(&ghrsc->list, &ticket->resources);

    Do we need the search to continue in case of a hit? 'gh_vm_add_resource' seems to
    break out of the loop on the first occurrence.

    Also, do we have examples of more than one 'gunyah_resource' being associated
    with the same 'gunyah_vm_resource_ticket'? Both the vcpu and irqfd tickets seem
    to deal with just one resource?
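
    If each ticket is indeed expected to bind at most one resource, breaking out
    on the first label match would mirror gh_vm_add_resource(), and would also
    avoid continuing the iteration through a node that was just moved onto another
    list. Untested sketch against the code above:

	list_for_each_entry(ghrsc, &ghvm->resources, list) {
		if (ghrsc->type == ticket->resource_type &&
		    ghrsc->rm_label == ticket->label) {
			if (!ticket->populate(ticket, ghrsc))
				list_move(&ghrsc->list, &ticket->resources);
			/* stop after the first match, like gh_vm_add_resource() */
			break;
		}
	}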

    > static int gh_vm_start(struct gunyah_vm *ghvm)
    > {
    > 	struct gunyah_vm_memory_mapping *mapping;
    > 	u64 dtb_offset;
    > 	u32 mem_handle;
    > -	int ret;
    > +	struct gunyah_resource *ghrsc;
    > +	struct gh_rm_hyp_resources *resources;
    > +	int ret, i;
    >
    > 	down_write(&ghvm->status_lock);
    > 	if (ghvm->vm_status != GH_RM_VM_STATUS_NO_STATE) {
    > @@ -241,6 +314,22 @@ static int gh_vm_start(struct gunyah_vm *ghvm)
    > 		goto err;
    > 	}
    >
    > +	ret = gh_rm_get_hyp_resources(ghvm->rm, ghvm->vmid, &resources);
    > +	if (ret) {
    > +		pr_warn("Failed to get hypervisor resources for VM: %d\n", ret);
    > +		goto err;
    > +	}
    > +
    > +	for (i = 0; i < le32_to_cpu(resources->n_entries); i++) {

    minor nit: not sure if we can rely on the compiler to optimize this, but it
    would be better to run le32_to_cpu() once and use the result in the loop.
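
    Something like this (sketch; 'n_entries' is just a local name picked for
    illustration):

	/* read the little-endian count once, outside the loop condition */
	u32 n_entries = le32_to_cpu(resources->n_entries);

	for (i = 0; i < n_entries; i++) {
		...
	}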

    > +		ghrsc = gh_rm_alloc_resource(ghvm->rm, &resources->entries[i]);
    > +		if (!ghrsc) {
    > +			ret = -ENOMEM;
    > +			goto err;
    > +		}
    > +
    > +		gh_vm_add_resource(ghvm, ghrsc);

    Shouldn't we have gh_vm_add_resource() -> ticket->populate() return a result,
    so that in case of failure we can bail out of this loop?
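
    For illustration, assuming gh_vm_add_resource() were reworked to propagate a
    populate() failure as an int, the loop body could bail out like this
    (untested sketch):

		ghrsc = gh_rm_alloc_resource(ghvm->rm, &resources->entries[i]);
		if (!ghrsc) {
			ret = -ENOMEM;
			goto err;
		}

		/* propagate a populate() failure instead of ignoring it */
		ret = gh_vm_add_resource(ghvm, ghrsc);
		if (ret)
			goto err;

    The err path would then also need to tear down any resources already added
    in earlier iterations.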

    > +	}
    > +
    > 	ret = gh_rm_vm_start(ghvm->rm, ghvm->vmid);
    > 	if (ret) {
    > 		pr_warn("Failed to start VM: %d\n", ret);
