Subject: RE: [Patch v4 12/12] RDMA/mana_ib: Add a driver for Microsoft Azure Network Adapter
Please see comments inline.

    -----Original Message-----
    From: Dexuan Cui <decui@microsoft.com>
    Sent: Sunday, July 10, 2022 8:43 PM
    To: Long Li <longli@microsoft.com>; KY Srinivasan <kys@microsoft.com>; Haiyang Zhang <haiyangz@microsoft.com>; Stephen Hemminger <sthemmin@microsoft.com>; Wei Liu <wei.liu@kernel.org>; David S. Miller <davem@davemloft.net>; Jakub Kicinski <kuba@kernel.org>; Paolo Abeni <pabeni@redhat.com>; Jason Gunthorpe <jgg@ziepe.ca>; Leon Romanovsky <leon@kernel.org>; edumazet@google.com; shiraz.saleem@intel.com; Ajay Sharma <sharmaajay@microsoft.com>
    Cc: linux-hyperv@vger.kernel.org; netdev@vger.kernel.org; linux-kernel@vger.kernel.org; linux-rdma@vger.kernel.org
    Subject: RE: [Patch v4 12/12] RDMA/mana_ib: Add a driver for Microsoft Azure Network Adapter

    > From: longli@linuxonhyperv.com <longli@linuxonhyperv.com>
    > Sent: Wednesday, June 15, 2022 7:07 PM ...
> +int mana_ib_gd_create_dma_region(struct mana_ib_dev *dev, struct ib_umem *umem,
> +				   mana_handle_t *gdma_region, u64 page_sz)
> +{
> + ...
> +	err = mana_gd_send_request(gc, create_req_msg_size, create_req,
> +				   sizeof(create_resp), &create_resp);
> +	kfree(create_req);
> +
> +	if (err || create_resp.hdr.status) {
> +		ibdev_err(&dev->ib_dev,
> +			  "Failed to create DMA region: %d, 0x%x\n", err,
> +			  create_resp.hdr.status);

	if (!err)
		err = -EPROTO;
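
The same check recurs below for GDMA_CREATE_MR, so it could be factored into a helper. A minimal sketch, assuming only what the quoted patch shows (mana_gd_send_request() returning an errno plus a response header carrying a status field); the helper name is made up here:

	/* Sketch only, not from the posted patch: normalize the
	 * (err, hdr.status) pair from mana_gd_send_request() into a
	 * single errno.  -EPROTO covers the case where the transport
	 * succeeded but the device reported a failure status.
	 */
	static int mana_ib_gd_check_resp(int err, u32 status)
	{
		if (err)
			return err;

		return status ? -EPROTO : 0;
	}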

> +		goto error;
> +	}
> + ...
> +			err = mana_gd_send_request(gc, add_req_msg_size,
> +						   add_req, sizeof(add_resp),
> +						   &add_resp);
> +			if (err || add_resp.hdr.status != expected_status) {
> +				ibdev_err(&dev->ib_dev,
> +					  "Failed to put DMA pages %u: %d,0x%x\n",
> +					  i, err, add_resp.hdr.status);
> +				err = -EPROTO;

    Should we try to undo what has been done by calling GDMA_DESTROY_DMA_REGION?
    Yes, I updated the patch.
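
A rough shape of that unwind, as a sketch rather than the updated patch itself: GDMA_DESTROY_DMA_REGION is the message named above, but the request/response struct names and the helper name are assumptions here.

	/* Sketch only: tear down a partially created DMA region when
	 * adding pages fails, so the region handle is not leaked.  The
	 * struct names are assumed, not taken from the posted patch.
	 */
	static void mana_ib_gd_destroy_dma_region(struct mana_ib_dev *dev,
						  u64 region_handle)
	{
		struct gdma_destroy_dma_region_req req = {};
		struct gdma_general_resp resp = {};
		struct gdma_context *gc = dev->gdma_dev->gdma_context;

		mana_gd_init_req_hdr(&req.hdr, GDMA_DESTROY_DMA_REGION,
				     sizeof(req), sizeof(resp));
		req.dma_region_handle = region_handle;

		if (mana_gd_send_request(gc, sizeof(req), &req,
					 sizeof(resp), &resp))
			ibdev_err(&dev->ib_dev,
				  "Failed to destroy DMA region\n");
	}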

> +				goto free_req;
> +			}
> +
> +			num_pages_cur += num_pages_to_handle;
> +			num_pages_to_handle =
> +				min_t(size_t, num_pages_total - num_pages_cur,
> +				      max_pgs_add_cmd);
> +			add_req_msg_size = sizeof(*add_req) +
> +					   num_pages_to_handle * sizeof(u64);
> +		}
> +
> +free_req:
> +		kfree(add_req);
> +	}
> +
> +error:
> +	return err;
> +}
    > + ...
> +int mana_ib_gd_create_mr(struct mana_ib_dev *dev, struct mana_ib_mr *mr,
> +			 struct gdma_create_mr_params *mr_params)
> +{
> +	struct gdma_create_mr_response resp = {};
> +	struct gdma_create_mr_request req = {};
> +	struct gdma_dev *mdev = dev->gdma_dev;
> +	struct gdma_context *gc;
> +	int err;
> +
> +	gc = mdev->gdma_context;
> +
> +	mana_gd_init_req_hdr(&req.hdr, GDMA_CREATE_MR, sizeof(req),
> +			     sizeof(resp));
> +	req.pd_handle = mr_params->pd_handle;
> +
> +	switch (mr_params->mr_type) {
> +	case GDMA_MR_TYPE_GVA:
> +		req.mr_type = GDMA_MR_TYPE_GVA;
> +		req.gva.dma_region_handle = mr_params->gva.dma_region_handle;
> +		req.gva.virtual_address = mr_params->gva.virtual_address;
> +		req.gva.access_flags = mr_params->gva.access_flags;
> +		break;
> +
> +	case GDMA_MR_TYPE_GPA:
> +		req.mr_type = GDMA_MR_TYPE_GPA;
> +		req.gpa.access_flags = mr_params->gpa.access_flags;
> +		break;
> +
> +	case GDMA_MR_TYPE_FMR:
> +		req.mr_type = GDMA_MR_TYPE_FMR;
> +		req.fmr.page_size = mr_params->fmr.page_size;
> +		req.fmr.reserved_pte_count = mr_params->fmr.reserved_pte_count;
> +		break;
> +
> +	default:
> +		ibdev_dbg(&dev->ib_dev,
> +			  "invalid param (GDMA_MR_TYPE) passed, type %d\n",
> +			  req.mr_type);

Here req.mr_type is always 0.
We should remove the three "req.mr_type = ..." lines above and add "req.mr_type = mr_params->mr_type;" before the "switch" line.

No, that's incorrect. mr_type is set explicitly here to control which region types are exposed to the user and to the kernel. GPA and FMR are never exposed to user space, so we cannot assign req.mr_type = mr_params->mr_type.
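
A narrower change that keeps the whitelist but still fixes the always-zero value in the message would be to log the caller's value in the default case. A sketch, not the posted patch:

	default:
		/* Sketch: keep the explicit per-type whitelist above, but
		 * log the value that was actually rejected instead of the
		 * zero-initialized req.mr_type.
		 */
		ibdev_dbg(&dev->ib_dev,
			  "invalid param (GDMA_MR_TYPE) passed, type %d\n",
			  mr_params->mr_type);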

> +		err = -EINVAL;
> +		goto error;
> +	}
> +
> +	err = mana_gd_send_request(gc, sizeof(req), &req, sizeof(resp),
> +				   &resp);
> +
> +	if (err || resp.hdr.status) {
> +		ibdev_err(&dev->ib_dev, "Failed to create mr %d, %u", err,
> +			  resp.hdr.status);

	if (!err)
		err = -EPROTO;

> +		goto error;
> +	}
> +
> +	mr->ibmr.lkey = resp.lkey;
> +	mr->ibmr.rkey = resp.rkey;
> +	mr->mr_handle = resp.mr_handle;
> +
> +	return 0;
> +
> +error:
> +	return err;
> +}
    > + ...
> +static int mana_ib_probe(struct auxiliary_device *adev,
> +			 const struct auxiliary_device_id *id)
> +{
> +	struct mana_adev *madev = container_of(adev, struct mana_adev, adev);
> +	struct gdma_dev *mdev = madev->mdev;
> +	struct mana_context *mc;
> +	struct mana_ib_dev *dev;
> +	int ret = 0;

No need to initialize 'ret' to 0.
Agreed. Updated the patch.

> +int mana_ib_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata)
> +{
> +	struct mana_ib_mr *mr = container_of(ibmr, struct mana_ib_mr, ibmr);
> +	struct ib_device *ibdev = ibmr->device;
> +	struct mana_ib_dev *dev;
> +	int err;
> +
> +	dev = container_of(ibdev, struct mana_ib_dev, ib_dev);
> +
> +	err = mana_ib_gd_destroy_mr(dev, mr->mr_handle);
> +	if (err)

Should we return here without calling ib_umem_release() and kfree(mr)?
Yes. If the device fails to deallocate the resources and we release them back to the kernel anyway, it will lead to unexpected results; see the annotated sketch after the quoted function below.

> +		return err;
> +
> +	if (mr->umem)
> +		ib_umem_release(mr->umem);
> +
> +	kfree(mr);
> +
> +	return 0;
> +}
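
To make that ordering explicit, here is the teardown path again with the rationale as comments; a sketch of the code quoted above, not the posted patch (note that ib_umem_release() itself tolerates a NULL umem):

	/* The device may still hold translations for this MR.  Releasing
	 * the umem before the device has destroyed the MR would hand
	 * pages back to the kernel while the device can still DMA into
	 * them, so bail out and keep the umem pinned on failure.
	 */
	err = mana_ib_gd_destroy_mr(dev, mr->mr_handle);
	if (err)
		return err;

	ib_umem_release(mr->umem);	/* handles a NULL umem */
	kfree(mr);

	return 0;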
