Subject: Re: [PATCH] RDMA/core: reduce IB_POLL_BATCH constant
From: Sagi Grimberg <>
Date: Wed, 21 Feb 2018 15:44:40 +0200
>> On Tue, 2018-02-20 at 21:59 +0100, Arnd Bergmann wrote:
>>>  /* # of WCs to poll for with a single call to ib_poll_cq */
>>> -#define IB_POLL_BATCH			16
>>> +#define IB_POLL_BATCH			8
>>
>> The purpose of batch polling is to minimize contention on the cq spinlock.
>> Reducing the IB_POLL_BATCH constant may affect performance negatively. Has
>> the performance impact of this change been verified for all affected drivers
>> (ib_srp, ib_srpt, ib_iser, ib_isert, NVMeOF, NVMeOF target, SMB Direct, NFS
>> over RDMA, ...)?
>
> Only the users of the DIRECT polling method use an on-stack
> array of ib_wc's. This is only the SRP drivers.
>
> The other two modes make use of a dynamically allocated array
> of ib_wc's that hangs off the ib_cq. These shouldn't need any
> reduction in the size of this array, and they are the common
> case.
>
> IMO a better solution would be to change ib_process_cq_direct
> to use a smaller on-stack array, and leave IB_POLL_BATCH alone.
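For context, the pre-patch call paths look roughly like this (abridged from drivers/infiniband/core/cq.c; the completion handling and CQ rearm logic are trimmed). Only the DIRECT path puts the WC array on the stack; the softirq and workqueue paths pass NULL and fall back to the cq->wc allocation:

int ib_process_cq_direct(struct ib_cq *cq, int budget)
{
	struct ib_wc wcs[IB_POLL_BATCH];	/* on-stack array, IB_POLL_BATCH entries */

	return __ib_process_cq(cq, budget, wcs);
}

static int ib_poll_handler(struct irq_poll *iop, int budget)
{
	struct ib_cq *cq = container_of(iop, struct ib_cq, iop);
	int completed;

	/* poll_wc == NULL: __ib_process_cq() uses the preallocated cq->wc */
	completed = __ib_process_cq(cq, budget, NULL);
	/* ... irq_poll completion and CQ rearm elided ... */
	return completed;
}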
The only reason I added this on-stack array was to let consumers that did not use the ib_alloc_cq API call it, but thinking it over again, that was the wrong decision (those consumers probably did not set wr_cqe correctly in the first place).
How about we make ib_process_cq_direct use the cq->wc array and add a WARN_ON (failing gracefully) if a caller uses this API without having called ib_alloc_cq?
--
diff --git a/drivers/infiniband/core/cq.c b/drivers/infiniband/core/cq.c
index bc79ca8215d7..cd3e9e124834 100644
--- a/drivers/infiniband/core/cq.c
+++ b/drivers/infiniband/core/cq.c
@@ -25,10 +25,10 @@
 #define IB_POLL_FLAGS \
 	(IB_CQ_NEXT_COMP | IB_CQ_REPORT_MISSED_EVENTS)
 
-static int __ib_process_cq(struct ib_cq *cq, int budget, struct ib_wc *poll_wc)
+static int __ib_process_cq(struct ib_cq *cq, int budget)
 {
 	int i, n, completed = 0;
-	struct ib_wc *wcs = poll_wc ? : cq->wc;
+	struct ib_wc *wcs = cq->wc;
 
 	/*
 	 * budget might be (-1) if the caller does not
@@ -72,9 +72,9 @@ static int __ib_process_cq(struct ib_cq *cq, int budget, struct ib_wc *poll_wc)
  */
 int ib_process_cq_direct(struct ib_cq *cq, int budget)
 {
-	struct ib_wc wcs[IB_POLL_BATCH];
-
-	return __ib_process_cq(cq, budget, wcs);
+	if (unlikely(WARN_ON_ONCE(!cq->wc)))
+		return 0;
+	return __ib_process_cq(cq, budget);
 }
 EXPORT_SYMBOL(ib_process_cq_direct);
 
@@ -88,7 +88,7 @@ static int ib_poll_handler(struct irq_poll *iop, int budget)
 	struct ib_cq *cq = container_of(iop, struct ib_cq, iop);
 	int completed;
 
-	completed = __ib_process_cq(cq, budget, NULL);
+	completed = __ib_process_cq(cq, budget);
 	if (completed < budget) {
 		irq_poll_complete(&cq->iop);
 		if (ib_req_notify_cq(cq, IB_POLL_FLAGS) > 0)
@@ -108,7 +108,7 @@ static void ib_cq_poll_work(struct work_struct *work)
 	struct ib_cq *cq = container_of(work, struct ib_cq, work);
 	int completed;
 
-	completed = __ib_process_cq(cq, IB_POLL_BUDGET_WORKQUEUE, NULL);
+	completed = __ib_process_cq(cq, IB_POLL_BUDGET_WORKQUEUE);
 	if (completed >= IB_POLL_BUDGET_WORKQUEUE ||
 	    ib_req_notify_cq(cq, IB_POLL_FLAGS) > 0)
 		queue_work(ib_comp_wq, &cq->work);
--
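For illustration, a DIRECT-mode consumer under this scheme would look something like the following (hypothetical driver code, not part of the patch; my_done, my_poll_example, and the posting step are made up, and error handling is trimmed):

static void my_done(struct ib_cq *cq, struct ib_wc *wc)
{
	/* reached through wc->wr_cqe, which ib_alloc_cq consumers must set */
}

static int my_poll_example(struct ib_device *dev)
{
	struct ib_cqe cqe = { .done = my_done };
	struct ib_cq *cq;

	/* IB_POLL_DIRECT: no interrupt-driven reaping, the caller polls */
	cq = ib_alloc_cq(dev, NULL, 16, 0, IB_POLL_DIRECT);
	if (IS_ERR(cq))
		return PTR_ERR(cq);

	/* ... post WRs with wr->wr_cqe = &cqe ... */

	/* a budget of (-1) polls until the CQ is drained */
	ib_process_cq_direct(cq, -1);

	ib_free_cq(cq);
	return 0;
}

A CQ created with the older ib_create_cq() API has no cq->wc, so such a caller now trips the WARN_ON_ONCE and gets 0 back instead of polling with wr_cqe potentially unset.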