Subject: Re: [PATCH AUTOSEL for 4.14 039/161] IB/cq: Don't force IB_POLL_DIRECT poll context for ib_process_cq_direct
From: Max Gurtovoy <>
Date: Mon, 9 Apr 2018 19:21:29 +0300
Hi Sasha, please consider taking a small fix for this one (also useful for 4.15):
commit d3b9e8ad425cfd5b9116732e057f1b48e4d3bcb8
Author: Max Gurtovoy <maxg@mellanox.com>
Date:   Mon Mar 5 20:09:48 2018 +0200

    RDMA/core: Reduce poll batch for direct cq polling

    Fix warning limit for kernel stack consumption:

    drivers/infiniband/core/cq.c: In function 'ib_process_cq_direct':
    drivers/infiniband/core/cq.c:78:1: error: the frame size of 1032 bytes
    is larger than 1024 bytes [-Werror=frame-larger-than=]

    Using smaller ib_wc array on the stack brings us comfortably below that
    limit again.

    Fixes: 246d8b184c10 ("IB/cq: Don't force IB_POLL_DIRECT poll context for ib_process_cq_direct")
    Reported-by: Arnd Bergmann <arnd@arndb.de>
    Reviewed-by: Sergey Gorenko <sergeygo@mellanox.com>
    Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
    Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
    Reviewed-by: Bart Van Assche <bart.vanassche@wdc.com>
    Acked-by: Arnd Bergmann <arnd@arndb.de>
    Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
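In short, that fix gives the direct-polling path its own, smaller on-stack batch and passes the batch size down to __ib_process_cq(), so the softirq/workqueue contexts keep using the preallocated cq->wc. A rough sketch of the result (the IB_POLL_BATCH_DIRECT name and the extra batch argument follow that commit, but the exact upstream hunk may differ slightly):

/*
 * Sketch only: a smaller batch for the direct path keeps the on-stack
 * ib_wc array (and thus the frame of ib_process_cq_direct) below the
 * 1024-byte -Wframe-larger-than limit, while the other poll contexts
 * still use the full IB_POLL_BATCH with the preallocated cq->wc.
 */
#define IB_POLL_BATCH		16
#define IB_POLL_BATCH_DIRECT	8

int ib_process_cq_direct(struct ib_cq *cq, int budget)
{
	struct ib_wc wcs[IB_POLL_BATCH_DIRECT];

	return __ib_process_cq(cq, budget, wcs, IB_POLL_BATCH_DIRECT);
}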
-Max.
On 4/9/2018 3:20 AM, Sasha Levin wrote:
> From: Sagi Grimberg <sagi@grimberg.me>
>
> [ Upstream commit 246d8b184c100e8eb6b4e8c88f232c2ed2a4e672 ]
>
> polling the completion queue directly does not interfere
> with the existing polling logic, hence drop the requirement.
> Be aware that running ib_process_cq_direct with non IB_POLL_DIRECT
> CQ may trigger concurrent CQ processing.
>
> This can be used for polling mode ULPs.
>
> Cc: Bart Van Assche <bart.vanassche@wdc.com>
> Reported-by: Steve Wise <swise@opengridcomputing.com>
> Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
> [maxg: added wcs array argument to __ib_process_cq]
> Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
> Signed-off-by: Doug Ledford <dledford@redhat.com>
> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
> ---
>  drivers/infiniband/core/cq.c | 23 +++++++++++++----------
>  1 file changed, 13 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/infiniband/core/cq.c b/drivers/infiniband/core/cq.c
> index f2ae75fa3128..c8c5a5a7f433 100644
> --- a/drivers/infiniband/core/cq.c
> +++ b/drivers/infiniband/core/cq.c
> @@ -25,9 +25,10 @@
>  #define IB_POLL_FLAGS \
>  	(IB_CQ_NEXT_COMP | IB_CQ_REPORT_MISSED_EVENTS)
>
> -static int __ib_process_cq(struct ib_cq *cq, int budget)
> +static int __ib_process_cq(struct ib_cq *cq, int budget, struct ib_wc *poll_wc)
>  {
>  	int i, n, completed = 0;
> +	struct ib_wc *wcs = poll_wc ? : cq->wc;
>
>  	/*
>  	 * budget might be (-1) if the caller does not
> @@ -35,9 +36,9 @@ static int __ib_process_cq(struct ib_cq *cq, int budget)
>  	 * minimum here.
>  	 */
>  	while ((n = ib_poll_cq(cq, min_t(u32, IB_POLL_BATCH,
> -			budget - completed), cq->wc)) > 0) {
> +			budget - completed), wcs)) > 0) {
>  		for (i = 0; i < n; i++) {
> -			struct ib_wc *wc = &cq->wc[i];
> +			struct ib_wc *wc = &wcs[i];
>
>  			if (wc->wr_cqe)
>  				wc->wr_cqe->done(cq, wc);
> @@ -60,18 +61,20 @@ static int __ib_process_cq(struct ib_cq *cq, int budget)
>   * @cq:		CQ to process
>   * @budget:	number of CQEs to poll for
>   *
> - * This function is used to process all outstanding CQ entries on a
> - * %IB_POLL_DIRECT CQ. It does not offload CQ processing to a different
> - * context and does not ask for completion interrupts from the HCA.
> + * This function is used to process all outstanding CQ entries.
> + * It does not offload CQ processing to a different context and does
> + * not ask for completion interrupts from the HCA.
> + * Using direct processing on CQ with non IB_POLL_DIRECT type may trigger
> + * concurrent processing.
>   *
>   * Note: do not pass -1 as %budget unless it is guaranteed that the number
>   * of completions that will be processed is small.
>   */
>  int ib_process_cq_direct(struct ib_cq *cq, int budget)
>  {
> -	WARN_ON_ONCE(cq->poll_ctx != IB_POLL_DIRECT);
> +	struct ib_wc wcs[IB_POLL_BATCH];
>
> -	return __ib_process_cq(cq, budget);
> +	return __ib_process_cq(cq, budget, wcs);
>  }
>  EXPORT_SYMBOL(ib_process_cq_direct);
>
> @@ -85,7 +88,7 @@ static int ib_poll_handler(struct irq_poll *iop, int budget)
>  	struct ib_cq *cq = container_of(iop, struct ib_cq, iop);
>  	int completed;
>
> -	completed = __ib_process_cq(cq, budget);
> +	completed = __ib_process_cq(cq, budget, NULL);
>  	if (completed < budget) {
>  		irq_poll_complete(&cq->iop);
>  		if (ib_req_notify_cq(cq, IB_POLL_FLAGS) > 0)
> @@ -105,7 +108,7 @@ static void ib_cq_poll_work(struct work_struct *work)
>  	struct ib_cq *cq = container_of(work, struct ib_cq, work);
>  	int completed;
>
> -	completed = __ib_process_cq(cq, IB_POLL_BUDGET_WORKQUEUE);
> +	completed = __ib_process_cq(cq, IB_POLL_BUDGET_WORKQUEUE, NULL);
>  	if (completed >= IB_POLL_BUDGET_WORKQUEUE ||
>  	    ib_req_notify_cq(cq, IB_POLL_FLAGS) > 0)
>  		queue_work(ib_comp_wq, &cq->work);
>
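For completeness, a hypothetical sketch (not taken from any in-tree ULP) of the polling-mode usage this patch enables: a consumer drains its CQ synchronously with ib_process_cq_direct(), keeping in mind the commit's note that doing so on a non-IB_POLL_DIRECT CQ may run concurrently with the CQ's regular processing. All ulp_* names and the budget value below are illustrative only.

#include <rdma/ib_verbs.h>

/* Hypothetical request: the embedded ib_cqe is what wc->wr_cqe points
 * back to when __ib_process_cq() dispatches the completion. */
struct ulp_request {
	struct ib_cqe	cqe;
	/* ... ULP-specific state ... */
};

static void ulp_send_done(struct ib_cq *cq, struct ib_wc *wc)
{
	struct ulp_request *req =
		container_of(wc->wr_cqe, struct ulp_request, cqe);

	/* complete the request here (illustrative only) */
	(void)req;
}

static void ulp_init_request(struct ulp_request *req)
{
	req->cqe.done = ulp_send_done;	/* invoked via wc->wr_cqe->done() */
}

/* Fast-path poll: drain up to 16 completions without rearming the CQ.
 * The budget of 16 is arbitrary; the caller is responsible for
 * serializing against the CQ's normal softirq/workqueue processing. */
static int ulp_poll(struct ib_cq *cq)
{
	return ib_process_cq_direct(cq, 16);
}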