From: Stephen Rothwell
Subject: linux-next: manual merge of the dmaengine tree with the dmaengine-fixes tree
Date: Tue, 14 Dec 2021
Hi all,

Today's linux-next merge of the dmaengine tree got a conflict in:

drivers/dma/idxd/submit.c

between commit:

8affd8a4b5ce3 ("dmaengine: idxd: fix missed completion on abort path")

from the dmaengine-fixes tree and commit:

5d78abb6fbc97 ("dmaengine: idxd: rework descriptor free path on failure")

from the dmaengine tree.

I fixed it up (see below) and can carry the fix as necessary. This
is now fixed as far as linux-next is concerned, but any non-trivial
conflicts should be mentioned to your upstream maintainer when your tree
is submitted for merging. You may also want to consider cooperating
with the maintainer of the conflicting tree to minimise any particularly
complex conflicts.

diff --cc drivers/dma/idxd/submit.c
index 83452fbbb168b,569815a84e95b..0000000000000
--- a/drivers/dma/idxd/submit.c
+++ b/drivers/dma/idxd/submit.c
@@@ -134,20 -120,32 +125,43 @@@ static void llist_abort_desc(struct idx
spin_unlock(&ie->list_lock);

if (found)
- complete_desc(found, IDXD_COMPLETE_ABORT);
+ idxd_dma_complete_txd(found, IDXD_COMPLETE_ABORT, false);
+
+ /*
- * complete_desc() will return desc to allocator and the desc can be
- * acquired by a different process and the desc->list can be modified.
- * Delete desc from list so the list trasversing does not get corrupted
- * by the other process.
++ * completing the descriptor will return desc to allocator and
++ * the desc can be acquired by a different process and the
++ * desc->list can be modified. Delete desc from list so the
++ * list trasversing does not get corrupted by the other process.
+ */
+ list_for_each_entry_safe(d, t, &flist, list) {
+ list_del_init(&d->list);
- complete_desc(d, IDXD_COMPLETE_NORMAL);
++ idxd_dma_complete_txd(d, IDXD_COMPLETE_NORMAL, false);
+ }
}

+ /*
+ * ENQCMDS typically fail when the WQ is inactive or busy. On host submission, the driver
+ * has better control of number of descriptors being submitted to a shared wq by limiting
+ * the number of driver allocated descriptors to the wq size. However, when the swq is
+ * exported to a guest kernel, it may be shared with multiple guest kernels. This means
+ * the likelihood of getting busy returned on the swq when submitting goes significantly up.
+ * Having a tunable retry mechanism allows the driver to keep trying for a bit before giving
+ * up. The sysfs knob can be tuned by the system administrator.
+ */
+ int idxd_enqcmds(struct idxd_wq *wq, void __iomem *portal, const void *desc)
+ {
+ int rc, retries = 0;
+
+ do {
+ rc = enqcmds(portal, desc);
+ if (rc == 0)
+ break;
+ cpu_relax();
+ } while (retries++ < wq->enqcmds_retries);
+
+ return rc;
+ }
+
int idxd_submit_desc(struct idxd_wq *wq, struct idxd_desc *desc)
{
struct idxd_device *idxd = wq->idxd;
