Subject: Re: [PATCH 09/16] dma-direct: Support PCI P2PDMA pages in dma-direct map_sg
From: John Hubbard <>
Date: Sun, 2 May 2021 16:28:08 -0700
On 4/8/21 10:01 AM, Logan Gunthorpe wrote:
> Add PCI P2PDMA support for dma_direct_map_sg() so that it can map
> PCI P2PDMA pages directly without a hack in the callers. This allows
> for heterogeneous SGLs that contain both P2PDMA and regular pages.
>
> SGL segments that contain PCI bus addresses are marked with
> sg_mark_pci_p2pdma() and are ignored when unmapped.
>
> Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
> ---
>  kernel/dma/direct.c | 25 ++++++++++++++++++++++---
>  1 file changed, 22 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> index 002268262c9a..108dfb4ecbd5 100644
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -13,6 +13,7 @@
>  #include <linux/vmalloc.h>
>  #include <linux/set_memory.h>
>  #include <linux/slab.h>
> +#include <linux/pci-p2pdma.h>
>  #include "direct.h"
>
>  /*
> @@ -387,19 +388,37 @@ void dma_direct_unmap_sg(struct device *dev, struct scatterlist *sgl,
This routine now deserves a brief comment, since it is doing less obvious things. How about something like this:
/*
 * Unmaps pages, except for PCI_P2PDMA pages, which were never mapped in the
 * first place. Instead of unmapping PCI_P2PDMA entries, simply remove the
 * SG_PCI_P2PDMA mark
 */
void dma_direct_unmap_sg(struct device *dev, struct scatterlist *sgl,
		int nents, enum dma_data_direction dir, unsigned long attrs)
{
>  	struct scatterlist *sg;
>  	int i;
>
> -	for_each_sg(sgl, sg, nents, i)
> +	for_each_sg(sgl, sg, nents, i) {
> +		if (sg_is_pci_p2pdma(sg)) {
> +			sg_unmark_pci_p2pdma(sg);
> +			continue;
> +		}
> +
>  		dma_direct_unmap_page(dev, sg->dma_address, sg_dma_len(sg), dir,
>  			     attrs);
> +	}
The same thing can be achieved with fewer lines and a bit more clarity. Can we please do it like this instead:
	for_each_sg(sgl, sg, nents, i) {
		if (sg_is_pci_p2pdma(sg))
			sg_unmark_pci_p2pdma(sg);
		else
			dma_direct_unmap_page(dev, sg->dma_address,
					      sg_dma_len(sg), dir, attrs);
	}
>  }
>  #endif
>
Also here, a block comment for the function would be nice. How about approximately this:
/*
 * Maps each SG segment. Returns the number of entries mapped, or 0 upon
 * failure. If any entry could not be mapped, then no entries are mapped.
 */
I'll stop complaining about the pre-existing return code conventions, since by now you know what I was thinking of saying. :)
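(For completeness, that convention means callers typically end up doing something like the following. This is only a rough sketch of the usual calling pattern, not code from this patch:)

	int nents_mapped;

	nents_mapped = dma_map_sg(dev, sgl, nents, DMA_TO_DEVICE);
	if (nents_mapped == 0) {
		/* Mapping failed; no entries were mapped. */
		return -EIO;
	}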
>  int dma_direct_map_sg(struct device *dev, struct scatterlist *sgl, int nents,
>  		enum dma_data_direction dir, unsigned long attrs)
>  {
> -	int i;
> +	struct pci_p2pdma_map_state p2pdma_state = {};
Is it worth putting this stuff on the stack--is there a noticeable performance improvement from caching the state? Because if it's invisible, then simplicity is better. I suspect you're right, and that it *is* worth it, but it's good to know for real.
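(To spell out my assumption about what the caching buys: presumably it lets the provider/bus-offset lookup happen once per pagemap, rather than once per SG entry. A purely hypothetical sketch of that pattern follows; the struct and helper names below are made up, not the real pci_p2pdma_map_state fields:)

	/* Hypothetical illustration only; these names are invented. */
	struct p2p_state_sketch {
		struct dev_pagemap *last_pgmap;	/* cached provider */
		u64 bus_offset;			/* cached offset for it */
	};

	static dma_addr_t p2p_bus_addr_sketch(struct p2p_state_sketch *state,
					      struct page *page)
	{
		if (page->pgmap != state->last_pgmap) {
			/* Slow path, taken once per pagemap in the SG list. */
			state->last_pgmap = page->pgmap;
			state->bus_offset = resolve_bus_offset(page->pgmap);
		}
		return page_to_phys(page) + state->bus_offset;
	}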
>  	struct scatterlist *sg;
> +	int i, ret = 0;
>
>  	for_each_sg(sgl, sg, nents, i) {
> +		if (is_pci_p2pdma_page(sg_page(sg))) {
> +			ret = pci_p2pdma_map_segment(&p2pdma_state, dev, sg,
> +						     attrs);
> +			if (ret < 0) {
> +				goto out_unmap;
> +			} else if (ret) {
> +				ret = 0;
> +				continue;
Is this a bug? If neither of those "if" branches fires (ret == 0), then the code (probably unintentionally) falls through and continues on to attempt to call dma_direct_map_page()--despite being a PCI_P2PDMA page!
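(Condensed, and ignoring the ret = 0 reset and the cleanup details, the loop body as written does this, which is the fall-through I mean:)

	ret = pci_p2pdma_map_segment(&p2pdma_state, dev, sg, attrs);
	if (ret < 0)
		goto out_unmap;		/* error */
	else if (ret)
		continue;		/* segment handled as a bus address */
	/* ret == 0: falls through to dma_direct_map_page() below */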
See below for suggestions:
> + } > + } > + > sg->dma_address = dma_direct_map_page(dev, sg_page(sg), > sg->offset, sg->length, dir, attrs); > if (sg->dma_address == DMA_MAPPING_ERROR)
This is another case in which "continue" is misleading and not as good as "else". Because unless I'm wrong above, you really only want to take one path *or* the other.
Also, the "else if (ret)" branch can be simplified to a plain "else" that just sets ret = 0 unconditionally.
Given all that, here's a suggested alternative, which is both shorter and clearer, IMHO:
	for_each_sg(sgl, sg, nents, i) {
		if (is_pci_p2pdma_page(sg_page(sg))) {
			ret = pci_p2pdma_map_segment(&p2pdma_state, dev, sg,
						     attrs);
			if (ret < 0)
				goto out_unmap;
			else
				ret = 0;
		} else {
			sg->dma_address = dma_direct_map_page(dev, sg_page(sg),
					sg->offset, sg->length, dir, attrs);
			if (sg->dma_address == DMA_MAPPING_ERROR)
				goto out_unmap;
			sg_dma_len(sg) = sg->length;
		}
	}
thanks,
--
John Hubbard
NVIDIA
> @@ -411,7 +430,7 @@ int dma_direct_map_sg(struct device *dev, struct scatterlist *sgl, int nents,
>
>  out_unmap:
>  	dma_direct_unmap_sg(dev, sgl, i, dir, attrs | DMA_ATTR_SKIP_CPU_SYNC);
> -	return 0;
> +	return ret;
>  }
>
>  dma_addr_t dma_direct_map_resource(struct device *dev, phys_addr_t paddr,
>