Date: Mon, 19 Feb 2024 13:41:41 +0100
From: Petr Tesařík <>
Subject: Re: [PATCH v3 3/3] swiotlb: Honour dma_alloc_coherent() alignment in swiotlb_alloc()
On Mon, 5 Feb 2024 19:01:27 +0000 Will Deacon <will@kernel.org> wrote:
> core-api/dma-api-howto.rst states the following properties of
> dma_alloc_coherent():
>
> | The CPU virtual address and the DMA address are both guaranteed to
> | be aligned to the smallest PAGE_SIZE order which is greater than or
> | equal to the requested size.
>
> However, swiotlb_alloc() passes zero for the 'alloc_align_mask'
> parameter of swiotlb_find_slots() and so this property is not upheld.
> Instead, allocations larger than a page are aligned to PAGE_SIZE.
>
> Calculate the mask corresponding to the page order suitable for holding
> the allocation and pass that to swiotlb_find_slots().
>
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: Marek Szyprowski <m.szyprowski@samsung.com>
> Cc: Robin Murphy <robin.murphy@arm.com>
> Cc: Petr Tesarik <petr.tesarik1@huawei-partners.com>
> Cc: Dexuan Cui <decui@microsoft.com>
> Fixes: e81e99bacc9f ("swiotlb: Support aligned swiotlb buffers")
> Signed-off-by: Will Deacon <will@kernel.org>
Reviewed-by: Petr Tesarik <petr.tesarik1@huawei-partners.com>
Petr T
> ---
>  kernel/dma/swiotlb.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index adbb3143238b..283eea33dd22 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -1633,12 +1633,14 @@ struct page *swiotlb_alloc(struct device *dev, size_t size)
>  	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
>  	struct io_tlb_pool *pool;
>  	phys_addr_t tlb_addr;
> +	unsigned int align;
>  	int index;
>
>  	if (!mem)
>  		return NULL;
>
> -	index = swiotlb_find_slots(dev, 0, size, 0, &pool);
> +	align = (1 << (get_order(size) + PAGE_SHIFT)) - 1;
> +	index = swiotlb_find_slots(dev, 0, size, align, &pool);
>  	if (index == -1)
>  		return NULL;
>
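
[Editorial aside, not part of the patch or the original mail:] for anyone
wanting to see what that mask works out to in practice, below is a small
userspace sketch. It assumes a 4 KiB PAGE_SIZE and uses a simplified
stand-in for the kernel's get_order(); it only prints the alignment implied
by the expression added in the hunk above.

/*
 * Userspace illustration (not kernel code) of the alloc_align_mask
 * computed in swiotlb_alloc() after this patch, assuming PAGE_SHIFT = 12.
 */
#include <stdio.h>
#include <stddef.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)

/* Simplified stand-in for the kernel helper: smallest page order whose
 * size is greater than or equal to the requested size. */
static unsigned int get_order(size_t size)
{
	unsigned int order = 0;

	size = (size - 1) >> PAGE_SHIFT;
	while (size) {
		order++;
		size >>= 1;
	}
	return order;
}

int main(void)
{
	size_t sizes[] = { 512, PAGE_SIZE, PAGE_SIZE + 1, 4 * PAGE_SIZE };

	for (size_t i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
		size_t size = sizes[i];
		/* Same expression as the patch adds before
		 * swiotlb_find_slots(). */
		unsigned long align = (1UL << (get_order(size) + PAGE_SHIFT)) - 1;

		printf("size %6zu -> align mask 0x%05lx (alignment %lu)\n",
		       size, align, align + 1);
	}
	return 0;
}

For a 512-byte request this prints a 0xfff mask (4 KiB alignment), and for
a four-page request a 0x3fff mask (16 KiB alignment), matching the
dma-api-howto.rst guarantee quoted in the commit message.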