From: Michael Kelley <>
Subject: RE: [PATCH v4 1/5] swiotlb: Fix double-allocation of slots due to broken alignment handling
Date: Wed, 21 Feb 2024 23:35:44 +0000
From: Will Deacon <will@kernel.org> Sent: Wednesday, February 21, 2024 3:35 AM
>
> Commit bbb73a103fbb ("swiotlb: fix a braino in the alignment check fix"),
> which was a fix for commit 0eee5ae10256 ("swiotlb: fix slot alignment
> checks"), causes a functional regression with vsock in a virtual machine
> using bouncing via a restricted DMA SWIOTLB pool.
>
> When virtio allocates the virtqueues for the vsock device using
> dma_alloc_coherent(), the SWIOTLB search can return page-unaligned
> allocations if 'area->index' was left unaligned by a previous allocation
> from the buffer:
>
>  # Final address in brackets is the SWIOTLB address returned to the caller
>  | virtio-pci 0000:00:07.0: orig_addr 0x0 alloc_size 0x2000, iotlb_align_mask
>    0x800 stride 0x2: got slot 1645-1649/7168 (0x98326800)
>  | virtio-pci 0000:00:07.0: orig_addr 0x0 alloc_size 0x2000, iotlb_align_mask
>    0x800 stride 0x2: got slot 1649-1653/7168 (0x98328800)
>  | virtio-pci 0000:00:07.0: orig_addr 0x0 alloc_size 0x2000, iotlb_align_mask
>    0x800 stride 0x2: got slot 1653-1657/7168 (0x9832a800)
>
> This ends badly (typically buffer corruption and/or a hang) because
> swiotlb_alloc() is expecting a page-aligned allocation and so blindly
> returns a pointer to the 'struct page' corresponding to the allocation,
> therefore double-allocating the first half (2KiB slot) of the 4KiB page.
>
> Fix the problem by treating the allocation alignment separately to any
> additional alignment requirements from the device, using the maximum
> of the two as the stride to search the buffer slots and taking care
> to ensure a minimum of page-alignment for buffers larger than a page.
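The slot addresses in the log make the overlap easy to see: they end in 0x800, so
they are 2 KiB slot-aligned but not page-aligned, while swiotlb_alloc() hands the
caller the whole page containing the slot. A minimal user-space sketch of that
arithmetic (my own illustration, not kernel code; the constants assume 4 KiB pages
and 2 KiB IO_TLB slots, and the address is taken from the first log line above):

    #include <stdio.h>

    #define IO_TLB_SHIFT    11                      /* 2 KiB swiotlb slots */
    #define PAGE_SHIFT      12                      /* 4 KiB pages */

    int main(void)
    {
            unsigned long tlb_addr = 0x98326800UL;  /* slot 1645 from the log */
            unsigned long page_addr = (tlb_addr >> PAGE_SHIFT) << PAGE_SHIFT;

            /* swiotlb_alloc() returns the struct page starting at page_addr ... */
            printf("slot address returned by the search: 0x%lx\n", tlb_addr);
            printf("page handed back to the caller:      0x%lx\n", page_addr);

            /*
             * ... so the caller also ends up using the 2 KiB slot at page_addr,
             * which was never part of this allocation and can be handed out
             * again, i.e. the double allocation described above.
             */
            printf("overlapping slot: 0x%lx - 0x%lx\n", page_addr,
                   page_addr + (1UL << IO_TLB_SHIFT) - 1);
            return 0;
    }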
Could you also add some text that this patch fixes the scenario I described in the other email thread? Something like:
The changes to page alignment handling also fix a problem when the alloc_align_mask is zero. The page alignment handling added in the two mentioned commits could force alignment to more bits in orig_addr than specified by the device's DMA min_align_mask, resulting in a larger offset. Since swiotlb_max_mapping_size() is based only on the DMA min_align_mask, that larger offset plus the requested size could exceed IO_TLB_SEGSIZE slots, and the mapping could fail when it shouldn't.
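For illustration only (hypothetical numbers, just to make the above concrete,
assuming a 64 KiB PAGE_SIZE kernel, a device min_align_mask of 4 KiB - 1, and the
usual 2 KiB IO_TLB_SIZE with IO_TLB_SEGSIZE of 128): swiotlb_max_mapping_size()
advertises 256 KiB - 4 KiB = 252 KiB. But if the page alignment handling forces the
bounce buffer to preserve the low 64 KiB worth of orig_addr bits, an orig_addr
offset of, say, 60 KiB plus the 252 KiB request needs about 312 KiB of contiguous
slots, which exceeds the 256 KiB IO_TLB_SEGSIZE limit, so the mapping fails even
though the caller stayed within the advertised maximum.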
>
> Fixes: bbb73a103fbb ("swiotlb: fix a braino in the alignment check fix")
> Fixes: 0eee5ae10256 ("swiotlb: fix slot alignment checks")
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: Marek Szyprowski <m.szyprowski@samsung.com>
> Cc: Robin Murphy <robin.murphy@arm.com>
> Cc: Dexuan Cui <decui@microsoft.com>
> Reviewed-by: Petr Tesarik <petr.tesarik1@huawei-partners.com>
> Signed-off-by: Will Deacon <will@kernel.org>
> ---
>  kernel/dma/swiotlb.c | 28 +++++++++++++++-------------
>  1 file changed, 15 insertions(+), 13 deletions(-)
>
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index b079a9a8e087..2ec2cc81f1a2 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -982,7 +982,7 @@ static int swiotlb_search_pool_area(struct device *dev, struct io_tlb_pool *pool
>  		phys_to_dma_unencrypted(dev, pool->start) & boundary_mask;
>  	unsigned long max_slots = get_max_slots(boundary_mask);
>  	unsigned int iotlb_align_mask =
> -		dma_get_min_align_mask(dev) | alloc_align_mask;
> +		dma_get_min_align_mask(dev) & ~(IO_TLB_SIZE - 1);
>  	unsigned int nslots = nr_slots(alloc_size), stride;
>  	unsigned int offset = swiotlb_align_offset(dev, orig_addr);
>  	unsigned int index, slots_checked, count = 0, i;
> @@ -993,19 +993,18 @@ static int swiotlb_search_pool_area(struct device *dev, struct io_tlb_pool *pool
>  	BUG_ON(!nslots);
>  	BUG_ON(area_index >= pool->nareas);
>
> +	/*
> +	 * For mappings with an alignment requirement don't bother looping to
> +	 * unaligned slots once we found an aligned one.
> +	 */
> +	stride = get_max_slots(max(alloc_align_mask, iotlb_align_mask));
> +
>  	/*
>  	 * For allocations of PAGE_SIZE or larger only look for page aligned
>  	 * allocations.
>  	 */
>  	if (alloc_size >= PAGE_SIZE)
> -		iotlb_align_mask |= ~PAGE_MASK;
> -	iotlb_align_mask &= ~(IO_TLB_SIZE - 1);
> -
> -	/*
> -	 * For mappings with an alignment requirement don't bother looping to
> -	 * unaligned slots once we found an aligned one.
> -	 */
> -	stride = (iotlb_align_mask >> IO_TLB_SHIFT) + 1;
> +		stride = umax(stride, PAGE_SHIFT - IO_TLB_SHIFT + 1);
Is this special handling of alloc_size >= PAGE_SIZE really needed? I think the comment is somewhat inaccurate. If orig_addr is non-zero and alloc_align_mask is zero, the requirement is for the alignment to match the DMA min_align_mask bits in orig_addr, even if the allocation is larger than a page. And with Patch 3 of this series, the swiotlb_alloc() case passes in alloc_align_mask to handle page size and larger requests. So it seems like this doesn't do anything useful unless orig_addr and alloc_align_mask are both zero, and there aren't any cases of that after this patch series. If the caller wants alignment, it should specify it with alloc_align_mask.
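To make that concrete, here is a rough user-space sketch of the stride this hunk
would compute (my own paraphrase, not the kernel code; it assumes 4 KiB pages and
get_max_slots() as currently written in swiotlb.c):

    #include <stdio.h>

    #define IO_TLB_SHIFT    11                      /* 2 KiB slots */
    #define PAGE_SHIFT      12
    #define PAGE_SIZE       (1UL << PAGE_SHIFT)

    /* paraphrase of the kernel helper: slots spanned by 'mask', plus one */
    static unsigned long get_max_slots(unsigned long mask)
    {
            return (mask >> IO_TLB_SHIFT) + 1;
    }

    static unsigned long umax(unsigned long a, unsigned long b)
    {
            return a > b ? a : b;
    }

    int main(void)
    {
            /* e.g. swiotlb_alloc() after Patch 3: page-aligned allocation */
            unsigned long alloc_align_mask = PAGE_SIZE - 1;
            unsigned long iotlb_align_mask = 0;     /* min_align_mask == 0 */
            unsigned long alloc_size = 2 * PAGE_SIZE;
            unsigned long stride;

            stride = get_max_slots(umax(alloc_align_mask, iotlb_align_mask));
            if (alloc_size >= PAGE_SIZE)            /* the branch in question */
                    stride = umax(stride, PAGE_SHIFT - IO_TLB_SHIFT + 1);

            /* prints "stride = 2": only page-aligned slots get probed */
            printf("stride = %lu\n", stride);
            return 0;
    }

With alloc_align_mask already covering the page bits (as swiotlb_alloc() would do
after Patch 3), the alloc_size >= PAGE_SIZE branch doesn't change the result.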
>
>  	spin_lock_irqsave(&area->lock, flags);
>  	if (unlikely(nslots > pool->area_nslabs - area->used))
> @@ -1015,11 +1014,14 @@ static int swiotlb_search_pool_area(struct device *dev, struct io_tlb_pool *pool
>  	index = area->index;
>
>  	for (slots_checked = 0; slots_checked < pool->area_nslabs; ) {
> -		slot_index = slot_base + index;
> +		phys_addr_t tlb_addr;
>
> -		if (orig_addr &&
> -		    (slot_addr(tbl_dma_addr, slot_index) &
> -		     iotlb_align_mask) != (orig_addr & iotlb_align_mask)) {
> +		slot_index = slot_base + index;
> +		tlb_addr = slot_addr(tbl_dma_addr, slot_index);
> +
> +		if ((tlb_addr & alloc_align_mask) ||
> +		    (orig_addr && (tlb_addr & iotlb_align_mask) !=
> +				  (orig_addr & iotlb_align_mask))) {
It looks like these changes will cause a mapping failure in some iommu_dma_map_page() cases that previously didn't fail. Everything is made right by Patch 4 of your series, but from a bisect standpoint, there will be a gap where things are worse. In [1], I think Nicolin reported a crash with just this patch applied.
While the iommu_dma_map_page() case can already fail for "too large" requests because it doesn't set a max mapping size, this patch can cause smaller requests to fail as well until Patch 4 gets applied. That might be a problem worth avoiding, perhaps by merging the Patch 4 changes into this patch.
Michael
[1] https://lore.kernel.org/linux-iommu/SN6PR02MB415727E61B5295C259CCB268D4512@SN6PR02MB4157.namprd02.prod.outlook.com/T/#m0ec36324b17947adefc18b3ac715e1952150f89d
>  			index = wrap_area_index(pool, index + 1);
>  			slots_checked++;
>  			continue;
> --
> 2.44.0.rc0.258.g7320e95886-goog