Subject: Re: [PATCH V3 13/13] HV/Storvsc: Add Isolation VM support for storvsc driver
From: Tianyu Lan <>
Date: Sat, 21 Aug 2021 00:01:12 +0800
On 8/20/2021 12:32 PM, hch@lst.de wrote:
> On Thu, Aug 19, 2021 at 06:17:40PM +0000, Michael Kelley wrote:
>>> +#define storvsc_dma_map(dev, page, offset, size, dir) \
>>> +	dma_map_page(dev, page, offset, size, dir)
>>> +
>>> +#define storvsc_dma_unmap(dev, dma_range, dir) \
>>> +	dma_unmap_page(dev, dma_range.dma, \
>>> +		       dma_range.mapping_size, \
>>> +		       dir ? DMA_FROM_DEVICE : DMA_TO_DEVICE)
>>> +
>>
>> Each of these macros is used only once. IMHO, they don't
>> add a lot of value. Just coding dma_map/unmap_page()
>> inline would be fine and eliminate these lines of code.
>
> Yes, I had the same thought when looking over the code. Especially
> as macros tend to further obfuscate the code (compared to actual helper
> functions).
>
>>> +	for (i = 0; i < request->hvpg_count; i++)
>>> +		storvsc_dma_unmap(&device->device,
>>> +				  request->dma_range[i],
>>> +				  request->vstor_packet.vm_srb.data_in == READ_TYPE);
>>
>> I think you can directly get the DMA direction as request->cmd->sc_data_direction.
>
> Yes.
>
>>>
>>> @@ -1824,6 +1848,13 @@ static int storvsc_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *scmnd)
>>>  	payload->range.len = length;
>>>  	payload->range.offset = offset_in_hvpg;
>>>
>>> +	cmd_request->dma_range = kcalloc(hvpg_count,
>>> +					 sizeof(*cmd_request->dma_range),
>>> +					 GFP_ATOMIC);
>>
>> With this patch, it appears that storvsc_queuecommand() is always
>> doing bounce buffering, even when running in a non-isolated VM.
>> The dma_range is always allocated, and the inner loop below does
>> the dma mapping for every I/O page. The corresponding code in
>> storvsc_on_channel_callback() that does the dma unmap allows for
>> the dma_range to be NULL, but that never happens.
>
> Maybe I'm missing something in the hyperv code, but I don't think
> dma_map_page would bounce buffer for the non-isolated case. It
> will just return the physical address.
Yes, the swiotlb_force mode isn't enabled in a non-isolated VM, so dma_map_page() returns the physical address directly.
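Will drop the two macros and call dma_map_page()/dma_unmap_page() directly, taking the direction from the scsi_cmnd as Michael suggests. A rough sketch of the inlined unmap loop (untested, field names as in this patch):

	for (i = 0; i < request->hvpg_count; i++)
		dma_unmap_page(&device->device,
			       request->dma_range[i].dma,
			       request->dma_range[i].mapping_size,
			       request->cmd->sc_data_direction);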
>
>>> +	if (!cmd_request->dma_range) {
>>> +		ret = -ENOMEM;
>>
>> The other memory allocation failure in this function returns
>> SCSI_MLQUEUE_DEVICE_BUSY. It may be debatable as to whether
>> that's the best approach, but that's a topic for a different patch. I
>> would suggest being consistent and using the same return code
>> here.
>
> Independent of if SCSI_MLQUEUE_DEVICE_BUSY is good (it is a common
> pattern in SCSI drivers), ->queuecommand can't return normal
> negative errnos. It must return the SCSI_MLQUEUE_* codes or 0.
> We should probably change the return type of the method definition
> to a suitable enum to make this more clear.
Yes, will update. Thanks.
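i.e. on allocation failure the function would return the SCSI_MLQUEUE_* code directly instead of a negative errno (sketch):

	cmd_request->dma_range = kcalloc(hvpg_count,
					 sizeof(*cmd_request->dma_range),
					 GFP_ATOMIC);
	if (!cmd_request->dma_range)
		return SCSI_MLQUEUE_DEVICE_BUSY;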
>
>>> +		if (offset_in_hvpg) {
>>> +			payload->range.offset = dma & ~HV_HYP_PAGE_MASK;
>>> +			offset_in_hvpg = 0;
>>> +		}
>>
>> I'm not clear on why payload->range.offset needs to be set again.
>> Even after the dma mapping is done, doesn't the offset in the first
>> page have to be the same? If it wasn't the same, Hyper-V wouldn't
>> be able to process the PFN list correctly. In fact, couldn't the above
>> code just always set offset_in_hvpg = 0?
>
> Careful. DMA mapping is supposed to keep the offset in the page, but
> for that the DMA mapping code needs to know what the device considers a
> "page". For that the driver needs to set the min_align_mask field in
> struct device_dma_parameters.
The default allocation unit of the swiotlb bounce buffer is IO_TLB_SIZE (2KB). I also find that some SCSI request commands have a length of less than 100 bytes. Keeping a small allocation unit avoids wasting bounce buffer memory; we just need to update the offset.
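If keeping the in-page offset stable is required, the driver probe path would need something like this (sketch; assumes dma_set_min_align_mask() is the right helper for setting min_align_mask here):

	/* Tell swiotlb what the device considers a "page" so that
	 * bounce buffering preserves the offset within a Hyper-V page.
	 */
	dma_set_min_align_mask(&device->device, HV_HYP_PAGE_SIZE - 1);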
>
>>
>> The whole approach here is to do dma remapping on each individual page
>> of the I/O buffer. But wouldn't it be possible to use dma_map_sg() to map
>> each scatterlist entry as a unit? Each scatterlist entry describes a range of
>> physically contiguous memory. After dma_map_sg(), the resulting dma
>> address must also refer to a physically contiguous range in the swiotlb
>> bounce buffer memory. So at the top of the "for" loop over the scatterlist
>> entries, do dma_map_sg() if we're in an isolated VM. Then compute the
>> hvpfn value based on the dma address instead of sg_page(). But everything
>> else is the same, and the inner loop for populating the pfn_array is unmodified.
>> Furthermore, the dma_range array that you've added is not needed, since
>> scatterlist entries already have a dma_address field for saving the mapped
>> address, and dma_unmap_sg() uses that field.
>
> Yes, I think dma_map_sg is the right thing to use here, probably even
> for the non-isolated case so that we can get the hv drivers out of their
> little corner and into being more like a normal kernel driver. That
> is, use the scsi_dma_map/scsi_dma_unmap helpers, and then iterate over
> the dma addresses one page at a time using for_each_sg_dma_page.
>
I wonder whether we could introduce a new API, scsi_dma_map_with_callback, where the caller provides a callback that is run inside the sg loop of dma_direct_map_sg(). The caller may need to update some data structure in that loop; here the driver needs to populate payload->range.pfn_array[]. That is why I didn't use dma_map_sg() here.
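Or do you mean something like the following, using for_each_sg_dma_page() to populate pfn_array[] after the mapping is done (untested sketch; assumes PAGE_SIZE == HV_HYP_PAGE_SIZE as on x86)?

	struct sg_dma_page_iter dma_iter;
	int nents, j = 0;

	nents = scsi_dma_map(scmnd);
	if (nents < 0)
		return SCSI_MLQUEUE_DEVICE_BUSY;

	/* Walk the mapped dma addresses one page at a time. */
	for_each_sg_dma_page(scsi_sglist(scmnd), &dma_iter, nents, 0)
		payload->range.pfn_array[j++] =
			sg_page_iter_dma_address(&dma_iter) >> HV_HYP_PAGE_SHIFT;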
>>
>> One thing: There's a maximum swiotlb mapping size, which I think works
>> out to be 256 Kbytes. See swiotlb_max_mapping_size(). We need to make
>> sure that we don't get a scatterlist entry bigger than this size. But I think
>> this already happens because you set the device->dma_mask field in
>> Patch 11 of this series. __scsi_init_queue checks for this setting and
>> sets max_sectors to limit transfers to the max mapping size.
>
> Indeed.
>
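For reference, the check in __scsi_init_queue() looks roughly like this, so transfers (and hence scatterlist entries) should already be capped to the swiotlb max mapping size:

	if (dev->dma_mask) {
		shost->max_sectors = min_t(unsigned int, shost->max_sectors,
				dma_max_mapping_size(dev) >> SECTOR_SHIFT);
	}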