From: Niklas Schnelle <>
Subject: [RFC] iommu/virtio: Use single flush queue (EXPERIMENTAL)
Date: Wed, 26 Jul 2023 13:14:33 +0200
Just like on paged s390 guests with their virtual IOMMU, syncing mappings via virtio-iommu is quite expensive. It can thus benefit from queueing unmapped IOVAs and flushing them in batches, but less so from parallel flushes; batched but non-parallel flushing is exactly what the shadow_on_flush flag introduced for s390 tunes dma-iommu to do.
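As a rough standalone illustration of the scheme (plain userspace C; every name below is invented for this sketch and does not match the dma-iommu internals): unmapped IOVA ranges go into one global queue and the expensive sync is paid once per batch instead of once per unmap:

#include <stdio.h>
#include <stddef.h>

#define FQ_SIZE 256

/* Toy model of a single flush queue; invented names, not kernel code. */
struct fq_entry {
	unsigned long iova;
	size_t size;
};

static struct fq_entry queue[FQ_SIZE];
static unsigned int queued;

/* Stands in for the expensive hypervisor round trip; one call
 * covers every range queued so far. */
static void expensive_sync(void)
{
	printf("sync: flushing %u queued ranges in one request\n", queued);
	queued = 0;
}

/* Unmap only queues the IOVA range; the costly sync is deferred
 * until the queue is full (the real scheme also flushes on a timer). */
static void queue_unmap(unsigned long iova, size_t size)
{
	if (queued == FQ_SIZE)
		expensive_sync();
	queue[queued].iova = iova;
	queue[queued].size = size;
	queued++;
}

int main(void)
{
	for (unsigned long i = 0; i < 1024; i++)
		queue_unmap(i << 12, 4096);	/* 1024 unmaps...      */
	expensive_sync();			/* ...four syncs total. */
	return 0;
}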
For this to work, .flush_iotlb_all is implemented. Furthermore, .iotlb_sync_map is also implemented and used to pull the sync out of the mapping operation, gaining some additional batching and performance.
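For reference, a reduced sketch of how the split looks from the caller's side (the signatures are heavily simplified compared to the real struct iommu_ops, and map_range is purely illustrative): the per-page map calls only queue requests, and a single sync at the end makes them visible:

#include <stddef.h>

/* Simplified model of the callback split; the real iommu_ops
 * prototypes take more arguments than shown here. */
struct ops {
	int  (*map_pages)(unsigned long iova, size_t size);	  /* queue only */
	int  (*iotlb_sync_map)(unsigned long iova, size_t size); /* one sync   */
	void (*flush_iotlb_all)(void);				  /* full flush */
};

/* Caller-side batching: npages cheap queued maps followed by a
 * single sync, instead of npages synchronous round trips. */
static int map_range(const struct ops *ops, unsigned long iova, size_t npages)
{
	for (size_t i = 0; i < npages; i++) {
		int ret = ops->map_pages(iova + (i << 12), 4096);
		if (ret)
			return ret;	/* caller unwinds partial mappings */
	}
	return ops->iotlb_sync_map(iova, npages << 12);
}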
In a basic test with NVMe pass-through to a KVM guest on a Ryzen 3900X, these changes together lead to about 19% more IOPS in a fio test, and slightly more bandwidth as well.
Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com>
---
Note: The idea of using the single flush queue scheme from my series
"iommu/dma: s390 DMA API conversion and optimized IOTLB flushing"[0]
for virtio-iommu was already mentioned in the cover letter. I now
wanted to explore this with this patch, which may also serve as a test
vehicle for making the single flush queue scheme usable on non-s390.
Besides the limited testing, this is marked experimental mainly because the use of queuing needs to be a conscious decision: it allows continued access to unmapped pages for up to a second with the currently proposed single flush queue mechanism. Also, it might make sense to split this patch so that the introduction and the use of .iotlb_sync_map happen separately, but as a test vehicle I found it easier to consume as a single patch.
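To make that window concrete, here is a toy userspace model of the timing hazard (again with invented names; only the 1 second corresponds to the flush queue timeout discussed above): between queue_unmap() returning and the timer-driven flush, the device can still use the stale translation:

#include <stdbool.h>
#include <time.h>

#define FLUSH_TIMEOUT_SEC 1	/* window in which stale entries survive */

static time_t first_queued;
static bool queue_dirty;

/* Unmap returns immediately; the translation stays visible to the
 * device until the batched flush below actually runs. */
static void queue_unmap(void)
{
	if (!queue_dirty) {
		first_queued = time(NULL);
		queue_dirty = true;
	}
}

/* Called periodically; one batched sync invalidates everything queued,
 * closing the up-to-FLUSH_TIMEOUT_SEC window. */
static void maybe_flush(void)
{
	if (queue_dirty && time(NULL) - first_queued >= FLUSH_TIMEOUT_SEC)
		queue_dirty = false;	/* stands in for the real sync */
}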
[0]: https://lore.kernel.org/linux-iommu/20230717-dma_iommu-v11-0-a7a0b83c355c@linux.ibm.com/
 drivers/iommu/virtio-iommu.c | 28 +++++++++++++++++++++++++++-
 1 file changed, 27 insertions(+), 1 deletion(-)
diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
index 3551ed057774..f29eb4ce2b88 100644
--- a/drivers/iommu/virtio-iommu.c
+++ b/drivers/iommu/virtio-iommu.c
@@ -843,7 +843,7 @@ static int viommu_map_pages(struct iommu_domain *domain, unsigned long iova,
 		.flags = cpu_to_le32(flags),
 	};
 
-	ret = viommu_send_req_sync(vdomain->viommu, &map, sizeof(map));
+	ret = viommu_add_req(vdomain->viommu, &map, sizeof(map));
 	if (ret) {
 		viommu_del_mappings(vdomain, iova, end);
 		return ret;
@@ -909,6 +909,27 @@ static void viommu_iotlb_sync(struct iommu_domain *domain,
 {
 	struct viommu_domain *vdomain = to_viommu_domain(domain);
 
+	if (!vdomain->nr_endpoints)
+		return;
 	viommu_sync_req(vdomain->viommu);
 }
 
+static int viommu_iotlb_sync_map(struct iommu_domain *domain,
+				 unsigned long iova, size_t size)
+{
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+
+	if (!vdomain->nr_endpoints)
+		return 0;
+	return viommu_sync_req(vdomain->viommu);
+}
+
+static void viommu_flush_iotlb_all(struct iommu_domain *domain)
+{
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+
+	if (!vdomain->nr_endpoints)
+		return;
+	viommu_sync_req(vdomain->viommu);
+}
+
@@ -991,6 +1012,7 @@ static struct iommu_device *viommu_probe_device(struct device *dev)
 		if (ret)
 			goto err_free_dev;
 	}
+	dev->iommu->shadow_on_flush = 1;
 
 	return &viommu->iommu;
 
@@ -1037,6 +1059,8 @@ static bool viommu_capable(struct device *dev, enum iommu_cap cap)
 	switch (cap) {
 	case IOMMU_CAP_CACHE_COHERENCY:
 		return true;
+	case IOMMU_CAP_DEFERRED_FLUSH:
+		return true;
 	default:
 		return false;
 	}
@@ -1057,7 +1081,9 @@ static struct iommu_ops viommu_ops = {
 		.map_pages		= viommu_map_pages,
 		.unmap_pages		= viommu_unmap_pages,
 		.iova_to_phys		= viommu_iova_to_phys,
+		.flush_iotlb_all	= viommu_flush_iotlb_all,
 		.iotlb_sync		= viommu_iotlb_sync,
+		.iotlb_sync_map		= viommu_iotlb_sync_map,
 		.free			= viommu_domain_free,
 	}
 };

base-commit: 5514392fe77cd45b0d33bf239f13ba594a6759e5
-- 
2.39.2