Subject: [PATCH v5 14/15] vfio: Document dual stage control
From: Eric Auger <>

The VFIO API was enhanced to support nested stage control: a bunch of
new ioctls and a usage guideline.

Let's document the process to follow to set up nested mode.

Cc: Kevin Tian <>
Cc: Jacob Pan <>
Cc: Alex Williamson <>
Cc: Eric Auger <>
Cc: Jean-Philippe Brucker <>
Cc: Joerg Roedel <>
Cc: Lu Baolu <>
Reviewed-by: Stefan Hajnoczi <>
Signed-off-by: Eric Auger <>
Signed-off-by: Liu Yi L <>
---
v3 -> v4:
*) add Reviewed-by from Stefan Hajnoczi

v2 -> v3:
*) address comments from Stefan Hajnoczi

v1 -> v2:
*) new in v2; compared with Eric's original version, PASID table bind
and fault reporting are removed as this series doesn't cover them.
Original version from Eric.
Documentation/driver-api/vfio.rst | 67 +++++++++++++++++++++++++++++++++++++++
1 file changed, 67 insertions(+)

diff --git a/Documentation/driver-api/vfio.rst b/Documentation/driver-api/vfio.rst
index f1a4d3c..0672c45 100644
--- a/Documentation/driver-api/vfio.rst
+++ b/Documentation/driver-api/vfio.rst
@@ -239,6 +239,73 @@ group and can access them as follows::
/* Gratuitous device reset and go... */
ioctl(device, VFIO_DEVICE_RESET);

+IOMMU Dual Stage Control
+------------------------
+
+Some IOMMUs support 2 stages/levels of translation. "Stage" corresponds to
+the ARM terminology while "level" corresponds to Intel's VT-d terminology.
+In the following text we use either without distinction.
+
+This is useful when a guest is exposed to a virtual IOMMU and some devices
+are assigned to it through VFIO. The guest OS can then use stage 1
+(GIOVA -> GPA or GVA -> GPA), while the hypervisor uses stage 2 for VM
+isolation (GPA -> HPA).
+Under dual stage translation, the guest gets ownership of the stage 1 page
+tables and also owns the stage 1 configuration structures. The hypervisor
+owns the root configuration structure (for security reasons), including the
+stage 2 configuration. This works as long as the configuration structures
+and page table formats are compatible between the virtual IOMMU and the
+physical IOMMU.
+
+Assuming the HW supports it, this nested mode is selected by choosing the
+VFIO_TYPE1_NESTING_IOMMU type through::
+
+    ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_NESTING_IOMMU);
+
+This forces the hypervisor to use stage 2, leaving stage 1 available for
+guest usage. The guest stage 1 format depends on the IOMMU vendor, and so
+does the nesting configuration method. User space should check the format
+and the configuration method after setting the nesting type by::
+
+    ioctl(container->fd, VFIO_IOMMU_GET_INFO, &nesting_info);
+
+Details can be found in Documentation/userspace-api/iommu.rst. For Intel
+VT-d, each stage 1 page table is bound to the host by::
+
+    nesting_op->flags = VFIO_IOMMU_NESTING_OP_BIND_PGTBL;
+    memcpy(&nesting_op->data, &bind_data, sizeof(bind_data));
+    ioctl(container->fd, VFIO_IOMMU_NESTING_OP, nesting_op);
+
+As mentioned above, the guest OS may use stage 1 for GIOVA -> GPA or
+GVA -> GPA. GVA -> GPA page tables are available when PASID (Process
+Address Space ID) is exposed to the guest, e.g. when the guest has
+PASID-capable devices assigned. For such page table binding, the bind_data
+should include PASID info, which is allocated either by the guest itself
+or by the host. This depends on the hardware vendor: e.g. Intel VT-d
+requires the PASID to be allocated from the host. This requirement is
+defined by the Virtual Command Support in the VT-d 3.0 spec, which states
+that guest software running on VT-d should allocate PASIDs from the host
+kernel. To allocate a PASID from the host, user space should check the
+IOMMU_NESTING_FEAT_SYSWIDE_PASID bit of the nesting info reported by the
+host kernel through VFIO_IOMMU_GET_INFO. User space could then allocate a
+PASID from the host by::
+
+    ioctl(container, VFIO_IOMMU_PASID_REQUEST, &req);
+
+With the first stage/level page table bound to the host, the guest stage 1
+translation can be combined with the hypervisor stage 2 translation to get
+the final address.
+
+When the guest invalidates stage 1 related caches, the invalidations must
+be forwarded to the host through::
+
+    nesting_op->flags = VFIO_IOMMU_NESTING_OP_CACHE_INVLD;
+    memcpy(&nesting_op->data, &inv_data, sizeof(inv_data));
+    ioctl(container->fd, VFIO_IOMMU_NESTING_OP, nesting_op);
+
+Those invalidations can happen at various granularity levels (page,
+context, ...).
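
To make the flow described above concrete, here is a minimal userspace
sketch in C. VFIO_IOMMU_NESTING_OP and VFIO_IOMMU_PASID_REQUEST are
introduced by this series and are not in mainline <linux/vfio.h>, so the
structure layouts, request numbers and flag values marked "assumed" below
are illustrative stand-ins modeled on the snippets quoted in the document,
not the authoritative uAPI. Only VFIO_SET_IOMMU, VFIO_IOMMU_GET_INFO and
VFIO_TYPE1_NESTING_IOMMU are existing definitions.

    /* Hedged sketch of dual stage setup: nesting selection, stage 1
     * page table bind, host PASID allocation and cache invalidation
     * forwarding. Group/device setup is elided (see the usage example
     * earlier in vfio.rst). */
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/types.h>
    #include <linux/vfio.h>

    struct vfio_iommu_type1_nesting_op {       /* assumed layout */
            __u32 argsz;
            __u32 flags;                       /* BIND_PGTBL / CACHE_INVLD */
            __u8  data[128];                   /* vendor bind/inv payload */
    };
    #define VFIO_IOMMU_NESTING_OP   _IO(VFIO_TYPE, VFIO_BASE + 19) /* assumed */
    #define VFIO_IOMMU_NESTING_OP_BIND_PGTBL   0                   /* assumed */
    #define VFIO_IOMMU_NESTING_OP_CACHE_INVLD  2                   /* assumed */

    struct vfio_iommu_type1_pasid_request {    /* assumed layout */
            __u32 argsz;
            __u32 flags;
            struct { __u32 min; __u32 max; } range;
    };
    #define VFIO_IOMMU_PASID_REQUEST _IO(VFIO_TYPE, VFIO_BASE + 18) /* assumed */
    #define VFIO_IOMMU_FLAG_ALLOC_PASID (1 << 0)                    /* assumed */

    /* Select nested mode and bind a guest stage 1 page table. */
    static int bind_stage1(int container, const void *bind_data, size_t len)
    {
            struct vfio_iommu_type1_info info = { .argsz = sizeof(info) };
            struct vfio_iommu_type1_nesting_op op = { .argsz = sizeof(op) };

            if (len > sizeof(op.data))
                    return -1;

            /* Host keeps stage 2; stage 1 is left to the guest. */
            if (ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_NESTING_IOMMU))
                    return -1;

            /* Check stage 1 format and features (e.g. SYSWIDE_PASID).
             * The series reports these as a capability chained onto
             * VFIO_IOMMU_GET_INFO; walking the chain is elided here. */
            if (ioctl(container, VFIO_IOMMU_GET_INFO, &info))
                    return -1;

            op.flags = VFIO_IOMMU_NESTING_OP_BIND_PGTBL;
            memcpy(op.data, bind_data, len);   /* e.g. VT-d bind_data */
            return ioctl(container, VFIO_IOMMU_NESTING_OP, &op);
    }

    /* Allocate a PASID from the host (required on VT-d, see above). */
    static int alloc_host_pasid(int container, __u32 min, __u32 max)
    {
            struct vfio_iommu_type1_pasid_request req = {
                    .argsz = sizeof(req),
                    .flags = VFIO_IOMMU_FLAG_ALLOC_PASID,
                    .range = { .min = min, .max = max },
            };

            return ioctl(container, VFIO_IOMMU_PASID_REQUEST, &req);
    }

    /* Forward a guest stage 1 cache invalidation to the host. */
    static int forward_cache_inv(int container, const void *inv, size_t len)
    {
            struct vfio_iommu_type1_nesting_op op = { .argsz = sizeof(op) };

            if (len > sizeof(op.data))
                    return -1;

            op.flags = VFIO_IOMMU_NESTING_OP_CACHE_INVLD;
            memcpy(op.data, inv, len);
            return ioctl(container, VFIO_IOMMU_NESTING_OP, &op);
    }

Error handling here is reduced to -1 returns; a real VMM would check the
returned errno and would locate the nesting info capability in the
VFIO_IOMMU_GET_INFO capability chain before attempting the bind.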
