From: Liu, Yi L <yi.l.liu@intel.com>
Subject: RE: [PATCH v5 11/15] vfio/type1: Allow invalidating first-level/stage IOMMU cache

Hi Eric,

> From: Auger Eric <eric.auger@redhat.com>
> Sent: Monday, July 20, 2020 5:42 PM
>
> Yi,
>
> On 7/12/20 1:21 PM, Liu Yi L wrote:
> > This patch provides an interface allowing the userspace to invalidate
> > IOMMU cache for first-level page table. It is required when the first
> > level IOMMU page table is not managed by the host kernel in the nested
> > translation setup.
> >
> > Cc: Kevin Tian <kevin.tian@intel.com>
> > CC: Jacob Pan <jacob.jun.pan@linux.intel.com>
> > Cc: Alex Williamson <alex.williamson@redhat.com>
> > Cc: Eric Auger <eric.auger@redhat.com>
> > Cc: Jean-Philippe Brucker <jean-philippe@linaro.org>
> > Cc: Joerg Roedel <joro@8bytes.org>
> > Cc: Lu Baolu <baolu.lu@linux.intel.com>
> > Signed-off-by: Liu Yi L <yi.l.liu@intel.com>
> > Signed-off-by: Eric Auger <eric.auger@redhat.com>
> > Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
> > ---
> > v1 -> v2:
> > *) rename from "vfio/type1: Flush stage-1 IOMMU cache for nesting type"
> > *) rename vfio_cache_inv_fn() to vfio_dev_cache_invalidate_fn()
> > *) vfio_dev_cache_invalidate_fn() always successful
> > *) remove VFIO_IOMMU_CACHE_INVALIDATE, and reuse VFIO_IOMMU_NESTING_OP
> > ---
> > drivers/vfio/vfio_iommu_type1.c | 50 +++++++++++++++++++++++++++++++++++++++++
> > include/uapi/linux/vfio.h | 3 +++
> > 2 files changed, 53 insertions(+)
> >
> > diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> > index f0f21ff..960cc59 100644
> > --- a/drivers/vfio/vfio_iommu_type1.c
> > +++ b/drivers/vfio/vfio_iommu_type1.c
> > @@ -3073,6 +3073,53 @@ static long vfio_iommu_handle_pgtbl_op(struct vfio_iommu *iommu,
> > return ret;
> > }
> >
> > +static int vfio_dev_cache_invalidate_fn(struct device *dev, void *data)
> > +{
> > + struct domain_capsule *dc = (struct domain_capsule *)data;
> > + unsigned long arg = *(unsigned long *)dc->data;
> > +
> > + iommu_cache_invalidate(dc->domain, dev, (void __user *)arg);
> > + return 0;
> > +}
> > +
> > +static long vfio_iommu_invalidate_cache(struct vfio_iommu *iommu,
> > + unsigned long arg)
> > +{
> > + struct domain_capsule dc = { .data = &arg };
> > + struct vfio_group *group;
> > + struct vfio_domain *domain;
> > + int ret = 0;
> > + struct iommu_nesting_info *info;
> > +
> > + mutex_lock(&iommu->lock);
> > + /*
> > + * Cache invalidation is required for any nesting IOMMU,
> > + * so no need to check system-wide PASID support.
> > + */
> > + info = iommu->nesting_info;
> > + if (!info || !(info->features & IOMMU_NESTING_FEAT_CACHE_INVLD)) {
> > + ret = -EOPNOTSUPP;
> > + goto out_unlock;
> > + }
> > +
> > + group = vfio_find_nesting_group(iommu);
> so I see you reuse it here. But still wondering if you can't directly
> set dc.domain and dc.group below using list_first_entry?

I guess yes for the current implementation. I also considered adding a
helper function that returns a dc with the group and domain fields
initialized, since that is common code used by both the bind/unbind and
cache_inv paths. Perhaps something like get_domain_capsule_for_nesting().

> > + if (!group) {
> > + ret = -EINVAL;
> > + goto out_unlock;
> > + }
> > +
> > + domain = list_first_entry(&iommu->domain_list,
> > + struct vfio_domain, next);
> > + dc.group = group;
> > + dc.domain = domain->domain;
> > + iommu_group_for_each_dev(group->iommu_group, &dc,
> > + vfio_dev_cache_invalidate_fn);
> > +
> > +out_unlock:
> > + mutex_unlock(&iommu->lock);
> > + return ret;
> > +}
> > +
> > static long vfio_iommu_type1_nesting_op(struct vfio_iommu *iommu,
> > unsigned long arg)
> > {
> > @@ -3095,6 +3142,9 @@ static long vfio_iommu_type1_nesting_op(struct vfio_iommu *iommu,
> > case VFIO_IOMMU_NESTING_OP_UNBIND_PGTBL:
> > ret = vfio_iommu_handle_pgtbl_op(iommu, false, arg + minsz);
> > break;
> > + case VFIO_IOMMU_NESTING_OP_CACHE_INVLD:
> > + ret = vfio_iommu_invalidate_cache(iommu, arg + minsz);
> > + break;
> > default:
> > ret = -EINVAL;
> > }
> > diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
> > index a8ad786..845a5800 100644
> > --- a/include/uapi/linux/vfio.h
> > +++ b/include/uapi/linux/vfio.h
> > @@ -1225,6 +1225,8 @@ struct vfio_iommu_type1_pasid_request {
> > * +-----------------+-----------------------------------------------+
> > * | UNBIND_PGTBL | struct iommu_gpasid_bind_data |
> > * +-----------------+-----------------------------------------------+
> > + * | CACHE_INVLD | struct iommu_cache_invalidate_info |
> > + * +-----------------+-----------------------------------------------+
> > *
> > * returns: 0 on success, -errno on failure.
> > */
> > @@ -1237,6 +1239,7 @@ struct vfio_iommu_type1_nesting_op {
> >
> > #define VFIO_IOMMU_NESTING_OP_BIND_PGTBL (0)
> > #define VFIO_IOMMU_NESTING_OP_UNBIND_PGTBL (1)
> > +#define VFIO_IOMMU_NESTING_OP_CACHE_INVLD (2)
> >
> > #define VFIO_IOMMU_NESTING_OP _IO(VFIO_TYPE, VFIO_BASE + 19)
> >
> >
> Otherwise looks good to me

thanks,

Regards,
Yi Liu

> Thanks
>
> Eric
