Subject: Re: [PATCH v1 7/8] vfio/type1: Add VFIO_IOMMU_CACHE_INVALIDATE
From: Eric Auger <eric.auger@redhat.com>
Date: Thu, 16 Apr 2020
Hi Kevin,

On 4/16/20 3:28 PM, Tian, Kevin wrote:
>> From: Auger Eric <eric.auger@redhat.com>
>> Sent: Thursday, April 16, 2020 8:43 PM
>>
>> Hi Kevin,
>> On 4/16/20 2:09 PM, Tian, Kevin wrote:
>>>> From: Liu, Yi L <yi.l.liu@intel.com>
>>>> Sent: Thursday, April 16, 2020 6:40 PM
>>>>
>>>> Hi Alex,
>>>> I still have a direction question for you; better to get your
>>>> agreement before heading forward.
>>>>
>>>>> From: Alex Williamson <alex.williamson@redhat.com>
>>>>> Sent: Friday, April 3, 2020 11:35 PM
>>>> [...]
>>>>>>>> + *
>>>>>>>> + * returns: 0 on success, -errno on failure.
>>>>>>>> + */
>>>>>>>> +struct vfio_iommu_type1_cache_invalidate {
>>>>>>>> +	__u32 argsz;
>>>>>>>> +	__u32 flags;
>>>>>>>> +	struct iommu_cache_invalidate_info cache_info;
>>>>>>>> +};
>>>>>>>> +#define VFIO_IOMMU_CACHE_INVALIDATE	_IO(VFIO_TYPE, VFIO_BASE + 24)
>>>>>>>
>>>>>>> The future extension capabilities of this ioctl worry me; I wonder
>>>>>>> if we should do another data[] with a flag defining that data as
>>>>>>> CACHE_INFO.
>>>>>>
>>>>>> Can you elaborate? Does it mean that this way we don't rely on the
>>>>>> iommu driver to provide the version_to_size conversion, and instead
>>>>>> just pass data[] to the iommu driver for further audit?
>>>>>
>>>>> No, my concern is that this ioctl has a single function, strictly tied
>>>>> to the iommu uapi. If we replace cache_info with data[] then we can
>>>>> define a flag to specify that data[] is struct
>>>>> iommu_cache_invalidate_info, and if we need to, a different flag to
>>>>> identify data[] as something else. For example, if we get stuck
>>>>> expanding cache_info to meet new demands and develop a new uapi to
>>>>> solve that, how would we expand this ioctl to support it rather than
>>>>> also creating a new ioctl? There's also a trade-off here in making
>>>>> the ioctl usage more difficult for the user. I'd still expect the
>>>>> vfio layer to check the flag and interpret data[] as indicated by
>>>>> the flag, rather than just passing a blob of opaque data to the
>>>>> iommu layer, though.
>>>>> Thanks,
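
A minimal sketch of the flag-discriminated layout Alex describes might look
like this (the flag name and exact layout are illustrative, not from the
posted patch):

	struct vfio_iommu_type1_cache_invalidate {
		__u32	argsz;
	/* hypothetical: data[] carries a struct iommu_cache_invalidate_info */
	#define VFIO_CACHE_INV_DATA_IOMMU_UAPI	(1 << 0)
		__u32	flags;
		__u8	data[];		/* interpreted according to flags */
	};

With that shape, a future invalidation format only needs a new flag bit
rather than a new ioctl number, and vfio can still check the flag and
interpret data[] before passing it down to the iommu layer.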
>>>>
>>>> Based on your comments about defining a single ioctl and a unified
>>>> vfio structure (with a @data[] field) for pasid_alloc/free, bind/
>>>> unbind_gpasid and cache_inv: after some offline experimentation, I
>>>> think it works well for bind/unbind_gpasid and cache_inv, as both of
>>>> them use the iommu uapi definitions, while the pasid alloc/free
>>>> operation doesn't. It would be weird to put all of them together, so
>>>> pasid alloc/free may get a separate ioctl. It would look as below.
>>>> Does this direction look good in your opinion?
>>>>
>>>> ioctl #22: VFIO_IOMMU_PASID_REQUEST
>>>> /**
>>>>  * @pasid: used to return the pasid alloc result when flags == ALLOC_PASID,
>>>>  *         or to specify a pasid to be freed when flags == FREE_PASID
>>>>  * @range: specify the allocation range when flags == ALLOC_PASID
>>>>  */
>>>> struct vfio_iommu_pasid_request {
>>>> 	__u32 argsz;
>>>> #define VFIO_IOMMU_ALLOC_PASID	(1 << 0)
>>>> #define VFIO_IOMMU_FREE_PASID	(1 << 1)
>>>> 	__u32 flags;
>>>> 	__u32 pasid;
>>>> 	struct {
>>>> 		__u32 min;
>>>> 		__u32 max;
>>>> 	} range;
>>>> };
>>>>
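
For illustration, a PASID allocation through this proposed ioctl could look
roughly as follows from userspace (a sketch only, since the ioctl is a
proposal at this point; container_fd is assumed to be an open VFIO
container fd):

	/* needs <stdio.h> and <sys/ioctl.h>, plus the proposed VFIO defines */
	struct vfio_iommu_pasid_request req = {
		.argsz = sizeof(req),
		.flags = VFIO_IOMMU_ALLOC_PASID,
		.range = { .min = 1, .max = 1024 },	/* example range */
	};

	if (ioctl(container_fd, VFIO_IOMMU_PASID_REQUEST, &req) == 0)
		printf("allocated pasid %u\n", req.pasid);	/* returned in @pasid */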
>>>> ioctl #23: VFIO_IOMMU_NESTING_OP
>>>> struct vfio_iommu_type1_nesting_op {
>>>> 	__u32 argsz;
>>>> 	__u32 flags;
>>>> 	__u32 op;
>>>> 	__u8  data[];
>>>> };
>>>>
>>>> /* Nesting Ops */
>>>> #define VFIO_IOMMU_NESTING_OP_BIND_PGTBL	0
>>>> #define VFIO_IOMMU_NESTING_OP_UNBIND_PGTBL	1
>>>> #define VFIO_IOMMU_NESTING_OP_CACHE_INVLD	2
>>>>
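
Under this proposal, a cache invalidation would presumably marshal the
existing iommu uapi struct into data[], along these lines (again a sketch
against the proposed, not merged, interface; error handling omitted):

	/* needs <stdlib.h> and <sys/ioctl.h>, plus the proposed VFIO defines */
	struct vfio_iommu_type1_nesting_op *op;
	struct iommu_cache_invalidate_info *inv;
	size_t argsz = sizeof(*op) + sizeof(*inv);

	op = calloc(1, argsz);
	op->argsz = argsz;
	op->op = VFIO_IOMMU_NESTING_OP_CACHE_INVLD;
	inv = (struct iommu_cache_invalidate_info *)op->data;
	/* fill *inv as defined in include/uapi/linux/iommu.h before the call */
	ioctl(container_fd, VFIO_IOMMU_NESTING_OP, op);
	free(op);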
>>>
>>> Then why can't we just put the PASID into the header, since the
>>> majority of nested usage is associated with a pasid?
>>>
>>> ioctl #23: VFIO_IOMMU_NESTING_OP
>>> struct vfio_iommu_type1_nesting_op {
>>> 	__u32 argsz;
>>> 	__u32 flags;
>>> 	__u32 op;
>>> 	__u32 pasid;
>>> 	__u8  data[];
>>> };
>>>
>>> In the case of SMMUv2, which supports nesting without a PASID, this
>>> field can simply be ignored.
>> On my side I would prefer keeping the pasid in the data[], as it is
>> not always used.
>>
>> For instance, in iommu_cache_invalidate_info/iommu_inv_pasid_info we
>> devised flags to tell whether the PASID is used.
>>
>
> But don't we include a PASID in both invalidate structures already?
The pasid presence is indicated by the IOMMU_INV_ADDR_FLAGS_PASID flag.

For instance, for nested-stage SMMUv3 I currently perform an ARCHID (asid)
based invalidation only.

Eric
>
> struct iommu_inv_addr_info {
> #define IOMMU_INV_ADDR_FLAGS_PASID	(1 << 0)
> #define IOMMU_INV_ADDR_FLAGS_ARCHID	(1 << 1)
> #define IOMMU_INV_ADDR_FLAGS_LEAF	(1 << 2)
> 	__u32	flags;
> 	__u32	archid;
> 	__u64	pasid;
> 	__u64	addr;
> 	__u64	granule_size;
> 	__u64	nb_granules;
> };
>
> struct iommu_inv_pasid_info {
> #define IOMMU_INV_PASID_FLAGS_PASID	(1 << 0)
> #define IOMMU_INV_PASID_FLAGS_ARCHID	(1 << 1)
> 	__u32	flags;
> 	__u32	archid;
> 	__u64	pasid;
> };
>
> Then consolidating the pasid field into the generic header doesn't
> hurt; the specific handler still relies on the flags to tell whether
> it is used?
>
> Thanks
> Kevin
>
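
To make the flag-based presence concrete: a handler receiving an
iommu_inv_addr_info only consumes the pasid field when the corresponding
flag is set, so an ARCHID-only invalidation (Eric's SMMUv3 nested-stage
case) simply leaves IOMMU_INV_ADDR_FLAGS_PASID clear. A sketch of such a
check (the handler and helper names are hypothetical):

	static void handle_addr_inv(struct iommu_inv_addr_info *info)
	{
		/* archid (e.g. an SMMUv3 asid) based invalidation */
		if (info->flags & IOMMU_INV_ADDR_FLAGS_ARCHID)
			invalidate_by_archid(info->archid);	/* hypothetical helper */

		/* pasid is only meaningful when its flag is set */
		if (info->flags & IOMMU_INV_ADDR_FLAGS_PASID)
			invalidate_by_pasid(info->pasid);	/* hypothetical helper */
	}

The same convention would carry over to a pasid field hoisted into the
generic header, which is Kevin's point above.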
