Subject: Re: [PATCH v4 07/13] iommu: Add iommu_at[de]tach_device_shared() for multi-device groups
On Thu, Dec 23, 2021 at 01:53:24PM +0800, Lu Baolu wrote:

> > This series is going in the direction of eliminating
> > iommu_attach_group() as part of the driver
> > interface. iommu_attach_group() is repurposed to only be useful for
> > VFIO.
>
> We can also remove iommu_attach_group() in VFIO because it is
> essentially equivalent to
>
> iommu_group_for_each_dev(group, iommu_attach_device(dev))

Trying to do this would be subtly buggy; remember the group list is
dynamic, so when it is time to detach this won't reliably balance.

It is the same problem as using a randomly picked device inside the group
as the group's 'handle'. There is no guarantee that will work. Only
devices from a driver should be used with the device API.
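
Roughly, the naive version would be something like this (sketch only,
using the existing iommu_group_for_each_dev() iterator):

static int do_attach(struct device *dev, void *data)
{
        return iommu_attach_device(data, dev);
}

static int do_detach(struct device *dev, void *data)
{
        iommu_detach_device(data, dev);
        return 0;
}

        /* Walks the devices that are in the group right now */
        iommu_group_for_each_dev(group, domain, do_attach);
        ...
        /* If a device was added to or removed from the group in the
         * meantime this walk visits a different set of devices, so the
         * attach and detach calls no longer balance. */
        iommu_group_for_each_dev(group, domain, do_detach);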

> > As for why does DMA_OWNER_PRIVATE_DOMAIN_USER exist? VFIO doesn't have
> > an iommu_domain at this point but it still needs the iommu core to
> > detach the default domain. This is what the _USER does.
>
> There is also a contract that after the USER ownership is claimed the
> device could be accessed by userspace through the MMIO registers. So,
> a device could be accessible by userspace before a user-space I/O
> address is attached.

If we had an IOMMU domain we could solve this by just assigning the
correct domain. The core issue that motivates USER is the lack of an
iommu_domain.
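
From VFIO's side the contract is roughly this ordering (names as used
further down in this mail; the iommu_replace_group_domain() signature is
only a guess to illustrate the idea):

        /* Userspace claims the group. There is no iommu_domain yet, but
         * the core must still detach the group from its DMA API default
         * domain so the kernel cannot keep mapping behind userspace. */
        iommu_group_set_dma_owner(group, group_file);

        /* From this point the device MMIO may be exposed to userspace,
         * even though no user I/O address space is attached. */

        /* Only once a container is attached does a user iommu_domain
         * exist to install. It replaces "nothing", never the
         * default_domain. */
        iommu_replace_group_domain(group, NULL, user_domain);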


> > I think it would be clear why iommu_group_set_dma_owner(), which
> > actually does detach, is not the same thing as iommu_attach_device().
>
> iommu_device_set_dma_owner() will eventually call
> iommu_group_set_dma_owner(). I didn't get why
> iommu_group_set_dma_owner() is special and need to keep.

Not quite, they would not call each other; they have different
implementations:

/* Called for kernel drivers using the DMA API: only succeeds while the
 * group is still attached to its default domain and unowned. */
int iommu_device_use_dma_api(struct device *device)
{
        struct iommu_group *group = device->iommu_group;

        if (!group)
                return 0;

        mutex_lock(&group->mutex);
        if (group->owner_cnt != 0 ||
            group->domain != group->default_domain) {
                mutex_unlock(&group->mutex);
                return -EBUSY;
        }
        group->owner_cnt = 1;
        group->owner = NULL;
        mutex_unlock(&group->mutex);
        return 0;
}

/* Called for userspace drivers (VFIO): detaches the default domain and
 * records which file owns the group. */
int iommu_group_set_dma_owner(struct iommu_group *group, struct file *owner)
{
        mutex_lock(&group->mutex);
        if (group->owner_cnt != 0) {
                if (group->owner != owner)
                        goto err_unlock;
                group->owner_cnt++;
                mutex_unlock(&group->mutex);
                return 0;
        }
        if (group->domain && group->domain != group->default_domain)
                goto err_unlock;

        __iommu_detach_group(group->domain, group);
        group->owner_cnt = 1;
        group->owner = owner;
        mutex_unlock(&group->mutex);
        return 0;

err_unlock:
        mutex_unlock(&group->mutex);
        return -EBUSY;
}

It is the same as how we ended up putting the refcounting logic
directly into iommu_attach_device().
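
To make it concrete, the attach side would look roughly like this (an
illustration of the idea only, not the actual patch):

int iommu_attach_device(struct iommu_domain *domain, struct device *dev)
{
        struct iommu_group *group = dev->iommu_group;
        int ret = 0;

        if (!group)
                return -ENODEV;

        mutex_lock(&group->mutex);
        if (group->owner_cnt != 0) {
                /* Additional attaches must use the same domain, and the
                 * group must not be claimed by userspace. */
                if (group->domain != domain || group->owner) {
                        ret = -EBUSY;
                        goto out_unlock;
                }
                group->owner_cnt++;
                goto out_unlock;
        }

        ret = __iommu_attach_group(domain, group);
        if (!ret)
                group->owner_cnt = 1;
out_unlock:
        mutex_unlock(&group->mutex);
        return ret;
}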

See, we get rid of the enum as a multiplexor parameter; each API does
only what it needs, and they don't call each other.

We don't need _USER anymore because iommu_group_set_dma_owner() always
does the detach, and iommu_replace_group_domain() avoids ever reassigning
the default_domain. The special _USER behavior falls out automatically.
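
iommu_replace_group_domain() would be something along these lines (a very
rough sketch, taking __iommu_attach_group()/__iommu_detach_group() in the
same loose sense as above, only to show that default_domain is never
touched):

int iommu_replace_group_domain(struct iommu_group *group,
                               struct iommu_domain *old_domain,
                               struct iommu_domain *new_domain)
{
        int ret = 0;

        mutex_lock(&group->mutex);
        /* Only legal while userspace owns the group */
        if (!group->owner || group->domain != old_domain) {
                mutex_unlock(&group->mutex);
                return -EINVAL;
        }

        if (old_domain)
                __iommu_detach_group(old_domain, group);
        if (new_domain)
                ret = __iommu_attach_group(new_domain, group);
        /* group->default_domain is never re-attached here; if new_domain
         * is NULL the group simply stays detached. */

        mutex_unlock(&group->mutex);
        return ret;
}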

Jason
