Subject: Re: [PATCH 03/11] PCI: pci_stub: Suppress kernel DMA ownership auto-claiming
From: Robin Murphy
On 2021-11-15 16:17, Jason Gunthorpe wrote:
> On Mon, Nov 15, 2021 at 03:14:49PM +0000, Robin Murphy wrote:
>
>>> If userspace has control of device A and can cause A to issue DMA to
>>> arbitrary DMA addresses then there are certain PCI topologies where A
>>> can now issue peer-to-peer DMA and manipulate the MMIO registers in
>>> device B.
>>>
>>> A kernel driver on device B is thus subjected to concurrent
>>> manipulation of the device registers from userspace.
>>>
>>> So, a 'safe' kernel driver is one that can tolerate this, and an
>>> 'unsafe' driver is one where userspace can break kernel integrity.
>>
>> You mean in the case where the kernel driver is trying to use device B in a
>> purely PIO mode, such that userspace might potentially be able to interfere
>> with data being transferred in and out of the kernel?
>
> s/PIO/MMIO, but yes basically. And not just data transfer but
> userspace can interfere with the device state as well.

Sure, but unexpected changes in device state could happen for any number
of reasons - uncorrected ECC error, surprise removal, etc. - so if that
can affect "kernel integrity" I'm considering it an independent problem.

>> Perhaps it's not so clear to put that under a notion of "DMA
>> ownership", since device B's DMA is irrelevant and it's really much
>> more equivalent to /dev/mem access or mmapping BARs to userspace
>> while a driver is bound.
>
> It is DMA ownership because device A's DMA is what is relevant
> here. device A's DMA compromises device B. So device A asserts it has
> USER ownership for DMA.
>
> Any device in a group with USER ownership is incompatible with a
> kernel driver.

I can see the argument from that angle, but you can equally look at it
another way and say that a device with kernel ownership is incompatible
with a kernel driver, if userspace can call write() on
"/sys/devices/B/resource0" such that device A's kernel driver DMAs all
over it. Maybe that particular example lands firmly under "just don't do
that", but I'd like to figure out where exactly we should draw the line
between "DMA" and "ability to mess with a device".

>>> The second issue is DMA - because there is only one iommu_domain
>>> underlying many devices if we give that iommu_domain to userspace it
>>> means the kernel DMA API on other devices no longer works.
>>
>> Actually, the DMA API itself via iommu-dma will "work" just fine in the
>> sense that it will still successfully perform all its operations in the
>> unattached default domain, it's just that if the driver then programs the
>> device to access the returned DMA address, the device is likely to get a
>> nasty surprise.
>
> A DMA API that returns a dma_addr_t that does not result in data
> transfer to the specified buffers is not working, in my book - it
> breaks the API contract.
>
>>> So no kernel driver doing DMA can work at all, under any PCI topology,
>>> if userspace owns the IO page table.
>>
>> This isn't really about userspace at all - it's true of any case where a
>> kernel driver wants to attach a grouped device to its own unmanaged
>> domain.
>
> This is true for the dma api issue in isolation.

No, it's definitely a general IOMMU-API-level thing; you could just as
well have two drivers both trying to attach to their own unmanaged
domains without DMA API involvement. What I think it boils down to is
that if multiple devices in a group are bound (or want to bind) to
different drivers, we want to enforce some kind of consensus between
those drivers over IOMMU API usage.
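
In other words, with nothing but the IOMMU API in play - the driver
function names here are hypothetical, the iommu_* calls are the
existing API, and error unwinding is elided for brevity:

#include <linux/device.h>
#include <linux/errno.h>
#include <linux/iommu.h>

static int driver_a_probe(struct device *dev)
{
	struct iommu_domain *dom = iommu_domain_alloc(dev->bus);
	struct iommu_group *grp = iommu_group_get(dev);

	if (!dom || !grp)
		return -ENODEV;

	/* Succeeds, and moves the whole group off its default domain */
	return iommu_attach_group(dom, grp);
}

static int driver_b_probe(struct device *dev)	/* same group as A */
{
	struct iommu_domain *dom = iommu_domain_alloc(dev->bus);
	struct iommu_group *grp = iommu_group_get(dev);

	if (!dom || !grp)
		return -ENODEV;

	/*
	 * Fails with -EBUSY if driver A got there first - the group is
	 * already attached to someone else's domain, and nothing above
	 * the IOMMU API arbitrates between the two drivers.
	 */
	return iommu_attach_group(dom, grp);
}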

> I think if we have a user someday it would make sense to add another
> API DMA_OWNER_DRIVER_DOMAIN that captures the case where the DMA API
> doesn't work but DMA MMIO attacks are not possible.
>
>> The fact that the VFIO kernel driver uses its unmanaged domains to map user
>> pages upon user requests is merely a VFIO detail, and VFIO happens to be the
>> only common case where unmanaged domains and non-singleton groups intersect.
>> I'd say that, logically, if you want to put policy on mutual driver/usage
>> compatibility anywhere it should be in iommu_attach_group().
>
> It would make sense for iommu_attach_group() to require that the
> DMA_OWNERSHIP is USER or DRIVER_DOMAIN.
>
> That has a nice symmetry with iommu_attach_device() already requiring
> that the group has a single device. For a driver to use these APIs it
> must ensure security, one way or another.

iommu_attach_device() is supposed to be deprecated and eventually going
away; I wouldn't look at it too much.

> That is a good idea, but requires understanding what tegra is
> doing. Maybe tegra is that DMA_OWNER_DRIVER_DOMAIN user?
>
> I wouldn't want to see iommu_attach_group() change the DMA_OWNERSHIP,
> I think ownership is cleaner as a dedicated API. Adding a file * and
> probably the enum to iommu_attach_group() feels weird.

Indeed I wasn't imagining it changing any ownership, just preventing a
group from being attached to a non-default domain if it contains devices
bound to different incompatible drivers. Basically just taking the
existing check that VFIO tries to enforce and formalising it into the
core API. It's not too far off what we already have around changing the
default domain type, so there seems to be room for it to all fit
together quite nicely.
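
As a rough sketch of what I mean - the viability test is a placeholder
for whatever opt-in mechanism the series settles on, and the locking
needed to make the dev->driver check non-racy is elided:

#include <linux/compiler.h>
#include <linux/device.h>
#include <linux/iommu.h>

static int driver_viable(struct device *dev, void *data)
{
	struct device_driver *drv = READ_ONCE(dev->driver);

	/* Unbound, or bound to the caller's own driver, is OK */
	if (!drv || drv == data)
		return 0;
	return -EBUSY;
}

int iommu_attach_group_checked(struct iommu_domain *domain,
			       struct iommu_group *group,
			       struct device_driver *owner)
{
	int ret = iommu_group_for_each_dev(group, owner, driver_viable);

	return ret ?: iommu_attach_group(domain, group);
}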

There would still need to be separate enforcement elsewhere to prevent
new drivers binding *after* a group *has* been attached to an unmanaged
domain, but again that can be expressed in the same simple terms, along
the lines of the sketch below. Tying it in to userspace and FDs just
muddies the water far more than necessary.
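
The bind-side half could then be equally simple - say, a check in the
dma_configure path along these lines, where
iommu_group_attached_to_default_domain() is a made-up helper for
information the core already tracks internally:

#include <linux/device.h>
#include <linux/iommu.h>

static int iommu_check_bind_allowed(struct device *dev)
{
	struct iommu_group *group = iommu_group_get(dev);
	int ret = 0;

	if (!group)
		return 0;	/* no IOMMU, nothing to enforce */

	/* Refuse to bind into a group someone has already claimed */
	if (!iommu_group_attached_to_default_domain(group))
		ret = -EBUSY;

	iommu_group_put(group);
	return ret;
}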

Robin.

> We need the dedicated API for the dma_configure op, and keeping
> ownership split from the current domain makes more sense with the
> design in the iommufd RFC.
>
> Thanks,
> Jason
>
