 
From: Bharat.Bhushan@freescale.com
Subject: RE: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
Date: 2013-12-10


> -----Original Message-----
> From: Alex Williamson [mailto:alex.williamson@redhat.com]
> Sent: Tuesday, December 10, 2013 11:23 AM
> To: Bhushan Bharat-R65777
> Cc: Wood Scott-B07421; linux-pci@vger.kernel.org; agraf@suse.de; Yoder Stuart-
> B08248; iommu@lists.linux-foundation.org; bhelgaas@google.com; linuxppc-
> dev@lists.ozlabs.org; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
>
> On Tue, 2013-12-10 at 05:37 +0000, Bharat.Bhushan@freescale.com wrote:
> >
> > > -----Original Message-----
> > > From: Alex Williamson [mailto:alex.williamson@redhat.com]
> > > Sent: Saturday, December 07, 2013 1:00 AM
> > > To: Wood Scott-B07421
> > > Cc: Bhushan Bharat-R65777; linux-pci@vger.kernel.org; agraf@suse.de;
> > > Yoder Stuart-B08248; iommu@lists.linux-foundation.org;
> > > bhelgaas@google.com; linuxppc- dev@lists.ozlabs.org;
> > > linux-kernel@vger.kernel.org
> > > Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale
> > > IOMMU (PAMU)
> > >
> > > On Fri, 2013-12-06 at 12:59 -0600, Scott Wood wrote:
> > > > On Thu, 2013-12-05 at 22:11 -0600, Bharat Bhushan wrote:
> > > > >
> > > > > > -----Original Message-----
> > > > > > From: Wood Scott-B07421
> > > > > > Sent: Friday, December 06, 2013 5:52 AM
> > > > > > To: Bhushan Bharat-R65777
> > > > > > Cc: Alex Williamson; linux-pci@vger.kernel.org; agraf@suse.de;
> > > > > > Yoder Stuart- B08248; iommu@lists.linux-foundation.org;
> > > > > > bhelgaas@google.com; linuxppc- dev@lists.ozlabs.org;
> > > > > > linux-kernel@vger.kernel.org
> > > > > > Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for
> > > > > > Freescale IOMMU (PAMU)
> > > > > >
> > > > > > On Thu, 2013-11-28 at 03:19 -0600, Bharat Bhushan wrote:
> > > > > > >
> > > > > > > > -----Original Message-----
> > > > > > > > From: Bhushan Bharat-R65777
> > > > > > > > Sent: Wednesday, November 27, 2013 9:39 PM
> > > > > > > > To: 'Alex Williamson'
> > > > > > > > Cc: Wood Scott-B07421; linux-pci@vger.kernel.org;
> > > > > > > > agraf@suse.de; Yoder Stuart- B08248;
> > > > > > > > iommu@lists.linux-foundation.org; bhelgaas@google.com;
> > > > > > > > linuxppc- dev@lists.ozlabs.org;
> > > > > > > > linux-kernel@vger.kernel.org
> > > > > > > > Subject: RE: [PATCH 0/9 v2] vfio-pci: add support for
> > > > > > > > Freescale IOMMU (PAMU)
> > > > > > > >
> > > > > > > > If we just provide the size of MSI bank to userspace then
> > > > > > > > userspace cannot do anything wrong.
> > > > > > >
> > > > > > > So userspace does not know address, so it cannot mmap and
> > > > > > > cause any
> > > > > > > interference by directly reading/writing.
> > > > > >
> > > > > > That's security through obscurity... Couldn't the malicious
> > > > > > user find out the address via other means, such as
> > > > > > experimentation on another system over which they have full
> > > > > > control? What would happen if the user reads from their
> > > > > > device's PCI config space? Or gets the information via some
> > > > > > back door in the PCI device they own? Or pokes throughout the
> > > > > > address space looking for something that
> > > > > > generates an interrupt to its own device?
> > > > >
> > > > > So how to solve this problem, Any suggestion ?
> > > > >
> > > > > We have to map one window in PAMU for MSIs and a malicious user
> > > > > can ask its device to do DMA to MSI window region with any pair
> > > > > of address and data, which can lead to unexpected MSIs in system?
> > > >
> > > > I don't think there are any solutions other than to limit each
> > > > bank to one user, unless the admin turns some knob that says
> > > > they're OK with the partial loss of isolation.
> > >
> > > Even if the admin does opt-in to an allow_unsafe_interrupts options,
> > > it should still be reasonably difficult for one guest to interfere
> > > with the other. I don't think we want to rely on the blind luck of
> > > making the full MSI bank accessible to multiple guests and hoping they don't
> > > step on each other.
> >
> > Not sure how to solve in this case (sharing MSI page)
> >
> > > That probably means that vfio needs to manage the space rather than the
> > > guest.
> >
> > What you mean by " vfio needs to manage the space rather than the guest"?
>
> I mean there needs to be some kernel component managing the contents of the MSI
> page rather than just handing it out to the user and hoping for the best. The
> user API also needs to remain the same whether the user has the MSI page
> exclusively or it's shared with others (kernel or users). Thanks,

We have a limited number of MSI banks, so we cannot provide a dedicated MSI bank to each VM.
Below is a summary of the MSI allocation/ownership models I am thinking of:

Option-1: Userspace aware of MSI banks
=========
1) Userspace will call GET_MSI_REGION (requesting a number of MSI banks)
- VFIO will allocate the requested number of MSI banks;
- If allocation succeeds, return the number of banks;
- If allocation fails, check the opt-in flag set by the administrator (allow_unsafe_interrupts):
if allow_unsafe_interrupts == 0, sharing is not allowed; return failure (-ENODEV);
otherwise, share an MSI bank already given to another VM.

2) Userspace will adjust the geometry size according to the number of banks and call SET_GEOMETRY

3) Userspace will do DMA_MAP for its memory

4) Userspace will do MSI_MAP for each bank it has (a rough sketch of this flow follows below)
- MSI_MAP(iova, bank number);
- Should the iova be passed by userspace or not? I think userspace should pass the iova, as VFIO does not know whether userspace will call DMA_MAP for the same iova later on.
VFIO could instead find a magic IOVA within the geometry itself, but that would assume userspace will not DMA_MAP that address later on.
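
To make Option-1 concrete, here is a rough userspace sketch. The
GET_MSI_REGION/MSI_MAP ioctls, their structs, and the ioctl numbers are
invented here for illustration (only VFIO_IOMMU_MAP_DMA and the
VFIO_TYPE/VFIO_BASE macros are existing VFIO API); SET_GEOMETRY stands in
for the PAMU geometry ioctl and error handling is mostly omitted:

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/* Proposed uapi, for illustration only -- these do not exist yet */
#define VFIO_IOMMU_GET_MSI_REGION	_IO(VFIO_TYPE, VFIO_BASE + 100)
#define VFIO_IOMMU_MSI_MAP		_IO(VFIO_TYPE, VFIO_BASE + 101)

struct vfio_msi_region {
	uint32_t argsz;
	uint32_t nr_banks;	/* in: banks requested; out: banks granted */
};

struct vfio_msi_map {
	uint32_t argsz;
	uint32_t bank;		/* bank number, 0 .. nr_banks - 1 */
	uint64_t iova;		/* IOVA chosen by userspace */
};

static int setup_msi(int container, uint64_t msi_iova)
{
	struct vfio_msi_region region = {
		.argsz = sizeof(region),
		.nr_banks = 1,
	};
	struct vfio_msi_map map = {
		.argsz = sizeof(map),
		.bank = 0,
		.iova = msi_iova,
	};

	/* 1) request MSI bank(s); fails with -ENODEV if none are
	 *    free and allow_unsafe_interrupts == 0 */
	if (ioctl(container, VFIO_IOMMU_GET_MSI_REGION, &region) < 0)
		return -1;

	/* 2) SET_GEOMETRY sized for memory plus region.nr_banks, and
	 * 3) VFIO_IOMMU_MAP_DMA of guest memory, would go here */

	/* 4) map the granted bank at the IOVA userspace picked */
	return ioctl(container, VFIO_IOMMU_MSI_MAP, &map);
}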


Option-2: Userspace transparent MSI banks
=========
1) Userspace sets up the geometry of its memory (call it the "userspace-geometry") via SET_GEOMETRY
- VFIO will allocate the MSI bank(s); how many is an open question.
- Error out if none are available (shared and/or exclusive, same as in Option-1 above)
- VFIO will adjust the geometry accordingly (call it the "actual-geometry").

2) Userspace will do DMA_MAP for its memory.
- VFIO allows mappings only within the "userspace-geometry".

3) Userspace will do MSI_MAP after all DMA_MAPs are complete.
- VFIO will find a magic IOVA beyond the "userspace-geometry" but within the "actual-geometry"; a sketch of this follows below.
- The MSI bank(s) allocated in step 1 are mapped in the IOMMU.
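
To illustrate step 3, here is a rough sketch of what the VFIO side might
do when MSI_MAP is called. The pamu_vfio_domain/msi_bank bookkeeping is
invented for this sketch; only iommu_map() and its flags are existing
kernel API:

#include <linux/iommu.h>
#include <linux/types.h>

/* Illustrative bookkeeping -- not actual vfio_iommu code */
struct msi_bank {
	phys_addr_t paddr;		/* physical address of the MSI bank */
	size_t size;
};

struct pamu_vfio_domain {
	struct iommu_domain *domain;
	dma_addr_t user_geometry_end;	/* end of "userspace-geometry" */
	int nr_msi_banks;
	struct msi_bank *msi_banks;
};

static int vfio_msi_map_auto(struct pamu_vfio_domain *d)
{
	/* place the MSI windows just past the user's geometry, still
	 * inside the enlarged "actual-geometry" */
	dma_addr_t iova = d->user_geometry_end;
	int i, ret;

	for (i = 0; i < d->nr_msi_banks; i++) {
		/* devices only write MSI messages, so IOMMU_WRITE */
		ret = iommu_map(d->domain, iova, d->msi_banks[i].paddr,
				d->msi_banks[i].size, IOMMU_WRITE);
		if (ret)
			return ret;
		iova += d->msi_banks[i].size;
	}
	return 0;
}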

=========

Note: Irrespective of which option we use, a malicious userspace can interfere with another userspace by programming its device's DMA incorrectly.

Option-1 looks flexible and good to me, but I am open to suggestions.

Thanks
-Bharat


>
> Alex
>
>
