Subject: Re: [PATCH RFC 0/2] Generate device tree node for pci devices

On Wed, Sep 14, 2022 at 8:35 AM Jeremi Piotrowski
<jpiotrowski@linux.microsoft.com> wrote:
>
> On Mon, Aug 29, 2022 at 02:43:35PM -0700, Lizhi Hou wrote:
> > This patch series introduces OF overlay support for PCI devices, which
> > primarily addresses two use cases. First, it provides a data-driven method
> > to describe hardware peripherals that are present in a PCI endpoint and
> > hence can be accessed by the PCI host; example devices are the Xilinx/AMD
> > Alveo PCIe accelerators. Second, it allows reuse of an OF-compatible
> > driver -- often used in SoC platforms -- in a PCI host based system; an
> > example device is the Microchip LAN9662 Ethernet Controller.
> >
> > This patch series consolidates previous efforts to define such an
> > infrastructure:
> > https://lore.kernel.org/lkml/20220305052304.726050-1-lizhi.hou@xilinx.com/
> > https://lore.kernel.org/lkml/20220427094502.456111-1-clement.leger@bootlin.com/
> >
> > Normally, the PCI core discovers PCI devices and their BARs using the
> > PCI enumeration process. However, the process does not provide a way to
> > discover the hardware peripherals that are present in a PCI device, and
> > which can be accessed through the PCI BARs. Also, the enumeration process
> > does not provide a way to associate MSI-X vectors of a PCI device with the
> > hardware peripherals that are present in the device. PCI device drivers
> > often use header files to describe the hardware peripherals and their
> > resources, as there is no standard data-driven way to do so. This patch
> > series proposes to use a flattened device tree blob to describe the
> > peripherals in a data-driven way. Based on previous discussion, using a
> > device tree overlay is the best way to unflatten the blob and populate
> > platform devices. To use a device tree overlay, there are three obvious
> > problems that need to be resolved.
>
> Hi Lizhi,
>
> We all *love* "have you thought about xxx" questions, but I would really like
> to get your thoughts on this. An approach to this problem that I have seen in
> various devices is to emulate a virtual PCIe switch and expose the "sub
> devices" behind it. That way you can carve up the BAR space, each sub-device
> gets its own config space, and the mapping of MSI-X vectors to devices becomes
> clear. This approach also integrates well with other kernel infrastructure
> (IOMMU, hotplug).
>
> This is certainly possible on reprogrammable devices but requires some more
> FPGA resources - though I don't believe the added utilization would be
> significant. What do you think of this kind of solution?

It would integrate easily, unless the sub-devices you are targeting
already have drivers which are not PCI drivers. Sure, we could add PCI
support to them, but that could be a lot of churn.

There are also use cases where we don't get to change the h/w.

Rob
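
The data-driven approach described in the quoted cover letter boils down to
something like the sketch below, seen from a PCI driver's probe path. This is
only an illustration of the idea: the driver name, device IDs and overlay file
name are made up, and the "three obvious problems" the cover letter mentions
(most notably that the PCI device has no device tree node for the overlay to
target) are exactly why this sequence does not work without the changes in the
series.

#include <linux/firmware.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/pci.h>

static int example_accel_probe(struct pci_dev *pdev,
                               const struct pci_device_id *id)
{
        const struct firmware *fw;
        int ovcs_id = 0;
        int ret;

        ret = pcim_enable_device(pdev);
        if (ret)
                return ret;

        /*
         * Fetch the flattened overlay blob that describes the peripherals
         * behind the device's BARs.  "example-peripherals.dtbo" is a
         * placeholder name; the blob could equally be read from the device
         * itself.
         */
        ret = request_firmware(&fw, "example-peripherals.dtbo", &pdev->dev);
        if (ret)
                return ret;

        /*
         * Unflatten the blob as an overlay so that its nodes are populated
         * as platform devices and bound to existing OF drivers.  A real
         * driver would keep ovcs_id around and call of_overlay_remove() in
         * its remove path.
         */
        ret = of_overlay_fdt_apply(fw->data, fw->size, &ovcs_id);
        release_firmware(fw);

        return ret;
}

static const struct pci_device_id example_accel_ids[] = {
        { PCI_DEVICE(0x10ee, 0x5020) }, /* placeholder Xilinx IDs */
        { }
};
MODULE_DEVICE_TABLE(pci, example_accel_ids);

static struct pci_driver example_accel_driver = {
        .name           = "example_accel",
        .id_table       = example_accel_ids,
        .probe          = example_accel_probe,
};
module_pci_driver(example_accel_driver);

MODULE_LICENSE("GPL");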
