Subject: Re: [PATCH V3 2/4] misc: vop: do not allocate and reassign the used ring
(resending from the kernel.org address after getting bounces again)

On Wed, Oct 28, 2020 at 7:29 AM Sherry Sun <sherry.sun@nxp.com> wrote:
> > Subject: Re: [PATCH V3 2/4] misc: vop: do not allocate and reassign the used ring
> >
> > Both Ashutosh and I have moved on to other projects. The MIC devices have
> > been discontinued. I have just sent across a patch to remove the MIC drivers
> > from the kernel tree.
> >
> > We are very glad to see that Sherry is able to reuse some of the VOP logic
> > and it is working well. It is best if the MIC drivers are removed so Sherry can
> > add the specific VOP logic required for imx8qm subsequently without having
> > to worry about other driver dependencies.
> > Hoping this results in a cleaner imx8qm driver moving forward.
>
> I'm ok with your patch.
> Since you have deprecated the MIC-related code, may I ask whether you
> have a better solution to replace vop/scif?

I think we should try to build something on top of the PCIe endpoint
subsystem that works across arbitrary combinations of host and device
implementations, and that provides a superset of what the MIC driver,
the (out-of-tree) Bluefield endpoint driver, the NTB subsystem and a
couple of others used to do: each of them tunnels block/network/serial/...
traffic over some form of PCIe link, usually with virtio.

At the moment, there is only one driver for the endpoint framework in the
kernel, in drivers/pci/endpoint/functions/pci-epf-test.c, but I think this can
serve as a starting point.
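
To give a rough idea, the skeleton of such a function driver, modeled
loosely on pci-epf-test, could start out like below. This is only a
sketch: all pci_epf_virtio_* names are made up here, and the exact
callback signatures differ between kernel versions; only the
registration interface from include/linux/pci-epf.h is real.

/* Hypothetical skeleton of a virtio-tunnel endpoint function driver,
 * loosely following drivers/pci/endpoint/functions/pci-epf-test.c.
 */
#include <linux/module.h>
#include <linux/pci-epc.h>
#include <linux/pci-epf.h>

static int pci_epf_virtio_bind(struct pci_epf *epf)
{
	/* Write the PCI config header with pci_epc_write_header(),
	 * set up one or more BARs with pci_epc_set_bar(), and place
	 * the shared virtio data structures in the BAR memory.
	 */
	return 0;
}

static void pci_epf_virtio_unbind(struct pci_epf *epf)
{
	/* Tear down the BARs and free the shared memory. */
}

static int pci_epf_virtio_probe(struct pci_epf *epf)
{
	/* Allocate per-function state, epf_set_drvdata(epf, ...). */
	return 0;
}

static const struct pci_epf_device_id pci_epf_virtio_ids[] = {
	{ .name = "pci_epf_virtio" },
	{ }
};

static struct pci_epf_ops pci_epf_virtio_ops = {
	.bind	= pci_epf_virtio_bind,
	.unbind	= pci_epf_virtio_unbind,
};

static struct pci_epf_driver pci_epf_virtio_driver = {
	.driver.name	= "pci_epf_virtio",
	.probe		= pci_epf_virtio_probe,
	.id_table	= pci_epf_virtio_ids,
	.ops		= &pci_epf_virtio_ops,
	.owner		= THIS_MODULE,
};
module_pci_epf_driver(pci_epf_virtio_driver);

MODULE_DESCRIPTION("sketch of a virtio tunnel PCI endpoint function");
MODULE_LICENSE("GPL v2");

The interesting work would obviously all happen in bind(); this only
shows how a new function driver plugs into the framework.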

The PCI endpoint subsystem already uses configfs for configuring the
available devices, and this seems like a good fit for making it work
in general. However, there are a number of use cases that have
somewhat conflicting requirements, so the first step would be to
figure out what everyone actually needs for virtio communication.
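
As an example of what the virtio-specific part of that configuration
could look like, a function driver could expose something like a
virtio_device_id attribute through configfs. Again just a sketch: the
epf_virtio_cfg structure and how it would get wired into the pci_ep
hierarchy are made up, only CONFIGFS_ATTR() and kstrtou32() are
existing interfaces.

#include <linux/configfs.h>
#include <linux/kernel.h>
#include <linux/module.h>

struct epf_virtio_cfg {
	struct config_item item;
	u32 virtio_device_id;	/* virtio device type to tunnel */
};

static inline struct epf_virtio_cfg *to_epf_virtio_cfg(struct config_item *item)
{
	return container_of(item, struct epf_virtio_cfg, item);
}

static ssize_t epf_virtio_cfg_virtio_device_id_show(struct config_item *item,
						    char *page)
{
	return sprintf(page, "%u\n", to_epf_virtio_cfg(item)->virtio_device_id);
}

static ssize_t epf_virtio_cfg_virtio_device_id_store(struct config_item *item,
						     const char *page, size_t len)
{
	int ret = kstrtou32(page, 0, &to_epf_virtio_cfg(item)->virtio_device_id);

	return ret ? ret : len;
}

CONFIGFS_ATTR(epf_virtio_cfg_, virtio_device_id);

static struct configfs_attribute *epf_virtio_cfg_attrs[] = {
	&epf_virtio_cfg_attr_virtio_device_id,
	NULL,
};

Writing e.g. "1" (net) or "2" (block) into that attribute before the
link is started would then decide what kind of virtio device shows up
on the other end.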

These are some of the main differences that I have noticed in the
past:

- The simple case would be to use one PCIe endpoint device
for each virtio device, but I think this needs to be multiplexed
so that hardware that only supports a single PCIe endpoint
can still have multiple virtio devices tunneled through it
(see the sketch after this list for what that could look like).

- While the configuration is sometimes hardcoded in the driver, ideally
the type of virtio device(s) tunneled over the PCIe link should be
configurable. The configuration of the endpoint device itself is done
on the machine on the endpoint side, but for the virtio devices, this
might happen either on the host or on the endpoint. I'm not sure
whether one of the two ways is common enough, or whether we have to
allow both.

- Once the link is configured, one side still needs to provide the
virtio device host implementation, while the other side runs the
normal virtio device driver. Again, this could go either way,
independent of which side configured the link. We might want to
allow only one of the two options, allow both, or tie it to who
configures the device (e.g. the side that creates the device must
act as the virtio device host, while the other side just sees the
device pop up and uses a regular virtio driver).
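
For the multiplexing point above, the shared memory behind a BAR could
start with a small descriptor table that the endpoint fills in and the
host parses to discover the tunneled devices. Again a made-up layout,
not an existing ABI:

/* Hypothetical descriptor table at the start of a shared BAR, letting
 * a single endpoint function advertise several tunneled virtio devices.
 */
#include <linux/types.h>

#define EPF_VIRTIO_MAGIC	0x76697274	/* "virt" */
#define EPF_VIRTIO_MAX_DEVS	8

struct epf_virtio_dev_desc {
	__le32 device_id;	/* virtio device type, e.g. 1 = net, 2 = block */
	__le32 num_queues;	/* number of virtqueues for this device */
	__le32 queue_offset;	/* offset of the vring area within the BAR */
	__le32 config_offset;	/* offset of the device config space */
	__le32 status;		/* mirrors the virtio status bits */
	__le32 reserved[3];
};

struct epf_virtio_table {
	__le32 magic;		/* EPF_VIRTIO_MAGIC, written by the endpoint */
	__le32 version;		/* layout version, for future changes */
	__le32 num_devices;	/* number of valid entries in dev[] */
	__le32 reserved;
	struct epf_virtio_dev_desc dev[EPF_VIRTIO_MAX_DEVS];
};

The host side would map the BAR, check the magic and then register one
virtio device per valid entry, while the endpoint side fills the table
from whatever was configured through configfs.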

Arnd
