Messages in this thread | | | Date | Wed, 19 Aug 2009 18:37:06 +0300 | From | Avi Kivity <> | Subject | Re: [Alacrityvm-devel] [PATCH v3 3/6] vbus: add a "vbus-proxy" bus model for vbus_driver objects |
| |
On 08/19/2009 06:28 PM, Ira W. Snyder wrote: > >> Well, if you can't do that, you can't use virtio-pci on the host. >> You'll need another virtio transport (equivalent to "fake pci" you >> mentioned above). >> >> > Ok. > > Is there something similar that I can study as an example? Should I look > at virtio-pci? > >
There's virtio-lguest, virtio-s390, and virtio-vbus.
>> I think you tried to take two virtio-nets and make them talk together? >> That won't work. You need the code from qemu to talk to virtio-net >> config space, and vhost-net to pump the rings. >> >> > It *is* possible to make two unmodified virtio-net's talk together. I've > done it, and it is exactly what the virtio-over-PCI patch does. Study it > and you'll see how I connected the rx/tx queues together. >
Right, crossing the cables works, but feature negotiation is screwed up, and both sides think the data is in their RAM.
vhost-net doesn't do negotiation and doesn't assume the data lives in its address space.
>> Please find a name other than virtio-over-PCI since it conflicts with >> virtio-pci. You're tunnelling virtio config cycles (which are usually >> done on pci config cycles) on a new protocol which is itself tunnelled >> over PCI shared memory. >> >> > Sorry about that. Do you have suggestions for a better name? > >
virtio-$yourhardware or maybe virtio-dma
> I called it virtio-over-PCI in my previous postings to LKML, so until a > new patch is written and posted, I'll keep referring to it by the name > used in the past, so people can search for it. > > When I post virtio patches, should I CC another mailing list in addition > to LKML? >
virtualization@lists.linux-foundation.org is virtio's home.
> That said, I'm not sure how qemu-system-ppc running on x86 could > possibly communicate using virtio-net. This would mean the guest is an > emulated big-endian PPC, while the host is a little-endian x86. I > haven't actually tested this situation, so perhaps I am wrong. >
I'm confused now. You don't actually have any guest, do you, so why would you run qemu at all?
>> The x86 side only needs to run virtio-net, which is present in RHEL 5.3. >> You'd only need to run virtio-tunnel or however it's called. All the >> eventfd magic takes place on the PCI agents. >> >> > I can upgrade the kernel to anything I want on both the x86 and ppc's. > I'd like to avoid changing the x86 (RHEL5) userspace, though. On the > ppc's, I have full control over the userspace environment. >
You don't need any userspace on virtio-net's side.
Your ppc boards emulate a virtio-net device, so all you need is the virtio-net module (and virtio bindings). If you chose to emulate, say, an e1000 card all you'd need is the e1000 driver.
-- error compiling committee.c: too many arguments to function
| |