From: Angus Chen <>
Subject: RE: IRQ affinity problem from virtio_blk
Date: Wed, 16 Nov 2022 11:24:23 +0000
> -----Original Message-----
> From: Thomas Gleixner <tglx@linutronix.de>
> Sent: Wednesday, November 16, 2022 6:56 PM
> To: Angus Chen <angus.chen@jaguarmicro.com>; Michael S. Tsirkin <mst@redhat.com>
> Cc: linux-kernel@vger.kernel.org; Ming Lei <ming.lei@redhat.com>; Jason Wang <jasowang@redhat.com>
> Subject: RE: IRQ affinity problem from virtio_blk
>
> On Wed, Nov 16 2022 at 01:02, Angus Chen wrote:
> >> On Wed, Nov 16, 2022 at 12:24:24AM +0100, Thomas Gleixner wrote:
> > Any other information I need to provide, pls tell me.
>
> A sensible use case for 180+ virtio block devices in a single guest.

Our card can provide more than 512 virtio_blk devices: one virtio_blk device is passed through to each container (for example, a Docker container), so we need that many devices. In the first patch I removed IRQD_AFFINITY_MANAGED in virtio_blk.
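For context, a minimal sketch of the kind of change described (an illustration against mainline drivers/block/virtio_blk.c of that era, not necessarily the actual patch): the struct irq_affinity descriptor that init_vq() hands to virtio_find_vqs() is what makes virtio_pci allocate managed (IRQD_AFFINITY_MANAGED) MSI-X vectors through pci_alloc_irq_vectors_affinity(); passing NULL instead leaves the queue vectors unmanaged.

--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ static int init_vq(struct virtio_blk *vblk)
-	struct irq_affinity desc = { 0, };
 	struct virtio_device *vdev = vblk->vdev;
@@ static int init_vq(struct virtio_blk *vblk)
+	/* No affinity descriptor: vp_find_vqs() will not request
+	 * PCI_IRQ_AFFINITY, so the per-VQ interrupts stay unmanaged. */
 	err = virtio_find_vqs(vdev, num_vqs, vblk->vqs, callbacks, names,
-			      &desc);
+			      NULL);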
As you know, even if we just use a small number of queues, like 1 or 2, we still occupy 80 vectors. That is rather wasteful, and it makes it easy to exhaust the IRQ resources.
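To put rough numbers on that, a back-of-the-envelope sketch in plain C (the CPU count, the per-CPU vector budget, and the device count are illustrative assumptions, not figures from this thread): on x86, a managed interrupt reserves a vector slot on every CPU in its affinity mask so that a migration target always exists, so a device with one queue spread across 80 CPUs pins 80 slots, and one with two queues spread across 40 CPUs each pins the same 80.

#include <stdio.h>

int main(void)
{
	/* Illustrative assumptions, not measured values. */
	const int cpus = 80;            /* vCPUs in the guest          */
	const int usable_per_cpu = 200; /* rough free x86 IDT vectors  */
	const int devices = 512;        /* virtio_blk devices per card */

	/* The per-queue affinity masks of one device cover each CPU
	 * exactly once, so every device reserves one slot per CPU and
	 * ~cpus slots machine-wide, regardless of its queue count.   */
	int slots_per_cpu = devices;
	long slots_total  = (long)devices * cpus;

	printf("reserved per CPU: %d of ~%d usable\n",
	       slots_per_cpu, usable_per_cpu);
	printf("reserved total  : %ld\n", slots_total);
	printf("exhausted       : %s\n",
	       slots_per_cpu > usable_per_cpu ? "yes" : "no");
	return 0;
}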
IRQD_AFFINITY_MANAGED itself is not the problem; the problem is many devices all using IRQD_AFFINITY_MANAGED.
Thanks.
> Thanks,
>
>         tglx