From: Angus Chen <angus.chen@jaguarmicro.com>
Subject: RE: IRQ affinity problem from virtio_blk
Date: Wednesday, November 16, 2022


> -----Original Message-----
> From: Thomas Gleixner <tglx@linutronix.de>
> Sent: Wednesday, November 16, 2022 6:56 PM
> To: Angus Chen <angus.chen@jaguarmicro.com>; Michael S. Tsirkin
> <mst@redhat.com>
> Cc: linux-kernel@vger.kernel.org; Ming Lei <ming.lei@redhat.com>; Jason
> Wang <jasowang@redhat.com>
> Subject: RE: IRQ affinity problem from virtio_blk
>
> On Wed, Nov 16 2022 at 01:02, Angus Chen wrote:
> >> On Wed, Nov 16, 2022 at 12:24:24AM +0100, Thomas Gleixner wrote:
> > Any other information I need to provide,pls tell me.
>
> A sensible use case for 180+ virtio block devices in a single guest.
>
Our card can provide more than 512 virtio_blk devices.
Each virtio_blk device is passed through to one container (e.g. docker),
so we need that many devices.
In the first patch, I removed IRQD_AFFINITY_MANAGED in virtio_blk.
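
To illustrate (a minimal sketch, not the exact patch): virtio_blk asks for
managed affinity simply by handing a struct irq_affinity descriptor to
virtio_find_vqs(); passing NULL instead leaves the vectors unmanaged.
The helper name below is only illustrative.

#include <linux/interrupt.h>
#include <linux/virtio_config.h>

/*
 * Illustrative helper: with "managed" true this is effectively what
 * drivers/block/virtio_blk.c does today, and the irq_affinity descriptor
 * is what ultimately gets the queue vectors marked IRQD_AFFINITY_MANAGED.
 * With "managed" false the vectors stay ordinary, freely movable MSI-X
 * interrupts.
 */
static int vblk_setup_vqs(struct virtio_device *vdev, unsigned int num_vqs,
			  struct virtqueue *vqs[], vq_callback_t *callbacks[],
			  const char * const names[], bool managed)
{
	struct irq_affinity desc = { 0, };

	return virtio_find_vqs(vdev, num_vqs, vqs, callbacks, names,
			       managed ? &desc : NULL);
}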

As you know, even if we only use a small number of queues, like 1 or 2,
we still occupy 80 vectors. That is rather wasteful, and it makes it easy
to exhaust the IRQ resources.

IRQD_AFFINITY_MANAGED by itself is not the problem; the problem is many
devices all using IRQD_AFFINITY_MANAGED.
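
For reference, the managed property is applied by the PCI transport, not by
virtio_blk itself. Roughly (simplified from the virtio PCI MSI-X setup; the
function name here is illustrative):

#include <linux/interrupt.h>
#include <linux/pci.h>

static int vp_alloc_queue_vectors(struct pci_dev *pci_dev,
				  unsigned int nvectors,
				  struct irq_affinity *desc)
{
	unsigned int flags = PCI_IRQ_MSIX;

	/*
	 * Only when the driver supplied an irq_affinity descriptor does the
	 * transport request affinity-managed vectors from the PCI/MSI core;
	 * that is where IRQD_AFFINITY_MANAGED ends up being set on each
	 * queue interrupt.
	 */
	if (desc)
		flags |= PCI_IRQ_AFFINITY;

	return pci_alloc_irq_vectors_affinity(pci_dev, nvectors, nvectors,
					      flags, desc);
}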

Thanks.

> Thanks,
>
> tglx