From: Thomas Gleixner
Subject: RE: IRQ affinity problem from virtio_blk
Date: Wed, 16 Nov 2022
On Wed, Nov 16 2022 at 00:46, Angus Chen wrote:
>> On Wed, Nov 16 2022 at 00:04, Thomas Gleixner wrote:
>> >>> But then it also has another 79 vectors put aside for the other queues,
> Hmm, that is not the case. In fact, I have just one queue per virtio_blk.

Which does not matter. See my reply to Michael. It's ONE vector per CPU
and block device.
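
For illustration, a minimal sketch of why that is - a generic PCI
driver, not the actual virtio-pci code, and example_setup_irqs() is a
made-up name:

/*
 * Hypothetical example: requesting MSI-X vectors with PCI_IRQ_AFFINITY
 * makes the IRQ core spread the queue vectors over the CPUs and
 * reserve a vector on each CPU in every queue's affinity mask.  Even a
 * single-queue device therefore costs one vector on every CPU, and
 * once that per-CPU space is gone the request eventually fails with
 * -ENOSPC (-28), which is what vp_find_vqs_msix() hands up in the log
 * below.
 */
#include <linux/pci.h>
#include <linux/interrupt.h>

static int example_setup_irqs(struct pci_dev *pdev, unsigned int nr_queues)
{
	/* Keep the config/change interrupt out of the spreading */
	struct irq_affinity affd = { .pre_vectors = 1 };
	int nvecs;

	/* One vector per queue plus the config interrupt */
	nvecs = pci_alloc_irq_vectors_affinity(pdev, nr_queues + 1, nr_queues + 1,
					       PCI_IRQ_MSIX | PCI_IRQ_AFFINITY,
					       &affd);
	if (nvecs < 0)
		return nvecs;

	return 0;
}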

> Nov 14 11:48:45 localhost kernel: virtio_blk virtio181: 1/0/0 default/read/poll queues
> Nov 14 11:48:45 localhost kernel: virtio_blk virtio181: [vdpr] 20480 512-byte logical blocks (10.5 MB/10.0 MiB)
> Nov 14 11:48:46 localhost kernel: virtio-pci 0000:37:16.4: enabling device (0000 -> 0002)
> Nov 14 11:48:46 localhost kernel: virtio-pci 0000:37:16.4: virtio_pci: leaving for legacy driver
> Nov 14 11:48:46 localhost kernel: virtio_blk virtio182: 1/0/0 default/read/poll queues   <-- "virtio182" means device index 182
> Nov 14 11:48:46 localhost kernel: vp_find_vqs_msix return err=-28                        <-- the first time we get the 'no space' (-ENOSPC) error from the IRQ subsystem

That's close to 200 virtio devices and the vector space is exhausted.
Works as expected.

Interrupt vectors are a limited resource on x86 and not only on x86. Not
any different from any other resource.
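
Back of the envelope, with rough numbers (the exact counts depend on
kernel version and configuration): x86 has 256 IDT vectors per CPU, the
first 32 are exception vectors and a couple of dozen at the top are
system vectors (timer, IPIs, spurious, ...), which leaves on the order
of 200 vectors per CPU for device interrupts:

    256 vectors/CPU - 32 exceptions - ~20 system vectors  ~=  ~200 usable per CPU

    ~180 virtio-blk devices x 1 managed queue vector reserved on every CPU
    + the per-device config interrupts + whatever else the system already uses
        ->  the most loaded CPUs hit the ~200 limit  ->  -ENOSPC (-28)

So with the ~180 devices in the log above the outcome is exactly what
the math says it has to be.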

Thanks,

tglx






