From: Thomas Gleixner <>
Subject: Re: IRQ affinity problem from virtio_blk
Date: Wed, 16 Nov 2022 14:06:00 +0100
On Wed, Nov 16 2022 at 19:35, Ming Lei wrote:
> On Wed, Nov 16, 2022 at 11:43:24AM +0100, Thomas Gleixner wrote:
>> > Let's say we have 20 queues - then just 10 devices will exhaust the
>> > vector space right?
>>
>> No.
>>
>> If you have 20 queues then the queues are spread out over the
>> CPUs. Assume 80 CPUs:
>>
>> Then each queue is associated to 80/20 = 4 CPUs and the resulting
>> affinity mask of each queue contains exactly 4 CPUs:
>>
>> q0:  0 - 3
>> q1:  4 - 7
>> ...
>> q19: 76 - 79
>>
>> So this puts exactly 80 vectors aside, one per CPU.
>>
>> As long as at least one CPU of a queue mask is online the queue is
>> enabled. If the last CPU of a queue mask goes offline then the queue
>> is shutdown, which means the interrupt associated to the queue is
>> shut down too. That's all handled by the block MQ and the interrupt
>> core. If a CPU of a queue mask comes back online then the guaranteed
>> vector is allocated again.
>>
>> So it does not matter how many queues per device you have, it will
>> reserve exactly ONE interrupt per CPU.
>>
>> Ergo you need 200 devices to exhaust the vector space.
>
> I am wondering why one interrupt needs to be reserved for each CPU.
> In theory one queue needs one irq, I understand, so would you mind
> explaining the story a bit?
It's only one interrupt per queue. Interrupt != vector.
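To make the spreading above concrete, here is a minimal userspace sketch of the arithmetic. This is only an illustration of the even spread described in the quote, not the kernel's actual implementation (the real logic lives in irq_create_affinity_masks() and is more involved: NUMA-aware, remainder handling, etc.):

    /* Sketch: spread ncpus CPUs evenly over nqueues queues, as in the
     * 80-CPU / 20-queue example above. Each queue's affinity mask ends
     * up with 4 CPUs, and one vector is reserved per CPU in the mask,
     * i.e. exactly one per CPU overall regardless of the queue count. */
    #include <stdio.h>

    int main(void)
    {
            const int ncpus = 80, nqueues = 20;

            for (int q = 0; q < nqueues; q++) {
                    int first = q * ncpus / nqueues;
                    int last  = (q + 1) * ncpus / nqueues - 1;

                    printf("q%d: %d - %d\n", q, first, last);
            }
            return 0;
    }

Running this prints the same table as in the quoted mail (q0: 0 - 3 through q19: 76 - 79).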
The guarantee of managed interrupts has always been that if there are fewer queues than CPUs, CPU hotunplug cannot result in vector exhaustion.
Therefore we differentiate between managed and non-managed interrupts. Managed have a guaranteed reservation, non-managed do not.
That's been a very deliberate design decision from the very beginning.
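For illustration, this is roughly how a PCI driver opts into managed interrupts; a hedged sketch modeled on what drivers such as nvme do (the function name example_setup_irqs is hypothetical; virtio-blk reaches the same machinery through the virtio transport by passing a struct irq_affinity descriptor to virtio_find_vqs() instead):

    /* Sketch: request managed MSI-X vectors for nr_queues queues.
     * PCI_IRQ_AFFINITY makes the interrupt core spread the vectors
     * over the CPUs and gives each one the guaranteed, managed
     * per-CPU reservation discussed above. */
    #include <linux/pci.h>
    #include <linux/interrupt.h>

    static int example_setup_irqs(struct pci_dev *pdev,
                                  unsigned int nr_queues)
    {
            /* One extra non-managed vector (pre_vectors) for config
             * or admin events, as many drivers reserve. */
            struct irq_affinity affd = { .pre_vectors = 1 };

            return pci_alloc_irq_vectors_affinity(pdev, 1, nr_queues + 1,
                                                   PCI_IRQ_MSIX |
                                                   PCI_IRQ_AFFINITY,
                                                   &affd);
    }

The flag is the whole opt-in: with PCI_IRQ_AFFINITY the allocated queue vectors are managed (guaranteed reservation, shut down and re-established across CPU hotplug); without it they are ordinary, non-managed interrupts.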
Thanks,
tglx