Subject: Virtio-scsi multiqueue irq affinity
Hi, Christoph & all,

I noticed that starting from commit 0d9f0a52c8b9 ("virtio_scsi: use
virtio IRQ affinity", 2017-02-27) the virtio scsi driver uses a new
way (via irq_create_affinity_masks()) to automatically initialize the
IRQ affinities of its multi-queues, which differs from all the other
virtio devices (like virtio-net, which still uses
virtqueue_set_affinity(), i.e. effectively irq_set_affinity_hint()).
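
For context, the probe path after that commit looks roughly like the
sketch below (typed from memory rather than copied, so details may be
slightly off); the key point is that the affinity descriptor makes the
transport call irq_create_affinity_masks() and mark the queue vectors
as kernel-managed:

	/* New way (virtio-scsi): hand an irq_affinity descriptor to the
	 * transport, which spreads the request-queue vectors over the
	 * CPUs via irq_create_affinity_masks() and marks them managed,
	 * so userspace can no longer rebind them. */
	struct irq_affinity desc = { .pre_vectors = 2 }; /* ctrl + event */

	err = virtio_find_vqs(vdev, num_vqs, vqs, callbacks, names, &desc);

	/* Old way (still used by e.g. virtio-net): set a per-queue hint
	 * via virtqueue_set_affinity() / irq_set_affinity_hint(), which
	 * leaves /proc/irq/N/smp_affinity writable by the admin. */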

Firstly, this definitely breaks some existing userspace: scripts that
used to bind these IRQs explicitly now fail with -EIO every time they
echo a mask into /proc/irq/N/smp_affinity for any of the multi-queue
vectors (see write_irq_affinity()).
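
To be explicit about where the -EIO comes from: the proc handler
rejects writes to managed interrupts, roughly like this (quoting
kernel/irq/proc.c and kernel/irq/manage.c from memory, so treat it as
a sketch):

	static ssize_t write_irq_affinity(int type, struct file *file, ...)
	{
		unsigned int irq = (int)(long)PDE_DATA(file_inode(file));
		...
		/* Managed (auto-affinitized) vectors bail out right here */
		if (!irq_can_set_affinity_usr(irq) || no_irq_affinity)
			return -EIO;
		...
	}

	bool irq_can_set_affinity_usr(unsigned int irq)
	{
		struct irq_desc *desc = irq_to_desc(irq);

		/* false for descriptors flagged IRQD_AFFINITY_MANAGED */
		return __irq_can_set_affinity(desc) &&
			!irqd_affinity_is_managed(&desc->irq_data);
	}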

Is there any specific reason to do it the new way? AFAIU we should
still allow system admins to decide what to do for such
configurations, e.g., what if we only want to provision half of the
CPU resources to handle IRQs for a specific virtio-scsi controller?
We can't achieve that with the current policy. Or is this rather a
question for the IRQ subsystem (irq_create_affinity_masks()) in
general? Any special considerations behind the big picture?

I believe I must have missed some context here and there... but I'd
like to raise the question anyway. If the new way is indeed preferred,
maybe it would be worth spreading it to the rest of the virtio drivers
that support multi-queue as well.

Thanks,

--
Peter Xu
