Subject: Re: [PATCH v2] KVM: Extend MAX_IRQ_ROUTES to 4096 for all archs
2018-04-20 22:21 GMT+08:00 Cornelia Huck <cohuck@redhat.com>:
> On Fri, 20 Apr 2018 21:51:13 +0800
> Wanpeng Li <kernellwp@gmail.com> wrote:
>
>> 2018-04-20 15:15 GMT+08:00 Cornelia Huck <cohuck@redhat.com>:
>> > On Thu, 19 Apr 2018 17:47:28 -0700
>> > Wanpeng Li <kernellwp@gmail.com> wrote:
>> >
>> >> From: Wanpeng Li <wanpengli@tencent.com>
>> >>
>> >> Our virtual machines make use of device assignment by configuring
>> >> 12 NVMe disks for high I/O performance. Each NVMe device has 129
>> >> MSI-X Table entries:
>> >> Capabilities: [50] MSI-X: Enable+ Count=129 Masked- Vector table: BAR=0 offset=00002000
>> >> The Windows virtual machines fail to boot because they set up an MSI
>> >> routing table entry for each MSI-X table entry that the NVMe hardware
>> >> reports, which pushes the total past the limit of 1024. This patch
>> >> extends MAX_IRQ_ROUTES to 4096 for all archs; it might be extended
>> >> again in the future if needed.
>> >>
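As a quick back-of-the-envelope check of the numbers above (12 assigned
NVMe disks with 129 MSI-X vectors each, against the old generic limit of
1024 and the proposed 4096), here is a minimal standalone sketch; it is
only illustrative and not part of the patch:

/*
 * Rough check of the routing-table limits discussed in this thread:
 * 12 passed-through NVMe devices, 129 MSI-X vectors each (from the
 * lspci output quoted above).
 */
#include <stdio.h>

int main(void)
{
	const int nvme_devices = 12;	/* assigned NVMe disks per VM */
	const int msix_vectors = 129;	/* MSI-X: Count=129 per device */
	const int old_limit = 1024;	/* previous generic KVM_MAX_IRQ_ROUTES */
	const int new_limit = 4096;	/* limit proposed by this patch */
	const int entries = nvme_devices * msix_vectors;	/* 1548 */

	printf("routing entries needed: %d\n", entries);
	printf("fits old limit of %d: %s\n", old_limit,
	       entries <= old_limit ? "yes" : "no");
	printf("fits new limit of %d: %s\n", new_limit,
	       entries <= new_limit ? "yes" : "no");
	return 0;
}

That is 12 * 129 = 1548 entries, which already overruns 1024 but fits
comfortably within 4096.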
>> >> Cc: Paolo Bonzini <pbonzini@redhat.com>
>> >> Cc: Radim Krčmář <rkrcmar@redhat.com>
>> >> Cc: Tonny Lu <tonnylu@tencent.com>
>> >> Cc: Cornelia Huck <cohuck@redhat.com>
>> >> Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
>> >> Signed-off-by: Tonny Lu <tonnylu@tencent.com>
>> >> ---
>> >> v1 -> v2:
>> >> * extend MAX_IRQ_ROUTES to 4096 for all archs
>> >>
>> >> include/linux/kvm_host.h | 6 ------
>> >> 1 file changed, 6 deletions(-)
>> >>
>> >> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
>> >> index 6930c63..0a5c299 100644
>> >> --- a/include/linux/kvm_host.h
>> >> +++ b/include/linux/kvm_host.h
>> >> @@ -1045,13 +1045,7 @@ static inline int mmu_notifier_retry(struct kvm *kvm, unsigned long mmu_seq)
>> >>
>> >> #ifdef CONFIG_HAVE_KVM_IRQ_ROUTING
>> >>
>> >> -#ifdef CONFIG_S390
>> >> #define KVM_MAX_IRQ_ROUTES 4096 //FIXME: we can have more than that...
>> >
>> > What about /* might need extension/rework in the future */ instead of
>> > the FIXME?
>>
>> Yeah, I guess the maintainers can help to fix it when applying. :)
>>
>> >
>> > As far as I understand, 4096 should cover most architectures and the
>> > sane end of s390 configurations, but will not be enough at the scarier
>> > end of s390. (I'm not sure how much it matters in practice.)
>> >
>> > Do we want to make this a tuneable in the future? Do some kind of
>> > dynamic allocation? Not sure whether it is worth the trouble.
>>
>> I think we should keep it as it is for now.
>
> My main question here is how long this is enough... the number of
> virtqueues per device is up to 1K from the initial 64, which makes it
> possible to hit the 4K limit with fewer virtio devices than before (on
> s390, each virtqueue uses a routing table entry). OTOH, we don't want
> giant tables everywhere just to accommodate s390.
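Taking those figures at face value (one routing entry per virtqueue on
s390, up to 1024 virtqueues per device): 4096 / 1024 = 4 such devices are
enough to exhaust the proposed limit, whereas the earlier 64-virtqueue
maximum allowed 4096 / 64 = 64 devices.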

I suspect there is no real scenario that requires a further extension for
s390, since nobody has reported one.

> If the s390 maintainers tell me that nobody is doing the really insane
> stuff, I'm happy as well :)

Christian, any thoughts?

Regards,
Wanpeng Li
