Subject: Re: Commit 282d8998e997 (srcu: Prevent expedited GPs and blocking readers from consuming CPU) cause qemu boot slow
On 6/12/22 18:40, Paul E. McKenney wrote:
>> Do these reserved memory regions really need to be allocated separately?
>> (For example, are they really all non-contiguous? If not, that is, if
>> there are a lot of contiguous memory regions, could you sort the IORT
>> by address and do one ioctl() for each set of contiguous memory regions?)
>>
>> Are all of these reserved memory regions set up before init is spawned?
>>
>> Are all of these reserved memory regions set up while there is only a
>> single vCPU up and running?
>>
>> Is the SRCU grace period really needed in this case? (I freely confess
>> to not being all that familiar with KVM.)
>
> Oh, and there was a similar many-requests problem with networking many
> years ago. This was solved by adding a new syscall/ioctl()/whatever
> that permitted many requests to be presented to the kernel with a single
> system call.
>
> Could a new ioctl() be introduced that requested a large number
> of these memory regions in one go so as to make each call to
> synchronize_rcu_expedited() cover a useful fraction of your 9000+
> requests? Adding a few of the KVM guys on CC for their thoughts.

Unfortunately not. Apart from this specific case, in general the calls
to KVM_SET_USER_MEMORY_REGION are triggered by writes to I/O registers
in the guest, and those writes then map to an ioctl. Typically the guest
sets up one device at a time, and each setup step causes a
synchronize_srcu()---and an expedited one at that.
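
To make the pattern concrete, here is a rough sketch of the update side
(names and details are simplified from virt/kvm/kvm_main.c, so treat it
as an illustration rather than the exact code): publish the new memslots
array, then wait for an expedited SRCU grace period before the old one
can be freed.

#include <linux/kvm_host.h>
#include <linux/srcu.h>

static void install_memslots_sketch(struct kvm *kvm, int as_id,
				    struct kvm_memslots *new)
{
	/* The old array; kvm->slots_lock serializes updaters. */
	struct kvm_memslots *old = rcu_dereference_protected(
			kvm->memslots[as_id],
			lockdep_is_held(&kvm->slots_lock));

	/* Publish the new array to readers... */
	rcu_assign_pointer(kvm->memslots[as_id], new);

	/*
	 * ...and wait for every reader of the old one to finish.  One of
	 * these per KVM_SET_USER_MEMORY_REGION -- 9000+ of them during
	 * the boot described in this thread.
	 */
	synchronize_srcu_expedited(&kvm->srcu);

	kvfree(old);
}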

KVM has two SRCUs:

1) kvm->irq_srcu hardly relies on the "sleepable" part; its read-side
critical sections are very small, but it needs extremely fast detection
of grace periods; see commit 719d93cd5f5c ("kvm/irqchip: Speed up
KVM_SET_GSI_ROUTING", 2014-05-05), which split it off from kvm->srcu.
Readers are not all that frequent.

2) kvm->srcu is nastier because there are readers all the time. The
read-side critical sections are still short-ish, but they need the
sleepable part because they access user memory. (Sketches of both
reader patterns follow below.)
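
For reference, the reader side of the two looks roughly like this
(again simplified, with illustrative function names rather than the
exact kernel code):

/* 1) kvm->irq_srcu: a tiny critical section around the routing table. */
static void irqfd_inject_sketch(struct kvm *kvm, int gsi)
{
	struct kvm_irq_routing_table *irq_rt;
	int idx;

	idx = srcu_read_lock(&kvm->irq_srcu);
	irq_rt = srcu_dereference(kvm->irq_routing, &kvm->irq_srcu);
	/* ... look up gsi in irq_rt and deliver the interrupt ... */
	srcu_read_unlock(&kvm->irq_srcu, idx);
}

/* 2) kvm->srcu: still short, but it may fault in user memory and sleep. */
static void vcpu_touch_guest_memory_sketch(struct kvm_vcpu *vcpu, gpa_t gpa)
{
	int idx = srcu_read_lock(&vcpu->kvm->srcu);

	/* ... gfn_to_hva() + copy_from_user(), which can block on a fault ... */
	srcu_read_unlock(&vcpu->kvm->srcu, idx);
}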

Writers are not frequent per se; the problem is that they come in very
large bursts when a guest boots. And while the whole boot path overall
can be quadratic, the O(n) expensive calls to synchronize_srcu() can
have a larger impact on runtime than the O(n^2) parts, as demonstrated
here.
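
To put rough (and purely illustrative, assumed) numbers on that: 9000+
memslot updates at boot, at around a millisecond each once the expedited
grace period has to wait for readers, is on the order of ten seconds
spent in synchronize_srcu_expedited() alone, which easily dwarfs the
per-iteration cost of the quadratic parts at this n.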

Therefore, we operated on the assumption that the callers of
synchronize_srcu_expedited() were _anyway_ busy running CPU-bound guest
code, and that the desire was to get past the booting phase as fast as
possible. If the guest wants to eat host CPU it can "for(;;)" as much
as it wants; therefore, as long as expedited GPs didn't eat CPU
*throughout the whole system*, a preemptible busy wait in
synchronize_srcu_expedited() was not problematic.

These assumptions did match the SRCU code when kvm->srcu and
kvm->irq_srcu were introduced (in 2009 and 2014, respectively). But
perhaps they no longer hold now that each SRCU is not as independent as
it used to be in those years, and instead relies on workqueues?

Thanks,

Paolo
