Date: Sun, 21 Apr 2013 22:09:29 +0800
From: Xiao Guangrong <>
Subject: Re: [PATCH v3 00/15] KVM: MMU: fast zap all shadow pages
On 04/21/2013 09:03 PM, Gleb Natapov wrote:
> On Tue, Apr 16, 2013 at 02:32:38PM +0800, Xiao Guangrong wrote:
>> This patchset is based on my previous two patchset:
>> [PATCH 0/2] KVM: x86: avoid potential soft lockup and unneeded mmu reload
>> (https://lkml.org/lkml/2013/4/1/2)
>>
>> [PATCH v2 0/6] KVM: MMU: fast invalid all mmio sptes
>> (https://lkml.org/lkml/2013/4/1/134)
>>
>> Changlog:
>> V3:
>>   completely redesign the algorithm, please see below.
>>
> This looks pretty complicated. Is it still needed in order to avoid soft
> lockups after "avoid potential soft lockup and unneeded mmu reload" patch?
Yes.
I discussed this point with Marcelo:
======
BTW, to be honest, I do not think spin_needbreak is a good approach - it does not fix the hot-lock contention, it just burns more cpu time to avoid possible soft lock-ups.

In particular, zap-all-shadow-pages lets other vcpus fault and contend for mmu-lock; when zap-all-shadow-pages releases mmu-lock and waits, those vcpus create page tables again. zap-all-shadow-pages therefore needs a long time to finish, and in the worst case it may never complete under intensive vcpu and memory usage.

I still think the right way to fix this kind of thing is to optimize mmu-lock usage.
======
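For context, the spin_needbreak-based approach criticized above amounts to a
reschedule-friendly zap loop. A simplified sketch of that pattern (assuming the
usual kvm_mmu_prepare_zap_page()/kvm_mmu_commit_zap_page() helpers; this is not
the exact code from that patch) looks like this:

	/*
	 * Sketch of the spin_needbreak-based zap loop: mmu_lock is dropped
	 * whenever another vcpu is spinning on it, which lets the faulting
	 * vcpus repopulate active_mmu_pages before the loop resumes, so on
	 * a busy guest the zap may take a very long time to converge.
	 */
	void kvm_mmu_zap_all(struct kvm *kvm)
	{
		struct kvm_mmu_page *sp, *node;
		LIST_HEAD(invalid_list);

		spin_lock(&kvm->mmu_lock);
	restart:
		list_for_each_entry_safe(sp, node,
					 &kvm->arch.active_mmu_pages, link) {
			/* Yield the lock if someone else needs it. */
			if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
				kvm_mmu_commit_zap_page(kvm, &invalid_list);
				cond_resched_lock(&kvm->mmu_lock);
				goto restart;
			}

			if (kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list))
				goto restart;
		}

		kvm_mmu_commit_zap_page(kvm, &invalid_list);
		spin_unlock(&kvm->mmu_lock);
	}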
Which parts scare you? Let's find a way to optimize them. ;) For example, if you do not like unmap_memslot_rmap_nolock(), we can simplify it - we can use walk_shadow_page_lockless_begin() and walk_shadow_page_lockless_end() to protect the sptes instead of kvm->being_unmaped_rmap.
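To illustrate, the lockless bracket is already used on the fault paths to read
sptes safely; a rough sketch of that usage (the helper name
get_last_spte_lockless() is only for illustration, modeled on the existing
mmio-spte walk) would be:

	/*
	 * Walk the shadow page table for 'addr' without taking mmu_lock and
	 * return the last present spte.  The begin/end pair keeps the vcpu
	 * in READING_SHADOW_PAGE_TABLES so concurrent teardown waits for us.
	 */
	static u64 get_last_spte_lockless(struct kvm_vcpu *vcpu, u64 addr)
	{
		struct kvm_shadow_walk_iterator iterator;
		u64 spte = 0ull;

		walk_shadow_page_lockless_begin(vcpu);
		for_each_shadow_entry_lockless(vcpu, addr, iterator, spte)
			if (!is_shadow_present_pte(spte))
				break;
		walk_shadow_page_lockless_end(vcpu);

		return spte;
	}

The idea would be to reuse the same bracket around the nolock rmap unmap instead
of introducing kvm->being_unmaped_rmap.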
Thanks!