    From: Xiao Guangrong
    Subject: Re: [PATCH v3 00/15] KVM: MMU: fast zap all shadow pages

    On 04/21/2013 11:24 PM, Marcelo Tosatti wrote:
    > On Sun, Apr 21, 2013 at 10:09:29PM +0800, Xiao Guangrong wrote:
    >> On 04/21/2013 09:03 PM, Gleb Natapov wrote:
    >>> On Tue, Apr 16, 2013 at 02:32:38PM +0800, Xiao Guangrong wrote:
    >>>> This patchset is based on my previous two patchset:
    >>>> [PATCH 0/2] KVM: x86: avoid potential soft lockup and unneeded mmu reload
    >>>> (https://lkml.org/lkml/2013/4/1/2)
    >>>>
    >>>> [PATCH v2 0/6] KVM: MMU: fast invalid all mmio sptes
    >>>> (https://lkml.org/lkml/2013/4/1/134)
    >>>>
    >>>> Changelog:
    >>>> V3:
    >>>> Completely redesigned the algorithm; please see below.
    >>>>
    >>> This looks pretty complicated. Is it still needed in order to avoid soft
    >>> lockups after the "avoid potential soft lockup and unneeded mmu reload" patch?
    >>
    >> Yes.
    >>
    >> I discussed this point with Marcelo:
    >>
    >> ======
    >> BTW, to be honest, I do not think spin_needbreak is a good approach - it
    >> does not fix the hot-lock contention; it just burns more CPU time to avoid
    >> possible soft lock-ups.
    >>
    >> In particular, zap-all-shadow-pages lets other vcpus fault and contend for
    >> mmu-lock; when zap-all-shadow-pages releases mmu-lock and waits, the other
    >> vcpus create page tables again. So zap-all-shadow-pages takes a long time
    >> to finish, and in the worst case, under intensive vcpu and memory usage,
    >> it may never complete at all.
    >>
    >> I still think the right way to fix this kind of thing is to optimize
    >> mmu-lock.
    >> ======
    >>
    >> Which parts scare you? Let's find a way to optimize them. ;) For example,
    >> if you do not like unmap_memslot_rmap_nolock(), we can simplify it - we can
    >> use walk_shadow_page_lockless_begin() and walk_shadow_page_lockless_end() to
    >> protect the sptes instead of kvm->being_unmaped_rmap (see the sketch at the
    >> end of this message).
    >>
    >> Thanks!
    >
    > Xiao,
    >
    > You can just remove all shadow rmaps now that you've agreed per-memslot
    > flushes are not necessary. That then gets rid of the need for lockless
    > rmap accesses. Right?
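
    [For reference, the spin_needbreak-based lock-break pattern being debated
    above looks roughly like this - a minimal sketch modeled on
    kvm_mmu_zap_all() in arch/x86/kvm/mmu.c around v3.9 plus the "avoid
    potential soft lockup" series; the exact structure of the real patches
    may differ:

        /*
         * Zap every shadow page, but drop mmu_lock whenever another
         * vcpu is spinning on it (spin_needbreak) or we should yield
         * the cpu (need_resched), to avoid soft lockups.
         */
        void kvm_mmu_zap_all(struct kvm *kvm)
        {
                struct kvm_mmu_page *sp, *node;
                LIST_HEAD(invalid_list);

                spin_lock(&kvm->mmu_lock);
        restart:
                list_for_each_entry_safe(sp, node,
                                         &kvm->arch.active_mmu_pages, link) {
                        if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
                                /*
                                 * Lock break: faulting vcpus grab mmu_lock
                                 * and rebuild page tables, so on a busy
                                 * guest this loop may take a very long time
                                 * to drain - the contention Xiao describes.
                                 */
                                kvm_mmu_commit_zap_page(kvm, &invalid_list);
                                cond_resched_lock(&kvm->mmu_lock);
                                goto restart;
                        }
                        if (kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list))
                                goto restart;
                }
                kvm_mmu_commit_zap_page(kvm, &invalid_list);
                spin_unlock(&kvm->mmu_lock);
        }
    ]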

    Hi Marcelo,

    I am worried about:

    ======
    We can not release all rmaps. If we do, ->invalidate_page and
    ->invalidate_range_start can not find any spte using the host page,
    which means the Accessed/Dirty state of the host page is no longer
    tracked (kvm_set_pfn_accessed and kvm_set_pfn_dirty are no longer
    called properly).

    [https://lkml.org/lkml/2013/4/18/358]
    ======
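
    Concretely, the dependency is that the mmu notifiers can only reach
    sptes through the rmaps. The call chain (function names as in
    arch/x86/kvm/mmu.c around v3.9; a rough sketch, not the exact code) is:

        kvm_mmu_notifier_invalidate_page()
          -> kvm_unmap_hva()
            -> kvm_handle_hva()            /* walks the memslot rmaps */
              -> kvm_unmap_rmapp()         /* drops each spte found */
                -> drop_spte()
                  -> mmu_spte_clear_track_bits()
                    -> kvm_set_pfn_accessed() / kvm_set_pfn_dirty()

    Without the rmaps, kvm_handle_hva() has no way to find the sptes that
    map a given hva, so the final Accessed/Dirty propagation never happens.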

    Do you think this is an issue? What's your suggestion?

    Thanks!
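
    [For reference, the lockless-walk pattern mentioned above is the one
    used by fast_page_fault() in arch/x86/kvm/mmu.c; a minimal sketch,
    where read_last_spte() is a hypothetical helper and the other names
    are real:

        /*
         * walk_shadow_page_lockless_begin() sets vcpu->mode to
         * READING_SHADOW_PAGE_TABLES, which forces a concurrent zap's
         * kvm_flush_remote_tlbs() IPI to wait for us, so the shadow
         * pages being walked cannot be freed under our feet.
         */
        static u64 read_last_spte(struct kvm_vcpu *vcpu, u64 addr)
        {
                struct kvm_shadow_walk_iterator iterator;
                u64 spte = 0ull;

                walk_shadow_page_lockless_begin(vcpu);
                for_each_shadow_entry_lockless(vcpu, addr, iterator, spte)
                        if (!is_shadow_present_pte(spte))
                                break;
                walk_shadow_page_lockless_end(vcpu);

                return spte;
        }
    ]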


