    Subject: Re: [PATCH v7 10/11] KVM: MMU: collapse TLB flushes when zap all pages
    On Thu, May 23, 2013 at 03:55:59AM +0800, Xiao Guangrong wrote:
    > kvm_zap_obsolete_pages uses the lock-break technique to zap pages,
    > and it flushes the TLB every time it breaks the lock.
    >
    > We can reload the MMU on all vcpus after updating the generation
    > number, so that the obsolete pages are not used on any vcpu;
    > after that we do not need to flush the TLB when the obsolete
    > pages are zapped

    After that point, batching is also no longer relevant?


    Still concerned about a similar case mentioned earlier:

    "
    Note the account for pages freed step after pages are actually
    freed: as discussed with Takuya, having pages freed and freed page
    accounting out of sync across mmu_lock is potentially problematic:
    kvm->arch.n_used_mmu_pages and friends do not reflect reality which can
    cause problems for SLAB freeing and page allocation throttling.
    "

    This is a real problem: if you decrease n_used_mmu_pages at
    kvm_mmu_prepare_zap_page but only actually free the pages later at
    kvm_mmu_commit_zap_page, a huge number of pages can be retained on
    invalid_list. There should be a maximum number of pages allowed on
    invalid_list.

    (The chance is even higher if you schedule out without freeing the pages
    already reported as released!)
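
    As an illustration only (this is not part of the posted series, and
    KVM_MAX_INVALID_PAGES / nr_prepared are made-up names), one way to bound
    that window is to force a commit once too many prepared-but-not-yet-freed
    pages have piled up:

	#define KVM_MAX_INVALID_PAGES	128

	int nr_prepared = 0;

	list_for_each_entry_safe_reverse(sp, node,
				&kvm->arch.active_mmu_pages, link) {
		/* ... obsolete/invalid checks as in the patch ... */

		nr_prepared += kvm_mmu_prepare_zap_obsolete_page(kvm, sp,
								 &invalid_list);

		/*
		 * n_used_mmu_pages was already decremented by the prepare
		 * step, so do not let more than KVM_MAX_INVALID_PAGES pages
		 * sit on invalid_list before they are really freed (and the
		 * TLB flushed) by kvm_mmu_commit_zap_page.
		 */
		if (nr_prepared >= KVM_MAX_INVALID_PAGES) {
			kvm_mmu_commit_zap_page(kvm, &invalid_list);
			nr_prepared = 0;
		}
	}

    That way the accounting and the actual freeing can never be out of sync
    by more than KVM_MAX_INVALID_PAGES pages, at the cost of an extra TLB
    flush whenever the cap is hit.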

    > Note: kvm_mmu_commit_zap_page is still needed before freeing
    > the pages, since other vcpus may be doing lockless shadow
    > page walking
    >
    > Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
    > ---
    > arch/x86/kvm/mmu.c | 32 ++++++++++++++++++++++----------
    > 1 files changed, 22 insertions(+), 10 deletions(-)
    >
    > diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
    > index e676356..5e34056 100644
    > --- a/arch/x86/kvm/mmu.c
    > +++ b/arch/x86/kvm/mmu.c
    > @@ -4237,8 +4237,6 @@ static void kvm_zap_obsolete_pages(struct kvm *kvm)
    >  restart:
    >  	list_for_each_entry_safe_reverse(sp, node,
    >  	      &kvm->arch.active_mmu_pages, link) {
    > -		int ret;
    > -
    >  		/*
    >  		 * No obsolete page exists before new created page since
    >  		 * active_mmu_pages is the FIFO list.
    > @@ -4254,21 +4252,24 @@ restart:
    >  		if (sp->role.invalid)
    >  			continue;
    >
    > +		/*
    > +		 * Need not flush tlb since we only zap the sp with invalid
    > +		 * generation number.
    > +		 */
    >  		if (batch >= BATCH_ZAP_PAGES &&
    > -		      (need_resched() || spin_needbreak(&kvm->mmu_lock))) {
    > +		      cond_resched_lock(&kvm->mmu_lock)) {
    >  			batch = 0;
    > -			kvm_mmu_commit_zap_page(kvm, &invalid_list);
    > -			cond_resched_lock(&kvm->mmu_lock);
    >  			goto restart;
    >  		}
    >
    > -		ret = kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);
    > -		batch += ret;
    > -
    > -		if (ret)
    > -			goto restart;
    > +		batch += kvm_mmu_prepare_zap_obsolete_page(kvm, sp,
    > +							   &invalid_list);
    >  	}
    >
    > +	/*
    > +	 * Should flush tlb before free page tables since lockless-walking
    > +	 * may use the pages.
    > +	 */
    >  	kvm_mmu_commit_zap_page(kvm, &invalid_list);
    >  }
    >
    > @@ -4287,6 +4288,17 @@ void kvm_mmu_invalidate_zap_all_pages(struct kvm *kvm)
    >  	trace_kvm_mmu_invalidate_zap_all_pages(kvm);
    >  	kvm->arch.mmu_valid_gen++;
    >
    > +	/*
    > +	 * Notify all vcpus to reload its shadow page table
    > +	 * and flush TLB. Then all vcpus will switch to new
    > +	 * shadow page table with the new mmu_valid_gen.
    > +	 *
    > +	 * Note: we should do this under the protection of
    > +	 * mmu-lock, otherwise, vcpu would purge shadow page
    > +	 * but miss tlb flush.
    > +	 */
    > +	kvm_reload_remote_mmus(kvm);
    > +
    >  	kvm_zap_obsolete_pages(kvm);
    >  	spin_unlock(&kvm->mmu_lock);
    >  }
    > --
    > 1.7.7.6

