Date: Tue, 8 Mar 2022
From: Sean Christopherson
Subject: Re: [PATCH v4 21/30] KVM: x86/mmu: Zap invalidated roots via asynchronous worker
On Sat, Mar 05, 2022, Paolo Bonzini wrote:
> On 3/5/22 01:34, Sean Christopherson wrote:
> > On Fri, Mar 04, 2022, Paolo Bonzini wrote:
> > > On 3/4/22 17:02, Sean Christopherson wrote:
> > > > On Fri, Mar 04, 2022, Paolo Bonzini wrote:
> > > > > On 3/3/22 22:32, Sean Christopherson wrote:
> > > > > I didn't remove the paragraph from the commit message, but I think it's
> > > > > unnecessary now. The workqueue is flushed in kvm_mmu_zap_all_fast() and
> > > > > kvm_mmu_uninit_tdp_mmu(), unlike the buggy patch, so it doesn't need to take
> > > > > a reference to the VM.
> > > > >
> > > > > I think I don't even need to check kvm->users_count in the defunct root
> > > > > case, as long as kvm_mmu_uninit_tdp_mmu() flushes and destroys the workqueue
> > > > > before it checks that the lists are empty.
> > > >
> > > > Yes, that should work. IIRC, the WARN_ONs will tell us/you quite quickly if
> > > > we're wrong :-) mmu_notifier_unregister() will call the "slow" kvm_mmu_zap_all()
> > > > and thus ensure all non-root pages are zapped, but "leaking" a worker will trigger
> > > > the WARN_ON that checks that no roots remain on the list.
> > >
> > > Good, for the record these are the commit messages I have:
>
> I'm seeing some hangs in ~50% of installation jobs, both Windows and Linux.
> I have not yet tried to reproduce outside the automated tests, or to bisect,
> but I'll try to push at least the first part of the series for 5.18.

Out of curiosity, what was the bug? I see this got pushed to kvm/next.
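
For anyone following along, the flush-before-check ordering being discussed
would look roughly like this. Treat it as a sketch, not the merged code: the
tdp_mmu_zap_wq field and the list names are taken from the posted series and
may not match what landed in kvm/next.

void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm)
{
        /*
         * destroy_workqueue() drains all queued work items before
         * tearing the workqueue down, so a defunct-root worker can
         * never outlive the VM and doesn't need to take a reference
         * to it or check kvm->users_count.
         */
        destroy_workqueue(kvm->arch.tdp_mmu_zap_wq);

        /*
         * mmu_notifier_unregister() already did the "slow"
         * kvm_mmu_zap_all(), so with the workqueue drained, a
         * non-empty list here means a worker (and thus a root)
         * was leaked.
         */
        WARN_ON(!list_empty(&kvm->arch.tdp_mmu_pages));
        WARN_ON(!list_empty(&kvm->arch.tdp_mmu_roots));
}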
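
The other flush mentioned above, the one in the kvm_mmu_zap_all_fast() path,
would then amount to the below (the helper name and field again follow the
series):

/* Called from kvm_mmu_zap_all_fast() after the roots are invalidated. */
void kvm_tdp_mmu_zap_invalidated_roots(struct kvm *kvm)
{
        /*
         * Wait for the workers to finish zapping the invalidated
         * roots before the memslot update completes, so the guest
         * can't keep using SPTEs that translate the old memslots.
         */
        flush_workqueue(kvm->arch.tdp_mmu_zap_wq);
}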
