Subject: Re: [PATCH v1 1/5] KVM: arm64: Enable ring-based dirty memory tracking
On Fri, 26 Aug 2022 11:58:08 +0100,
Paolo Bonzini <pbonzini@redhat.com> wrote:
>
> On 8/23/22 22:35, Marc Zyngier wrote:
> >> Heh, yeah I need to get that out the door. I'll also note that Gavin's
> >> changes are still relevant without that series, as we do write unprotect
> >> in parallel at PTE granularity after commit f783ef1c0e82 ("KVM: arm64:
> >> Add fast path to handle permission relaxation during dirty logging").
> >
> > Ah, true. Now if only someone could explain how the whole
> > producer-consumer thing works without a trace of a barrier, that'd be
> > great...
>
> Do you mean this?
>
> void kvm_dirty_ring_push(struct kvm_dirty_ring *ring, u32 slot, u64 offset)

Of course not. I mean this:

static int kvm_vm_ioctl_reset_dirty_pages(struct kvm *kvm)
{
	unsigned long i;
	struct kvm_vcpu *vcpu;
	int cleared = 0;

	if (!kvm->dirty_ring_size)
		return -EINVAL;

	mutex_lock(&kvm->slots_lock);

	kvm_for_each_vcpu(i, vcpu, kvm)
		cleared += kvm_dirty_ring_reset(vcpu->kvm, &vcpu->dirty_ring);
	[...]
}

and this

int kvm_dirty_ring_reset(struct kvm *kvm, struct kvm_dirty_ring *ring)
{
	u32 cur_slot, next_slot;
	u64 cur_offset, next_offset;
	unsigned long mask;
	int count = 0;
	struct kvm_dirty_gfn *entry;
	bool first_round = true;

	/* This is only needed to make compilers happy */
	cur_slot = cur_offset = mask = 0;

	while (true) {
		entry = &ring->dirty_gfns[ring->reset_index & (ring->size - 1)];

		if (!kvm_dirty_gfn_harvested(entry))
			break;

		[...]
	}

which provides no ordering whatsoever when a ring is updated from one
CPU and reset from another.
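
Purely as an illustration (made-up names, C11 atomics rather than the
kernel's actual helpers), this is the kind of release/acquire pairing a
producer/consumer handoff like this normally relies on, so that the side
checking the flag cannot read the slot/offset before it observes the flag:

/*
 * Illustration only: a standalone C11 sketch of release/acquire
 * pairing for a flag-published ring entry.  None of these names
 * correspond to the actual KVM code.
 */
#include <stdatomic.h>
#include <stdint.h>

#define DIRTY_GFN_F_DIRTY	0x1u

struct dirty_gfn {
	_Atomic uint32_t flags;		/* published-state bits */
	uint32_t slot;
	uint64_t offset;
};

/* Producer: fill the payload, then publish it with release semantics. */
static void push_entry(struct dirty_gfn *e, uint32_t slot, uint64_t offset)
{
	e->slot = slot;
	e->offset = offset;
	/* Release guarantees slot/offset are visible before the flag is. */
	atomic_store_explicit(&e->flags, DIRTY_GFN_F_DIRTY,
			      memory_order_release);
}

/* Consumer: acquire-load the flag before trusting the payload. */
static int pop_entry(struct dirty_gfn *e, uint32_t *slot, uint64_t *offset)
{
	uint32_t flags = atomic_load_explicit(&e->flags,
					      memory_order_acquire);

	if (!(flags & DIRTY_GFN_F_DIRTY))
		return 0;	/* nothing published yet */

	/* Ordered after the acquire above, so the payload is stable. */
	*slot = e->slot;
	*offset = e->offset;
	return 1;
}

/* Single-threaded smoke test; the ordering only matters across CPUs. */
int main(void)
{
	struct dirty_gfn e = { 0 };
	uint32_t slot;
	uint64_t offset;

	push_entry(&e, 3, 0x42);
	return pop_entry(&e, &slot, &offset) ? 0 : 1;
}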

M.

--
Without deviation from the norm, progress is not possible.
