Subject: Re: [PATCH v1 1/5] KVM: arm64: Enable ring-based dirty memory tracking
From: Paolo Bonzini
Date: Sat, 27 Aug 2022
On 8/26/22 17:49, Marc Zyngier wrote:
>> Agreed, but that's a problem for userspace to solve. If userspace
>> wants to reset the fields in different CPUs, it has to synchronize
>> with its own invoking of the ioctl.
>
> userspace has no choice. It cannot order on its own the reads that the
> kernel will do to *other* rings.

However, those reads will never see KVM_DIRTY_GFN_F_RESET in the flags
if userspace has never interacted with the ring. So there will be
exactly one read on each of those rings, and there is nothing to reorder.
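
For reference, the userspace half of the protocol looks roughly like
this (a sketch, not QEMU's actual code: the struct and flag names are
from <linux/kvm.h>, but harvest_ring() and the fetch-index bookkeeping
are invented for illustration):

#include <linux/kvm.h>
#include <stdint.h>
#include <sys/ioctl.h>

/* Harvest one vCPU's mapped ring, then ask the kernel to reset. */
static void harvest_ring(int vm_fd, struct kvm_dirty_gfn *ring,
			 uint32_t ring_size, uint32_t *fetch)
{
	for (;;) {
		struct kvm_dirty_gfn *e = &ring[*fetch % ring_size];

		/* Pairs with the vCPU's release store of
		 * KVM_DIRTY_GFN_F_DIRTY when publishing the entry. */
		if (!(__atomic_load_n(&e->flags, __ATOMIC_ACQUIRE)
		      & KVM_DIRTY_GFN_F_DIRTY))
			break;

		/* ... record e->slot / e->offset in a local bitmap ... */

		/* The kernel only ever sees RESET on entries that
		 * userspace has actually collected. */
		__atomic_store_n(&e->flags, KVM_DIRTY_GFN_F_RESET,
				 __ATOMIC_RELEASE);
		(*fetch)++;
	}
	ioctl(vm_fd, KVM_RESET_DIRTY_RINGS, 0);
}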

If that's too tricky and you want to add a load-acquire, I have no
objection though. It also helps avoid read-read reordering from one
entry's flags to the next one's, so it's a good idea to have it anyway.
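
On the kernel side that would be roughly a one-line change in the
harvested check, something like this (a sketch of
virt/kvm/dirty_ring.c; the existing helper uses READ_ONCE() here):

/* An acquire read of the flags orders it before any subsequent reads
 * of the entry, and pairs with userspace's release store of
 * KVM_DIRTY_GFN_F_RESET. */
static inline bool kvm_dirty_gfn_harvested(struct kvm_dirty_gfn *gfn)
{
	return smp_load_acquire(&gfn->flags) & KVM_DIRTY_GFN_F_RESET;
}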

>> The main reason why I preferred a global KVM_RESET_DIRTY_RINGS ioctl
>> was because it takes kvm->slots_lock so the execution would be
>> serialized anyway. Turning slots_lock into an rwsem would be even
>> worse because it also takes kvm->mmu_lock (since slots_lock is a
>> mutex, at least two concurrent invocations won't clash with each other
>> on the mmu_lock).
>
> Whatever the reason, the behaviour should be identical on all
> architectures. As it is, it only really works on x86, and I contend
> this is a bug that needs fixing.
>
> Thankfully, this can be done at zero cost for x86, and at that of a
> set of load-acquires on other architectures.

Yes, the global-ness of the API is orthogonal to the memory ordering
issue. I just wanted to explain why a per-vCPU API probably isn't going
to work well.

Paolo
