From: Maciej S. Szmigiero
Subject: Re: [PATCH v5 01/13] KVM: x86: Cache total page count to avoid traversing the memslot array
Date: 2021-11-03
On 01.11.2021 23:29, Sean Christopherson wrote:
> On Wed, Oct 20, 2021, Sean Christopherson wrote:
>> On Wed, Oct 20, 2021, Maciej S. Szmigiero wrote:
>>> On 20.10.2021 00:24, Sean Christopherson wrote:
>>>> E.g. the whole thing can be
>>>>
>>>> 	if (!kvm->arch.n_requested_mmu_pages &&
>>>> 	    (change == KVM_MR_CREATE || change == KVM_MR_DELETE)) {
>>>> 		unsigned long nr_mmu_pages;
>>>>
>>>> 		if (change == KVM_MR_CREATE) {
>>>> 			kvm->arch.n_memslots_pages += new->npages;
>>>> 		} else {
>>>> 			WARN_ON(kvm->arch.n_memslots_pages < old->npages);
>>>> 			kvm->arch.n_memslots_pages -= old->npages;
>>>> 		}
>>>>
>>>> 		nr_mmu_pages = (unsigned long)kvm->arch.n_memslots_pages;
>>>> 		nr_mmu_pages *= (KVM_PERMILLE_MMU_PAGES / 1000);
>>>
>>> The above line will set nr_mmu_pages to zero since KVM_PERMILLE_MMU_PAGES
>>> is 20, so integer-dividing it by 1000 results in a multiplication
>>> coefficient of zero.
>>
>> Ugh, math. And thus do_div() to avoid the whole 64-bit divide issue on 32-bit KVM.
>> Bummer.
>
> I was revisiting this today because (a) simply making n_memslots_pages a u64 doesn't
> cleanly handle the case where the resulting nr_mmu_pages would wrap,

Handling this case without capping the total n_memslots_pages would require
either capping the computed memslots_pages on 32-bit KVM so that it fits in
32 bits, or changing kvm_mmu_change_mmu_pages() and all the logic further
down to accept u64s.
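
For illustration, a minimal sketch (untested, assuming the total count stays
a u64) of a multiply-before-divide variant that avoids both the permille
truncation and the 64-bit division problem on 32-bit hosts via do_div():

	u64 nr_mmu_pages;

	/*
	 * Multiply first so that KVM_PERMILLE_MMU_PAGES (20) is not
	 * truncated to zero by the integer division by 1000.
	 */
	nr_mmu_pages = kvm->arch.n_memslots_pages * KVM_PERMILLE_MMU_PAGES;
	/* do_div() handles the u64-by-u32 division safely on 32-bit. */
	do_div(nr_mmu_pages, 1000);
	nr_mmu_pages = max_t(u64, nr_mmu_pages, KVM_MIN_ALLOC_MMU_PAGES);

This still leaves nr_mmu_pages as a u64, though, which is exactly the part
that would ripple into kvm_mmu_change_mmu_pages() and the logic below it.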

> (b) any fix
> in that area should really go in a separate patch to fix
> kvm_mmu_calculate_default_mmu_pages() and then carry that behavior forward.
>
> But as I dove deeper (and deeper), I came to the conclusion that supporting a
> total number of memslot pages that doesn't fit in an unsigned long is a waste of
> our time. With a 32-bit kernel, userspace can at most address 3gb of virtual
> memory, whereas wrapping the total number of pages would require 4tb+ of guest
> physical memory. Even with x86's second address space for SMM, that means userspace
> would need to alias all of guest memory more than one _thousand_ times. And on
> older hardware with MAXPHYADDR < 43, the guest couldn't actually access any of those
> aliases even if userspace lied about guest.MAXPHYADDR.
>
> So unless I'm missing something, or PPC or MIPS has some crazy way for a 32-bit
> host to support 4TB of guest memory, my vote would be to explicitly disallow
> creating more memslot pages than can fit in an unsigned long. Then x86 KVM could
> reuse the cached nr_memslot_pages, and we wouldn't have to update a big pile of
> x86 MMU code to support a scenario that, practically speaking, is useless.
>
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 72b329e82089..acabdbdef5cf 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -552,6 +552,7 @@ struct kvm {
>  	 */
>  	struct mutex slots_arch_lock;
>  	struct mm_struct *mm; /* userspace tied to this vm */
> +	unsigned long nr_memslot_pages;
>  	struct kvm_memslots __rcu *memslots[KVM_ADDRESS_SPACE_NUM];
>  	struct kvm_vcpu *vcpus[KVM_MAX_VCPUS];
>
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 8bf4b89cfa03..c63fc5c05322 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -1567,6 +1567,15 @@ static void kvm_commit_memory_region(struct kvm *kvm,
>  				     const struct kvm_memory_slot *new,
>  				     enum kvm_mr_change change)
>  {
> +	/*
> +	 * Update the total number of memslot pages before calling the arch
> +	 * hook so that architectures can consume the result directly.
> +	 */
> +	if (change == KVM_MR_DELETE)
> +		kvm->nr_memslot_pages -= old->npages;
> +	else if (change == KVM_MR_CREATE)
> +		kvm->nr_memslot_pages += new->npages;
> +
>  	kvm_arch_commit_memory_region(kvm, old, new, change);
>
>  	/*
> @@ -1738,6 +1747,9 @@ int __kvm_set_memory_region(struct kvm *kvm,
>  		if (!old || !old->npages)
>  			return -EINVAL;
>
> +		if (WARN_ON_ONCE(kvm->nr_memslot_pages < old->npages))
> +			return -EIO;
> +
>  		memset(&new, 0, sizeof(new));
>  		new.id = id;
>  		new.as_id = as_id;
> @@ -1756,6 +1768,13 @@ int __kvm_set_memory_region(struct kvm *kvm,
>
>  	if (!old || !old->npages) {
>  		change = KVM_MR_CREATE;
> +
> +		/*
> +		 * To simplify KVM internals, the total number of pages across
> +		 * all memslots must fit in an unsigned long.
> +		 */
> +		if ((kvm->nr_memslot_pages + new.npages) < kvm->nr_memslot_pages)
> +			return -EINVAL;
>  	} else { /* Modify an existing slot. */
>  		if ((new.userspace_addr != old->userspace_addr) ||
>  		    (new.npages != old->npages) ||
>
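
(For reference, the "(kvm->nr_memslot_pages + new.npages) <
kvm->nr_memslot_pages" check above relies on unsigned addition wrapping
modulo 2^BITS_PER_LONG: a wrapped sum is always smaller than either addend.
E.g. with nr_memslot_pages = ULONG_MAX - 1 and new.npages = 4 the sum wraps
to 2, which is smaller than ULONG_MAX - 1, so the new slot is rejected.)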

Capping the total n_memslots_pages makes sense to me to avoid the (existing)
nr_mmu_pages wraparound issue; I will update the next patchset version
accordingly.
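
As a rough sketch (a hypothetical shape, not the final patch), the x86 commit
hook could then consume the generic counter directly and sidestep the
multiply-induced wraparound entirely by dividing by a pages-to-MMU-pages
ratio of 50 (i.e. 1000 / KVM_PERMILLE_MMU_PAGES); the ratio macro name below
is made up for illustration:

	/* Hypothetical: 1000 / KVM_PERMILLE_MMU_PAGES. */
	#define KVM_MEMSLOT_PAGES_TO_MMU_PAGES_RATIO 50

	/* In kvm_arch_commit_memory_region(): */
	if (!kvm->arch.n_requested_mmu_pages &&
	    (change == KVM_MR_CREATE || change == KVM_MR_DELETE)) {
		unsigned long nr_mmu_pages;

		/*
		 * A division cannot wrap, unlike multiplying by
		 * KVM_PERMILLE_MMU_PAGES first, and nr_memslot_pages is
		 * already guaranteed to fit in an unsigned long.
		 */
		nr_mmu_pages = kvm->nr_memslot_pages /
			       KVM_MEMSLOT_PAGES_TO_MMU_PAGES_RATIO;
		nr_mmu_pages = max(nr_mmu_pages, KVM_MIN_ALLOC_MMU_PAGES);
		kvm_mmu_change_mmu_pages(kvm, nr_mmu_pages);
	}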

Thanks,
Maciej
