lkml.org 
[lkml]   [2022]   [Jul]   [25]   [last100]   RSS Feed
Views: [wrap][no wrap]   [headers]  [forward] 
 
Messages in this thread
/
Date
SubjectRe: [PATCH v7 000/102] KVM TDX basic feature support
From
On 7/20/2022 8:29 PM, Chao Peng wrote:
> On Thu, Jul 14, 2022 at 01:03:46AM +0000, Sean Christopherson wrote:
> ...
>>
>> Option D). track shared regions in an Xarray, update kvm_arch_memory_slot.lpage_info
>> on insertion/removal to (dis)allow hugepages as needed.
>>
>> + efficient on KVM page fault (no new lookups)
>> + zero memory overhead (assuming KVM has to eat the cost of the Xarray anyways)
>> + straightforward to implement
>> + can (and should) be merged as part of the UPM series
>>
>> I believe xa_for_each_range() can be used to see if a given 2mb/1gb range is
>> completely covered (fully shared) or not covered at all (fully private), but I'm
>> not 100% certain that xa_for_each_range() works the way I think it does.
>
> Hi Sean,
>
> Below is the implementation to support 2M as you mentioned as option D.
> It's based on UPM v7 xarray code: https://lkml.org/lkml/2022/7/6/259
>
> Everything sounds good, the only trick bit is inc/dec disallow_lpage. If
> we still treat it as a count, it will be a challenge to make the inc/dec
> balanced. So in this patch I stole a bit for the purpose, looks ugly.
>
> Any feedback is welcome.
>
> Thanks,
> Chao
>
> -----------------------------------------------------------------------
> From: Chao Peng <chao.p.peng@linux.intel.com>
> Date: Wed, 20 Jul 2022 11:37:18 +0800
> Subject: [PATCH] KVM: Add large page support for private memory
>
> Update lpage_info when handling KVM_MEMORY_ENCRYPT_{UN,}REG_REGION.
>
> Reserve a bit in disallow_lpage to indicate a large page has
> private/share pages mixed.
>
> Signed-off-by: Chao Peng <chao.p.peng@linux.intel.com>
> ---


> +static void update_mem_lpage_info(struct kvm *kvm,
> + struct kvm_memory_slot *slot,
> + unsigned int attr,
> + gfn_t start, gfn_t end)
> +{
> + unsigned long lpage_start, lpage_end;
> + unsigned long gfn, pages, mask;
> + int level;
> +
> + for (level = PG_LEVEL_2M; level <= KVM_MAX_HUGEPAGE_LEVEL; level++) {
> + pages = KVM_PAGES_PER_HPAGE(level);
> + mask = ~(pages - 1);
> + lpage_start = start & mask;
> + lpage_end = end & mask;
> +
> + /*
> + * We only need to scan the head and tail page, for middle pages
> + * we know they are not mixed.
> + */
> + update_mixed(lpage_info_slot(lpage_start, slot, level),
> + mem_attr_is_mixed(kvm, attr, lpage_start,
> + lpage_start + pages));
> +
> + if (lpage_start == lpage_end)
> + return;
> +
> + for (gfn = lpage_start + pages; gfn < lpage_end; gfn += pages) {
> + update_mixed(lpage_info_slot(gfn, slot, level), false);
> + }

Boundary check missing here for the case when gfn reaches lpage_end.

if (gfn == lpage_end)
return;

> +
> + update_mixed(lpage_info_slot(lpage_end, slot, level),
> + mem_attr_is_mixed(kvm, attr, lpage_end,
> + lpage_end + pages));
> + }
> +}

Regards
Nikunj

\
 
 \ /
  Last update: 2022-07-25 15:47    [W:0.266 / U:0.364 seconds]
©2003-2020 Jasper Spaans|hosted at Digital Ocean and TransIP|Read the blog|Advertise on this site