Subject: Re: [PATCH v5 3/4] KVM: arm64/mmu: use gfn_to_pfn_page
On Mon, Nov 29, 2021, David Stevens wrote:
> @@ -1142,14 +1146,17 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>
>  	/* Mark the page dirty only if the fault is handled successfully */
>  	if (writable && !ret) {
> -		kvm_set_pfn_dirty(pfn);
> +		if (page)
> +			kvm_set_pfn_dirty(pfn);

If kvm_set_page_dirty() is changed to be less dumb:

		if (page)
			kvm_set_page_dirty(page);
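
Not in the patch below, but to hand-wave at what "less dumb" means here: a
page-based setter that the pfn variant wraps, mirroring the release helpers.
Completely untested sketch, kvm_set_page_dirty() doesn't exist today:

	/* Hypothetical helper, not in the posted patch. */
	void kvm_set_page_dirty(struct page *page)
	{
		SetPageDirty(page);
	}

	void kvm_set_pfn_dirty(kvm_pfn_t pfn)
	{
		if (!kvm_is_reserved_pfn(pfn) && !kvm_is_zone_device_pfn(pfn))
			kvm_set_page_dirty(pfn_to_page(pfn));
	}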

>  		mark_page_dirty_in_slot(kvm, memslot, gfn);
>  	}
>
>  out_unlock:
>  	spin_unlock(&kvm->mmu_lock);
> -	kvm_set_pfn_accessed(pfn);
> -	kvm_release_pfn_clean(pfn);
> +	if (page) {
> +		kvm_set_pfn_accessed(pfn);
> +		put_page(page);

Oof, KVM's helpers are stupid. Take a page, convert it to a pfn, then convert it
back to a page, just to mark it dirty or put a ref. Can you fold the below
(completely untested) patch in before the x86/arm64 patches? That way this code
can be:

	if (page)
		kvm_release_page_accessed(page);

and x86 can do:

	if (fault->page)
		kvm_release_page_clean(fault->page);

instead of open-coding put_page().
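
For reference, "open-coding" here means the x86 series currently does something
along the lines of:

	if (fault->page)
		put_page(fault->page);

with the exact call site depending on how that series shakes out.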


From a8af0c60d7f6e77bbc7310d898211c43ae075cf8 Mon Sep 17 00:00:00 2001
From: Sean Christopherson <seanjc@google.com>
Date: Thu, 30 Dec 2021 11:40:58 -0800
Subject: [PATCH] KVM: Clean up and enhance helpers for releasing pages/pfns

Tweak kvm_release_page_clean() and kvm_release_page_dirty() to avoid
pointlessly converting to a pfn and back to a page, and add an "accessed"
variant that will be used in a future arm64 patch.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
virt/kvm/kvm_main.c | 20 ++++++++++++++------
1 file changed, 14 insertions(+), 6 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 8eb0f762a82c..f75129f641e9 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2876,29 +2876,37 @@ void kvm_release_page_clean(struct page *page)
 {
 	WARN_ON(is_error_page(page));
 
-	kvm_release_pfn_clean(page_to_pfn(page));
+	put_page(page);
 }
 EXPORT_SYMBOL_GPL(kvm_release_page_clean);
 
 void kvm_release_pfn_clean(kvm_pfn_t pfn)
 {
 	if (!is_error_noslot_pfn(pfn) && !kvm_is_reserved_pfn(pfn))
-		put_page(pfn_to_page(pfn));
+		kvm_release_page_clean(pfn_to_page(pfn));
 }
 EXPORT_SYMBOL_GPL(kvm_release_pfn_clean);
 
+void kvm_release_page_accessed(struct page *page)
+{
+	mark_page_accessed(page);
+
+	kvm_release_page_clean(page);
+}
+EXPORT_SYMBOL_GPL(kvm_release_page_accessed);
+
 void kvm_release_page_dirty(struct page *page)
 {
-	WARN_ON(is_error_page(page));
+	SetPageDirty(page);
 
-	kvm_release_pfn_dirty(page_to_pfn(page));
+	kvm_release_page_clean(page);
 }
 EXPORT_SYMBOL_GPL(kvm_release_page_dirty);
 
 void kvm_release_pfn_dirty(kvm_pfn_t pfn)
 {
-	kvm_set_pfn_dirty(pfn);
-	kvm_release_pfn_clean(pfn);
+	if (!kvm_is_reserved_pfn(pfn) && !kvm_is_zone_device_pfn(pfn))
+		kvm_release_page_dirty(pfn_to_page(pfn));
 }
 EXPORT_SYMBOL_GPL(kvm_release_pfn_dirty);
 
--
2.34.1.448.ga2b2bfdf31-goog
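
One thing the diff above doesn't cover: kvm_release_page_accessed() would also
need a declaration in include/linux/kvm_host.h next to the existing ones so
arm64 can call it, i.e. something like:

	void kvm_release_page_clean(struct page *page);
	void kvm_release_page_accessed(struct page *page);	/* new */
	void kvm_release_page_dirty(struct page *page);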