Subject: [PATCH v2 3/3] KVM: arm64: Mark the page dirty only if the fault is handled successfully
We currently mark the page dirty and set the dirty bitmap before calling the
fault handlers in user_mem_abort(), and we can end up with spurious dirty pages
if the update of permissions or of the mapping fails. So mark the page dirty
only if the fault is handled successfully.
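
For illustration, a minimal sketch of the ordering after this change
(handle_fault() is a hypothetical placeholder for the stage-2 map/permission
update calls, not an actual kernel helper):

	/*
	 * Sketch only, not the kernel code: handle the fault first, then do
	 * the dirty tracking only when the handler reports success (ret == 0).
	 */
	ret = handle_fault(fault_ipa, pfn, prot);	/* hypothetical helper */
	if (writable && !ret) {
		kvm_set_pfn_dirty(pfn);
		mark_page_dirty(kvm, gfn);
	}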

Let the guest directly re-enter instead of returning to userspace if we were
trying to recreate the same mapping or only to change access permissions with
break-before-make (BBM), which is not permitted in the mapping path.
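
As a rough sketch of the -EAGAIN handling described above (just the semantics
stated in this message, not the full call chain: a zero return means the abort
was handled, so the vCPU re-enters the guest rather than exiting to userspace):

	/*
	 * Sketch only: -EAGAIN here means we were trying to recreate the same
	 * mapping or only change access permissions (which would need BBM and
	 * is not done in the mapping path), so treat the abort as handled and
	 * let the guest re-enter instead of returning to userspace.
	 */
	if (ret == -EAGAIN)
		ret = 0;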

Signed-off-by: Yanan Wang <wangyanan55@huawei.com>
---
 arch/arm64/kvm/mmu.c | 18 ++++++++++++++----
 1 file changed, 14 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 75814a02d189..72e516a10914 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -879,11 +879,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	if (vma_pagesize == PAGE_SIZE && !force_pte)
 		vma_pagesize = transparent_hugepage_adjust(memslot, hva,
 							   &pfn, &fault_ipa);
-	if (writable) {
+	if (writable)
 		prot |= KVM_PGTABLE_PROT_W;
-		kvm_set_pfn_dirty(pfn);
-		mark_page_dirty(kvm, gfn);
-	}
 
 	if (fault_status != FSC_PERM && !device)
 		clean_dcache_guest_page(pfn, vma_pagesize);
@@ -911,6 +908,19 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 						     memcache);
 	}
 
+	/* Mark the page dirty only if the fault is handled successfully */
+	if (writable && !ret) {
+		kvm_set_pfn_dirty(pfn);
+		mark_page_dirty(kvm, gfn);
+	}
+
+	/* Let the guest directly enter again if we were trying to recreate the
+	 * same mapping or only change access permissions with BBM, which is not
+	 * permitted in the mapping path.
+	 */
+	if (ret == -EAGAIN)
+		ret = 0;
+
 out_unlock:
 	spin_unlock(&kvm->mmu_lock);
 	kvm_set_pfn_accessed(pfn);
--
2.19.1