Subject: [PATCH 09/12] KVM: arm64: Stepwise write protect page table by mask bit
When the dirty log is cleared, page table entries are write protected
according to a mask. Previously we write protected every entry in the
range spanned by the mask, from __ffs(mask) to __fls(mask). Although
some bits within that range may be clear, we hold the kvm mmu lock, so
it does no harm to write protect entries that we do not actually need to.

We are about to add support for hardware management of the dirty state
to arm64, and then holding the kvm mmu lock will no longer be enough.
Instead, step through the mask bit by bit and write protect only the
entries whose mask bits are set.
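
For illustration only (not part of the patch): below is a minimal
user-space sketch of the per-bit walk described above. wp_range() is a
hypothetical stand-in for stage2_wp_range(), the kernel's __ffs()/__fls()
are approximated with GCC builtins, and the mask is assumed to be
non-zero, as it is for the kernel caller.

#include <stdio.h>

#define PAGE_SHIFT 12

typedef unsigned long long phys_addr_t;

/* Hypothetical stand-in for stage2_wp_range(): just report the range. */
static void wp_range(phys_addr_t start, phys_addr_t end)
{
        printf("write protect [%#llx, %#llx)\n", start, end);
}

/* Write protect one page at a time, only for bits that are set in mask. */
static void wp_masked(unsigned long base_gfn, unsigned long mask)
{
        unsigned int lo = __builtin_ctzl(mask);                        /* ~ __ffs(mask) */
        unsigned int hi = 8 * sizeof(mask) - 1 - __builtin_clzl(mask); /* ~ __fls(mask) */
        unsigned int i;

        for (i = lo; i <= hi; i++) {
                if (mask & (1UL << i))
                        wp_range((phys_addr_t)(base_gfn + i) << PAGE_SHIFT,
                                 (phys_addr_t)(base_gfn + i + 1) << PAGE_SHIFT);
        }
}

int main(void)
{
        /* mask 0x29 = 0b101001: offsets 0, 3 and 5 are protected; 1, 2 and 4 are skipped */
        wp_masked(0x1000, 0x29);
        return 0;
}

Built with a plain gcc invocation, this prints three separate ranges
(for offsets 0, 3 and 5) and leaves the clear bits untouched, which is
what the hunk below does via stage2_wp_range().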

    Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
    ---
    arch/arm64/kvm/mmu.c | 12 +++++++++---
    1 file changed, 9 insertions(+), 3 deletions(-)

    diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
    index 3aa0303d83f0..898e272a2c07 100644
    --- a/arch/arm64/kvm/mmu.c
    +++ b/arch/arm64/kvm/mmu.c
@@ -1710,10 +1710,16 @@ static void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
                 gfn_t gfn_offset, unsigned long mask)
 {
         phys_addr_t base_gfn = slot->base_gfn + gfn_offset;
-        phys_addr_t start = (base_gfn + __ffs(mask)) << PAGE_SHIFT;
-        phys_addr_t end = (base_gfn + __fls(mask) + 1) << PAGE_SHIFT;
+        phys_addr_t start, end;
+        u32 i;
 
-        stage2_wp_range(kvm, start, end);
+        for (i = __ffs(mask); i <= __fls(mask); i++) {
+                if (test_bit_le(i, &mask)) {
+                        start = (base_gfn + i) << PAGE_SHIFT;
+                        end = (base_gfn + i + 1) << PAGE_SHIFT;
+                        stage2_wp_range(kvm, start, end);
+                }
+        }
 }
 
 /*
    --
    2.19.1