Subject: [PATCH v2 2/3] KVM: arm64: Fix handling of merging tables into a block entry
In the dirty logging case (logging_active == true), we need to collapse a
block entry into a table if necessary. After dirty logging is canceled,
when merging tables back into a block entry, we should not only free
the non-huge page-table pages but also invalidate all the TLB entries of
non-huge mappings for the block. Without sufficient TLB invalidation,
multiple stale TLB entries for the memory in the block will remain cached.

Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Yanan Wang <wangyanan55@huawei.com>
---
 arch/arm64/kvm/hyp/pgtable.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index b232bdd142a6..23a01dfcb27a 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -496,7 +496,13 @@ static int stage2_map_walk_table_pre(u64 addr, u64 end, u32 level,
 		return 0;
 
 	kvm_set_invalid_pte(ptep);
-	kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, data->mmu, addr, 0);
+
+	/*
+	 * Invalidate the whole stage-2, as we may have numerous leaf
+	 * entries below us which would otherwise need invalidating
+	 * individually.
+	 */
+	kvm_call_hyp(__kvm_tlb_flush_vmid, data->mmu);
 	data->anchor = ptep;
 	return 0;
 }
--
2.19.1
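
For readability outside the diff, this is roughly how the pre-walk callback
reads once the hunk above is applied. It is a sketch reconstructed from the
hunk, not a verbatim copy of the file: the trailing parameters (a kvm_pte_t
pointer ptep and a struct stage2_map_data pointer data, inferred from the
uses in the hunk body) and the early-return checks ahead of the first
return are abbreviated.

static int stage2_map_walk_table_pre(u64 addr, u64 end, u32 level,
				     kvm_pte_t *ptep,
				     struct stage2_map_data *data)
{
	/* Early-return checks elided; see the hunk context above. */

	/* Invalidate the table entry before tearing down its children. */
	kvm_set_invalid_pte(ptep);

	/*
	 * Invalidate the whole stage-2, as we may have numerous leaf
	 * entries below us which would otherwise need invalidating
	 * individually.
	 */
	kvm_call_hyp(__kvm_tlb_flush_vmid, data->mmu);

	/* Record the anchor so the post-walk callback can install the block. */
	data->anchor = ptep;
	return 0;
}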