Subject: Re: [PATCH] KVM: arm/arm64: release kvm->mmu_lock in loop to prevent starvation
On 04/15/2020 09:42 AM, Jiang Yi wrote:
> Do cond_resched_lock() in stage2_flush_memslot() like what is done in
> unmap_stage2_range() and other places holding mmu_lock while processing
> a possibly large range of memory.
>
> Signed-off-by: Jiang Yi <giangyi@amazon.com>
> ---
> virt/kvm/arm/mmu.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
> index e3b9ee268823..7315af2c52f8 100644
> --- a/virt/kvm/arm/mmu.c
> +++ b/virt/kvm/arm/mmu.c
> @@ -417,16 +417,19 @@ static void stage2_flush_memslot(struct kvm *kvm,
>  	phys_addr_t next;
>  	pgd_t *pgd;
>  
>  	pgd = kvm->arch.pgd + stage2_pgd_index(kvm, addr);
>  	do {
>  		next = stage2_pgd_addr_end(kvm, addr, end);
>  		if (!stage2_pgd_none(kvm, *pgd))
>  			stage2_flush_puds(kvm, pgd, addr, next);
> +
> +		if (next != end)
> +			cond_resched_lock(&kvm->mmu_lock);
>  	} while (pgd++, addr = next, addr != end);
>  }

Given that this is called under the srcu_lock, this looks good to me:

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>

> 
>  /**
>   * stage2_flush_vm - Invalidate cache for pages mapped in stage 2
>   * @kvm: The struct kvm pointer
>   *
>   * Go through the stage 2 page tables and invalidate any cache lines
> 
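For readers unfamiliar with the pattern under review: cond_resched_lock() releases the given spinlock, yields the CPU if a reschedule is pending (or another CPU is spinning on the lock), and then re-acquires the lock before returning. The sketch below illustrates that structure in isolation; demo_walk_range() and its per-page chunking are made up for this illustration and are not code from the kernel tree.

/*
 * Hypothetical sketch of the cond_resched_lock() pattern used by the
 * patch above; demo_walk_range() is illustrative only.
 */
#include <linux/kvm_host.h>
#include <linux/sched.h>

static void demo_walk_range(struct kvm *kvm, phys_addr_t addr, phys_addr_t end)
{
	phys_addr_t next;

	/* Caller is expected to hold kvm->mmu_lock (a spinlock here). */
	lockdep_assert_held(&kvm->mmu_lock);

	do {
		/* Process one page-sized chunk per iteration. */
		next = (addr & PAGE_MASK) + PAGE_SIZE;
		if (next > end)
			next = end;

		/* ... flush or unmap [addr, next) under kvm->mmu_lock ... */

		/*
		 * cond_resched_lock() drops kvm->mmu_lock, reschedules if
		 * needed (or if the lock is contended), and re-acquires it.
		 * The "next != end" check avoids a pointless drop/re-acquire
		 * on the final iteration.
		 */
		if (next != end)
			cond_resched_lock(&kvm->mmu_lock);
	} while (addr = next, addr != end);
}

The "if (next != end) cond_resched_lock(&kvm->mmu_lock);" guard is the same one used in unmap_stage2_range(), which is the precedent the commit message cites.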
