Subject: Re: [RFC PATCH] KVM: x86/mmu: fix general protection fault in kvm_mmu_uninit_tdp_mmu
On 25/03/2022 17:38, Pavel Skripkin wrote:
> Syzbot reported a GPF in kvm_mmu_uninit_tdp_mmu(), which is caused by
> passing a NULL pointer to flush_workqueue().
>
> tdp_mmu_zap_wq is allocated via alloc_workqueue(), which may fail. There
> is no error handling and the kvm_mmu_init_tdp_mmu() return value is simply
> ignored. Moreover, all the kvm_*_init_vm() functions are void, so the
> easiest solution is to check that tdp_mmu_zap_wq is a valid pointer before
> passing it anywhere.
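
(Side note for context, not part of the patch: if I read it right, the
NULL comes from the allocation site in kvm_mmu_init_tdp_mmu(), roughly
as below; the exact flags are quoted from memory, so treat this as a
sketch:

	/* alloc_workqueue() returns NULL on allocation failure, and, as
	 * described above, nothing up the kvm_*_init_vm() chain checks the
	 * result, so tdp_mmu_zap_wq can legitimately be NULL later on. */
	kvm->arch.tdp_mmu_zap_wq =
		alloc_workqueue("kvm", WQ_UNBOUND | WQ_MEM_RECLAIM | WQ_CPU_INTENSIVE, 0);

hence the NULL checks at the use sites below.)
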
>
> Fixes: 22b94c4b63eb ("KVM: x86/mmu: Zap invalidated roots via asynchronous worker")
> Reported-and-tested-by: syzbot+717ed82268812a643b28@syzkaller.appspotmail.com
> Signed-off-by: Pavel Skripkin <paskripkin@gmail.com>
> ---
> arch/x86/kvm/mmu/tdp_mmu.c | 14 +++++++++-----
> 1 file changed, 9 insertions(+), 5 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index e7e7876251b3..b3e8ff7ac5b0 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -48,8 +48,10 @@ void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm)
> if (!kvm->arch.tdp_mmu_enabled)
> return;
>
> - flush_workqueue(kvm->arch.tdp_mmu_zap_wq);
> - destroy_workqueue(kvm->arch.tdp_mmu_zap_wq);
> + if (kvm->arch.tdp_mmu_zap_wq) {
> + flush_workqueue(kvm->arch.tdp_mmu_zap_wq);
> + destroy_workqueue(kvm->arch.tdp_mmu_zap_wq);

Hi,
Unrelated to the patch, but flush_workqueue() is redundant here and
could be removed: destroy_workqueue() already drains the workqueue
before destroying it.
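For illustration only (untested, just to show the simplification I
mean), that hunk could then shrink to:

	if (kvm->arch.tdp_mmu_zap_wq)
		destroy_workqueue(kvm->arch.tdp_mmu_zap_wq);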

Just my 2c,
CJ

> + }
>
> WARN_ON(!list_empty(&kvm->arch.tdp_mmu_pages));
> WARN_ON(!list_empty(&kvm->arch.tdp_mmu_roots));
> @@ -119,9 +121,11 @@ static void tdp_mmu_zap_root_work(struct work_struct *work)
>
> static void tdp_mmu_schedule_zap_root(struct kvm *kvm, struct kvm_mmu_page *root)
> {
> - root->tdp_mmu_async_data = kvm;
> - INIT_WORK(&root->tdp_mmu_async_work, tdp_mmu_zap_root_work);
> - queue_work(kvm->arch.tdp_mmu_zap_wq, &root->tdp_mmu_async_work);
> + if (kvm->arch.tdp_mmu_zap_wq) {
> + root->tdp_mmu_async_data = kvm;
> + INIT_WORK(&root->tdp_mmu_async_work, tdp_mmu_zap_root_work);
> + queue_work(kvm->arch.tdp_mmu_zap_wq, &root->tdp_mmu_async_work);
> + }
> }
>
> static inline bool kvm_tdp_root_mark_invalid(struct kvm_mmu_page *page)
