Subject: [PATCH -tip 20/32] entry/kvm: Protect the kernel when entering from guest
From: Vineeth Pillai <viremana@linux.microsoft.com>

Similar to how user-to-kernel mode transitions are protected in earlier
patches, protect entry into the kernel from guest mode as well.
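
The pairing around the guest entry/exit path then looks roughly as
follows (abbreviated sketch of vcpu_enter_guest(); unrelated steps are
elided, and both hooks are no-ops unless kernel protection is enabled,
as checked by entry_kernel_protected()):

	static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
	{
		...
		preempt_disable();
		...
		local_irq_disable();
		...
		/*
		 * About to enter the guest: leave the kernel (unsafe)
		 * section and wait until no sibling is still running
		 * unprotected kernel code.
		 */
		kvm_exit_to_guest_mode();
		...
		/* VM entry ... guest runs ... VM exit */
		...
		/*
		 * Back in the kernel from the guest: mark the kernel
		 * (unsafe) section again before interrupts are re-enabled.
		 */
		kvm_enter_from_guest_mode();
		local_irq_enable();
		preempt_enable();
		...
	}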

Tested-by: Julien Desfossez <jdesfossez@digitalocean.com>
Reviewed-by: Joel Fernandes (Google) <joel@joelfernandes.org>
Reviewed-by: Alexandre Chartre <alexandre.chartre@oracle.com>
Signed-off-by: Vineeth Pillai <viremana@linux.microsoft.com>
Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
---
 arch/x86/kvm/x86.c        |  2 ++
 include/linux/entry-kvm.h | 12 ++++++++++++
 kernel/entry/kvm.c        | 33 +++++++++++++++++++++++++++++++++
 3 files changed, 47 insertions(+)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 447edc0d1d5a..a50be74f70f1 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8910,6 +8910,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 	 */
 	smp_mb__after_srcu_read_unlock();
 
+	kvm_exit_to_guest_mode();
 	/*
 	 * This handles the case where a posted interrupt was
 	 * notified with kvm_vcpu_kick.
@@ -9003,6 +9004,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 		}
 	}
 
+	kvm_enter_from_guest_mode();
 	local_irq_enable();
 	preempt_enable();
 
diff --git a/include/linux/entry-kvm.h b/include/linux/entry-kvm.h
index 9b93f8584ff7..67da6dcf442b 100644
--- a/include/linux/entry-kvm.h
+++ b/include/linux/entry-kvm.h
@@ -77,4 +77,16 @@ static inline bool xfer_to_guest_mode_work_pending(void)
 }
 #endif /* CONFIG_KVM_XFER_TO_GUEST_WORK */
 
+/**
+ * kvm_enter_from_guest_mode - Hook called just after entering kernel from guest.
+ * Caller should ensure interrupts are off.
+ */
+void kvm_enter_from_guest_mode(void);
+
+/**
+ * kvm_exit_to_guest_mode - Hook called just before entering guest from kernel.
+ * Caller should ensure interrupts are off.
+ */
+void kvm_exit_to_guest_mode(void);
+
 #endif
diff --git a/kernel/entry/kvm.c b/kernel/entry/kvm.c
index 49972ee99aff..3b603e8bd5da 100644
--- a/kernel/entry/kvm.c
+++ b/kernel/entry/kvm.c
@@ -50,3 +50,36 @@ int xfer_to_guest_mode_handle_work(struct kvm_vcpu *vcpu)
 	return xfer_to_guest_mode_work(vcpu, ti_work);
 }
 EXPORT_SYMBOL_GPL(xfer_to_guest_mode_handle_work);
+
+/**
+ * kvm_enter_from_guest_mode - Hook called just after entering kernel from guest.
+ * Caller should ensure interrupts are off.
+ */
+void kvm_enter_from_guest_mode(void)
+{
+	if (!entry_kernel_protected())
+		return;
+	sched_core_unsafe_enter();
+}
+EXPORT_SYMBOL_GPL(kvm_enter_from_guest_mode);
+
+/**
+ * kvm_exit_to_guest_mode - Hook called just before entering guest from kernel.
+ * Caller should ensure interrupts are off.
+ */
+void kvm_exit_to_guest_mode(void)
+{
+	if (!entry_kernel_protected())
+		return;
+	sched_core_unsafe_exit();
+
+	/*
+	 * Wait here instead of in xfer_to_guest_mode_handle_work() because,
+	 * in vcpu_run(), xfer_to_guest_mode_handle_work() is called whether
+	 * the vCPU is runnable or blocked. We only care about the runnable
+	 * case (VM entry/exit), which is handled by vcpu_enter_guest().
+	 */
+	sched_core_wait_till_safe(XFER_TO_GUEST_MODE_WORK);
+}
+EXPORT_SYMBOL_GPL(kvm_exit_to_guest_mode);
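
As a note on the comment in kvm_exit_to_guest_mode() above:
xfer_to_guest_mode_handle_work() is also reached from vcpu_run() when
the vCPU is blocked rather than runnable, so the wait is done here, on
the actual VM-entry path, instead. A simplified sketch of that call
structure (paraphrased from the x86 KVM code of this era; the exact
vcpu_run() details may differ from -tip):

	static int vcpu_run(struct kvm_vcpu *vcpu)
	{
		...
		for (;;) {
			if (kvm_vcpu_running(vcpu))
				/* VM entry/exit path: the hooks added above run here. */
				r = vcpu_enter_guest(vcpu);
			else
				/* Blocked vCPU: no VM entry takes place. */
				r = vcpu_block(kvm, vcpu);
			...
			if (xfer_to_guest_mode_work_pending())
				/* Reached in both cases, hence not a suitable place to wait. */
				r = xfer_to_guest_mode_handle_work(vcpu);
			...
		}
		...
	}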
--
2.29.2.299.gdc1121823c-goog