Date: Fri, 3 May 2024 12:34:07 +0200
From: Borislav Petkov <>
Subject: Re: [PATCH v4 05/15] x86/sev: Use kernel provided SVSM Calling Areas
This one would need multiple review mails.
Lemme make this part 1.
On Wed, Apr 24, 2024 at 10:58:01AM -0500, Tom Lendacky wrote:
>  arch/x86/include/asm/sev-common.h |  13 ++
>  arch/x86/include/asm/sev.h        |  32 +++++
>  arch/x86/include/uapi/asm/svm.h   |   1 +
>  arch/x86/kernel/sev-shared.c      |  94 +++++++++++++-
>  arch/x86/kernel/sev.c             | 207 +++++++++++++++++++++++++----
Ok, now would be as good a time as any to start moving the SEV guest bits to where we want them to live:
arch/x86/coco/sev/
so pls add the new SVSM guest support bits there:
arch/x86/coco/sev/svsm.c
arch/x86/coco/sev/svsm-shared.c
I guess.
And things which touch sev.c and sev-shared.c will have to add patches which move bits to the new location.
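Judging from how sev-shared.c gets built today - #included by the core kernel and by the decompressor - I'd expect something like this (hypothetical, only to illustrate the layout):

	# arch/x86/coco/sev/Makefile
	obj-y += svsm.o

	# svsm-shared.c doesn't get its own object - it gets #included
	# by svsm.c and by arch/x86/boot/compressed/sev.c, the same way
	# sev-shared.c is handled now.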
>  arch/x86/mm/mem_encrypt_amd.c     |   8 +-
>  6 files changed, 320 insertions(+), 35 deletions(-)
> 
> diff --git a/arch/x86/include/asm/sev-common.h b/arch/x86/include/asm/sev-common.h
> index 1225744a069b..4cc716660d4b 100644
> --- a/arch/x86/include/asm/sev-common.h
> +++ b/arch/x86/include/asm/sev-common.h
> @@ -96,6 +96,19 @@ enum psc_op {
>  	/* GHCBData[63:32] */				\
>  	(((u64)(val) & GENMASK_ULL(63, 32)) >> 32)
> 
> +/* GHCB Run at VMPL Request/Response */
Run?
> +#define GHCB_MSR_VMPL_REQ		0x016
> +#define GHCB_MSR_VMPL_REQ_LEVEL(v)		\
> +	/* GHCBData[39:32] */			\
> +	(((u64)(v) & GENMASK_ULL(7, 0) << 32) |	\
> +	/* GHCBDdata[11:0] */			\
> +	GHCB_MSR_VMPL_REQ)
> +
> +#define GHCB_MSR_VMPL_RESP		0x017
> +#define GHCB_MSR_VMPL_RESP_VAL(v)		\
> +	/* GHCBData[63:32] */			\
> +	(((u64)(v) & GENMASK_ULL(63, 32)) >> 32)
> +
>  /* GHCB Hypervisor Feature Request/Response */
>  #define GHCB_MSR_HV_FT_REQ		0x080
>  #define GHCB_MSR_HV_FT_RESP		0x081
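For the record, the MSR protocol exchange those defines are for should look something like this, AFAICT (untested sketch, using the existing sev_es_{wr,rd}_ghcb_msr() helpers and GHCB_RESP_CODE()):

	u64 val;

	/* Request to run the guest at VMPL0 */
	sev_es_wr_ghcb_msr(GHCB_MSR_VMPL_REQ_LEVEL(0));
	VMGEXIT();

	val = sev_es_rd_ghcb_msr();

	/* GHCBData[11:0] must contain the VMPL response code */
	if (GHCB_RESP_CODE(val) != GHCB_MSR_VMPL_RESP)
		return -EINVAL;

	/* A non-zero GHCBData[63:32] denotes failure */
	if (GHCB_MSR_VMPL_RESP_VAL(val))
		return -EINVAL;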
..
> diff --git a/arch/x86/kernel/sev-shared.c b/arch/x86/kernel/sev-shared.c
> index 46ea4e5e118a..6f57eb804e70 100644
> --- a/arch/x86/kernel/sev-shared.c
> +++ b/arch/x86/kernel/sev-shared.c
> @@ -18,9 +18,11 @@
>  #define sev_printk_rtl(fmt, ...)	printk_ratelimited(fmt, ##__VA_ARGS__)
>  #else
>  #undef WARN
> -#define WARN(condition, format...) (!!(condition))
> +#define WARN(condition, format...)	(!!(condition))
>  #define sev_printk(fmt, ...)
>  #define sev_printk_rtl(fmt, ...)
> +#undef vc_forward_exception
> +#define vc_forward_exception(c)	panic("SNP: Hypervisor requested exception\n")
>  #endif
> 
>  /*
> @@ -244,6 +246,96 @@ static enum es_result verify_exception_info(struct ghcb *ghcb, struct es_em_ctxt
>  	return ES_VMM_ERROR;
>  }
> 
> +static __always_inline void issue_svsm_call(struct svsm_call *call, u8 *pending)
svsm_issue_call()
I guess.
> +{
> +	/*
> +	 * Issue the VMGEXIT to run the SVSM:
".. to call the SVSM:" I guess.
> +	 *   - Load the SVSM register state (RAX, RCX, RDX, R8 and R9)
> +	 *   - Set the CA call pending field to 1
> +	 *   - Issue VMGEXIT
> +	 *   - Save the SVSM return register state (RAX, RCX, RDX, R8 and R9)
> +	 *   - Perform atomic exchange of the CA call pending field
> +	 */
That goes above the function name.
> +	asm volatile("mov %9, %%r8\n\t"
> +		     "mov %10, %%r9\n\t"
> +		     "movb $1, %11\n\t"
> +		     "rep; vmmcall\n\t"
> +		     "mov %%r8, %3\n\t"
> +		     "mov %%r9, %4\n\t"
> +		     "xchgb %5, %11\n\t"
> +		     : "=a" (call->rax_out), "=c" (call->rcx_out), "=d" (call->rdx_out),
> +		       "=m" (call->r8_out), "=m" (call->r9_out),
> +		       "+r" (*pending)
> +		     : "a" (call->rax), "c" (call->rcx), "d" (call->rdx),
> +		       "r" (call->r8), "r" (call->r9),
> +		       "m" (call->caa->call_pending)
> +		     : "r8", "r9", "memory");
> +}
Btw, where are we documenting this calling convention?
Anyway, I think you can do it this way (pasting the whole thing for easier review):
static __always_inline void issue_svsm_call(struct svsm_call *call, u8 *pending)
{
	register unsigned long r8 asm("r8") = call->r8;
	register unsigned long r9 asm("r9") = call->r9;

	call->caa->call_pending = 1;

	/*
	 * Issue the VMGEXIT to run the SVSM:
	 *   - Load the SVSM register state (RAX, RCX, RDX, R8 and R9)
	 *   - Set the CA call pending field to 1
	 *   - Issue VMGEXIT
	 *   - Save the SVSM return register state (RAX, RCX, RDX, R8 and R9)
	 *   - Perform atomic exchange of the CA call pending field
	 */
	asm volatile("rep; vmmcall\n\t"
		     "xchgb %[pending], %[call_pending]"
		     : "=a" (call->rax_out), "=c" (call->rcx_out), "=d" (call->rdx_out),
		       [pending] "+r" (*pending), "+r" (r8), "+r" (r9)
		     : "a" (call->rax), "c" (call->rcx), "d" (call->rdx),
		       [call_pending] "m" (call->caa->call_pending)
		     : "memory");

	call->r8_out = r8;
	call->r9_out = r9;
}
I *think* the asm is the same but it needs a more detailed look. It probably could be simplified even more.
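And a caller would then do something like this (completely untested sketch; SVSM_CORE_CALL()/SVSM_CORE_REMAP_CA/SVSM_SUCCESS as in your sev.h hunk, svsm_get_caa() is a made-up stand-in for however the per-CPU CA pointer gets looked up):

	struct svsm_call call = {};
	u8 pending = 0;

	call.caa = svsm_get_caa();
	call.rax = SVSM_CORE_CALL(SVSM_CORE_REMAP_CA);
	call.rcx = pa;		/* physical address of the new CA */

	issue_svsm_call(&call, &pending);

	/* Pending still set means the SVSM did not run the call */
	if (pending)
		return -EINVAL;

	/* RAX out carries the SVSM result code */
	if (call.rax_out != SVSM_SUCCESS)
		return -EINVAL;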
-- 
Regards/Gruss,
    Boris.
https://people.kernel.org/tglx/notes-about-netiquette