Subject: Re: [PATCH v13 17/22] x86/kexec: Flush cache of TDX private memory

On Fri, 2023-09-15 at 10:50 -0700, Dave Hansen wrote:
> On 9/15/23 10:43, Edgecombe, Rick P wrote:
> > On Sat, 2023-08-26 at 00:14 +1200, Kai Huang wrote:
> > > There are two problems in terms of using kexec() to boot to a new
> > > kernel when the old kernel has enabled TDX: 1) Part of the memory
> > > pages are still TDX private pages; 2) There might be dirty
> > > cachelines associated with TDX private pages.
> > Does TDX support hibernate?
> No.
>
> There's a whole bunch of volatile state that's generated inside the CPU
> and never leaves the CPU, like the ephemeral key that protects TDX
> module memory.
>
> SGX, for instance, never even supported suspend, IIRC. Enclaves just
> die and have to be rebuilt.

Right. AFAICT TDX cannot survive S3 either. All TDX keys are lost when the
system enters S3. However, I don't think TDX can be rebuilt after resume the
way SGX enclaves can. Let me confirm this with the TDX guys.

I think we can register a syscore_ops->suspend callback for TDX and refuse to
suspend when TDX is enabled. This covers the hibernate case too.

In terms of how to check "TDX is enabled", ideally it's better to check whether
the TDX module is actually initialized, but in the worst case we can use
platform_tdx_enabled(). (I need to think more on this.)
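
Roughly something like the sketch below. This is only a sketch: it assumes
platform_tdx_enabled() ends up being the check we use, and the function and
variable names are placeholders:

#include <linux/syscore_ops.h>
#include <asm/tdx.h>

/* Sketch only: the check may become "TDX module initialized" instead. */
static int tdx_syscore_suspend(void)
{
	/*
	 * TDX keys are lost across S3/hibernation and the TDX module
	 * cannot be rebuilt after resume, so refuse to suspend when
	 * TDX is enabled.
	 */
	if (platform_tdx_enabled())
		return -EBUSY;

	return 0;
}

static struct syscore_ops tdx_syscore_ops = {
	.suspend = tdx_syscore_suspend,
};

static int __init tdx_suspend_init(void)
{
	register_syscore_ops(&tdx_syscore_ops);
	return 0;
}

Since a non-zero return from syscore_ops->suspend aborts the transition, and
hibernation goes through syscore_suspend() as well when creating the snapshot
image, a single callback like this should cover both cases, IIUC.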

Hi Dave, Kirill, Rick,

Is this solution overall acceptable?