Subject: Re: [PATCH v3 00/21] TDX host kernel support
On Wed, Apr 27, 2022 at 6:21 PM Kai Huang <kai.huang@intel.com> wrote:
>
> On Wed, 2022-04-27 at 18:01 -0700, Dan Williams wrote:
> > On Tue, Apr 26, 2022 at 1:10 PM Dave Hansen <dave.hansen@intel.com> wrote:
> > [..]
> > > > 3. Memory hotplug
> > > >
> > > > The first generation of TDX architecturally doesn't support memory
> > > > hotplug. And the first generation of TDX-capable platforms don't support
> > > > physical memory hotplug. Since it physically cannot happen, this series
> > > > doesn't add any check in the ACPI memory hotplug code path to disable it.
> > > >
> > > > A special case of memory hotplug is adding NVDIMM as system RAM using
> >
> > Saw "NVDIMM" mentioned while browsing this, so stopped to make a comment...
> >
> > > > the kmem driver. However, the first generation of TDX-capable platforms
> > > > cannot enable TDX and NVDIMM simultaneously, so in practice this cannot
> > > > happen either.
> > >
> > > What prevents today's code from being run on tomorrow's
> > > platforms and breaking these assumptions?
> >
> > The assumption is already broken today with NVDIMM-N. The lack of
> > DDR-T support on TDX enabled platforms has zero effect on DDR-based
> > persistent memory solutions. In other words, please describe the
> > actual software and hardware conflicts at play here, and do not make
> > the mistake of assuming that "no DDR-T support on TDX platforms" ==
> > "no NVDIMM support".
>
> Sorry, I got this information from the planning or execution team, I guess. I
> was told NVDIMM and TDX cannot "co-exist" on the first generation of TDX
> capable machines, where "co-exist" means they cannot be turned on
> simultaneously on the same platform. I am also not aware of NVDIMM-N, nor of
> the difference between DDR-based and DDR-T-based persistent memory. Could you
> give some more background here so I can take a look?

My rough understanding is that TDX makes use of metadata communicated
"on the wire" for DDR, but that infrastructure is not there for DDR-T.
However, there are plenty of DDR-based NVDIMMs that use super-caps /
batteries and flash to save contents. I believe the concern for TDX is
that the kernel needs to know not to use TDX-accepted PMEM as PMEM,
because the contents saved by the DIMM's onboard energy source are
unreadable outside of a TD.

Here is one of the links that comes up in a search for NVDIMM-N.

https://www.snia.org/educational-library/what-you-can-do-nvdimm-n-and-nvdimm-p-2019

>
> >
> > > > Another case is that an admin can use the 'memmap' kernel command line to
> > > > create legacy PMEMs and use them as TD guest memory, or, theoretically, use
> > > > the kmem driver to add them as system RAM. To avoid having to change the
> > > > memory hotplug code to prevent this, this series always includes legacy
> > > > PMEMs when constructing TDMRs so they are also TDX memory.
> >
> > I am not sure what you are trying to say here?
>
> We want to always make sure the memory managed by the page allocator is TDX memory.

That only seems possible if the kernel is given a TDX capable physical
address map at the beginning of time.

> So if the legacy PMEMs are unconditionally configured as TDX memory, then we
> don't need to prevent them from being added as system memory via the kmem driver.

I think that is too narrow of a focus.

Does a memory map exist for the physical address ranges that are TDX
capable? Please don't say EFI_MEMORY_CPU_CRYPTO, as that single bit is
ambiguous beyond the point of utility across the industry's entire
range of confidential computing memory capabilities.

One strawman would be an ACPI table with contents like:

struct acpi_protected_memory {
        struct range range;
        uuid_t platform_mem_crypto_capability;
};

With some way to map those uuids to a set of platform vendor specific
constraints and specifications. Some would be shared across
confidential computing vendors, some might be unique. Otherwise, I do
not see how you enforce the expectation of "all memory in the page
allocator is TDX capable". The other alternative is that *none* of the
memory in the page allocator is TDX capable and a special memory
allocation device is used to map memory for TDs. In either case a map
of all possible TDX memory is needed and the discussion above seems
like an incomplete / "hopeful" proposal about the memory that dax_kmem,
or other sources, might online. See the CXL CEDT CFMWS (CXL Fixed Memory
Window Structure) as an example of an ACPI structure that sets the
kernel's expectations about how a physical address range might be
used.

https://www.computeexpresslink.org/spec-landing
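
To make the "all memory in the page allocator is TDX capable" expectation
enforceable, the hotplug path would need to check new ranges against such a
map. Here is a minimal sketch of what I mean; none of these names
(tdx_cap_range, tdx_range_is_capable(), tdx_validate_memory_range()) exist
today, they only illustrate that *some* authoritative map of TDX capable
ranges has to be available for the kernel to consult:

/*
 * Illustrative only: reject onlining of any memory range that is not
 * covered by a firmware-provided map of TDX capable ranges (e.g. the
 * strawman ACPI table above). None of these symbols exist today.
 */
#include <linux/types.h>
#include <linux/range.h>
#include <linux/uuid.h>
#include <linux/errno.h>

struct tdx_cap_range {
        struct range range;
        uuid_t crypto_capability;
};

/* Populated from the firmware-provided map at boot (not shown) */
static struct tdx_cap_range *tdx_cap_ranges;
static int nr_tdx_cap_ranges;

static bool tdx_range_is_capable(u64 start, u64 end)
{
        int i;

        for (i = 0; i < nr_tdx_cap_ranges; i++) {
                struct range *r = &tdx_cap_ranges[i].range;

                if (start >= r->start && end <= r->end)
                        return true;
        }
        return false;
}

/* Would be called from the memory hotplug / kmem onlining path */
int tdx_validate_memory_range(u64 start, u64 size)
{
        return tdx_range_is_capable(start, start + size - 1) ? 0 : -EINVAL;
}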

>
> >
> > > > 4. CPU hotplug
> > > >
> > > > The first generation of TDX architecturally doesn't support ACPI CPU
> > > > hotplug. All logical cpus are enabled by the BIOS in the MADT. Also, the
> > > > first generation of TDX-capable platforms don't support ACPI CPU hotplug
> > > > either. Since this physically cannot happen, this series doesn't add any
> > > > check in the ACPI CPU hotplug code path to disable it.
> >
> > What are the actual challenges posed to TDX with respect to CPU hotplug?
>
> During the TDX module initialization, there is a step to call SEAMCALL on all
> logical cpus to initialize per-cpu TDX state. TDX doesn't support initializing
> new hot-added CPUs after that initialization. There are MCHECK/BIOS changes to
> enforce this too, I guess, but I don't know the details.
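
Ok, so the requirement is effectively "every logical cpu that may ever run TDX
work must be online and initialized when the module is initialized", i.e.
something like the sketch below (purely illustrative; seamcall_lp_init() is a
stand-in for whatever SEAMCALL wrapper the series actually uses):

#include <linux/smp.h>
#include <linux/atomic.h>
#include <linux/errno.h>

/* Stand-in for the real per-logical-cpu init SEAMCALL wrapper */
extern int seamcall_lp_init(void);

static atomic_t lp_init_failed = ATOMIC_INIT(0);

static void do_lp_init(void *unused)
{
        if (seamcall_lp_init())
                atomic_inc(&lp_init_failed);
}

/*
 * Run the per-cpu init on every currently online CPU and wait for
 * completion. CPUs hot-added after this point cannot be initialized,
 * hence no ACPI CPU hotplug support.
 */
static int tdx_init_all_cpus(void)
{
        on_each_cpu(do_lp_init, NULL, 1);
        return atomic_read(&lp_init_failed) ? -EIO : 0;
}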

Is there an ACPI table that indicates CPU-x passed the check? Or since
the BIOS is invoked in the CPU-online path, is it trusted to suppress
those events for CPUs outside of the mcheck domain?

> > > > Also, only TDX module initialization requires all BIOS-enabled cpus are
> >
> > Please define "BIOS-enabled" cpus. There is no "BIOS-enabled" line in
> > /proc/cpuinfo for example.
>
> It means the CPUs with the "enabled" bit set in the MADT.

That just indicates the CPUs present at boot; a hot-add event later
changes newly present CPUs to enabled. Per the above, is the BIOS
responsible for rejecting those new CPUs, or is the kernel?
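
For reference, my reading of "enabled bit in the MADT" is the
ACPI_MADT_ENABLED flag on each local APIC entry, something like the sketch
below (illustration only, not the series code; x2APIC entries would need the
same treatment):

#include <linux/acpi.h>
#include <linux/init.h>

static int nr_bios_enabled_cpus __initdata;

/* Count local APIC entries that the BIOS marked enabled in the MADT */
static int __init count_enabled_lapic(union acpi_subtable_headers *header,
                                      const unsigned long end)
{
        struct acpi_madt_local_apic *lapic;

        lapic = (struct acpi_madt_local_apic *)header;
        if (lapic->lapic_flags & ACPI_MADT_ENABLED)
                nr_bios_enabled_cpus++;

        return 0;
}

static void __init tdx_count_bios_enabled_cpus(void)
{
        acpi_table_parse_madt(ACPI_MADT_TYPE_LOCAL_APIC,
                              count_enabled_lapic, 0);
}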
