Subject: Re: [PATCH v8 5/5] x86/tdx: Add Quote generation support
On Thu, 2022-07-21 at 09:08 -0700, Dave Hansen wrote:
> On 6/8/22 19:52, Kuppuswamy Sathyanarayanan wrote:
> > For shared buffer allocation, alternatives like using the DMA API were
> > also considered. Although it is simpler to use, it is not preferred
> > because the dma_alloc_*() APIs require a valid bus device as an
> > argument, which would require converting the attestation driver into a
> > platform device driver. This is unnecessary, and since the attestation
> > driver does not do real DMA, there is no need to use the real DMA APIs.
>
> Let's actually try to walk through the requirements for the memory
> allocation here.
>
> 1. The guest kernel needs to allocate some guest physical memory
> for the attestation data buffer.
> 2. The guest physical memory must be mapped by the guest so that
> it can be read/written.
> 3. The guest mapping must be a "TDX Shared" mapping. Since all
> guest physical memory is "TDX Private" by default, something
> must convert the memory from Private->Shared.
> 4. If there are alias mappings with "TDX Private" page table
> permissions, those mappings must never be used while the page is
> in its shared state.
> 4a. load_unaligned_zeropad() must be prevented from being used
> on the page immediately preceding a Private alias to a Shared
> page.
> 5. Actions that increasingly fracture the direct map must be avoided.
> Attestation may happen many times and repeated allocations that
> fracture the direct map have performance consequences.
> 6. A softer requirement: presuming that bounce buffers won't be used
> for TDX devices *forever*, it would be nice to use a mechanism that
> will continue to work on systems that don't have swiotlb on.
>
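To make requirements #1-#3 concrete, here is a minimal sketch of the
allocation plus the Private->Shared conversion, assuming
set_memory_decrypted() is what flips the pages to "TDX Shared" in a TDX
guest. The names (quote_buf, QUOTE_BUF_ORDER, tdx_alloc_shared_buf) are
illustrative, not from the patch, and the error path is simplified:

    #include <linux/gfp.h>
    #include <linux/set_memory.h>

    #define QUOTE_BUF_ORDER	1	/* illustrative size */

    static void *quote_buf;

    static int tdx_alloc_shared_buf(void)
    {
            int ret;

            /* #1: allocate guest physical memory; #2: it is already mapped */
            quote_buf = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
                                                 QUOTE_BUF_ORDER);
            if (!quote_buf)
                    return -ENOMEM;

            /* #3: convert the pages from "TDX Private" to "TDX Shared" */
            ret = set_memory_decrypted((unsigned long)quote_buf,
                                       1 << QUOTE_BUF_ORDER);
            if (ret) {
                    /* Real code must be careful freeing pages whose
                     * conversion state is unknown; simplified here. */
                    free_pages((unsigned long)quote_buf, QUOTE_BUF_ORDER);
                    quote_buf = NULL;
            }
            return ret;
    }

Converting direct-map pages in place like this is exactly where #4a and
#5 come in.
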
> I think we've talked about three different solutions:
>
> == vmalloc() ==
>
> So, let's say we used a relatively plain vmalloc(). That's great for
> #1->#3 as long as the vmalloc() mapping gets the "TDX Shared" bit set
> properly on its PTEs. But, it falls over for *either* #4 or #5. If it
> leaves the direct map alone, it's exposed to load_unaligned_zeropad().
> If it unmaps the memory from the direct map, it runs afoul of #5.
>
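For reference, the plain-vmalloc() variant would look something like the
sketch below. It assumes set_memory_decrypted() accepts a vmalloc
address; what it does (or does not do) to the direct-map alias is
exactly the #4a-versus-#5 dilemma:

    void *buf = vmalloc(PAGE_SIZE);

    /*
     * Flip the vmalloc PTEs to "TDX Shared". The pages still have a
     * direct-map alias: leave it "TDX Private" and #4a bites; unmap
     * or split it and #5 bites.
     */
    if (buf && set_memory_decrypted((unsigned long)buf, 1)) {
            vfree(buf);
            buf = NULL;
    }
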
> == order-1 + vmap() ==
>
> Let's now consider a vmalloc() variant: allocate a bunch of order-1
> pages and vmap() page[1], leaving page[0] as a guard page against
> load_unaligned_zeropad() on the direct map. That works, but it's an
> annoying amount of code.
>
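Roughly like the hedged sketch below, which shows where the code weight
comes from. The names are illustrative, and it glosses over keeping the
Private->Shared conversion away from page[0] and the direct-map alias:

    #include <linux/mm.h>
    #include <linux/vmalloc.h>
    #include <linux/set_memory.h>

    static void *alloc_guarded_shared_page(struct page **guard)
    {
            struct page *pages[2];
            void *va;

            /* Order-1: two physically contiguous pages */
            pages[0] = alloc_pages(GFP_KERNEL | __GFP_ZERO, 1);
            if (!pages[0])
                    return NULL;
            pages[1] = pages[0] + 1;

            /*
             * Map and convert only page[1]. page[0] is never handed out,
             * so nothing can load_unaligned_zeropad() off its tail into
             * the still-Private direct-map alias of page[1].
             */
            va = vmap(&pages[1], 1, VM_MAP, PAGE_KERNEL);
            if (!va)
                    goto free;

            if (set_memory_decrypted((unsigned long)va, 1)) {
                    /* Simplified; see the caveat about freeing pages in
                     * an unknown conversion state. */
                    vunmap(va);
                    goto free;
            }

            *guard = pages[0];
            return va;
    free:
            __free_pages(pages[0], 1);
            return NULL;
    }
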
> == swiotlb pages ==
>
> Using the swiotlb bounce buffer pages is the other proposed option.
> They already have a working kernel mapping and have already been
> converted. They are mitigated against load_unaligned_zeropad(). They
> do cause direct map fracturing, but only once since they're allocated
> statically. They don't increasingly degrade things. It's a one-time
> cost. Their interaction with #6 is not great.
>
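For illustration, the usual way a driver lands in those pre-converted
swiotlb pages today is the streaming DMA API, as in the hedged sketch
below. It needs a real struct device ("dev", "buf" and "len" are
placeholders), which circles back to the DMA-API objection quoted at
the top:

    #include <linux/dma-mapping.h>

    /*
     * In a TDX guest, streaming mappings are forced to bounce through
     * the statically-allocated, already-converted swiotlb pool.
     */
    dma_addr_t dma = dma_map_single(dev, buf, len, DMA_BIDIRECTIONAL);

    if (dma_mapping_error(dev, dma))
            return -ENOMEM;
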
> Did I miss anything? Does that accurately capture where we are?

We could also reserve a dedicated CMA region for this, but Kirill didn't
like that approach.
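For the record, a hedged sketch of what the CMA route might look like,
assuming an early-boot reservation plus cma_alloc() at attestation time;
the "tdx_quote" name and the 2MB size are made up:

    #include <linux/cma.h>
    #include <linux/sizes.h>

    static struct cma *quote_cma;

    /*
     * Must run while memblock is still available (early boot), not as
     * a regular initcall.
     */
    static int __init tdx_quote_cma_reserve(void)
    {
            return cma_declare_contiguous(0, SZ_2M, 0, 0, 0, false,
                                          "tdx_quote", &quote_cma);
    }

    /* Attestation time: contiguous pages that can be converted once */
    static struct page *tdx_quote_alloc_pages(unsigned long nr)
    {
            return cma_alloc(quote_cma, nr, 0, false);
    }
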

--
Thanks,
-Kai

