Subject: Re: [PATCH v3 13/21] x86/virt/tdx: Allocate and set up PAMTs for TDMRs

On 4/29/22 00:46, Kai Huang wrote:
> On Thu, 2022-04-28 at 10:12 -0700, Dave Hansen wrote:
>> This is also a good place to note the downsides of using
>> alloc_contig_pages().
>
> For instance:
>
> The allocation may fail when memory usage is under pressure.

It's not really memory pressure, though. The larger the allocation, the
more likely it is to fail: either the kernel simply can't free that much
memory, or you need 1GB of contiguous memory, 999.996MB gets freed, and
there is one stubborn page left.

alloc_contig_pages() can and will fail. The only mitigation which is
guaranteed to avoid this is doing the allocation at boot. But, you're
not doing that, to avoid wasting memory on every TDX-capable system
that never actually uses TDX.

A *good* way to mitigate this (although not foolproof) is to launch a
TDX VM early in boot, before memory gets fragmented or consumed. You
might even want to recommend this in the documentation.
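
For reference, this is roughly the shape the failure handling has to
take. This is only a sketch, not the patch's code: the helper name and
the per-node split are assumptions; alloc_contig_pages() itself is the
existing kernel API.

#include <linux/gfp.h>
#include <linux/mm.h>

/*
 * Sketch: allocate one physically contiguous PAMT chunk on a given
 * node.  alloc_contig_pages() can and will fail for large sizes, so
 * the caller must be prepared to unwind TDX initialization.
 */
static struct page *tdx_alloc_pamt(unsigned long npages, int nid)
{
	return alloc_contig_pages(npages, GFP_KERNEL, nid, NULL);
}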

>>> +/*
>>> + * Locate the NUMA node containing the start of the given TDMR's first
>>> + * RAM entry. The given TDMR may also cover memory in other NUMA nodes.
>>> + */
>>
>> Please add a sentence or two on the implications here of what this means
>> when it happens. Also, the joining of e820 regions seems like it might
>> span NUMA nodes. What prevents that code from just creating one large
>> e820 area that leads to one large TDMR and horrible NUMA affinity for
>> these structures?
>
> How about adding:
>
> When a TDMR is created, it stops spanning at the NUMA boundary.

I actually don't know what that means at all. I was thinking of
something like this.

/*
* Pick a NUMA node on which to allocate this TDMR's metadata.
*
* This is imprecise since TDMRs are 1GB aligned and NUMA nodes might
* not be. If the TDMR covers more than one node, just use the _first_
* one. This can lead to small areas of off-node metadata for some
* memory.
*/

>>> +static int tdmr_get_nid(struct tdmr_info *tdmr)
>>> +{
>>> + u64 start, end;
>>> + int i;
>>> +
>>> + /* Find the first RAM entry covered by the TDMR */

There's something else missing in here. Why not just do:

return phys_to_target_node(TDMR_START(tdmr));

This would explain it:

/*
* The beginning of the TDMR might not point to RAM.
* Find its first RAM address, from which its node can
* be found.
*/

>>> + e820_for_each_mem(i, start, end)
>>> + if (end > TDMR_START(tdmr))
>>> + break;
>>
>> Brackets around the big loop, please.
>
> OK.
>
>>
>>> + /*
>>> + * One TDMR must cover at least one (or partial) RAM entry,
>>> + * otherwise it is kernel bug. WARN_ON() in this case.
>>> + */
>>> + if (WARN_ON_ONCE((start >= end) || start >= TDMR_END(tdmr)))
>>> + return 0;

This really means "no RAM found for this TDMR", right? Can we say
that, please?
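
Putting those pieces together (the suggested comments, brackets around
the loop, and the clearer WARN message), the function might end up
looking roughly like this. It is only a sketch: e820_for_each_mem(),
TDMR_START()/TDMR_END() and struct tdmr_info are from the patch under
review, phys_to_target_node() is the existing kernel helper, and the
final call on the first RAM address is my guess at how it resolves.

static int tdmr_get_nid(struct tdmr_info *tdmr)
{
	u64 start, end;
	int i;

	/*
	 * Pick a NUMA node on which to allocate this TDMR's metadata.
	 *
	 * This is imprecise since TDMRs are 1GB aligned and NUMA nodes
	 * might not be.  If the TDMR covers more than one node, just
	 * use the _first_ one.  This can lead to small areas of
	 * off-node metadata for some memory.
	 */

	/*
	 * The beginning of the TDMR might not point to RAM.  Find its
	 * first RAM address, from which its node can be found.
	 */
	e820_for_each_mem(i, start, end) {
		if (end > TDMR_START(tdmr))
			break;
	}

	/* No RAM found for this TDMR: kernel bug */
	if (WARN_ON_ONCE((start >= end) || start >= TDMR_END(tdmr)))
		return 0;

	return phys_to_target_node(start);
}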


>>> + /*
>>> + * Allocate one chunk of physically contiguous memory for all
>>> + * PAMTs. This helps minimize the PAMT's use of reserved areas
>>> + * in overlapped TDMRs.
>>> + */
>>
>> Ahh, this explains it. Considering that tdmr_get_pamt_sz() is really
>> just two lines of code, I'd probably just drop the helper and open-code it
>> here. Then you only have one place to comment on it.
>
> It has a loop and internally calls __tdmr_get_pamt_sz(). It doesn't look
> like it fits well if we open-code it here.
>
> How about move this comment to tdmr_get_pamt_sz()?

I thought about that. But tdmr_get_pamt_sz() isn't itself doing any
allocation so it doesn't make a whole lot of logical sense. This is a
place where a helper _can_ be removed. Remove it, please.
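
For what it's worth, the open-coded version could look something like
the sketch below. Only __tdmr_get_pamt_sz() is from the patch; the
surrounding function name, the TDX page-size enum values, the argument
list, and the bookkeeping are assumptions for illustration.

static int tdmr_set_up_pamt(struct tdmr_info *tdmr, int nid)
{
	unsigned long pamt_sz = 0;
	struct page *pamt;
	int pgsz;

	/*
	 * Allocate one chunk of physically contiguous memory for all
	 * PAMTs.  This helps minimize the PAMT's use of reserved areas
	 * in overlapped TDMRs.
	 */
	for (pgsz = TDX_PG_4K; pgsz < TDX_PG_MAX; pgsz++)
		pamt_sz += __tdmr_get_pamt_sz(tdmr, pgsz);

	/* pamt_sz is assumed to already be page-aligned here */
	pamt = alloc_contig_pages(pamt_sz >> PAGE_SHIFT, GFP_KERNEL,
				  nid, NULL);
	if (!pamt)
		return -ENOMEM;

	/* ... record the PAMT base and size in the TDMR here ... */
	return 0;
}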
