Subject: Re: [PATCH v8 2/5] x86/tdx: Add TDX Guest event notify interrupt support
Hi,

+ Jun

On 6/20/22 5:33 AM, Kai Huang wrote:
> On Wed, 2022-06-08 at 19:52 -0700, Kuppuswamy Sathyanarayanan wrote:
>> Host-guest event notification via a configured interrupt vector is useful
>> in cases where a guest makes an asynchronous request and needs a
>> callback from the host to indicate the completion, or to let the host
>> notify the guest about events like device removal. One usage example is
>> the callback requirement of the GetQuote asynchronous hypercall.
>
> Although this paragraph is from the GHCI spec, IMHO it is not very helpful. In
> fact, I think this paragraph is not quite right and should be removed from the
> GHCI. The reason is that event notification from the VMM in cases like "device
> removal" is too vague. There's no _specification_ in the GHCI of which "device
> removal" events the VMM should inject this notification for. For instance, I
> _think_ the Qemu-enumerated ACPI-based hotplug should continue to work in a TD.

Yes. It just says that it *can* be used to signal a device removal. It is just
an example of where it can be used. But I agree that such a use case is vague.
If it makes the spec clearer, I am fine with removing it.

Copied from sec 3.5 TDG.VP.VMCALL<SetupEventNotifyInterrupt>:

"Example of an operation that can use the event notify is the host
VMM signaling a device removal to the TD, in response to which a TD may
unload a device driver."
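
For reference, the tdx_hcall_set_notify_intr() helper mentioned in the
commit message boils down to roughly the below (a minimal sketch;
TDVMCALL_SETUP_NOTIFY_INTR is the leaf number this series defines):

static long tdx_hcall_set_notify_intr(u8 vector)
{
        /* Vectors 0-31 are reserved for exceptions, reject them */
        if (vector < FIRST_EXTERNAL_VECTOR)
                return -EINVAL;

        /* Issue TDG.VP.VMCALL<SetupEventNotifyInterrupt> to the VMM */
        if (_tdx_hypercall(TDVMCALL_SETUP_NOTIFY_INTR, vector, 0, 0, 0))
                return -EIO;

        return 0;
}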

>
> That being said, if a TD has multiple devices, it cannot know whether the VMM
> will inject the removal event via the vector set by SetupEventNotifyInterrupt.
> And for the same device in the same TD, different VMMs may use different ways
> to notify its removal.

As per the current design, if it is used for device removal, I think all
registered device drivers will get the notification, and each individual driver
has to check whether the event is applicable to it.
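
Something like the below is what I mean (a hypothetical sketch using an
atomic notifier chain; the names here are made up, not from this series):

/* Every handler registered here runs on each notification. */
static ATOMIC_NOTIFIER_HEAD(tdx_event_notify_list);

int tdx_register_event_notify_handler(struct notifier_block *nb)
{
        return atomic_notifier_chain_register(&tdx_event_notify_list, nb);
}

/* Each driver's callback decides on its own whether the event applies. */
static int my_driver_event_cb(struct notifier_block *nb,
                              unsigned long event, void *data)
{
        if (!my_driver_owns_event(event, data))  /* hypothetical check */
                return NOTIFY_DONE;              /* not for us, pass on */

        /* ... handle the event ... */
        return NOTIFY_OK;
}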

If the SetupEventNotifyInterrupt TDVMCALL specification is extended to specify
the exact device or use case, then the implementation can be optimized.

>
> It seems GetQuote is the only user of SetupEventNotifyInterrupt. Maybe we
> should just declare that it is for GetQuote.

Ok.

>
> Isaku, what do you think? Does this make sense?
>
>>
>> In a TDX guest, the SetupEventNotifyInterrupt hypercall can be used to
>> specify which interrupt vector the VMM should use as the event-notify
>> vector. Details about the SetupEventNotifyInterrupt hypercall can be
>> found in the TDX Guest-Host Communication Interface (GHCI)
>> Specification, sec 3.5 "VP.VMCALL<SetupEventNotifyInterrupt>". Add a
>> tdx_hcall_set_notify_intr() helper function to implement the
>> SetupEventNotifyInterrupt hypercall.
>
> As you also used "can" above, the GHCI only says the VMM _CAN_ inject the
> vector set by SetupEventNotifyInterrupt, but not that it must (3.3
> TDG.VP.VMCALL<GetQuote>). This means theoretically the TD should implement a
> polling mode in case the VMM doesn't support injecting the event via the
> vector set by SetupEventNotifyInterrupt?

Yes. But the GetQuote specification does not talk about the polling mode
use case either. So I think it is just a wording confusion.
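
(If we ever had to support a VMM that never injects the vector, the
fallback would be to poll the status word of the shared GetQuote buffer,
roughly as below -- a hypothetical sketch, where quote_buf, status and
GET_QUOTE_IN_FLIGHT stand in for the shared-buffer layout in the GHCI:)

unsigned long timeout = jiffies + QUOTE_POLL_TIMEOUT;  /* made-up limit */

/* The VMM rewrites the status word when the quote request completes */
while (READ_ONCE(quote_buf->status) == GET_QUOTE_IN_FLIGHT) {
        if (time_after(jiffies, timeout))
                return -ETIMEDOUT;
        msleep(20);
}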

>
> Perhaps we should update the GHCI spec to say "must"...

Ok.

>
>>
>> Reserve the 0xec IRQ vector for the TDX guest to receive the event
>> completion notification from the VMM. Also add the related IDT handler
>> to process the notification event.
>
> Here it lacks an explanation of why we need to reserve a system vector. For
> instance, why we cannot choose the device IRQ way, which only requires one vector on one

As you have explained below, as per the current spec, it just expects a system
vector.
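
i.e., the patch does roughly the below (a sketch of the relevant hunks,
using the vector/handler names from this series):

/* arch/x86/include/asm/irq_vectors.h: reserve the system vector */
#define TDX_GUEST_EVENT_NOTIFY_VECTOR   0xec

/* Runs whenever the VMM injects the registered vector */
DEFINE_IDTENTRY_SYSVEC(sysvec_tdx_event_notify)
{
        ack_APIC_irq();
        /* invoke the registered callbacks, e.g. a notifier chain */
        atomic_notifier_call_chain(&tdx_event_notify_list, 0, NULL);
}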

> cpu. As you can see, reserving a system vector isn't ideal, especially for
> attestation, as it is not a frequent operation. It is a waste of IRQ

I agree that event notification is currently only used for attestation. But I
think in the future there could be other use cases for it. If the intention
were just to use it for attestation, then we could simply modify the GetQuote
TDVMCALL to pass the vector, and there would be no need for a new TDVMCALL. I
think the intention here is to have a generic method for the VMM to notify the
TD about events. I am not clear about the possible future use cases, so I
cannot comment on the frequency of its use.

Jun, any comments?

> resources, especially on server systems with a lot of CPUs.

FWIW, this reservation is guarded by CONFIG_INTEL_TDX_GUEST, so it will be
reserved only for the TDX use case.
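
i.e., the IDT gate is only installed when CONFIG_INTEL_TDX_GUEST is set,
roughly (a sketch of the idt.c hunk):

/* arch/x86/kernel/idt.c, in the apic_idts[] table */
#ifdef CONFIG_INTEL_TDX_GUEST
        INTG(TDX_GUEST_EVENT_NOTIFY_VECTOR, asm_sysvec_tdx_event_notify),
#endif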


>
> The reason is that the SetupEventNotifyInterrupt TDVMCALL only has one
> argument, the vector, and cannot specify which CPU the VMM should inject the
> event to. The GHCI spec doesn't say which CPU the VMM should inject to (i.e.
> it doesn't say the VMM must inject to the CPU on which
> SetupEventNotifyInterrupt was called), so we can only assume the VMM can
> inject to any CPU.
>
> Btw, x86 maintainers,
>
> I'd like to check with you to see whether we should improve the existing
> SetupEventNotifyInterrupt so we can choose the request_irq() style for
> attestation. Using request_irq() means we don't need to reserve a system
> vector, but can allocate a vector dynamically when needed.
>
> Assuming we update SetupEventNotifyInterrupt to also allow the TD to specify
> which CPU to inject to (i.e. via APIC ID, along with the vector), my
> understanding is we can use the below approach (idea only) to dynamically
> allocate a vector on one CPU when attestation is needed:
>
>
> int cpu, vector;
> int irq;
>
> // request an IRQ, and prevent it from being migrated
> irq = __irq_domain_alloc_irqs(x86_vector_domain, 0, 1, ...);
> request_irq(irq, ...);
>
> // look up the vector and target cpu assigned to the irq
>
> TDVMCALL<SetupEventNotifyInterrupt>(vector,
>                 apic->cpu_present_to_apicid(cpu));
>
> Is this reasonable? If yes, is it worth doing?
>
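
For the step of looking up the vector and target cpu from the irq,
something like the below might work (a sketch; irqd_cfg() is the x86
vector-domain helper, and the irq_cfg it returns carries both the vector
and the destination APIC ID):

struct irq_data *data = irq_get_irq_data(irq);
struct irq_cfg *cfg = irqd_cfg(data);

/* vector and target APIC ID assigned by the x86 vector domain */
TDVMCALL<SetupEventNotifyInterrupt>(cfg->vector, cfg->dest_apicid);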

--
Sathyanarayanan Kuppuswamy
Linux Kernel Developer
