Subject: Re: [PATCH RFC 10/39] KVM: x86/xen: support upcall vector
On 2020-12-02 11:02 a.m., David Woodhouse wrote:
> On Wed, 2020-12-02 at 18:34 +0000, Joao Martins wrote:
>> On 12/2/20 4:47 PM, David Woodhouse wrote:
>>> On Wed, 2020-12-02 at 13:12 +0000, Joao Martins wrote:
>>>> On 12/2/20 11:17 AM, David Woodhouse wrote:
>>>>> I might be more inclined to go for a model where the kernel handles the
>>>>> evtchn_pending/evtchn_mask for us. What would go into the irq routing
>>>>> table is { vcpu, port# } which get passed to kvm_xen_evtchn_send().
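>>>>>
>>>>> Something like this, perhaps (purely illustrative; the field names
>>>>> here are made up, not a proposed ABI):
>>>>>
>>>>>         /* Userspace-programmed routing entry: deliver event channel
>>>>>          * @port to @vcpu when this routing entry fires. */
>>>>>         struct kvm_irq_routing_xen_evtchn {
>>>>>                 __u32 port;     /* event channel port# */
>>>>>                 __u32 vcpu;     /* target vCPU index */
>>>>>         };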
>>>>
>>>> But wouldn't passing the port into the routing table, and handling the
>>>> sending of events there, lead to unnecessary handling of event channels
>>>> which aren't handled by the kernel, compared to just caring about
>>>> injecting the upcall?
>>>
>>> Well, I'm generally in favour of *not* doing things in the kernel that
>>> don't need to be there.
>>>
>>> But if the kernel is going to short-circuit the IPIs and VIRQs, then
>>> it's already going to have to handle the evtchn_pending/evtchn_mask
>>> bitmaps, and actually injecting interrupts.
>>>
>>
>> Right. I was trying to point that out in the discussion we had on the
>> next patch. But truth be told, it was more about touting the idea of the
>> kernel knowing whether a given event channel is registered for userspace
>> handling, rather than fully handling the event channel.
>>
>> I suppose we are able to provide both options to the VMM anyway,
>> i.e. 1) letting it handle everything entirely in userspace by intercepting
>> EVTCHNOP_send, or 2) going through the irq route if we want the kernel to
>> offload it.
>
> Right. The kernel takes what it knows about and anything else goes up
> to userspace.
>
> I do like the way you've handled the vcpu binding in userspace, and the
> kernel just knows that a given port goes to a given target CPU.
>
>>
>>> For the VMM
>>> API I think we should follow the Xen model, mixing the domain-wide and
>>> per-vCPU configuration. It's the best way to faithfully model the
>>> behaviour a true Xen guest would experience.
>>>
>>> So KVM_XEN_ATTR_TYPE_CALLBACK_VIA can be used to set one of
>>> • HVMIRQ_callback_vector, taking a vector#
>>> • HVMIRQ_callback_gsi for the in-kernel irqchip, taking a GSI#
>>>
>>> And *maybe* in a later patch it could also handle
>>> • HVMIRQ_callback_gsi for split-irqchip, taking an eventfd
>>> • HVMIRQ_callback_pci_intx, taking an eventfd (or a pair, for EOI?)
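>>>
>>> From the VMM side, setting the vector case might look roughly like
>>> this (a sketch only; the struct layout and ioctl name assume the attr
>>> interface we've been discussing and aren't final):
>>>
>>>         struct kvm_xen_hvm_attr attr = {
>>>                 .type = KVM_XEN_ATTR_TYPE_CALLBACK_VIA,
>>>                 .u.vector = 0x40,       /* HVMIRQ_callback_vector */
>>>         };
>>>
>>>         if (ioctl(vm_fd, KVM_XEN_HVM_SET_ATTR, &attr) < 0)
>>>                 err(1, "KVM_XEN_HVM_SET_ATTR");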
>>>
>>
>> Most of the Xen versions we cared about had callback_vector and the
>> per-vCPU callback vector (despite Linux not using the latter). But if
>> you're dating back to Xen 3.2 and 4.1 (or to certain Windows drivers),
>> I suppose gsi and pci-intx are must-haves.
>
> Not sure about GSI, but PCI-INTX is definitely something I've seen in
> active use by customers recently. I think SLES10 will use that.
>
>> I feel we could just accommodate it as a subtype in KVM_XEN_ATTR_TYPE_CALLBACK_VIA.
>> I don't see the advantage of having another xen attr type.
>
> Yeah, fair enough.
>
>> But I kinda have mixed feelings about having the kernel handle the entire
>> event channel ABI, as opposed to only the channels userspace asked to
>> offload. It looks a tad unnecessary, beyond the added gain for VMMs that
>> don't want to care about the internals of event channels. Performance-wise
>> it wouldn't bring anything better, but maybe the former is reason enough
>> to consider it.
>
> Yeah, we'll see. Especially when it comes to implementing FIFO event
> channels, I'd rather just do it in one place — and if the kernel does
> it anyway then it's hardly difficult to hook into that.

Sorry I'm late to this conversation. Not a whole lot to add to what Joao
said. I would only differ with him on how much to offload.

Given that we need the fast path in the kernel anyway, I think it's simpler
to do all of the event-channel bitmap handling in the kernel alone.
That would also simplify using the kernel Xen drivers, should someone
eventually decide to use them.
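
For illustration, the 2-level delivery that the kernel would own is
roughly the following (a sketch of the logic defined by the public Xen
ABI in xen/include/public/xen.h; the helper name, casts and locking are
illustrative, not actual KVM code):

        /* Mark @port pending; return true if the upcall (callback
         * vector) should be injected into the target vCPU. */
        static bool evtchn_2l_set_pending(struct shared_info *s,
                                          struct vcpu_info *v, int port)
        {
                int sel = port / BITS_PER_LONG;

                if (test_and_set_bit(port,
                                     (unsigned long *)s->evtchn_pending))
                        return false;   /* already pending */

                if (!test_bit(port, (unsigned long *)s->evtchn_mask) &&
                    !test_and_set_bit(sel,
                                      (unsigned long *)&v->evtchn_pending_sel)) {
                        v->evtchn_upcall_pending = 1;
                        return !v->evtchn_upcall_mask;
                }

                return false;
        }

Userspace would then only deal with the channels it explicitly kept for
itself, e.g. by intercepting EVTCHNOP_send as Joao described.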


Ankur

>
> But I've been about as coherent as I can be in email, and I think we're
> generally aligned on the direction. I'll do some more experiments and
> see what I can get working, and what it looks like.
>
> I'm focusing on making the shinfo stuff all use kvm_map_gfn() first.
>
