Subject: Re: [PATCH v6 02/21] dt-bindings: Add binding for gunyah hypervisor
On Wed, Nov 2, 2022 at 1:06 PM Elliot Berman <quic_eberman@quicinc.com> wrote:
>
> Hi Jassi,
>
> On 11/1/2022 7:01 PM, Jassi Brar wrote:
> > On Tue, Nov 1, 2022 at 7:12 PM Elliot Berman <quic_eberman@quicinc.com> wrote:
> >>
> >>
> >>
> >> On 11/1/2022 2:58 PM, Jassi Brar wrote:
> >>> On Tue, Nov 1, 2022 at 3:35 PM Elliot Berman <quic_eberman@quicinc.com> wrote:
> >>>>
> >>>>
> >>>>
> >>>> On 11/1/2022 9:23 AM, Jassi Brar wrote:
> >>>>> On Mon, Oct 31, 2022 at 10:20 PM Elliot Berman <quic_eberman@quicinc.com> wrote:
> >>>>>>
> >>>>>> Hi Jassi,
> >>>>>>
> >>>>>> On 10/27/2022 7:33 PM, Jassi Brar wrote:
> >>>>>> > On Wed, Oct 26, 2022 at 1:59 PM Elliot Berman <quic_eberman@quicinc.com> wrote:
> >>>>>> > .....
> >>>>>> >> +
> >>>>>> >> +        gunyah-resource-mgr@0 {
> >>>>>> >> +            compatible = "gunyah-resource-manager-1-0",
> >>>>>> >> +                         "gunyah-resource-manager";
> >>>>>> >> +            interrupts = <GIC_SPI 3 IRQ_TYPE_EDGE_RISING>, /* TX full IRQ */
> >>>>>> >> +                         <GIC_SPI 4 IRQ_TYPE_EDGE_RISING>; /* RX empty IRQ */
> >>>>>> >> +            reg = <0x00000000 0x00000000>, <0x00000000 0x00000001>;
> >>>>>> >> +                  /* TX, RX cap ids */
> >>>>>> >> +        };
> >>>>>> >>
> >>>>>> > All these resources are used only by the mailbox controller driver.
> >>>>>> > So, this should be the mailbox controller node, rather than the
> >>>>>> > mailbox user.
> >>>>>> > One option is to load gunyah-resource-manager as a module that
> >>>>>> > relies on the gunyah-mailbox provider. That would also avoid the
> >>>>>> > "Allow direct registration to a channel" hack patch.
> >>>>>>
> >>>>>> A message queue to another guest VM wouldn't be known at boot time and
> >>>>>> thus couldn't be described in the devicetree.
> >>>>>>
> >>>>> I think you need to implement of_xlate() ... or please tell me what
> >>>>> exactly you need to specify in the dt.
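
For reference, a minimal of_xlate sketch along these lines (the
gunyah_mbox_xlate name and the one-cell channel-index encoding are
illustrative, not from the posted driver):

#include <linux/err.h>
#include <linux/mailbox_controller.h>

/* Map a one-cell mbox specifier to a channel by index. */
static struct mbox_chan *gunyah_mbox_xlate(struct mbox_controller *mbox,
					   const struct of_phandle_args *sp)
{
	if (sp->args_count != 1 || sp->args[0] >= mbox->num_chans)
		return ERR_PTR(-EINVAL);

	return &mbox->chans[sp->args[0]];
}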
> >>>>
> >>>> Dynamically created virtual machines can't be known in the DT, so there
> >>>> is nothing to specify there. There couldn't be a devicetree node for
> >>>> the message queue client because that client only exists once the VM
> >>>> is created by userspace.
> >>>>
> >>> The underlying "physical channel" is the synchronous SMC instruction,
> >>> and there is only one of those irrespective of the number of mailbox
> >>> instances created.
> >>
> >> I disagree that the physical channel is the SMC instruction. Regardless
> >> though, there are num_online_cpus() "physical channels" with this
> >> perspective.
> >>
> >>> So basically you are sharing one resource among users. Why doesn't the
> >>> RM request the "smc instruction" channel once and share it among
> >>> users?
> >>
> >> I suppose in this scenario, a single mailbox channel would represent all
> >> message queues? This would cause Linux to serialize *all* message queue
> >> hypercalls. Sorry, I can only think of negative implications.
> >>
> >> Error handling needs to move into clients: if a TX message queue becomes
> >> full or an RX message queue becomes empty, then we'll need to return
> >> error back to the client right away. The clients would need to register
> >> for the RTS/RTR interrupts to know when to send/receive messages and
> >> have retry error handling. If the mailbox controller retried for the
> >> clients as currently proposed, then we could get into a scenario where a
> >> message queue is never ready to send/receive and we'd be stuck forever
> >> trying to process that message. The mailbox controller would in effect
> >> become a wrapper around some SMC instructions that aren't related at the
> >> SMC instruction level.
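
As a sketch of that client-side model (fail fast on a full queue, then
resend after the TX not-full interrupt fires; every name below, e.g.
gh_msgq_client, is hypothetical):

#include <linux/completion.h>
#include <linux/interrupt.h>
#include <linux/mailbox_client.h>

struct gh_msgq_client {
	struct mbox_chan *chan;
	struct completion tx_ready;
};

/* Fires when the TX queue transitions from full to not-full (RTS). */
static irqreturn_t gh_msgq_tx_ready_irq(int irq, void *data)
{
	struct gh_msgq_client *client = data;

	complete(&client->tx_ready);
	return IRQ_HANDLED;
}

static int gh_msgq_send(struct gh_msgq_client *client, void *msg)
{
	int ret;

	for (;;) {
		ret = mbox_send_message(client->chan, msg);
		if (ret != -EAGAIN)
			break;
		/* Queue full: wait for the not-full interrupt, then retry. */
		wait_for_completion(&client->tx_ready);
	}

	return ret < 0 ? ret : 0;
}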
> >>
> >> A single channel would limit performance on SMP systems because only one
> >> core could send/receive a message at a time. Message queues themselves
> >> have no such limitation.
> >>
> > This is just an illusion. If Gunyah can handle multiple calls from a
> > VM in parallel, even with the "bind-client-to-channel" hack you can't
> > make sure different channels run on different cpu cores. If you are
> > ok with that, you could simply populate a mailbox controller with N
> > channels and allocate them in any order the clients ask.
>
> I wanted to make sure I understood the ask here completely. On what
> basis is N chosen? Who would be the mailbox clients?
>
A channel structure is cheap, so pick any N that is unlikely to run
out. Say you have 10 possible users in a VM, set N=16. I know ideally
it should be precise and flexible, but the gain in simplicity makes the
trade-off very acceptable.
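
As a rough sketch of that N-channel controller idea (gunyah_mbox_probe,
gunyah_mbox_ops and GH_MBOX_NUM_CHANS are illustrative names, not from
any posted driver):

#include <linux/mailbox_controller.h>
#include <linux/module.h>
#include <linux/platform_device.h>

#define GH_MBOX_NUM_CHANS	16	/* > max expected users, e.g. 10 */

/* Stub op: a real driver would issue the msgq send hypercall here. */
static int gunyah_mbox_send_data(struct mbox_chan *chan, void *data)
{
	return 0;
}

static const struct mbox_chan_ops gunyah_mbox_ops = {
	.send_data = gunyah_mbox_send_data,
};

static int gunyah_mbox_probe(struct platform_device *pdev)
{
	struct mbox_controller *mbox;

	mbox = devm_kzalloc(&pdev->dev, sizeof(*mbox), GFP_KERNEL);
	if (!mbox)
		return -ENOMEM;

	mbox->chans = devm_kcalloc(&pdev->dev, GH_MBOX_NUM_CHANS,
				   sizeof(*mbox->chans), GFP_KERNEL);
	if (!mbox->chans)
		return -ENOMEM;

	mbox->dev = &pdev->dev;
	mbox->num_chans = GH_MBOX_NUM_CHANS;
	mbox->ops = &gunyah_mbox_ops;
	/* The of_xlate sketch shown earlier would slot in here. */

	return devm_mbox_controller_register(&pdev->dev, mbox);
}

Clients then just request whichever channel is free; the channel index
carries no meaning beyond identifying the slot.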

thanks.
