lkml.org 
[lkml]   [2022]   [Oct]   [14]   [last100]   RSS Feed
Views: [wrap][no wrap]   [headers]  [forward] 
 
Messages in this thread
/
From
SubjectRE: [RFC PATCH v3 4/7] bus/cdx: add cdx-MSI domain with gic-its domain as parent
Date
[AMD Official Use Only - General]



> -----Original Message-----
> From: Jason Gunthorpe <jgg@ziepe.ca>
> Sent: 13 October 2022 13:43
> To: Radovanovic, Aleksandar <aleksandar.radovanovic@amd.com>
> Cc: Gupta, Nipun <Nipun.Gupta@amd.com>; Marc Zyngier
> <maz@kernel.org>; Robin Murphy <robin.murphy@arm.com>;
> robh+dt@kernel.org; krzysztof.kozlowski+dt@linaro.org;
> gregkh@linuxfoundation.org; rafael@kernel.org; eric.auger@redhat.com;
> alex.williamson@redhat.com; cohuck@redhat.com; Gupta, Puneet (DCG-
> ENG) <puneet.gupta@amd.com>; song.bao.hua@hisilicon.com;
> mchehab+huawei@kernel.org; f.fainelli@gmail.com;
> jeffrey.l.hugo@gmail.com; saravanak@google.com;
> Michael.Srba@seznam.cz; mani@kernel.org; yishaih@nvidia.com;
> will@kernel.org; joro@8bytes.org; masahiroy@kernel.org;
> ndesaulniers@google.com; linux-arm-kernel@lists.infradead.org; linux-
> kbuild@vger.kernel.org; linux-kernel@vger.kernel.org;
> devicetree@vger.kernel.org; kvm@vger.kernel.org; okaya@kernel.org;
> Anand, Harpreet <harpreet.anand@amd.com>; Agarwal, Nikhil
> <nikhil.agarwal@amd.com>; Simek, Michal <michal.simek@amd.com>; git
> (AMD-Xilinx) <git@amd.com>
> Subject: Re: [RFC PATCH v3 4/7] bus/cdx: add cdx-MSI domain with gic-its
> domain as parent
>
> Caution: This message originated from an External Source. Use proper
> caution when opening attachments, clicking links, or responding.
>
>
> On Wed, Oct 12, 2022 at 03:09:26PM +0000, Radovanovic, Aleksandar wrote:
>
> > > On Wed, Oct 12, 2022 at 01:37:54PM +0000, Radovanovic, Aleksandar
> wrote:
> > > > > On Wed, Oct 12, 2022 at 10:34:23AM +0000, Radovanovic,
> > > > > Aleksandar
> > > wrote:
> > > > >
> > > > >
> > > > > > As for GITS_TRANSLATER, we can take up to 4 different IOVAs,
> > > > > > which limits us to 4 CDX devices (should be sufficient for
> > > > > > current HW use-cases). Also, it means that the address part
> > > > > > must be the same for all vectors within a single CDX device.
> > > > > > I'm assuming this is OK as it is going to be a single interrupt and
> IOMMU domain anyway.
> > > > >
> > > > > This is not at all how MSI is supposed to work.
> > > >
> > > > In the general case, no, they're not.
> > >
> > > I don't mean that you can hack this to work - I mean that in MSI the
> > > addr/data is supposed to come from the end point itself, not from
> > > some kind of shared structure. This is important because the actual
> > > act of generating the write has to be coherent with the DMA the
> > > device is doing, as the MSI write must push any DMA data to
> > > visibility to meet the "producer / consumer" model.
> > >
> >
> > I'm not sure I follow your argument, the limitation here is that the
> > MSI address value is shared between vectors of the same device
> > (requester id or endpoint, whichever way you prefer to call it), not
> > between devices.
>
> That isn't what you said, you said "we can take up to 4 different IOVAs, which
> limits us to 4 CDX devices" - which sounds like HW being shared across
> devices??

And that still does not imply lack of ordering or sharing of MSI target addresses between devices.

This is a highly programmable IP block, at the core of which is an interconnect interfacing to programmable logic (PL), a number of PCIe controllers (either endpoint or root-port), DMA engines, offload engines, the embedded processor subsystem (PSX), etc. DMA and interrupts can be routed across it in almost any (meaningful) direction. The datapath 'endpoints' request DMA and interrupts, but don't concern themselves with the mechanics of delivering that in the target domain. It is the responsibility of the egress bridges to the target domains to convert the interconnect interrupt transactions to whatever the interrupt delivery mechanism for that domain is. E.g. for PCIe controllers in endpoint mode, that would be through PCIe MSI-X tables internal to the controller (and managed by the PCIe host), for PSX that would be the PSX bridge (partially managed by the PSX OS, mediated through firmware, i.e. through CDX bus driver) and so on. It is the responsibility of the interconnect to maintain transaction ordering (including DMA vs. interrupts). It is the responsibility of the firmware to manage the bridges according to the implemented use-case, so everything works as expected.

The CDX bus driver manages a single aspect of this and that is endpoints implemented in PL/engines, targeting the PSX.

So, yes, the hardware that translates interrupt transactions to GIC AXI writes is shared between endpoints, but what I said above still applies. And that doesn't necessarily make it weird/wrong, it's just more complex than you might think.

Anyway, I think we're straying off topic here, none of this is visible to the kernel anyway. The question that we still need to answer is, are you OK with the limitations I listed originally?

Thanks,
Aleksandar

\
 
 \ /
  Last update: 2022-10-14 13:19    [W:0.196 / U:0.180 seconds]
©2003-2020 Jasper Spaans|hosted at Digital Ocean and TransIP|Read the blog|Advertise on this site