lkml.org 
[lkml]   [2020]   [Apr]   [24]   [last100]   RSS Feed
Views: [wrap][no wrap]   [headers]  [forward] 
 
Messages in this thread
/
Date
From
SubjectRe: [PATCH 5/6] irqchip: Add Loongson PCH MSI controller
On 2020-04-24 02:33, Jiaxun Yang wrote:
> On Thu, 23 Apr 2020 15:41:35 +0100
> Marc Zyngier <maz@kernel.org> wrote:
>
>> On Wed, 22 Apr 2020 22:24:25 +0800
>> Jiaxun Yang <jiaxun.yang@flygoat.com> wrote:
>>
>> > This controller appears on Loongson-7A family of PCH to transform
>> > interrupts from PCI MSI into HyperTransport vectorized interrrupts
>> > and send them to procrssor's HT vector controller.
>> >
>> > Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
>> > ---
> [...]
>> > + ret = irq_domain_alloc_irqs_parent(domain, virq, 1,
>> > &fwspec);
>> > + if (ret)
>> > + return ret;
>> > +
>> > + irq_domain_set_info(domain, virq, hwirq,
>> > + &middle_irq_chip, NULL,
>> > + handle_simple_irq, NULL, NULL);
>>
>> No, this should at least be handle_edge_irq. More importantly, you
>> should use the flow set by the underlying irqchip, and not provide
>> your own.
>
> Hi Marc,
> Thanks for your review.
>
> The underlying irqchip (HTVEC) follows a simple_irq flow as it only
> has mask/unmask callback, and it doesn't have tyoe configuration. so I
> followed simple_irq flow.

Not having a type doesn't mean that it can stick to simple_irq, which is
the wrong things to use in 99% of the HW cases.

> How can I use the flow set by the underlying irqchip? Just use
> irq_domain_set_hwirq_and_chip here and set_handler in HTVEC driver?

You need to find out how the HTVEC behaves. My gut feeling is that it
is itself edge triggered, but you would need to look in the
documentation
to find out.

>
>
>>
>> > + irq_set_probe(virq);
>>
>> Probe? what does it mean for MSIs? I also don't see how you tell the
>> underlying irqchip that the MSI is edge triggered.
>>
>> > +
>> > + return 0;
>> > +}
>> > +
>> > +static int pch_msi_middle_domain_alloc(struct irq_domain *domain,
>> > + unsigned int virq,
>> > + unsigned int nr_irqs,
>> > void *args) +{
>> > + struct pch_msi_data *priv = domain->host_data;
>> > + int hwirq, err, i;
>> > +
>> > + hwirq = pch_msi_allocate_hwirq(priv, nr_irqs);
>> > + if (hwirq < 0)
>> > + return hwirq;
>> > +
>> > + for (i = 0; i < nr_irqs; i++) {
>> > + err = pch_msi_parent_domain_alloc(domain, virq +
>> > i, hwirq + i);
>> > + if (err)
>> > + goto err_hwirq;
>> > +
>> > + irq_domain_set_hwirq_and_chip(domain, virq + i,
>> > hwirq + i,
>> > + &middle_irq_chip,
>> > priv);
>> > + }
>> > +
>> > + return 0;
>> > +err_hwirq:
>> > + while (--i >= 0)
>> > + irq_domain_free_irqs_parent(domain, virq, i);
>> > +
>> > + pch_msi_free_hwirq(priv, hwirq, nr_irqs);
>> > + return err;
>> > +}
>> > +
>> > +static void pch_msi_middle_domain_free(struct irq_domain *domain,
>> > + unsigned int virq,
>> > + unsigned int nr_irqs)
>> > +{
>> > + struct irq_data *d = irq_domain_get_irq_data(domain, virq);
>> > + struct pch_msi_data *priv = irq_data_get_irq_chip_data(d);
>> > +
>> > + irq_domain_free_irqs_parent(domain, virq, nr_irqs);
>> > + pch_msi_free_hwirq(priv, d->hwirq, nr_irqs);
>> > +}
>> > +
>> > +static const struct irq_domain_ops pch_msi_middle_domain_ops = {
>> > + .alloc = pch_msi_middle_domain_alloc,
>> > + .free = pch_msi_middle_domain_free,
>> > +};
>> > +
>> > +static int pch_msi_init_domains(struct pch_msi_data *priv,
>> > + struct device_node *node,
>> > + struct device_node *parent)
>> > +{
>> > + struct irq_domain *middle_domain, *msi_domain,
>> > *parent_domain; +
>> > + parent_domain = irq_find_host(parent);
>> > + if (!parent_domain) {
>> > + pr_err("Failed to find the parent domain\n");
>> > + return -ENXIO;
>> > + }
>> > +
>> > + middle_domain = irq_domain_add_tree(NULL,
>> > +
>> > &pch_msi_middle_domain_ops,
>> > + priv);
>>
>> You don't really need a tree, unless your interrupt space is huge and
>> very sparse. Given that the DT example says 64, I would go with a
>> linear domain if that number is realistic.
>>
> It can up to 192 in latest generation of chip, is it still suitable?

That's a pretty small number, really. Just stick to a linear domain.

> In the latest generation, we have a enhanced version of HTVEC which has
> another delivery system that will be able to configure affinity. That's
> why I placed set_affinity call back here and in PCH PIC driver.

Once you add support for this new version, this will make sense. at the
moment, this is pretty pointless.

Thanks,

M.
--
Jazz is not dead. It just smells funny...

\
 
 \ /
  Last update: 2020-04-24 10:28    [W:0.074 / U:0.412 seconds]
©2003-2020 Jasper Spaans|hosted at Digital Ocean and TransIP|Read the blog|Advertise on this site