 
Date: 2020-09-16
From: Marc Zyngier <maz@kernel.org>
Subject: Re: [PATCH v3 08/16] irqchip/gic: Configure SGIs as standard interrupts
On 2020-09-16 16:58, Jon Hunter wrote:
> On 16/09/2020 16:55, Marc Zyngier wrote:
>> On 2020-09-16 16:46, Jon Hunter wrote:
>>> On 16/09/2020 16:10, Marc Zyngier wrote:
>>>> Hi Jon,
>>>>
>>>> +Linus, who is facing a similar issue.
>>>>
>>>> On 2020-09-16 15:16, Jon Hunter wrote:
>>>>> Hi Marc,
>>>>>
>>>>> On 14/09/2020 14:06, Marek Szyprowski wrote:
>>>>>> Hi Marc,
>>>>>>
>>>>>> On 01.09.2020 16:43, Marc Zyngier wrote:
>>>>>>> Change the way we deal with GIC SGIs by turning them into proper
>>>>>>> IRQs, and calling into the arch code to register the interrupt
>>>>>>> range
>>>>>>> instead of a callback.
>>>>>>>
>>>>>>> Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
>>>>>>> Signed-off-by: Marc Zyngier <maz@kernel.org>
>>>>>> This patch landed in linux next-20200914 as commit ac063232d4b0
>>>>>> ("irqchip/gic: Configure SGIs as standard interrupts"). Sadly it
>>>>>> breaks booting of all Samsung Exynos 4210/4412 based boards
>>>>>> (dual/quad ARM Cortex A9 based). Here are the last lines from the
>>>>>> bootlog:
>>>>>
>>>>> I am observing the same thing on several Tegra boards (both arm and
>>>>> arm64). Bisect is pointing to this commit. Reverting this alone does
>>>>> not appear to be enough to fix the issue.
>>>>
>>>> Right, I am just massively spoilt by the GICv3 spec, and failed to
>>>> remember that ye olde GIC exposes the source CPU in IAR *and* wants
>>>> it back, while newer GICs deal with that transparently.
>>>>
>>>> Can you try the patch below and let me know?
>>>
>>> Yes will do.
>>>
>>>> @@ -365,14 +354,13 @@ static void __exception_irq_entry gic_handle_irq(struct pt_regs *regs)
>>>>              smp_rmb();
>>>>
>>>>              /*
>>>> -             * Samsung's funky GIC encodes the source CPU in
>>>> -             * GICC_IAR, leading to the deactivation to fail if
>>>> -             * not written back as is to GICC_EOI.  Stash the
>>>> -             * INTID away for gic_eoi_irq() to write back.
>>>> -             * This only works because we don't nest SGIs...
>>>> +             * The GIC encodes the source CPU in GICC_IAR,
>>>> +             * leading to the deactivation to fail if not
>>>> +             * written back as is to GICC_EOI.  Stash the INTID
>>>> +             * away for gic_eoi_irq() to write back.  This only
>>>> +             * works because we don't nest SGIs...
>>>>               */
>>>> -            if (is_frankengic())
>>>> -                set_sgi_intid(irqstat);
>>>> +            this_cpu_write(sgi_intid, intid);
>>>
>>> I assume that it should be irqstat here and not intid?
>>
>> Indeed. As you can tell, I haven't even tried to compile it, sorry
>> about that.
>
> No worries, I got the gist. However, even with this change, it still
> does not boot :-(

Do you boot from EL2? If so, you'd also need this:

 static void gic_eoimode1_eoi_irq(struct irq_data *d)
 {
+	u32 hwirq = gic_irq(d);
+
 	/* Do not deactivate an IRQ forwarded to a vcpu. */
 	if (irqd_is_forwarded_to_vcpu(d))
 		return;
 
-	writel_relaxed(gic_irq(d), gic_cpu_base(d) + GIC_CPU_DEACTIVATE);
+	if (hwirq < 16)
+		hwirq = this_cpu_read(sgi_intid);
+
+	writel_relaxed(hwirq, gic_cpu_base(d) + GIC_CPU_DEACTIVATE);
 }
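
For reference, the gic_eoi_irq() side that the first hunk's comment
alludes to would look roughly like this. This is only a sketch of the
idea, not the exact patch (GIC_CPU_EOI being the usual EOI register
offset, and sgi_intid the per-CPU stash written on the ack path):

/* Raw GICC_IAR value stashed on the acknowledge path: this GIC wants
 * the source CPU bits written back verbatim on EOI/deactivate. */
static DEFINE_PER_CPU(u32, sgi_intid);

static void gic_eoi_irq(struct irq_data *d)
{
	u32 hwirq = gic_irq(d);

	/* SGIs must write back the exact value read from GICC_IAR */
	if (hwirq < 16)
		hwirq = this_cpu_read(sgi_intid);

	writel_relaxed(hwirq, gic_cpu_base(d) + GIC_CPU_EOI);
}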

If none of that works, we'll need some additional traces. On the other
hand, I just booted this on a GICv2-based system, and it worked fine...
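
As for the traces, something along these lines on the kernel command
line would be a decent starting point, assuming ftrace is built in
(tp_printk routes the events to the console, so we get them even if
the box hangs early):

  trace_event=irq:*,ipi:* tp_printk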

M.
--
Jazz is not dead. It just smells funny...
