Subject: [tip: irq/core] genirq: Add fasteoi IPI flow
The following commit has been merged into the irq/core branch of tip:

Commit-ID: c5e5ec033c4ab25c53f1fd217849e75deb0bf7bf
Gitweb: https://git.kernel.org/tip/c5e5ec033c4ab25c53f1fd217849e75deb0bf7bf
Author: Marc Zyngier <maz@kernel.org>
AuthorDate: Tue, 19 May 2020 10:41:00 +01:00
Committer: Marc Zyngier <maz@kernel.org>
CommitterDate: Sun, 13 Sep 2020 17:04:38 +01:00

genirq: Add fasteoi IPI flow

For irqchips using the fasteoi flow, IPIs are a bit special. They need to
be EOI'd early (before calling the handler), because the handler does not
necessarily behave like a normal interrupt handler: it may, for instance,
trigger a context switch, and the IPI must be able to fire again while it
runs.
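
As an illustration of how the new flow would be selected, here is a minimal
sketch of an irqchip driver mapping its IPIs to
handle_percpu_devid_fasteoi_ipi() from its irqdomain ->map() callback. The
identifiers my_ipi_eoi(), my_ipi_chip and my_domain_map() are hypothetical
and not part of this patch; only the generic helpers are real.

#include <linux/irq.h>
#include <linux/irqdomain.h>

/* Hypothetical EOI callback; the new flow invokes it before the action. */
static void my_ipi_eoi(struct irq_data *d)
{
	/* Tell the (imaginary) interrupt controller the IPI is done. */
}

static struct irq_chip my_ipi_chip = {
	.name		= "MY-IPI",
	.irq_eoi	= my_ipi_eoi,
};

/* Hypothetical irqdomain ->map() callback wiring an IPI to the new flow. */
static int my_domain_map(struct irq_domain *d, unsigned int virq,
			 irq_hw_number_t hwirq)
{
	/* IPIs carry a per-CPU dev_id, so mark the descriptor accordingly. */
	irq_set_percpu_devid(virq);

	/* Select the new flow: the EOI happens before the action is invoked. */
	irq_domain_set_info(d, virq, hwirq, &my_ipi_chip, d->host_data,
			    handle_percpu_devid_fasteoi_ipi, NULL, NULL);
	return 0;
}

A chip that already implements the fasteoi flow would typically keep its
irq_eoi callback as-is and only pass the new handler for its IPIs.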

Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 include/linux/irq.h |  1 +
 kernel/irq/chip.c   | 27 +++++++++++++++++++++++++++
 2 files changed, 28 insertions(+)

diff --git a/include/linux/irq.h b/include/linux/irq.h
index 1b7f4df..57205bb 100644
--- a/include/linux/irq.h
+++ b/include/linux/irq.h
@@ -634,6 +634,7 @@ static inline int irq_set_parent(int irq, int parent_irq)
  */
 extern void handle_level_irq(struct irq_desc *desc);
 extern void handle_fasteoi_irq(struct irq_desc *desc);
+extern void handle_percpu_devid_fasteoi_ipi(struct irq_desc *desc);
 extern void handle_edge_irq(struct irq_desc *desc);
 extern void handle_edge_eoi_irq(struct irq_desc *desc);
 extern void handle_simple_irq(struct irq_desc *desc);
diff --git a/kernel/irq/chip.c b/kernel/irq/chip.c
index 857f5f4..bed517d 100644
--- a/kernel/irq/chip.c
+++ b/kernel/irq/chip.c
@@ -945,6 +945,33 @@ void handle_percpu_devid_irq(struct irq_desc *desc)
 }
 
 /**
+ * handle_percpu_devid_fasteoi_ipi - Per CPU local IPI handler with per cpu
+ *				     dev ids
+ * @desc:	the interrupt description structure for this irq
+ *
+ * The biggest difference with the IRQ version is that the interrupt is
+ * EOIed early, as the IPI could result in a context switch, and we need to
+ * make sure the IPI can fire again. We also assume that the arch code has
+ * registered an action. If not, we are positively doomed.
+ */
+void handle_percpu_devid_fasteoi_ipi(struct irq_desc *desc)
+{
+	struct irq_chip *chip = irq_desc_get_chip(desc);
+	struct irqaction *action = desc->action;
+	unsigned int irq = irq_desc_get_irq(desc);
+	irqreturn_t res;
+
+	__kstat_incr_irqs_this_cpu(desc);
+
+	if (chip->irq_eoi)
+		chip->irq_eoi(&desc->irq_data);
+
+	trace_irq_handler_entry(irq, action);
+	res = action->handler(irq, raw_cpu_ptr(action->percpu_dev_id));
+	trace_irq_handler_exit(irq, action, res);
+}
+
+/**
  * handle_percpu_devid_fasteoi_nmi - Per CPU local NMI handler with per cpu
  *				     dev ids
  * @desc:	the interrupt description structure for this irq
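
The new kerneldoc above assumes that the arch code has already registered an
action for the IPI. A rough sketch of that registration might look as
follows; ipi_virq, my_ipi_handler(), my_ipi_count and my_ipi_setup() are
hypothetical names, while request_percpu_irq() is the generic helper that
installs the irqaction which handle_percpu_devid_fasteoi_ipi() ends up
invoking.

#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/percpu.h>

static DEFINE_PER_CPU(unsigned long, my_ipi_count);

/* Per-CPU action invoked by handle_percpu_devid_fasteoi_ipi(). */
static irqreturn_t my_ipi_handler(int irq, void *dev_id)
{
	/* dev_id is this CPU's slice of my_ipi_count. */
	(*(unsigned long *)dev_id)++;
	return IRQ_HANDLED;
}

static int my_ipi_setup(unsigned int ipi_virq)
{
	int err;

	/* Populates desc->action, which the flow handler relies on. */
	err = request_percpu_irq(ipi_virq, my_ipi_handler, "my-ipi",
				 &my_ipi_count);
	if (err)
		return err;

	/* Enable the local CPU's copy; every CPU does this for itself. */
	enable_percpu_irq(ipi_virq, IRQ_TYPE_NONE);
	return 0;
}

Without such a registration, desc->action is NULL and, as the comment in the
patch puts it, we are positively doomed.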