Subject: Re: [PATCH v4 2/2] arm64: PCI: hv: Add support for Hyper-V vPCI
On Tue, 09 Nov 2021 22:14:20 +0000,
Sunil Muthuswamy <sunilmut@linux.microsoft.com> wrote:
>
> From: Sunil Muthuswamy <sunilmut@microsoft.com>
>
> Add support for Hyper-V vPCI for arm64 by implementing the arch specific
> interfaces. Introduce an IRQ domain and chip specific to Hyper-v vPCI that
> is based on SPIs. The IRQ domain parents itself to the arch GIC IRQ domain
> for basic vector management.
>
> Signed-off-by: Sunil Muthuswamy <sunilmut@microsoft.com>
> ---
> In v2, v3 & v4:
> Changes are described in the cover letter.
>
> arch/arm64/include/asm/hyperv-tlfs.h |   9 ++
> drivers/pci/Kconfig                  |   2 +-
> drivers/pci/controller/Kconfig       |   2 +-
> drivers/pci/controller/pci-hyperv.c  | 207 ++++++++++++++++++++++++++-
> 4 files changed, 217 insertions(+), 3 deletions(-)

[...]

> +static int hv_pci_vec_irq_domain_activate(struct irq_domain *domain,
> +					   struct irq_data *irqd, bool reserve)
> +{
> +	static int cpu;
> +
> +	/*
> +	 * Pick a cpu using round-robin as the irq affinity that can be
> +	 * temporarily used for composing MSI from the hypervisor. GIC
> +	 * will eventually set the right affinity for the irq and the
> +	 * 'unmask' will retarget the interrupt to that cpu.
> +	 */
> +	if (cpu >= cpumask_last(cpu_online_mask))
> +		cpu = 0;
> +	cpu = cpumask_next(cpu, cpu_online_mask);
> +	irq_data_update_effective_affinity(irqd, cpumask_of(cpu));

The mind boggles.

Let's imagine a single-CPU machine. cpu_online_mask only has bit 0
set, and nr_cpumask_bits is 1. This is the first run, and cpu is 0:

cpu = cpumask_next(cpu, cpu_online_mask);

cpu is now set to 1. Which is not a valid CPU number, but a valid
return value indicating that there is no next CPU as it is equal to
nr_cpumask_bits. cpumask_of(cpu) will then diligently return crap,
which you carefully store into the irq descriptor. The IRQ subsystem
thanks you.

The same reasoning applies to any number of CPUs, and you obviously
never checked what any of this does :-(. As to what the behaviour is
when multiple CPUs run this function in parallel, let's not even
bother (locking is overrated).
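If the round-robin really is wanted, the index at least has to be
checked against nr_cpu_ids and the update serialised. Something along
these lines (completely untested, names picked purely for
illustration):

	/* untested sketch only; lock and variable names are illustrative */
	static DEFINE_RAW_SPINLOCK(hv_msi_cpu_lock);
	static int next_cpu;
	unsigned long flags;
	int cpu;

	raw_spin_lock_irqsave(&hv_msi_cpu_lock, flags);
	/* Advance, and wrap back to the first online CPU when we run out */
	cpu = cpumask_next(next_cpu, cpu_online_mask);
	if (cpu >= nr_cpu_ids)
		cpu = cpumask_first(cpu_online_mask);
	next_cpu = cpu;
	raw_spin_unlock_irqrestore(&hv_msi_cpu_lock, flags);

	irq_data_update_effective_affinity(irqd, cpumask_of(cpu));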

Logic and concurrency issues aside, why do you even bother setting
some round-robin affinity if all you need is to set *something* so
that a hypervisor message can be composed? Why not use the first
online CPU? At least it will be correct.
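Something as simple as this (again untested) would do:

	/*
	 * Any online CPU is good enough for composing the hypervisor
	 * message; the 'unmask' path retargets the interrupt anyway.
	 */
	irq_data_update_effective_affinity(irqd,
					   cpumask_of(cpumask_first(cpu_online_mask)));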

M.

--
Without deviation from the norm, progress is not possible.
