Subject: Re: [PATCH v3 2/2] irqchip/gic-v3-its: Balance initial LPI affinity across CPUs
On 2020-03-18 12:22, John Garry wrote:
> I may have an idea about this:
> irq 196, cpu list 0-31, effective list 82
>
> Just going back to comment on the code:
>
>> +/*
>> + * As suggested by Thomas Gleixner in:
>> + * https://lore.kernel.org/r/87h80q2aoc.fsf@nanos.tec.linutronix.de
>> + */
>> +static int its_select_cpu(struct irq_data *d,
>> +                          const struct cpumask *aff_mask)
>> +{
>> +        struct its_device *its_dev = irq_data_get_irq_chip_data(d);
>> +        cpumask_var_t tmpmask;
>> +        int cpu, node;
>> +
>> +        if (!alloc_cpumask_var(&tmpmask, GFP_KERNEL))
>> +                return -ENOMEM;
>> +
>> +        node = its_dev->its->numa_node;
>> +
>> +        if (!irqd_affinity_is_managed(d)) {
>> +                /* First try the NUMA node */
>> +                if (node != NUMA_NO_NODE) {
>> +                        /*
>> +                         * Try the intersection of the affinity mask and the
>> +                         * node mask (and the online mask, just to be safe).
>> +                         */
>> +                        cpumask_and(tmpmask, cpumask_of_node(node), aff_mask);
>> +                        cpumask_and(tmpmask, tmpmask, cpu_online_mask);
>> +
>> +                        /* If that doesn't work, try the nodemask itself */
>
> So if tmpmask is empty...

Which means the proposed affinity mask doesn't intersect the node mask in
the first place.
Why did we get such an affinity in the first place?

>
>> +                        if (cpumask_empty(tmpmask))
>> +                                cpumask_and(tmpmask, cpumask_of_node(node),
>> +                                            cpu_online_mask);
>
> now the tmpmask may have no intersection with the aff_mask...

But it has the mask of the CPUs that are best suited for this interrupt,
right?
If I understand the topology of your machine correctly, it has an ITS per
64 CPUs, and this device is connected to the ITS that serves the second
socket.
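To spell out the fallback with the numbers you reported (an illustration
only; I am assuming the node of the second ITS covers CPUs 64-127, which
"an ITS per 64 CPUs" plus an effective CPU of 82 suggests):

        /* irq 196: aff_mask = 0-31, cpumask_of_node(node) = 64-127 (assumed) */
        cpumask_and(tmpmask, cpumask_of_node(node), aff_mask);  /* empty */
        cpumask_and(tmpmask, tmpmask, cpu_online_mask);         /* still empty */

        /* ...so the fallback kicks in and takes the whole node mask */
        if (cpumask_empty(tmpmask))                              /* true */
                cpumask_and(tmpmask, cpumask_of_node(node), cpu_online_mask);

        /*
         * tmpmask = 64-127; cpumask_pick_least_loaded() then returns a CPU
         * from that range -- here CPU 82, outside the requested 0-31 but
         * local to the device's socket.
         */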

>
>> +
>> +                        cpu = cpumask_pick_least_loaded(d, tmpmask);
>> +                        if (cpu < nr_cpu_ids)
>> +                                goto out;
>> +
>> +                        /* If we can't cross sockets, give up */
>> +                        if ((its_dev->its->flags & ITS_FLAGS_WORKAROUND_CAVIUM_23144))
>> +                                goto out;
>> +
>> +                        /* If the above failed, expand the search */
>> +                }
>
> SNIP
>
>> +out:
>> +        free_cpumask_var(tmpmask);
>> +
>> +        pr_debug("IRQ%d -> %*pbl CPU%d\n", d->irq,
>> +                 cpumask_pr_args(aff_mask), cpu);
>> +        return cpu;
>> +}
>> +
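For reference, since it isn't quoted above: cpumask_pick_least_loaded() is
defined earlier in this patch. Roughly (a sketch, assuming the per-CPU LPI
accounting that this series adds via its_read_lpi_count()), it scans the
candidate mask for the CPU with the fewest LPIs currently targeted at it:

static unsigned int cpumask_pick_least_loaded(struct irq_data *d,
                                              const struct cpumask *cpu_mask)
{
        unsigned int cpu = nr_cpu_ids, tmp;
        int count = S32_MAX;

        /* Track the candidate CPU with the lowest LPI count so far */
        for_each_cpu(tmp, cpu_mask) {
                int this_count = its_read_lpi_count(d, tmp);

                if (this_count < count) {
                        cpu = tmp;
                        count = this_count;
                }
        }

        return cpu;
}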
>>  static int its_set_affinity(struct irq_data *d, const struct cpumask *mask_val,
>>                              bool force)
>>  {
>> -        unsigned int cpu;
>> -        const struct cpumask *cpu_mask = cpu_online_mask;
>>          struct its_device *its_dev = irq_data_get_irq_chip_data(d);
>>          struct its_collection *target_col;
>>          u32 id = its_get_event_id(d);
>> +        int cpu;
>>
>>          /* A forwarded interrupt should use irq_set_vcpu_affinity */
>>          if (irqd_is_forwarded_to_vcpu(d))
>>                  return -EINVAL;
>>
>> -        /* lpi cannot be routed to a redistributor that is on a foreign node */
>> -        if (its_dev->its->flags & ITS_FLAGS_WORKAROUND_CAVIUM_23144) {
>> -                if (its_dev->its->numa_node >= 0) {
>> -                        cpu_mask = cpumask_of_node(its_dev->its->numa_node);
>> -                        if (!cpumask_intersects(mask_val, cpu_mask))
>> -                                return -EINVAL;
>> -                }
>> -        }
>> -
>> -        cpu = cpumask_any_and(mask_val, cpu_mask);
>> +        if (!force)
>> +                cpu = its_select_cpu(d, mask_val);
>> +        else
>> +                cpu = cpumask_pick_least_loaded(d, mask_val);
>>
>> -        if (cpu >= nr_cpu_ids)
>> +        if (cpu < 0 || cpu >= nr_cpu_ids)
>>                  return -EINVAL;
>
> Annotating the missing code here:
>
>         if (cpu < 0 || cpu >= nr_cpu_ids)
>                 return -EINVAL;
>
>         if (cpu != its_dev->event_map.col_map[id]) {
>                 its_inc_lpi_count(d, cpu);
>                 its_dec_lpi_count(d, its_dev->event_map.col_map[id]);
>                 target_col = &its_dev->its->collections[cpu];
>                 its_send_movi(its_dev, target_col, id);
>                 its_dev->event_map.col_map[id] = cpu;
>                 irq_data_update_effective_affinity(d, cpumask_of(cpu));
>         }
>
> So cpu may not be a member of mask_val. Hence the inconsistency between
> the affinity list and the effective affinity. We could just drop the
> AND with the ITS node mask in its_select_cpu().

That would be a departure from the algorithm Thomas proposed, which made
a lot of sense in my opinion. What its_select_cpu() does in this case is
probably the best that can be achieved from a latency perspective,
as it keeps the interrupt local to the socket that generated it.
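Concretely, the divergence you point out surfaces to userspace like this
(using the numbers reported above):

        /*
         * /proc/irq/196/smp_affinity_list        -> 0-31 (what was requested)
         * /proc/irq/196/effective_affinity_list  -> 82   (what was programmed)
         *
         * because irq_data_update_effective_affinity() is passed the CPU
         * that its_select_cpu() picked, which need not be in mask_val.
         */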

What I wonder is how we end up with this silly aff_mask in the first place.

> Anyway, I don't think that this should stop us testing.

Agreed.

M.
--
Jazz is not dead. It just smells funny...
