Subject: Re: [Patch v4 1/3] lib: Restrict cpumask_local_spread to housekeeping CPUs

On 6/25/20 6:34 PM, Nitesh Narayan Lal wrote:
> From: Alex Belits <abelits@marvell.com>
>
> The current implementation of cpumask_local_spread() does not respect
> isolated CPUs: even if a CPU has been isolated for a real-time task,
> it is still returned to the caller for pinning IRQ threads. Having
> these unwanted IRQ threads on an isolated CPU adds latency overhead.
>
> Restrict the CPUs returned for spreading IRQs to the available
> housekeeping CPUs.
>
> Signed-off-by: Alex Belits <abelits@marvell.com>
> Signed-off-by: Nitesh Narayan Lal <nitesh@redhat.com>
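
For context, the callers this affects are mostly drivers that use
cpumask_local_spread() to spread per-queue IRQ affinity across CPUs near
the device's NUMA node. Below is a minimal sketch of that pattern,
assuming a made-up example_adapter structure (illustrative only, loosely
modeled on what several network drivers do; it is not code from this
series):

#include <linux/cpumask.h>
#include <linux/interrupt.h>

/* Hypothetical per-device state, for illustration only. */
struct example_adapter {
	int numa_node;			/* NUMA node of the device */
	unsigned int num_queues;	/* number of RX/TX queues */
	int *queue_irq;			/* Linux IRQ number per queue */
};

/*
 * Spread per-queue IRQ affinity hints across CPUs local to the device.
 * Without this patch, cpumask_local_spread() can hand back an isolated
 * CPU here, pulling IRQ work onto it.
 */
static void example_set_affinity_hints(struct example_adapter *adapter)
{
	unsigned int q;
	int cpu;

	for (q = 0; q < adapter->num_queues; q++) {
		cpu = cpumask_local_spread(q, adapter->numa_node);
		irq_set_affinity_hint(adapter->queue_irq[q], cpumask_of(cpu));
	}
}

With this patch, the CPUs handed out in such loops are drawn only from
the housekeeping set.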

Hi Peter,

I just realized that Yuqi Jin's patch [1], which also modifies
cpumask_local_spread(), is already sitting in linux-next.
Should I re-post this series rebased on top of linux-next?

[1]
https://lore.kernel.org/lkml/1582768688-2314-1-git-send-email-zhangshaokun@hisilicon.com/

> ---
>  lib/cpumask.c | 16 +++++++++++-----
>  1 file changed, 11 insertions(+), 5 deletions(-)
>
> diff --git a/lib/cpumask.c b/lib/cpumask.c
> index fb22fb266f93..85da6ab4fbb5 100644
> --- a/lib/cpumask.c
> +++ b/lib/cpumask.c
> @@ -6,6 +6,7 @@
>  #include <linux/export.h>
>  #include <linux/memblock.h>
>  #include <linux/numa.h>
> +#include <linux/sched/isolation.h>
>
>  /**
>   * cpumask_next - get the next cpu in a cpumask
> @@ -205,22 +206,27 @@ void __init free_bootmem_cpumask_var(cpumask_var_t mask)
>   */
>  unsigned int cpumask_local_spread(unsigned int i, int node)
>  {
> -	int cpu;
> +	int cpu, hk_flags;
> +	const struct cpumask *mask;
>
> +	hk_flags = HK_FLAG_DOMAIN | HK_FLAG_MANAGED_IRQ;
> +	mask = housekeeping_cpumask(hk_flags);
>  	/* Wrap: we always want a cpu. */
> -	i %= num_online_cpus();
> +	i %= cpumask_weight(mask);
>
>  	if (node == NUMA_NO_NODE) {
> -		for_each_cpu(cpu, cpu_online_mask)
> +		for_each_cpu(cpu, mask) {
>  			if (i-- == 0)
>  				return cpu;
> +		}
>  	} else {
>  		/* NUMA first. */
> -		for_each_cpu_and(cpu, cpumask_of_node(node), cpu_online_mask)
> +		for_each_cpu_and(cpu, cpumask_of_node(node), mask) {
>  			if (i-- == 0)
>  				return cpu;
> +		}
>
> -		for_each_cpu(cpu, cpu_online_mask) {
> +		for_each_cpu(cpu, mask) {
>  			/* Skip NUMA nodes, done above. */
>  			if (cpumask_test_cpu(cpu, cpumask_of_node(node)))
>  				continue;
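
To illustrate the effect, consider a hypothetical 8-CPU machine booted
with isolcpus=domain,managed_irq,2-7 (the values below are my reading of
the patch, not measured output):

/*
 * housekeeping_cpumask(HK_FLAG_DOMAIN | HK_FLAG_MANAGED_IRQ) == { 0, 1 },
 * so the wrap becomes i %= 2 and, for a device with no NUMA preference:
 *
 *   cpumask_local_spread(0, NUMA_NO_NODE) -> 0
 *   cpumask_local_spread(1, NUMA_NO_NODE) -> 1
 *   cpumask_local_spread(2, NUMA_NO_NODE) -> 0   (previously 2, an isolated CPU)
 *   cpumask_local_spread(3, NUMA_NO_NODE) -> 1   (previously 3, an isolated CPU)
 *
 * With no isolation configured, housekeeping_cpumask() falls back to
 * cpu_possible_mask (rather than cpu_online_mask as before), which is
 * the same set on systems where all possible CPUs are online.
 */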
--
Nitesh
