Subject: [PATCH v10 01/16] sched: Introduce task_cpu_possible_mask() to limit fallback rq selection
Asymmetric systems may not offer the same level of userspace ISA support
across all CPUs, meaning that some applications cannot be executed by
some CPUs. As a concrete example, upcoming arm64 big.LITTLE designs do
not feature support for 32-bit applications on both clusters.

On such a system, we must take care not to migrate a task to an
unsupported CPU when forcefully moving tasks in select_fallback_rq()
in response to a CPU hot-unplug operation.
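
Loosely, the fallback path currently has the following shape (a
simplified sketch rather than the exact code; the real
select_fallback_rq() first tries CPUs local to the task's NUMA node
and widens the mask in stages via a small state machine):

	for (;;) {
		/* Any allowed, online CPU left in the task's mask? */
		for_each_cpu(dest_cpu, p->cpus_ptr) {
			if (is_cpu_allowed(p, dest_cpu))
				goto out;
		}

		/* Nothing suitable: widen the affinity mask and retry. */
		do_set_cpus_allowed(p, cpu_possible_mask);
	}

Both the per-CPU check and the widened fallback mask must therefore
respect any per-task restriction on which CPUs are usable, which is
what the two hunks below address.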

Introduce a task_cpu_possible_mask() hook which, given a task argument,
allows an architecture to return a cpumask of CPUs that are capable of
executing that task. The default implementation returns the
cpu_possible_mask, since sane machines do not suffer from per-cpu ISA
limitations that affect scheduling. The new mask is used when selecting
the fallback runqueue as a last resort before forcing a migration to the
first active CPU.
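
For illustration, an architecture with mismatched 32-bit support could
override the hook along these lines before the generic definition is
pulled in (a sketch only: is_compat_thread() exists today, but
system_32bit_el0_cpumask() stands in for an arch-provided helper; the
real arm64 implementation appears later in this series):

	/* Hypothetical override, e.g. in the arch's <asm/mmu_context.h>. */
	static inline const struct cpumask *
	task_cpu_possible_mask(struct task_struct *p)
	{
		/* 64-bit tasks can run on any CPU. */
		if (!is_compat_thread(task_thread_info(p)))
			return cpu_possible_mask;

		/* 32-bit tasks are confined to the CPUs that support them. */
		return system_32bit_el0_cpumask();
	}
	#define task_cpu_possible_mask task_cpu_possible_mask

Defining the task_cpu_possible_mask macro is what makes the generic
#ifndef block below yield to the architecture's version.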

Reviewed-by: Valentin Schneider <Valentin.Schneider@arm.com>
Reviewed-by: Quentin Perret <qperret@google.com>
Signed-off-by: Will Deacon <will@kernel.org>
---
 include/linux/mmu_context.h | 14 ++++++++++++++
 kernel/sched/core.c         |  5 ++---
 2 files changed, 16 insertions(+), 3 deletions(-)

diff --git a/include/linux/mmu_context.h b/include/linux/mmu_context.h
index 03dee12d2b61..b9b970f7ab45 100644
--- a/include/linux/mmu_context.h
+++ b/include/linux/mmu_context.h
@@ -14,4 +14,18 @@
 static inline void leave_mm(int cpu) { }
 #endif
 
+/*
+ * CPUs that are capable of running user task @p. Must contain at least one
+ * active CPU. It is assumed that the kernel can run on all CPUs, so calling
+ * this for a kernel thread is pointless.
+ *
+ * By default, we assume a sane, homogeneous system.
+ */
+#ifndef task_cpu_possible_mask
+# define task_cpu_possible_mask(p)	cpu_possible_mask
+# define task_cpu_possible(cpu, p)	true
+#else
+# define task_cpu_possible(cpu, p)	cpumask_test_cpu((cpu), task_cpu_possible_mask(p))
+#endif
+
 #endif
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 5226cc26a095..0c1b6f1a6c91 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1814,7 +1814,7 @@ static inline bool is_cpu_allowed(struct task_struct *p, int cpu)
 
 	/* Non kernel threads are not allowed during either online or offline. */
 	if (!(p->flags & PF_KTHREAD))
-		return cpu_active(cpu);
+		return cpu_active(cpu) && task_cpu_possible(cpu, p);
 
 	/* KTHREAD_IS_PER_CPU is always allowed. */
 	if (kthread_is_per_cpu(p))
@@ -2792,10 +2792,9 @@ static int select_fallback_rq(int cpu, struct task_struct *p)
 		 *
 		 * More yuck to audit.
 		 */
-		do_set_cpus_allowed(p, cpu_possible_mask);
+		do_set_cpus_allowed(p, task_cpu_possible_mask(p));
 		state = fail;
 		break;
-
 	case fail:
 		BUG();
 		break;
-- 
2.32.0.93.g670b81a890-goog