From: Valentin Schneider <>
Subject: [RFC PATCH] sched/core: Fix premature p->migration_pending completion
Date: Wed, 27 Jan 2021 19:30:35 +0000
Fiddling some more with a TLA+ model of set_cpus_allowed_ptr() & friends unearthed one more outstanding issue. This doesn't even involve migrate_disable(), but rather affinity changes and execution of the stopper racing with each other.
My own interpretation of the (lengthy) TLA+ splat (note the potential for errors at each level) is:
Initial conditions: victim.cpus_mask = {CPU0, CPU1}
   CPU0                          CPU1                          CPU<don't care>

   switch_to(victim)
                                                               set_cpus_allowed(victim, {CPU1})
                                                                 kick CPU0 migration_cpu_stop({.dest_cpu = CPU1})
   switch_to(stopper/0)
                                                               // e.g. CFS load balance
                                                               move_queued_task(CPU0, victim, CPU1);
                                 switch_to(victim)
                                                               set_cpus_allowed(victim, {CPU0});
                                                                 task_rq_unlock();
   migration_cpu_stop(dest_cpu=CPU1)
     task_rq(p) != rq && pending
       kick CPU1 migration_cpu_stop({.dest_cpu = CPU1})

                                 switch_to(stopper/1)
                                 migration_cpu_stop(dest_cpu=CPU1)
                                   task_rq(p) == rq && pending
                                     __migrate_task(dest_cpu) // no-op
                                     complete_all() <-- !!! affinity is {CPU0} !!!
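For reference, here is a heavily trimmed paraphrase of the pre-patch
migration_cpu_stop() logic as I read it (locking and the migrate_disable()
handling elided; any misquoting is mine). The point is that the
task_rq(p) == rq path sets up the complete_all() without ever re-checking
dest_cpu against the current p->cpus_mask:

        if (task_rq(p) == rq) {
                if (pending) {
                        p->migration_pending = NULL;
                        complete = true; /* triggers complete_all() below */
                }

                /* migrate_enable() -- we must not race against SCA */
                if (dest_cpu < 0)
                        dest_cpu = cpumask_any_distribute(&p->cpus_mask);

                /*
                 * dest_cpu is taken verbatim from the stopper argument; an
                 * affinity change that happened after the kick isn't noticed.
                 */
                if (task_on_rq_queued(p))
                        rq = __migrate_task(rq, &rf, p, dest_cpu);
        }
        /* (chase-the-task path elided, see below) */

        task_rq_unlock(rq, p, &rf);

        if (complete)
                complete_all(&pending->done);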
I believe there are two issues there:

- retriggering of migration_cpu_stop() from within migration_cpu_stop()
  itself doesn't change arg.dest_cpu;
- we'll issue a complete_all() in the task_rq(p) == rq path of
  migration_cpu_stop() even if the dest_cpu has been superseded by a
  further affinity change.
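The first issue is in the chase-after-the-task path; roughly (again, from my
possibly-faulty reading), the stopper re-queues itself with the original
&pending->arg, so the new invocation inherits the stale .dest_cpu:

        } else if (pending) {
                /*
                 * The task moved before the stopper could run; chase after
                 * it. &pending->arg is re-used as-is, so arg.dest_cpu still
                 * names the destination of the affinity change that created
                 * @pending, not of any later one.
                 */
                task_rq_unlock(rq, p, &rf);
                stop_one_cpu_nowait(task_cpu(p), migration_cpu_stop,
                                    &pending->arg, &pending->stop_work);
                return 0;
        }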
Something similar could happen with NUMA's migrate_task_to(), and arguably any other user of migration_cpu_stop() with a .dest_cpu >= 0. Consider:
   CPU0                                   CPUX

   switch_to(victim)
                                          migrate_task_to(victim, CPU1)
                                            kick CPU0 migration_cpu_stop({.dest_cpu = CPU1})

                                          set_cpus_allowed(victim, {CPU42})
                                            task_rq_unlock();
   switch_to(stopper/0)
   migration_cpu_stop(dest_cpu=CPU1)
     task_rq(p) == rq && pending
       __migrate_task(dest_cpu)
       complete_all() <-- !!! affinity is {CPU42} !!!
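For context, migrate_task_to() does validate the target CPU, but only once,
before queueing the stopper (trimmed paraphrase below; tracing and schedstats
details omitted), so an affinity change that lands between that check and the
stopper actually running goes unnoticed:

        int migrate_task_to(struct task_struct *p, int target_cpu)
        {
                struct migration_arg arg = { p, target_cpu };
                int curr_cpu = task_cpu(p);

                if (curr_cpu == target_cpu)
                        return 0;

                /* Only checked here; the stopper may run much later. */
                if (!cpumask_test_cpu(target_cpu, p->cpus_ptr))
                        return -EINVAL;

                return stop_one_cpu(curr_cpu, migration_cpu_stop, &arg);
        }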
Prevent such premature completions by ensuring the dest_cpu in migration_cpu_stop() is in the task's allowed cpumask.
Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
---
 kernel/sched/core.c | 32 ++++++++++++++++++++------------
 1 file changed, 20 insertions(+), 12 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 06b449942adf..b57326b0a742 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1923,20 +1923,28 @@ static int migration_cpu_stop(void *data)
 			complete = true;
 		}
 
-		/* migrate_enable() -- we must not race against SCA */
-		if (dest_cpu < 0) {
-			/*
-			 * When this was migrate_enable() but we no longer
-			 * have a @pending, a concurrent SCA 'fixed' things
-			 * and we should be valid again. Nothing to do.
-			 */
-			if (!pending) {
-				WARN_ON_ONCE(!cpumask_test_cpu(task_cpu(p), &p->cpus_mask));
-				goto out;
-			}
+		/*
+		 * When this was migrate_enable() but we no longer
+		 * have a @pending, a concurrent SCA 'fixed' things
+		 * and we should be valid again.
+		 *
+		 * This can also be a stopper invocation that was 'fixed' by an
+		 * earlier one.
+		 *
+		 * Nothing to do.
+		 */
+		if ((dest_cpu < 0 || dest_cpu == cpu_of(rq)) && !pending) {
+			WARN_ON_ONCE(!cpumask_test_cpu(task_cpu(p), &p->cpus_mask));
+			goto out;
+		}
 
+		/*
+		 * Catch any affinity change between the stop_cpu() call and us
+		 * getting here.
+		 * For migrate_enable(), we just want to pick an allowed one.
+		 */
+		if (dest_cpu < 0 || !cpumask_test_cpu(dest_cpu, &p->cpus_mask))
 			dest_cpu = cpumask_any_distribute(&p->cpus_mask);
-		}
 
 		if (task_on_rq_queued(p))
 			rq = __migrate_task(rq, &rf, p, dest_cpu);
-- 
2.27.0