Date:    Thu, 12 Oct 2023 12:35:21 -0400
Subject: Re: [PATCH] cgroup/cpuset: Change nr_deadline_tasks to an atomic_t value
From:    Waiman Long <>
On 10/11/23 08:54, Waiman Long wrote:
>
> On 10/11/23 04:14, Juri Lelli wrote:
>> On 10/10/23 16:03, Waiman Long wrote:
>>> On 10/10/23 15:44, Waiman Long wrote:
>>>> On 10/10/23 01:34, Juri Lelli wrote:
>>>>> Hi,
>>>>>
>>>>> On 09/10/23 15:15, Waiman Long wrote:
>>>>>> The nr_deadline_tasks field in cpuset structure was introduced by
>>>>>> commit 6c24849f5515 ("sched/cpuset: Keep track of SCHED_DEADLINE
>>>>>> task in cpusets"). Unlike nr_migrate_dl_tasks which is only
>>>>>> modified under cpuset_mutex, nr_deadline_tasks can be updated in
>>>>>> various contexts under different locks. As a result, data races
>>>>>> may happen that cause incorrect value to be stored in
>>>>>> nr_deadline_tasks leading to incorrect
>>>>> Could you please make an example of such data races?
>>>> Since update to cs->nr_deadline_tasks is not protected by a single
>>>> lock, it is possible that multiple CPUs may try to modify it at the
>>>> same time. It is possible that nr_deadline_tasks++ and
>>>> nr_deadline_tasks-- can be done in a single instruction like in x86
>>>> and hence atomic. However, an operation like "cs->nr_deadline_tasks
>>>> += cs->nr_migrate_dl_tasks" is likely a RMW operation and so is
>>>> subject to racing. It is mostly theoretical, but probably not
>>>> impossible.
>>> Sorry, even increment and decrement operators are not atomic.
>>>
>>> inc_dl_tasks_cs() is only called from switched_to_dl() in deadline.c
>>> which is protected by the rq_lock, but there are multiple rq's.
>>> dec_dl_tasks_cs() is called from switched_from_dl() in deadline.c
>>> and cgroup_exit() in cgroup.c. The latter one is protected by
>>> css_set_lock. The other place where nr_deadline_tasks can be changed
>>> is in cpuset_attach(), protected by cpuset_mutex.
>> So, let's see. :)
>>
>> switched_to_dl(), switched_from_dl() and cpuset_attach() should all be
>> protected (for DEADLINE tasks) by cpuset_mutex, see [1] for the former
>> two.
> Yes, I missed the cpuset_lock() call.
>> What leaves me perplexed is indeed cgroup_exit(), which seems to
>> operate under css_set_lock as you say. I however wonder why is that
>> not racy already wrt, say, cpuset_attach() which AFAIU uses css
>> information w/o holding css_set_lock?
>
> The css_set_lock protects changes made to css_set. Looking at
> cgroup_migrate_execute(), css_set_lock is taken when the tasks are
> actually moving from one css_set to another one. cpuset_attach() is
> called just to update the CPU and node affinity, and cpuset_mutex is
> taken to ensure stability of the CPU and node masks. There is no
> change to css_set and so css_set_lock isn't needed.
>
> We can argue that there can be racing between cgroup_exit() and the
> iteration of tasks in cpuset_attach() or cpuset_can_attach(). An
> rcu_read_lock() is probably needed. I am still investigating that.
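To make the kind of race discussed above concrete, here is a minimal
illustration. It is a sketch only, not the actual kernel code paths; the
lock annotations follow the call sites named above:

	/* CPU 0: cpuset_attach(), holding cpuset_mutex */
	cs->nr_deadline_tasks += cs->nr_migrate_dl_tasks;
	/*
	 * which is roughly:
	 *	tmp = cs->nr_deadline_tasks;		(load)
	 *	tmp += cs->nr_migrate_dl_tasks;		(modify)
	 *	cs->nr_deadline_tasks = tmp;		(store)
	 */

	/* CPU 1: dec_dl_tasks_cs() via cgroup_exit(), holding css_set_lock */
	cs->nr_deadline_tasks--;

Since the two sides hold different locks, nothing orders the two RMW
sequences. If CPU 1's decrement lands between CPU 0's load and store,
the store overwrites it and nr_deadline_tasks ends up permanently off
by one.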
Cgroup has a rather complex task migration and iteration scheme. According to the following comment in include/linux/cgroup-defs.h:
	/*
	 * Lists running through all tasks using this cgroup group.
	 * mg_tasks lists tasks which belong to this cset but are in the
	 * process of being migrated out or in. Protected by
	 * css_set_lock, but, during migration, once tasks are moved to
	 * mg_tasks, it can be read safely while holding cgroup_mutex.
	 */
	struct list_head tasks;
	struct list_head mg_tasks;
	struct list_head dying_tasks;
I haven't fully figured out how that protection works yet. Assuming that is the case, task iteration in cpuset_attach() should be fine since cgroup_mutex is indeed held when it is invoked. That protection, however, does not apply to nr_deadline_tasks. It would likely be too costly to acquire cpuset_mutex just to update nr_deadline_tasks in cgroup_exit(), so changing it to an atomic_t should be the easy way out of the potential racing problem.
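Roughly, the conversion would look like the sketch below. The helper
names follow the current kernel/cgroup/cpuset.c; the actual patch may
differ in detail:

	/* in struct cpuset: */
	atomic_t nr_deadline_tasks;	/* was: int nr_deadline_tasks; */

	void inc_dl_tasks_cs(struct task_struct *p)
	{
		struct cpuset *cs = task_cs(p);

		atomic_inc(&cs->nr_deadline_tasks);
	}

	void dec_dl_tasks_cs(struct task_struct *p)
	{
		struct cpuset *cs = task_cs(p);

		atomic_dec(&cs->nr_deadline_tasks);
	}

	/* the RMW in cpuset_attach() then becomes: */
	atomic_add(cs->nr_migrate_dl_tasks, &cs->nr_deadline_tasks);

	/* and readers use: */
	atomic_read(&cs->nr_deadline_tasks);

The atomic RMWs serialize the updaters against each other regardless of
which lock (if any) each caller holds, so cgroup_exit() can keep running
under just css_set_lock.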
I can update the commit log with this new analysis if you have no further objections to this change.
Cheers,
Longman