Subject: Re: [PATCH] cgroup/cpuset: update parent subparts cpumask while holding css refcnt
From:    Waiman Long
Date:    2023-07-10
On 7/10/23 11:11, Michal Koutný wrote:
> Hello.
>
> On Sat, Jul 01, 2023 at 02:50:49PM +0800, Miaohe Lin <linmiaohe@huawei.com> wrote:
>> --- a/kernel/cgroup/cpuset.c
>> +++ b/kernel/cgroup/cpuset.c
>> @@ -1806,9 +1806,12 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs,
>>  		cpuset_for_each_child(cp, css, parent)
>>  			if (is_partition_valid(cp) &&
>>  			    cpumask_intersects(trialcs->cpus_allowed, cp->cpus_allowed)) {
>> +				if (!css_tryget_online(&cp->css))
>> +					continue;
>>  				rcu_read_unlock();
>>  				update_parent_subparts_cpumask(cp, partcmd_invalidate, NULL, &tmp);
>>  				rcu_read_lock();
>> +				css_put(&cp->css);
> Apologies for a possibly noob question -- why is the RCU read lock
> temporarily dropped within the loop?
> (Is it only because of callback_lock or cgroup_file_kn_lock (via
> notify_partition_change()) on PREEMPT_RT?)
>
>
>
> [
> OT question:
> 	cpuset_for_each_child(cp, css, parent)				(1)
> 		if (is_partition_valid(cp) &&
> 		    cpumask_intersects(trialcs->cpus_allowed, cp->cpus_allowed)) {
> 			if (!css_tryget_online(&cp->css))
> 				continue;
> 			rcu_read_unlock();
> 			update_parent_subparts_cpumask(cp, partcmd_invalidate, NULL, &tmp);
> 			...
> 			  update_tasks_cpumask(cp->parent)
> 			  ...
> 			    css_task_iter_start(&cp->parent->css, 0, &it);	(2)
> 			  ...
> 			rcu_read_lock();
> 			css_put(&cp->css);
> 		}
>
> May this touch each task the same number of times as its depth within the
> hierarchy?
> ]

I believe the primary reason is that update_parent_subparts_cpumask() can
potentially run for quite a while, so we don't want to hold the RCU read
lock for too long. There is also the possibility that schedule() ends up
being called, which is not allowed inside an RCU read-side critical section.
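
For illustration, here is a minimal sketch of the lifetime pattern the
patch relies on (hypothetical helpers needs_update() and do_slow_update();
not the actual cpuset code):

	rcu_read_lock();
	cpuset_for_each_child(cp, css, parent) {
		if (!needs_update(cp))		/* hypothetical predicate */
			continue;
		/* Pin cp so it survives the RCU-unlocked window. */
		if (!css_tryget_online(&cp->css))
			continue;	/* cp is going offline, skip it */
		rcu_read_unlock();
		do_slow_update(cp);	/* may run long and even sleep */
		rcu_read_lock();
		/* The ref kept cp's css alive, so the iteration can resume. */
		css_put(&cp->css);
	}
	rcu_read_unlock();

RCU alone only guarantees that cp stays valid while the read lock is held;
the css_tryget_online()/css_put() pair is what keeps it valid across the
window where the lock is dropped.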

Cheers,
Longman
