Subject: Re: [PATCH] sched/fair: Prevent dead task groups from regaining cfs_rq's
On 06.11.21 11:48, Peter Zijlstra wrote:
> On Fri, Nov 05, 2021 at 05:29:14PM +0100, Mathias Krause wrote:
>>> Looks like it needs to be the kfree_rcu() one in this case. I'll prepare
>>> a patch.
>>
>> Testing the below patch right now. Looking good so far. Will prepare a
>> proper patch later, if we all can agree that this covers all cases.
>>
>> But the basic idea is to defer the kfree()'s to after the next RCU GP,
>> which also means we need to free the tg object itself later. Slightly
>> ugly. :/
>
> How's this then?

Well, slightly more code churn, but looks cleaner indeed -- no tg_free()
hack. Just one bit's missing IMHO, see below.
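
For reference, the deferred-free idea boils down to the usual call_rcu()
pattern -- roughly like the made-up 'struct foo' example below, not the
actual scheduler code:

#include <linux/kernel.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct foo {
        int payload;                    /* read under rcu_read_lock() */
        struct rcu_head rcu;
};

static void foo_free_rcu(struct rcu_head *head)
{
        kfree(container_of(head, struct foo, rcu));
}

static void foo_release(struct foo *f)
{
        /* unlink f from all RCU-visible structures first ... */

        /* ... then free it only after the current grace period ends: */
        call_rcu(&f->rcu, foo_free_rcu);
        /* or, if no other teardown is needed: kfree_rcu(f, rcu); */
}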

>
> ---
> diff --git a/kernel/sched/autogroup.c b/kernel/sched/autogroup.c
> index 2067080bb235..8629b37d118e 100644
> --- a/kernel/sched/autogroup.c
> +++ b/kernel/sched/autogroup.c
> @@ -31,7 +31,7 @@ static inline void autogroup_destroy(struct kref *kref)
> ag->tg->rt_se = NULL;
> ag->tg->rt_rq = NULL;
> #endif
> - sched_offline_group(ag->tg);
> + sched_release_group(ag->tg);
> sched_destroy_group(ag->tg);
> }
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 9cb81ef8acc8..22528bd61ba5 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -9715,6 +9715,21 @@ static void sched_free_group(struct task_group *tg)
> kmem_cache_free(task_group_cache, tg);
> }
>
> +static void sched_free_group_rcu(struct rcu_head *rcu)
> +{
> + sched_free_group(container_of(rcu, struct task_group, rcu_head));
The 'rcu_head' here should be 'rcu', i.e. the name of the rcu_head
member in struct task_group.
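
The callback should thus read (matching the &tg->rcu used by the
call_rcu() calls below):

static void sched_free_group_rcu(struct rcu_head *rcu)
{
        /* struct task_group's rcu_head member is named 'rcu' */
        sched_free_group(container_of(rcu, struct task_group, rcu));
}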

> +}
> +
> +static void sched_unregister_group(struct task_group *tg)
> +{

The bandwidth timers need to be destroyed prior to
unregister_fair_sched_group(), via destroy_cfs_bandwidth(tg_cfs_bandwidth(tg)),
i.e. that call should move from free_fair_sched_group() to here, as I did in
my patch. Otherwise the timers might still fire and mess with the tg after it
has been unregistered, and we don't want that.
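
Something like this (untested, just to illustrate the intended ordering):

static void sched_unregister_group(struct task_group *tg)
{
        /* shut the bandwidth timers down before anything else ... */
        destroy_cfs_bandwidth(tg_cfs_bandwidth(tg));
        /* ... then stop shares distribution and defer the final free: */
        unregister_fair_sched_group(tg);
        call_rcu(&tg->rcu, sched_free_group_rcu);
}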

> + unregister_fair_sched_group(tg);
> + /*
> + * We have to wait for yet another RCU grace period to expire, as
> + * print_cfs_stats() might run concurrently.
> + */
> + call_rcu(&tg->rcu, sched_free_group_rcu);
> +}
> +
> /* allocate runqueue etc for a new task group */
> struct task_group *sched_create_group(struct task_group *parent)
> {
> @@ -9735,7 +9750,7 @@ struct task_group *sched_create_group(struct task_group *parent)
> return tg;
>
> err:
> - sched_free_group(tg);
> + sched_unregister_group(tg);
> return ERR_PTR(-ENOMEM);
> }
>
> @@ -9758,25 +9773,35 @@ void sched_online_group(struct task_group *tg, struct task_group *parent)
> }
>
> /* rcu callback to free various structures associated with a task group */
> -static void sched_free_group_rcu(struct rcu_head *rhp)
> +static void sched_unregister_group_rcu(struct rcu_head *rhp)
> {
> /* Now it should be safe to free those cfs_rqs: */
> - sched_free_group(container_of(rhp, struct task_group, rcu));
> + sched_unregister_group(container_of(rhp, struct task_group, rcu));
> }
>
> void sched_destroy_group(struct task_group *tg)
> {
> /* Wait for possible concurrent references to cfs_rqs complete: */
> - call_rcu(&tg->rcu, sched_free_group_rcu);
> + call_rcu(&tg->rcu, sched_unregister_group_rcu);
> }
>
> -void sched_offline_group(struct task_group *tg)
> +void sched_release_group(struct task_group *tg)
> {
> unsigned long flags;
>
> - /* End participation in shares distribution: */
> - unregister_fair_sched_group(tg);
> -
> + /*
> + * Unlink first, to avoid walk_tg_tree_from() from finding us (via
> + * sched_cfs_period_timer()).
> + *
> + * For this to be effective, we have to wait for all pending users of
> + * this task group to leave their RCU critical section to ensure no new
> + * user will see our dying task group any more. Specifically ensure
> + * that tg_unthrottle_up() won't add decayed cfs_rq's to it.
> + *
> + * We therefore defer calling unregister_fair_sched_group() to
> + sched_unregister_group() which is guaranteed to get called only after the
> + * current RCU grace period has expired.
> + */
> spin_lock_irqsave(&task_group_lock, flags);
> list_del_rcu(&tg->list);
> list_del_rcu(&tg->siblings);
> @@ -9895,7 +9920,7 @@ static void cpu_cgroup_css_released(struct cgroup_subsys_state *css)
> {
> struct task_group *tg = css_tg(css);
>
> - sched_offline_group(tg);
> + sched_release_group(tg);
> }
>
> static void cpu_cgroup_css_free(struct cgroup_subsys_state *css)
> @@ -9905,7 +9930,7 @@ static void cpu_cgroup_css_free(struct cgroup_subsys_state *css)
> /*
> * Relies on the RCU grace period between css_released() and this.
> */
> - sched_free_group(tg);
> + sched_unregister_group(tg);
> }
>
> /*
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index f0b249ec581d..20038274c57b 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -504,7 +504,7 @@ extern struct task_group *sched_create_group(struct task_group *parent);
> extern void sched_online_group(struct task_group *tg,
> struct task_group *parent);
> extern void sched_destroy_group(struct task_group *tg);
> -extern void sched_offline_group(struct task_group *tg);
> +extern void sched_release_group(struct task_group *tg);
>
> extern void sched_move_task(struct task_struct *tsk);
>

Besides that, looks good to me. Will you create a new proper patch or
should I do it?

Thanks,
Mathias
