Date: 17 Apr 2023
Subject: Re: [PATCH v6 2/3] sched/task: Add the put_task_struct_atomic_safe() function

On 4/14/23 08:55, Wander Lairson Costa wrote:
> Due to the possibility of indirectly acquiring sleeping locks, it is
> unsafe to call put_task_struct() in atomic contexts when the kernel is
> compiled with PREEMPT_RT.
>
> To mitigate this issue, this commit introduces
> put_task_struct_atomic_safe(), which schedules __put_task_struct()
> through call_rcu() when PREEMPT_RT is enabled. While a workqueue would
> be a more natural approach, we cannot allocate dynamic memory from
> atomic context in PREEMPT_RT, making the code more complex.
>
> This implementation ensures safe execution in atomic contexts and
> avoids any potential issues that may arise from using the non-atomic
> version.
>
> Signed-off-by: Wander Lairson Costa <wander@redhat.com>
> Reported-by: Hu Chunyu <chuhu@redhat.com>
> Cc: Paul McKenney <paulmck@kernel.org>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> ---
> include/linux/sched/task.h | 31 +++++++++++++++++++++++++++++++
> kernel/fork.c | 8 ++++++++
> 2 files changed, 39 insertions(+)
>
> diff --git a/include/linux/sched/task.h b/include/linux/sched/task.h
> index b597b97b1f8f..5c13b83d7008 100644
> --- a/include/linux/sched/task.h
> +++ b/include/linux/sched/task.h
> @@ -141,6 +141,37 @@ static inline void put_task_struct_many(struct task_struct *t, int nr)
>
> void put_task_struct_rcu_user(struct task_struct *task);
>
> +extern void __delayed_put_task_struct(struct rcu_head *rhp);
> +
> +static inline void put_task_struct_atomic_safe(struct task_struct *task)
> +{
> + if (IS_ENABLED(CONFIG_PREEMPT_RT)) {
> + /*
> + * Decrement the refcount explicitly to avoid unnecessarily
> + * calling call_rcu.
> + */
> + if (refcount_dec_and_test(&task->usage))
> + /*
> + * under PREEMPT_RT, we can't call put_task_struct
> + * in atomic context because it will indirectly
> + * acquire sleeping locks.
> + * call_rcu() will schedule delayed_put_task_struct_rcu()
delayed_put_task_struct_rcu()?
> + * to be called in process context.
> + *
> + * __put_task_struct() is called called when
"called called"?
> + * refcount_dec_and_test(&t->usage) succeeds.
> + *
> + * This means that it can't "conflict" with
> + * put_task_struct_rcu_user() which abuses ->rcu the same
> + * way; rcu_users has a reference so task->usage can't be
> + * zero after rcu_users 1 -> 0 transition.
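
For reference, a minimal sketch of the deferral pattern the commit message
describes: the final reference drop queues the task's embedded rcu_head, and
__put_task_struct() then runs from the RCU callback, which on PREEMPT_RT
executes in process context. The names follow the hunk above, but the quoted
hunk is truncated, so the actual call_rcu() invocation and the !PREEMPT_RT
fallback below are assumptions, not the patch text.

	/*
	 * Illustrative sketch only, not the patch text: the call_rcu()
	 * invocation and the !PREEMPT_RT fallback are assumptions based
	 * on the commit message above.
	 */
	void __delayed_put_task_struct(struct rcu_head *rhp)
	{
		struct task_struct *task = container_of(rhp, struct task_struct, rcu);

		/* Runs from the RCU callback, i.e. process context on PREEMPT_RT. */
		__put_task_struct(task);
	}

	static inline void put_task_struct_atomic_safe(struct task_struct *task)
	{
		if (IS_ENABLED(CONFIG_PREEMPT_RT)) {
			/* Only the final put needs to be deferred. */
			if (refcount_dec_and_test(&task->usage))
				call_rcu(&task->rcu, __delayed_put_task_struct);
		} else {
			put_task_struct(task);
		}
	}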

Note that put_task_struct_rcu_user() isn't the only user of task->rcu.
delayed_free_task() in kernel/fork.c also uses it, though it is only
called in the error case. Still, you may need to take a look to make sure
that there is no conflict.
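
For comparison, delayed_free_task() reuses the same rcu_head. Roughly
(paraphrased from kernel/fork.c, not verbatim; details may vary by kernel
version):

	/* Paraphrased sketch of the existing kernel/fork.c helper. */
	static void __delayed_free_task(struct rcu_head *rhp)
	{
		struct task_struct *tsk = container_of(rhp, struct task_struct, rcu);

		free_task(tsk);
	}

	/* Called only on copy_process() error paths. */
	static __always_inline void delayed_free_task(struct task_struct *tsk)
	{
		if (IS_ENABLED(CONFIG_MEMCG))
			call_rcu(&tsk->rcu, __delayed_free_task);
		else
			free_task(tsk);
	}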

Cheers,
Longman
