From: Wander Lairson Costa <>
Date: Mon, 24 Apr 2023 17:34:29 -0300
Subject: Re: [PATCH v6 2/3] sched/task: Add the put_task_struct_atomic_safe() function
On Mon, Apr 24, 2023 at 3:52 PM Paul E. McKenney <paulmck@kernel.org> wrote:
>
> On Mon, Apr 24, 2023 at 03:43:09PM -0300, Wander Lairson Costa wrote:
> > On Mon, Apr 24, 2023 at 3:09 PM Paul E. McKenney <paulmck@kernel.org> wrote:
> > >
> > > On Fri, Apr 14, 2023 at 09:55:28AM -0300, Wander Lairson Costa wrote:
> > > > Due to the possibility of indirectly acquiring sleeping locks, it is
> > > > unsafe to call put_task_struct() in atomic contexts when the kernel is
> > > > compiled with PREEMPT_RT.
> > > >
> > > > To mitigate this issue, this commit introduces
> > > > put_task_struct_atomic_safe(), which schedules __put_task_struct()
> > > > through call_rcu() when PREEMPT_RT is enabled. While a workqueue would
> > > > be a more natural approach, we cannot allocate dynamic memory from
> > > > atomic context in PREEMPT_RT, making the code more complex.
> > > >
> > > > This implementation ensures safe execution in atomic contexts and
> > > > avoids any potential issues that may arise from using the non-atomic
> > > > version.
> > > >
> > > > Signed-off-by: Wander Lairson Costa <wander@redhat.com>
> > > > Reported-by: Hu Chunyu <chuhu@redhat.com>
> > > > Cc: Paul McKenney <paulmck@kernel.org>
> > > > Cc: Thomas Gleixner <tglx@linutronix.de>
> > > > ---
> > > >  include/linux/sched/task.h | 31 +++++++++++++++++++++++++++++++
> > > >  kernel/fork.c              |  8 ++++++++
> > > >  2 files changed, 39 insertions(+)
> > > >
> > > > diff --git a/include/linux/sched/task.h b/include/linux/sched/task.h
> > > > index b597b97b1f8f..5c13b83d7008 100644
> > > > --- a/include/linux/sched/task.h
> > > > +++ b/include/linux/sched/task.h
> > > > @@ -141,6 +141,37 @@ static inline void put_task_struct_many(struct task_struct *t, int nr)
> > > >
> > > >  void put_task_struct_rcu_user(struct task_struct *task);
> > > >
> > > > +extern void __delayed_put_task_struct(struct rcu_head *rhp);
> > > > +
> > > > +static inline void put_task_struct_atomic_safe(struct task_struct *task)
> > > > +{
> > > > +	if (IS_ENABLED(CONFIG_PREEMPT_RT)) {
> > > > +		/*
> > > > +		 * Decrement the refcount explicitly to avoid unnecessarily
> > > > +		 * calling call_rcu.
> > > > +		 */
> > > > +		if (refcount_dec_and_test(&task->usage))
> > > > +			/*
> > > > +			 * under PREEMPT_RT, we can't call put_task_struct
> > > > +			 * in atomic context because it will indirectly
> > > > +			 * acquire sleeping locks.
> > > > +			 * call_rcu() will schedule delayed_put_task_struct_rcu()
> > > > +			 * to be called in process context.
> > > > +			 *
> > > > +			 * __put_task_struct() is called called when
> > > > +			 * refcount_dec_and_test(&t->usage) succeeds.
> > > > +			 *
> > > > +			 * This means that it can't "conflict" with
> > > > +			 * put_task_struct_rcu_user() which abuses ->rcu the same
> > > > +			 * way; rcu_users has a reference so task->usage can't be
> > > > +			 * zero after rcu_users 1 -> 0 transition.
> > > > +			 */
> > > > +			call_rcu(&task->rcu, __delayed_put_task_struct);
> > >
> > > This will invoke __delayed_put_task_struct() with softirqs disabled.
> > > Or do softirq-disabled contexts count as non-atomic in PREEMPT_RT?
> >
> > softirqs execute in thread context in PREEMPT_RT. We are good here.
>
> So the sleeping lock is a spinlock rather than (say) a mutex?
>
Yes, under PREEMPT_RT, spinlocks are implemented in terms of rtmutex.
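For readers less familiar with PREEMPT_RT, a simplified sketch of what that means in practice (not the exact kernel definitions, which live in include/linux/spinlock_types.h and include/linux/spinlock_rt.h): on RT, spinlock_t is backed by an rtmutex, so spin_lock() may sleep, while raw_spinlock_t keeps the classic non-sleeping behavior.

/*
 * Simplified illustration only, loosely mirroring the PREEMPT_RT
 * definitions; the real kernel code carries extra debugging fields.
 */
typedef struct spinlock {
	struct rt_mutex_base	lock;	/* a sleeping, priority-inheriting lock */
} spinlock_t;

static inline void spin_lock(spinlock_t *lock)
{
	rt_spin_lock(lock);	/* may sleep on RT, so unusable from hard atomic context */
}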
> Thanx, Paul
>
> > > > +	} else {
> > > > +		put_task_struct(task);
> > > > +	}
> > > > +}
> > > > +
> > > >  /* Free all architecture-specific resources held by a thread. */
> > > >  void release_thread(struct task_struct *dead_task);
> > > >
> > > > diff --git a/kernel/fork.c b/kernel/fork.c
> > > > index 0c92f224c68c..9884794fe4b8 100644
> > > > --- a/kernel/fork.c
> > > > +++ b/kernel/fork.c
> > > > @@ -854,6 +854,14 @@ void __put_task_struct(struct task_struct *tsk)
> > > >  }
> > > >  EXPORT_SYMBOL_GPL(__put_task_struct);
> > > >
> > > > +void __delayed_put_task_struct(struct rcu_head *rhp)
> > > > +{
> > > > +	struct task_struct *task = container_of(rhp, struct task_struct, rcu);
> > > > +
> > > > +	__put_task_struct(task);
> > > > +}
> > > > +EXPORT_SYMBOL_GPL(__delayed_put_task_struct);
> > > > +
> > > >  void __init __weak arch_task_cache_init(void) { }
> > > >
> > > >  /*
> > > > --
> > > > 2.39.2
> > > >
> > >
>
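As a purely hypothetical usage sketch (none of the names below come from this series): a caller that holds a raw_spinlock_t, which stays truly atomic even on PREEMPT_RT, could drop a task reference with the new helper; on RT the final __put_task_struct() is then deferred to process context via call_rcu(), and on !RT it behaves exactly like put_task_struct().

/* Hypothetical example, not taken from this patch series. */
static DEFINE_RAW_SPINLOCK(example_lock);
static struct task_struct *cached_task;

static void example_cache_task(struct task_struct *new)
{
	struct task_struct *old;

	if (new)
		get_task_struct(new);

	raw_spin_lock(&example_lock);		/* atomic even on PREEMPT_RT */
	old = cached_task;
	cached_task = new;
	if (old)
		/*
		 * put_task_struct() could indirectly take sleeping locks here
		 * on PREEMPT_RT; the _atomic_safe variant defers the final
		 * __put_task_struct() to process context via call_rcu().
		 */
		put_task_struct_atomic_safe(old);
	raw_spin_unlock(&example_lock);
}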