Date:	Tue, 10 Jan 2023 14:27:25 -0800
From:	"Paul E. McKenney" <>
Subject:	Re: [PATCH] sched/deadline: fix inactive_task_timer splat with CONFIG_PREEMPT_RT
On Tue, Jan 10, 2023 at 05:52:03PM -0300, Wander Lairson Costa wrote:
> On Mon, Jan 9, 2023 at 10:40 PM Paul E. McKenney <paulmck@kernel.org> wrote:
> >
> > On Wed, Jan 04, 2023 at 03:17:01PM -0300, Wander Lairson Costa wrote:
> > > inactive_task_timer() executes in interrupt (atomic) context. It calls
> > > put_task_struct(), which indirectly acquires sleeping locks under
> > > PREEMPT_RT.
> > >
> > > Below is an example of a splat that happened in a test environment:
> > >
> > > CPU: 1 PID: 2848 Comm: life Kdump: loaded Tainted: G W ---------
> > > Hardware name: HP ProLiant DL388p Gen8, BIOS P70 07/15/2012
> > > Call Trace:
> > >  dump_stack_lvl+0x57/0x7d
> > >  mark_lock_irq.cold+0x33/0xba
> > >  ? stack_trace_save+0x4b/0x70
> > >  ? save_trace+0x55/0x150
> > >  mark_lock+0x1e7/0x400
> > >  mark_usage+0x11d/0x140
> > >  __lock_acquire+0x30d/0x930
> > >  lock_acquire.part.0+0x9c/0x210
> > >  ? refill_obj_stock+0x3d/0x3a0
> > >  ? rcu_read_lock_sched_held+0x3f/0x70
> > >  ? trace_lock_acquire+0x38/0x140
> > >  ? lock_acquire+0x30/0x80
> > >  ? refill_obj_stock+0x3d/0x3a0
> > >  rt_spin_lock+0x27/0xe0
> > >  ? refill_obj_stock+0x3d/0x3a0
> > >  refill_obj_stock+0x3d/0x3a0
> > >  ? inactive_task_timer+0x1ad/0x340
> > >  kmem_cache_free+0x357/0x560
> > >  inactive_task_timer+0x1ad/0x340
> > >  ? switched_from_dl+0x2d0/0x2d0
> > >  __run_hrtimer+0x8a/0x1a0
> > >  __hrtimer_run_queues+0x91/0x130
> > >  hrtimer_interrupt+0x10f/0x220
> > >  __sysvec_apic_timer_interrupt+0x7b/0xd0
> > >  sysvec_apic_timer_interrupt+0x4f/0xd0
> > >  ? asm_sysvec_apic_timer_interrupt+0xa/0x20
> > >  asm_sysvec_apic_timer_interrupt+0x12/0x20
> > > RIP: 0033:0x7fff196bf6f5
> > >
> > > Instead of calling put_task_struct() directly, we defer it using
> > > call_rcu(). A more natural approach would use a workqueue, but since
> > > in PREEMPT_RT, we can't allocate dynamic memory from atomic context,
> > > the code would become more complex because we would need to put the
> > > work_struct instance in the task_struct and initialize it when we
> > > allocate a new task_struct.
> > >
> > > Signed-off-by: Wander Lairson Costa <wander@redhat.com>
> > > Cc: Paul McKenney <paulmck@kernel.org>
> > > Cc: Thomas Gleixner <tglx@linutronix.de>
> > > ---
> > >  kernel/sched/build_policy.c |  1 +
> > >  kernel/sched/deadline.c     | 24 +++++++++++++++++++++++-
> > >  2 files changed, 24 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/kernel/sched/build_policy.c b/kernel/sched/build_policy.c
> > > index d9dc9ab3773f..f159304ee792 100644
> > > --- a/kernel/sched/build_policy.c
> > > +++ b/kernel/sched/build_policy.c
> > > @@ -28,6 +28,7 @@
> > >  #include <linux/suspend.h>
> > >  #include <linux/tsacct_kern.h>
> > >  #include <linux/vtime.h>
> > > +#include <linux/rcupdate.h>
> > >
> > >  #include <uapi/linux/sched/types.h>
> > >
> > > diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> > > index 9ae8f41e3372..ab9301d4cc24 100644
> > > --- a/kernel/sched/deadline.c
> > > +++ b/kernel/sched/deadline.c
> > > @@ -1405,6 +1405,13 @@ static void update_curr_dl(struct rq *rq)
> > >  	}
> > >  }
> > >
> > > +static void delayed_put_task_struct(struct rcu_head *rhp)
> > > +{
> > > +	struct task_struct *task = container_of(rhp, struct task_struct, rcu);
> > > +
> > > +	__put_task_struct(task);
> >
> > Please note that BH is disabled here.  Don't you therefore
> > need to schedule a workqueue handler?  Perhaps directly from
> > inactive_task_timer(), or maybe from this point.  If the latter, one
> > way to skip the extra step is to use queue_rcu_work().
> >
>
> My initial work was using a workqueue [1,2]. However, I realized I
> could reach much simpler code with call_rcu().
>
> I am afraid my ignorance doesn't allow me to get your point. Does
> disabling softirq imply atomic context?
Given that this problem occurred in PREEMPT_RT, I am assuming that the appropriate definition of "atomic context" is "cannot call schedule()". And you are in fact not permitted to call schedule() from a bh-disabled region.
This also means that you cannot acquire a non-raw spinlock in a bh-disabled region of code in a PREEMPT_RT kernel, because doing so can invoke schedule().
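To make that concrete, here is a minimal illustration of that rule in code (my_lock and the surrounding function are made up for the example, not taken from the patch):

#include <linux/bottom_half.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(my_lock);	/* spinlock_t: rtmutex-based on PREEMPT_RT */

static void example(void)
{
	local_bh_disable();
	/*
	 * Per the rule above: on PREEMPT_RT this spin_lock() acquires a
	 * sleeping lock, which can invoke schedule() -- not permitted
	 * in a bh-disabled region.
	 */
	spin_lock(&my_lock);
	/* ... critical section ... */
	spin_unlock(&my_lock);
	local_bh_enable();
}

A raw_spinlock_t, by contrast, remains a true spinning lock even on PREEMPT_RT, which is why the rule is specific to non-raw spinlocks.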
Of course, using a workqueue does incur needless overhead in non-PREEMPT_RT kernels. So one alternative approach is to use the workqueue only in PREEMPT_RT kernels and to just invoke __put_task_struct() directly (without call_rcu() along the way) otherwise.
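For example, something along these lines (a rough, untested sketch, not the actual patch; the put_rcu_work field and both helper names are hypothetical, and put_rcu_work would have to be added to task_struct):

#include <linux/sched/task.h>
#include <linux/workqueue.h>

/* Sketch only.  Assumes task_struct grows: struct rcu_work put_rcu_work; */
static void put_task_struct_rcu_work_fn(struct work_struct *work)
{
	struct task_struct *p = container_of(to_rcu_work(work),
					     struct task_struct, put_rcu_work);

	/* Workqueue context: sleeping locks are fine here, even on RT. */
	__put_task_struct(p);
}

static void put_task_struct_atomic_safe(struct task_struct *p)
{
	if (!refcount_dec_and_test(&p->usage))
		return;

	if (IS_ENABLED(CONFIG_PREEMPT_RT)) {
		/*
		 * On PREEMPT_RT, __put_task_struct() may acquire sleeping
		 * locks, so defer it.  queue_rcu_work() waits for a grace
		 * period and then invokes the handler from a workqueue,
		 * folding the call_rcu() step and the workqueue hop into
		 * a single call.
		 */
		INIT_RCU_WORK(&p->put_rcu_work, put_task_struct_rcu_work_fn);
		queue_rcu_work(system_wq, &p->put_rcu_work);
	} else {
		__put_task_struct(p);
	}
}

This keeps the common (non-PREEMPT_RT) case free of both the grace-period wait and the workqueue hop.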
Does that help, or am I missing your point?
							Thanx, Paul
> [1] https://gitlab.com/walac/kernel-ark/-/commit/ec8addbe38d5c318f1789b4c0fa480a9d2afdb65
> [2] https://gitlab.com/walac/kernel-ark/-/commit/0bde233235ffed233a7466a36a4866bc48064f54
>
> > 							Thanx, Paul
> >
> > > +}
> > > +
> > >  static enum hrtimer_restart inactive_task_timer(struct hrtimer *timer)
> > >  {
> > >  	struct sched_dl_entity *dl_se = container_of(timer,
> > > @@ -1442,7 +1449,22 @@ static enum hrtimer_restart inactive_task_timer(struct hrtimer *timer)
> > >  	dl_se->dl_non_contending = 0;
> > >  unlock:
> > >  	task_rq_unlock(rq, p, &rf);
> > > -	put_task_struct(p);
> > > +
> > > +	if (IS_ENABLED(CONFIG_PREEMPT_RT)) {
> > > +		/*
> > > +		 * Decrement the refcount explicitly to avoid unnecessarily
> > > +		 * calling call_rcu.
> > > +		 */
> > > +		if (refcount_dec_and_test(&p->usage))
> > > +			/*
> > > +			 * under PREEMPT_RT, we can't call put_task_struct
> > > +			 * in atomic context because it will indirectly
> > > +			 * acquire sleeping locks.
> > > +			 */
> > > +			call_rcu(&p->rcu, delayed_put_task_struct);
> > > +	} else {
> > > +		put_task_struct(p);
> > > +	}
> > >
> > >  	return HRTIMER_NORESTART;
> > >  }
> > > --
> > > 2.39.0