Subject: Re: [PATCH] workqueue: avoid re-entry of pwq->pool->lock through __queue_work
On Wed, Jul 27, 2022 at 7:04 PM Kassey Li <quic_yingangl@quicinc.com> wrote:
>
> [0:swapper/4]BUG: spinlock recursion on CPU#4, swapper/4/0
> [0:swapper/4]lock: 0xffffff8000c0f400, .magic: dead4ead, .owner:
> swapper/4/0, .owner_cpu: 4
> [0:swapper/4]CPU: 4 PID: 0 Comm: swapper/4 Tainted: G S
> [0:swapper/4]Call trace:
> [0:swapper/4] dump_backtrace.cfi_jt+0x0/0x8
> [0:swapper/4] show_stack+0x1c/0x2c
> [0:swapper/4] dump_stack_lvl+0xd8/0x16c
> [0:swapper/4] spin_dump+0x104/0x278
> [0:swapper/4] do_raw_spin_lock+0xec/0x15c
> [0:swapper/4] _raw_spin_lock+0x28/0x3c
> [0:swapper/4] __queue_work+0x1fc/0x618
> [0:swapper/4] queue_work_on+0x64/0x134
> [0:swapper/4] memlat_hrtimer_handler+0x28/0x3c [memlat]
> [0:swapper/4] __run_hrtimer+0xe8/0x448
> [0:swapper/4] hrtimer_interrupt+0x184/0x40c
> [0:swapper/4] arch_timer_handler_virt+0x5c/0x98
> [0:swapper/4] handle_percpu_devid_irq+0xd8/0x3e0
> [0:swapper/4] __handle_domain_irq+0xd0/0x19c
> [0:swapper/4] gic_handle_irq+0x6c/0x134
> [0:swapper/4] el1_irq+0xe4/0x1c0

It seems this IRQ is unexpected: IRQs should still have been disabled at this point.

> [0:swapper/4] _raw_spin_unlock_irqrestore+0x2c/0x60
> [0:swapper/4] try_to_wake_up.llvm.14610847381734009831+0x334/0x888
> [0:swapper/4] wake_up_process+0x1c/0x2c
> [0:swapper/4] __queue_work+0x3e8/0x618
> [0:swapper/4] delayed_work_timer_fn+0x24/0x34

delayed_work_timer_fn() should have been invoked with IRQs disabled,
since its timer is TIMER_IRQSAFE.

If possible, could you add some code to check whether that is really the case, please?
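
For example, something along these lines (just a debugging sketch, quoting the
function from memory, not meant for merging) should tell us whether the
TIMER_IRQSAFE expectation is being violated:

void delayed_work_timer_fn(struct timer_list *t)
{
	struct delayed_work *dwork = from_timer(dwork, t, timer);

	/* TIMER_IRQSAFE: the timer core should have IRQs off here */
	WARN_ONCE(!irqs_disabled(),
		  "delayed_work_timer_fn() called with IRQs enabled\n");

	__queue_work(dwork->cpu, dwork->wq, &dwork->work);
}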

> [0:swapper/4] call_timer_fn+0x58/0x268
> [0:swapper/4] expire_timers+0xe0/0x1c4

Or could you do a "disass expire_timers+0xe0" in GDB?

> [0:swapper/4] __run_timers+0x16c/0x1c4
> [0:swapper/4] run_timer_softirq+0x34/0x60
> [0:swapper/4] efi_header_end+0x198/0x59c
> [0:swapper/4] __irq_exit_rcu+0xdc/0xf0
> [0:swapper/4] irq_exit+0x14/0x50
> [0:swapper/4] __handle_domain_irq+0xd4/0x19c
> [0:swapper/4] gic_handle_irq+0x6c/0x134
> [0:swapper/4] el1_irq+0xe4/0x1c0
> [0:swapper/4] cpuidle_enter_state+0x1b4/0x5dc
> [0:swapper/4] cpuidle_enter+0x3c/0x58
> [0:swapper/4] do_idle.llvm.6296834828977863291+0x1f4/0x2e8
> [0:swapper/4] cpu_startup_entry+0x28/0x2c
> [0:swapper/4] secondary_start_kernel+0x1c8/0x230
>
> Signed-off-by: Kassey Li <quic_yingangl@quicinc.com>
> ---
> kernel/workqueue.c | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/kernel/workqueue.c b/kernel/workqueue.c
> index 1ea50f6be843..f23491f373b1 100644
> --- a/kernel/workqueue.c
> +++ b/kernel/workqueue.c
> @@ -1468,10 +1468,10 @@ static void __queue_work(int cpu, struct workqueue_struct *wq,
>  		} else {
>  			/* meh... not running there, queue here */
>  			raw_spin_unlock(&last_pool->lock);
> -			raw_spin_lock(&pwq->pool->lock);
> +			raw_spin_lock_irq(&pwq->pool->lock);
>  		}
>  	} else {
> -		raw_spin_lock(&pwq->pool->lock);
> +		raw_spin_lock_irq(&pwq->pool->lock);
>  	}
> 
>  	/*
> @@ -1484,7 +1484,7 @@ static void __queue_work(int cpu, struct workqueue_struct *wq,
>  	 */
>  	if (unlikely(!pwq->refcnt)) {
>  		if (wq->flags & WQ_UNBOUND) {
> -			raw_spin_unlock(&pwq->pool->lock);
> +			raw_spin_unlock_irq(&pwq->pool->lock);

The patch is hardly correct: __queue_work() is called with IRQs disabled,
so this change would re-enable IRQs in an unbalanced way.
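
__queue_work()'s callers save and restore the IRQ state themselves before
calling it; queue_work_on(), for instance, looks roughly like this (a sketch
from memory):

bool queue_work_on(int cpu, struct workqueue_struct *wq,
		   struct work_struct *work)
{
	bool ret = false;
	unsigned long flags;

	/* IRQs are disabled here and stay disabled across __queue_work() */
	local_irq_save(flags);

	if (!test_and_set_bit(WORK_STRUCT_PENDING_BIT, work_data_bits(work))) {
		__queue_work(cpu, wq, work);
		ret = true;
	}

	local_irq_restore(flags);
	return ret;
}

raw_spin_unlock_irq() inside __queue_work() would therefore turn IRQs back on
while the caller still expects them to be off, e.g. in the TIMER_IRQSAFE path
shown in the backtrace above.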

>  			cpu_relax();
>  			goto retry;
>  		}
> @@ -1517,7 +1517,7 @@ static void __queue_work(int cpu, struct workqueue_struct *wq,
>  	insert_work(pwq, work, worklist, work_flags);
> 
>  out:
> -	raw_spin_unlock(&pwq->pool->lock);
> +	raw_spin_unlock_irq(&pwq->pool->lock);
>  	rcu_read_unlock();
>  }
>
> --
> 2.17.1
>
