Subject: Re: lockdep: possible irq lock inversion dependency detected (trig->leddev_list_lock)
On Sat, Oct 31, 2020 at 06:17:40PM +0800, Boqun Feng wrote:
> Hi Andrea,
>
> On Sun, Nov 01, 2020 at 10:26:14AM +0100, Andrea Righi wrote:
> > I'm getting the following lockdep splat (see below).
> >
> > Apparently this warning started being reported after applying:
> >
> > e918188611f0 ("locking: More accurate annotations for read_lock()")
> >
> > It looks like a false positive to me, but it made me think a bit, and
> > IIUC there can still be a potential deadlock, even if the deadlock
> > scenario is a bit different from what lockdep is showing.
> >
> > On the assumption that read-locks are recursive only in in_interrupt()
> > context (as stated in e918188611f0), the following scenario can still
> > happen:
> >
> > CPU0                                     CPU1
> > ----                                     ----
> > read_lock(&trig->leddev_list_lock);
> >                                          write_lock(&trig->leddev_list_lock);
> > <soft-irq>
> > kbd_bh()
> >  -> read_lock(&trig->leddev_list_lock);
> >
> > *** DEADLOCK ***
> >
> > The write-lock is waiting on CPU1 and the second read_lock() on CPU0
> > would be blocked by the write-lock *waiter* on CPU1 => deadlock.
> >
>
> No, this is not a deadlock: a write-lock waiter only blocks
> *non-recursive* readers, and since the read_lock() in kbd_bh() is called
> in soft-irq context (where in_interrupt() returns true), it's a
> recursive reader and won't get blocked by the write-lock waiter.

That's right, I was missing that in_interrupt() also returns true in
soft-irq context.
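
If I'm reading kernel/locking/qrwlock.c right, the relevant bit is the
reader slow path (roughly paraphrased from memory here, so the details
may be off):

	void queued_read_lock_slowpath(struct qrwlock *lock)
	{
		if (unlikely(in_interrupt())) {
			/*
			 * Readers in interrupt context take the lock as
			 * soon as no writer *holds* it, even if a writer
			 * is already queued, i.e. they behave as
			 * recursive readers.
			 */
			atomic_cond_read_acquire(&lock->cnts,
						 !(VAL & _QW_LOCKED));
			return;
		}

		/* process-context readers queue up behind pending writers */
		...
	}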

>
> > In that case we could prevent this deadlock condition using a workqueue
> > to call kbd_propagate_led_state() instead of calling it directly from
> > kbd_bh() (even if lockdep would still report the false positive).
> >
>
> The deadlock scenario reported by the following splat is:
>
>
> CPU 0:                                CPU 1:                                  CPU 2:
> -----                                 -----                                   -----
> led_trigger_event():
>   read_lock(&trig->leddev_list_lock);
>                                       <work queue processing>
>                                       ata_hsm_qs_complete():
>                                         spin_lock_irqsave(&host->lock);
>                                                                               write_lock(&trig->leddev_list_lock);
>                                           ata_port_freeze():
>                                             ata_do_link_abort():
>                                               ata_qc_complete():
>                                                 ledtrig_disk_activity():
>                                                   led_trigger_blink_oneshot():
>                                                     read_lock(&trig->leddev_list_lock);
>                                                     // ^ not in in_interrupt() context, so could get blocked by CPU 2
> <interrupt>
>   ata_bmdma_interrupt():
>     spin_lock_irqsave(&host->lock);
>
> , where CPU 0 is blocked by CPU 1 because of the spin_lock_irqsave() in
> ata_bmdma_interrupt(), CPU 1 is blocked by CPU 2 because the read_lock()
> in led_trigger_blink_oneshot() has to wait behind CPU 2's pending
> write_lock(), and CPU 2 (an arbitrary writer on &trig->leddev_list_lock)
> is blocked by CPU 0 because of the read_lock() held in
> led_trigger_event().
>
> So I don't think it's a false positive, but I might be missing something
> obvious, because I don't know what the code here actually does ;-)

With the CPU 2 part it all makes sense now, and lockdep was right. :)

At this point I think we could just schedule a separate work item to do
the LED trigger, so that it is never called with host->lock held; that
should prevent the deadlock. I'll send a patch to do that.
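
Something along these lines, just as a rough and untested sketch (the
ata_led_work / ata_led_work_fn names below are made up for
illustration):

	/*
	 * untested sketch: defer the disk-activity LED trigger to process
	 * context, so it is never called with host->lock held
	 */
	static void ata_led_work_fn(struct work_struct *work)
	{
		/* runs from the system workqueue, no host->lock held here */
		ledtrig_disk_activity(false);	/* the real patch would pass
						 * the R/W direction of the
						 * completed command */
	}
	static DECLARE_WORK(ata_led_work, ata_led_work_fn);

	/*
	 * then, in ata_qc_complete(), instead of calling
	 * ledtrig_disk_activity() directly:
	 */
	schedule_work(&ata_led_work);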

Thanks tons for your detailed explanation!

-Andrea
