Subject: Re: [RFC] Are you good with Lockdep?

On Wed, Nov 11, 2020 at 09:36:09AM -0500, Steven Rostedt wrote:
> And this is especially true with lockdep, because lockdep only detects the
> deadlock, it doesn't tell you which lock was the incorrect locking.
>
> For example. If we have a locking chain of:
>
> A -> B -> D
>
> A -> C -> D
>
> Which on a correct system looks like this:
>
> lock(A)
> lock(B)
> unlock(B)
> unlock(A)
>
> lock(B)
> lock(D)
> unlock(D)
> unlock(B)
>
> lock(A)
> lock(C)
> unlock(C)
> unlock(A)
>
> lock(C)
> lock(D)
> unlock(D)
> unlock(C)
>
> which creates the above chains in that order.
>
> But, let's say we have a bug and the system boots up doing:
>
> lock(D)
> lock(A)
> unlock(A)
> unlock(D)
>
> which creates the incorrect chain.
>
> D -> A
>
>
> Now you do the correct locking:
>
> lock(A)
> lock(B)
>
> Creates A -> B
>
> lock(A)
> lock(C)
>
> Creates A -> C
>
> lock(B)
> lock(D)
>
> Creates B -> D and lockdep detects:
>
> D -> A -> B -> D
>
> and gives us the lockdep splat!!!
>
> But we don't disable lockdep. We let it continue...
>
> lock(C)
> lock(D)
>
> Which creates C -> D
>
> Now it explodes with D -> A -> C -> D

It would be better to report both so that we can choose between breaking
the single D -> A chain and breaking both the A -> B -> D and
A -> C -> D chains.
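
To make your scenario concrete, here is a minimal dummy module I put
together just for illustration (my sketch, not something from your mail);
the mutex names mirror the A/B/C/D above. The single out-of-order
acquisition in bad_boot_path() is the only real bug; every later,
correctly ordered acquisition merely closes another cycle through that
same D -> A edge:

#include <linux/module.h>
#include <linux/mutex.h>

static DEFINE_MUTEX(A);
static DEFINE_MUTEX(B);
static DEFINE_MUTEX(C);
static DEFINE_MUTEX(D);

static void bad_boot_path(void)
{
	/* records the incorrect D -> A edge */
	mutex_lock(&D);
	mutex_lock(&A);
	mutex_unlock(&A);
	mutex_unlock(&D);
}

static void correct_paths(void)
{
	/* A -> B */
	mutex_lock(&A);
	mutex_lock(&B);
	mutex_unlock(&B);
	mutex_unlock(&A);

	/* A -> C */
	mutex_lock(&A);
	mutex_lock(&C);
	mutex_unlock(&C);
	mutex_unlock(&A);

	/* B -> D: triggers the first splat, D -> A -> B -> D */
	mutex_lock(&B);
	mutex_lock(&D);
	mutex_unlock(&D);
	mutex_unlock(&B);

	/*
	 * C -> D: would trigger a second splat, D -> A -> C -> D,
	 * but only if lockdep keeps checking after the first report.
	 */
	mutex_lock(&C);
	mutex_lock(&D);
	mutex_unlock(&D);
	mutex_unlock(&C);
}

static int __init splat_demo_init(void)
{
	bad_boot_path();
	correct_paths();
	return 0;
}
module_init(splat_demo_init);

MODULE_LICENSE("GPL");

With multi-reporting, both cycles would be in the log and we could see
that they share the one D -> A edge.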

> Which it already reported. And it can be much more complex when dealing
> with interrupt contexts and longer chains. That is, perhaps a different

IRQ context is much worse than longer chains. I understand what you are
trying to explain.

> chain had a missing irq disable, now you might get 5 or 6 more lockdep
> splats because of that one bug.
>
> The point I'm making is that the lockdep splats after the first one may
> just be another version of the same bug and not a new one. Worse, if you
> only look at the later lockdep splats, it may be much more difficult to
> find the original bug than if you just had the first one. Believe me, I've

If the later lockdep splats make the bug more difficult to fix, then we
can look only at the first one. If they are more informative, then we can
check all the splats. Either way, it's up to us.

> been down that road too many times!
>
> And it can be very difficult to know if new lockdep splats are not the same
> bug, and this will waste a lot of developers' time!

Again, we don't have to waste time. We can go with the first one.

> This is why the decision to disable lockdep after the first splat was made.
> There were times I wanted to check locking somewhere, but I was using
> linux-next which had a lockdep splat that I didn't care about. So I
> made it not disable lockdep. And then I hit this exact scenario, that the
> one incorrect chain was causing reports all over the place. To solve it, I
> had to patch the incorrect chain to do raw locking to have lockdep ignore
> it ;-) Then I was able to test the code I was interested in.

It's not a problem of single-reporting versus multi-reporting; it's a
problem of the lock that created the incorrect chain being hard to deal
with.

Even if you had been using single-reporting lockdep, you would still have
had to keep ignoring those locks in the same way until you got to the
code you were interested in.
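
(As an aside on the "raw locking" workaround you mention above: assuming
the goal was only to keep lockdep from validating that one known-bad lock
while you tested something else, lockdep_set_novalidate_class() might
have done the job without converting the locking itself. A rough sketch
of mine, not what you actually did:

#include <linux/lockdep.h>
#include <linux/mutex.h>

/* the lock that creates the bogus chain */
static DEFINE_MUTEX(known_bad_lock);

static void quiet_known_bad_lock(void)
{
	/* lockdep stops building chains through this one lock only */
	lockdep_set_novalidate_class(&known_bad_lock);
}

That keeps the rest of the validation running.)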

> I think I understand it. For things like completions and other "wait for
> events" we have lockdep annotation, but it is rather awkward to implement.
> Having something that says "lockdep_wait_event()" and
> "lockdep_exec_event()" wrappers would be useful.

Yes. It's a problem of a lack of APIs. It could be done by reverting the
revert of cross-release without big changes. ;-)
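
Until then, something along these lines could be built on the lockdep_map
machinery that e.g. the workqueue flush path already uses. This is only a
rough sketch of mine of what the wrappers you named might look like; the
struct and helper names are made up for illustration, and I split the
exec side into begin/end the way process_one_work() wraps a work item:

#include <linux/completion.h>
#include <linux/lockdep.h>

struct xevent {
	struct completion	done;
	struct lockdep_map	dep_map;
};

#define xevent_init(e)						\
do {								\
	static struct lock_class_key __key;			\
	init_completion(&(e)->done);				\
	lockdep_init_map(&(e)->dep_map, #e, &__key, 0);		\
} while (0)

/* waiter side: "this context depends on the event being executed" */
static inline void lockdep_wait_event(struct xevent *e)
{
	lock_map_acquire(&e->dep_map);
	lock_map_release(&e->dep_map);
	wait_for_completion(&e->done);
}

/* executor side: wrap the whole execution the waiter depends on */
static inline void lockdep_exec_event_begin(struct xevent *e)
{
	lock_map_acquire(&e->dep_map);
}

static inline void lockdep_exec_event_end(struct xevent *e)
{
	complete(&e->done);
	lock_map_release(&e->dep_map);
}

Any lock the executor takes between begin and end records event -> lock,
and a waiter calling lockdep_wait_event() while holding that same lock
records lock -> event, so the inversion shows up as a cycle, just like
the existing work->lockdep_map annotation in the workqueue code.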

Thanks,
Byungchul
