    Subject: Re: [PATCH 1/4] locking/ww_mutex: Fix a deadlock affecting ww_mutexes
    On Thu, Nov 24, 2016 at 12:52:25PM +0100, Daniel Vetter wrote:
    > On Thu, Nov 24, 2016 at 12:40 PM, Peter Zijlstra <peterz@infradead.org> wrote:
    > >
    > >> I do believe we can win a bit by keeping the wait list sorted, if we also
    > >> make sure that waiters don't add themselves in the first place if they see
    > >> that a deadlock situation cannot be avoided.
    > >>
    > >> I will probably want to extend struct mutex_waiter with ww_mutex-specific
    > >> fields to facilitate this (i.e. ctx pointer, perhaps stamp as well to reduce
    > >> pointer-chasing). That should be fine since it lives on the stack.
    > >
    > > Right, shouldn't be a problem I think.
    > >
    > > The only 'problem' I can see with using that is that it's possible to mix
    > > ww and !ww waiters through ww_mutex_lock(.ctx = NULL). This makes the
    > > list order somewhat tricky.
    > >
    > > Ideally we'd remove that feature, although I see it's actually used quite
    > > a bit :/
    >
    > I guess we could create a small fake acquire_ctx for single-lock
    > paths. That way callers still don't need to deal with having an
    > explicit ctx, but we can assume the timestamp (for ensuring fairness)
    > is available for all cases. Otherwise there's indeed a problem with
    > correctly (well, fairly) interleaving ctx and non-ctx lockers, I think.

    Actually tried that, but we need a ww_class to get a stamp from, and
    ww_mutex_lock() doesn't have one of those..

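    A similarly rough sketch of the struct mutex_waiter extension quoted above
    (ctx pointer plus a cached stamp) and of a wait list kept sorted by stamp.
    The field names and the insertion policy for plain !ww waiters are made up
    for illustration; that policy is exactly the tricky part mentioned above.

    #include <stdio.h>

    struct ww_acquire_ctx { unsigned long stamp; };

    struct mutex_waiter {
            struct mutex_waiter *next;
            struct ww_acquire_ctx *ww_ctx;  /* NULL for plain (!ww) waiters */
            unsigned long stamp;            /* cached copy, avoids pointer chasing */
    };

    /*
     * One possible policy: ww waiters are kept sorted by stamp (older, i.e.
     * lower, stamps towards the head); plain waiters are appended FIFO at the
     * tail.  How to interleave the two kinds fairly is the open question.
     */
    static void waiter_add_sorted(struct mutex_waiter **head, struct mutex_waiter *w)
    {
            struct mutex_waiter **p = head;

            while (*p && (!w->ww_ctx || ((*p)->ww_ctx && (*p)->stamp <= w->stamp)))
                    p = &(*p)->next;
            w->next = *p;
            *p = w;
    }

    int main(void)
    {
            struct ww_acquire_ctx a = { .stamp = 2 }, b = { .stamp = 1 };
            struct mutex_waiter wa = { .ww_ctx = &a, .stamp = a.stamp };
            struct mutex_waiter wb = { .ww_ctx = &b, .stamp = b.stamp };
            struct mutex_waiter plain = { 0 };      /* ww_mutex_lock(.ctx = NULL) waiter */
            struct mutex_waiter *head = NULL;

            waiter_add_sorted(&head, &wa);
            waiter_add_sorted(&head, &plain);
            waiter_add_sorted(&head, &wb);          /* older stamp moves ahead */

            /* output order: stamp=1 (ww), stamp=2 (ww), stamp=0 (plain) */
            for (struct mutex_waiter *w = head; w; w = w->next)
                    printf("stamp=%lu %s\n", w->stamp, w->ww_ctx ? "(ww)" : "(plain)");
            return 0;
    }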