Date: 2014-02-11
From: Thomas Gleixner
Subject: Re: [RFC PATCH] futex: Remove requirement for lock_page in get_futex_key
On Wed, 30 Oct 2013, Mel Gorman wrote:
> On Wed, Oct 30, 2013 at 09:45:31AM +0100, Thomas Gleixner wrote:
> > On Tue, 29 Oct 2013, Mel Gorman wrote:
> >
> > > Thomas Gleixner and Peter Zijlstra discussed off-list that real-time users
> > > currently have a problem with the page lock being contended for unbounded
> > > periods of time during futex operations. The three of us discussed the
> > > possibility that the page lock is unnecessary in this case because we are
> > > not concerned with the usual races with reclaim and page cache updates. For
> > > anonymous pages, the associated futex object is the mm_struct which does
> > > not require the page lock. For inodes, we should be able to check under
> > > RCU read lock if the page mapping is still valid to take a reference to
> > > the inode. This just leaves one rare race that requires the page lock
> > > in the slow path. This patch does not completely eliminate the page lock
> > > but it should reduce contention in the majority of cases (sketched
> > > below).
> > >
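
A minimal sketch of the lockless inode path described above; the helper
name and the details are illustrative only, not the actual patch:

/*
 * Sketch: take an inode reference under RCU instead of lock_page(),
 * falling back to the locked slow path whenever the mapping looks
 * unstable.
 */
static int futex_get_inode_ref(struct page *page, struct inode **pinode)
{
	struct address_space *mapping;
	struct inode *inode;

	rcu_read_lock();
	mapping = ACCESS_ONCE(page->mapping);
	if (!mapping || !mapping->host) {
		/* Truncation or remap raced with us: take the slow path. */
		rcu_read_unlock();
		return -EAGAIN;
	}
	inode = mapping->host;
	if (!atomic_inc_not_zero(&inode->i_count)) {
		/* Inode is already being freed: retry under lock_page(). */
		rcu_read_unlock();
		return -EAGAIN;
	}
	rcu_read_unlock();
	*pinode = inode;
	return 0;
}

On -EAGAIN the caller would fall back to the existing lock_page() slow
path, which covers the rare race mentioned above.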
> > > Patch boots and futextest did not explode but I did no comparison
> > > performance tests. Thomas, do you have details of the workload that
> > > drove you to examine this problem? Alternatively, can you test it and
> >
> > The scenario is simple. All you need is a PSHARED futex.
> >
> > Task A
> > get_futex_key()
> > lock_page()
> >
> > ---> preemption
> >
>
> Ok, so scaling the number of threads doing something like multiple
> consumers using FUTEX_WAIT and then all being woken should trigger it.
> Should not be that hard to devise a test if something in futextest does
> not do it already (a rough sketch of such a test follows below).
>
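
A rough sketch of such a test: FUTEX_WAIT without FUTEX_PRIVATE_FLAG on
a MAP_SHARED word forces the shared get_futex_key() path; NWAITERS and
the crude sleep() below are illustrative only:

#include <linux/futex.h>
#include <sys/syscall.h>
#include <sys/mman.h>
#include <pthread.h>
#include <unistd.h>

#define NWAITERS 64

static int *uaddr;	/* futex word in a MAP_SHARED mapping */

static long futex(int *addr, int op, int val)
{
	/* No FUTEX_PRIVATE_FLAG, so the kernel treats this as PSHARED. */
	return syscall(SYS_futex, addr, op, val, NULL, NULL, 0);
}

static void *waiter(void *arg)
{
	/* Block until the futex word changes from 0 and we are woken. */
	futex(uaddr, FUTEX_WAIT, 0);
	return NULL;
}

int main(void)
{
	pthread_t tid[NWAITERS];
	int i;

	uaddr = mmap(NULL, sizeof(*uaddr), PROT_READ | PROT_WRITE,
		     MAP_SHARED | MAP_ANONYMOUS, -1, 0);
	if (uaddr == MAP_FAILED)
		return 1;
	*uaddr = 0;

	for (i = 0; i < NWAITERS; i++)
		pthread_create(&tid[i], NULL, waiter, NULL);

	sleep(1);		/* crude: let all the waiters block */
	*uaddr = 1;
	futex(uaddr, FUTEX_WAKE, NWAITERS);

	for (i = 0; i < NWAITERS; i++)
		pthread_join(tid[i], NULL);
	return 0;
}

Scaling NWAITERS and looping the wait/wake cycle under CPU load should
make the lock_page() contention in get_futex_key() visible, per the
scenario above.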
> > Now any other task trying to lock that page will have to wait until
> > task A gets scheduled back in, which is an unbound time.
> >
> > It takes quite some time to reproduce, but I'll ask the people who
> > have that workload to give it a try.
> >
>
> Do please. I'd rather not sink time into trying to reproduce a hypothetical
> problem when people who are already familiar with it can provide better
> data. If it stays quiet for too long then I'll either use an existing
> futextest case, extend futextest, or conclude that the problem was not
> major in the first place if the users cannot be arsed to test a patch.

Took some time, but the folks finally came around to giving it a try and
it fixes their problem. It did not explode either, but I doubt that
their workload can trigger any of the corner cases.

Thanks,

tglx

