Date:    Thu, 27 Nov 2008
From:    Mike Waychison
Subject: Re: [RFC v1][PATCH]page_fault retry with NOPAGE_RETRY
Peter Zijlstra wrote:
> On Thu, 2008-11-27 at 01:28 -0800, Mike Waychison wrote:
>
>> Correct. I don't recall the numbers from the pathological cases we were
>> seeing, but IIRC it was on the order of tens of seconds, likely
>> exacerbated by slower-than-usual disks. I've been digging through my
>> inbox to find numbers without much success -- we've been using a variant
>> of this patch since 2.6.11.
>
>> We generally try to avoid such things, but sometimes (a) it can't be
>> easily avoided (third-party libraries, for instance), and (b) when it
>> hits us, it affects the overall health of the machine/cluster (the
>> monitoring daemons get blocked, which isn't very healthy).
>
> If it's only monitoring, there might be another solution: keep the
> required data in a separate (approximate) copy so that you don't need
> mmap_sem at all to show it.
>
> If your mmap_sem is so contended that your latencies are unacceptable,
> adding more users to it -- even for statistics gathering -- just isn't
> going to cure the situation.
>
> Furthermore, /proc code usually isn't written with performance in mind,
> so it's usually simple and robust code. Adding it to a hot path like
> you're doing doesn't seem advisable.
>
> Also, releasing and re-acquiring mmap_sem can significantly add to the
> cacheline bouncing that thing already has.
>
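
For illustration, a minimal userspace C sketch of the "separate
(approximate) copy" idea above: a writer publishes a snapshot of the
stats guarded by a sequence counter, so monitoring readers never take
the lock protecting the real data. The stats fields and function names
here are invented for the example, and the memory ordering is
simplified compared to a real seqlock:

/*
 * Sketch of the "approximate copy" pattern: readers poll a sequence
 * counter instead of taking the writer's lock, so a slow writer can
 * never block them for long.  Ordering is simplified for brevity.
 */
#include <stdatomic.h>
#include <stdio.h>

struct vm_stats_snapshot {            /* hypothetical stats payload */
	unsigned long total_vm;
	unsigned long rss;
};

static _Atomic unsigned int seq;      /* even = stable, odd = write in progress */
static struct vm_stats_snapshot snap; /* the approximate copy */

static void publish_stats(unsigned long total_vm, unsigned long rss)
{
	atomic_fetch_add(&seq, 1);    /* counter goes odd: update begins */
	snap.total_vm = total_vm;
	snap.rss = rss;
	atomic_fetch_add(&seq, 1);    /* counter even again: snapshot stable */
}

static struct vm_stats_snapshot read_stats(void)
{
	struct vm_stats_snapshot copy;
	unsigned int s1, s2;

	do {
		s1 = atomic_load(&seq);
		copy = snap;
		s2 = atomic_load(&seq);
	} while (s1 != s2 || (s1 & 1)); /* retry on torn or in-flight update */

	return copy;
}

int main(void)
{
	publish_stats(4096, 1024);
	struct vm_stats_snapshot s = read_stats();
	printf("total_vm=%lu rss=%lu\n", s.total_vm, s.rss);
	return 0;
}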

This is much less of a worry. We expect to look at these things on the
order of 1 Hz, so cacheline bouncing becomes negligible.

Lock-acquisition latency, however, does hurt, and is silly considering
it's just another reader. Our monitoring software here is acting as a
litmus test; the real pain is felt by other threads in the same process
that are also blocked trying to acquire the read lock.
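
That pain can be sketched in userspace terms: rather than holding the
read side of the lock across slow disk I/O, the fault path can fail
with a "retry" status, drop the lock, do the slow work unlocked, and
loop. The names below (FAULT_RETRY, handle_fault, and so on) are
invented for the example and only echo the NOPAGE_RETRY idea, not the
actual kernel patch:

/*
 * Toy analogue of the retry idea: slow I/O happens with the lock
 * dropped, so other readers -- and any writer queued behind them --
 * are not stuck behind a faulting thread waiting on disk.
 */
#include <pthread.h>
#include <stdbool.h>
#include <unistd.h>

static pthread_rwlock_t mmap_sem = PTHREAD_RWLOCK_INITIALIZER;
static bool page_resident;            /* stand-in for "page is in core" */

#define FAULT_OK    0
#define FAULT_RETRY 1                 /* analogue of NOPAGE_RETRY */

/* Runs with mmap_sem held for read; refuses to sleep while holding it. */
static int do_fault_locked(void)
{
	return page_resident ? FAULT_OK : FAULT_RETRY;
}

static void slow_disk_io(void)
{
	usleep(10000);                /* pretend to wait on the disk */
	pthread_rwlock_wrlock(&mmap_sem);
	page_resident = true;         /* publish the result under the lock */
	pthread_rwlock_unlock(&mmap_sem);
}

static void handle_fault(void)
{
	for (;;) {
		pthread_rwlock_rdlock(&mmap_sem);
		int ret = do_fault_locked();
		pthread_rwlock_unlock(&mmap_sem);

		if (ret == FAULT_OK)
			return;

		slow_disk_io();       /* I/O with the lock dropped */
		/* loop: revalidate under the lock; the world may have changed */
	}
}

int main(void)
{
	handle_fault();
	return 0;
}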

