Date: 8 Mar 2013
From: Howard Chu
Subject: Re: mmap vs fs cache
Chris Friesen wrote:
> On 03/08/2013 03:40 AM, Howard Chu wrote:
>
>> There is no way that a process that is accessing only 30GB of a mmap
>> should be able to fill up 32GB of RAM. There's nothing else running on
>> the machine, I've killed or suspended everything else in userland
>> besides a couple shells running top and vmstat. When I manually
>> drop_caches repeatedly, then eventually slapd RSS/SHR grows to 30GB and
>> the physical I/O stops.
>
> Is it possible that the kernel is doing some sort of automatic
> readahead, but it ends up reading pages corresponding to data that isn't
> ever queried and so doesn't get mapped by the application?

Yes, that's what I was thinking. I added a posix_madvise(..POSIX_MADV_RANDOM)
call, but it had no effect on the test.
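For reference, the hint was applied roughly like this (a sketch, not the
actual slapd code; the file name and the minimal error handling are
placeholders):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>

    int main(void)
    {
        struct stat st;
        int fd = open("data.mdb", O_RDONLY);    /* placeholder path */

        if (fd < 0 || fstat(fd, &st) < 0)
            return 1;

        void *map = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        if (map == MAP_FAILED)
            return 1;

        /* Advisory only: tells the kernel not to assume sequential
         * access, which should suppress readahead on this mapping.
         * posix_madvise() returns an error number directly rather
         * than setting errno. */
        int rc = posix_madvise(map, st.st_size, POSIX_MADV_RANDOM);
        if (rc != 0)
            fprintf(stderr, "posix_madvise failed: %d\n", rc);

        return 0;
    }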

First obvious conclusion: kswapd is being too aggressive. When free memory
hits the low watermark, reclaim shrinks slapd from 25GB down to 18-19GB while
the page cache still holds ~7GB of unmapped pages. Ideally I'd like a tuning
knob so I can say "keep no more than 2GB of unmapped pages in the cache."
(The desired effect being, in this case, to allow user processes to grow to
30GB total.)
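For anyone who wants to watch that split from userland: the unmapped share of
the cache can be approximated from /proc/meminfo as Cached minus Mapped. A
rough sketch (Cached also counts shmem and excludes swap cache, so treat the
number as an estimate):

    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/meminfo", "r");
        char line[128];
        long cached = 0, mapped = 0;    /* both reported in kB */

        if (!f)
            return 1;
        while (fgets(line, sizeof(line), f)) {
            sscanf(line, "Cached: %ld kB", &cached);
            sscanf(line, "Mapped: %ld kB", &mapped);
        }
        fclose(f);

        /* page cache not mapped into any process, approximately */
        printf("unmapped page cache: ~%ld MB\n", (cached - mapped) / 1024);
        return 0;
    }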

I already mentioned this "unmapped page cache control" post
(http://lwn.net/Articles/436010/), but it seems the idea was ultimately
rejected. Is there anything else similar in current kernels?
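(For completeness, the manual drop_caches workaround mentioned at the top of
the thread is just a write to the procfs knob, i.e. the equivalent of
"echo 1 > /proc/sys/vm/drop_caches" as root; a minimal sketch:)

    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        /* 1 = clean page cache, 2 = dentries+inodes, 3 = both */
        int fd = open("/proc/sys/vm/drop_caches", O_WRONLY);

        if (fd < 0)
            return 1;
        sync();    /* flush dirty pages so they become droppable */
        if (write(fd, "1", 1) != 1)
            return 1;
        close(fd);
        return 0;
    }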

--
-- Howard Chu
CTO, Symas Corp. http://www.symas.com
Director, Highland Sun http://highlandsun.com/hyc/
Chief Architect, OpenLDAP http://www.openldap.org/project/

