 
Subject: Re: [PATCH -mm 2/2] mm: do not reset mm->free_area_cache on every single munmap
On Thu, 23 Feb 2012 15:00:34 -0500
Rik van Riel <riel@redhat.com> wrote:

> Some programs have a large number of VMAs, and make frequent calls
> to mmap and munmap. Having every munmap reset the search pointer
> for get_unmapped_area causes a significant slowdown for such
> programs.
>
> Likewise, starting all the way from the top any time we mmap a small
> VMA can greatly increase the amount of time spent in
> arch_get_unmapped_area_topdown.
>
> For programs with many VMAs, a next-fit algorithm would be fastest;
> however, that could waste a lot of virtual address space, and
> potentially page table memory.
>
> A compromise is to reset the search pointer for get_unmapped_area
> after we have unmapped 1/8th of the normal memory in a process.

ick!

> For
> a process with 1000 similarly sized VMAs, that means the search pointer
> will only be reset once every 125 or so munmaps. The cost is that
> the program may use about 1/8th more virtual space for these VMAs,
> and up to 1/8th more page tables.
>
> We do not count special mappings, since there are programs that
> use a large fraction of their address space mapping device memory,
> etc.
>
> The benefit is that things scale a lot better, and we remove about
> 200 lines of code.
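
To make the heuristic concrete, here is a minimal user-space sketch of
the idea described above. This is not the actual patch: the struct only
mimics the relevant mm_struct fields, and unmapped_since_reset is an
assumed counter name used for illustration.

	/*
	 * Sketch only: a user-space stand-in for the relevant
	 * mm_struct fields.  unmapped_since_reset is hypothetical.
	 */
	struct mm_sketch {
		unsigned long free_area_cache;      /* where get_unmapped_area resumes */
		unsigned long total_vm;             /* pages of normal memory mapped */
		unsigned long unmapped_since_reset; /* pages unmapped since last reset */
	};

	#define UNMAPPED_BASE 0x10000UL	/* stand-in for TASK_UNMAPPED_BASE */

	/* Called for every munmap of a normal (non-special) mapping. */
	static void note_munmap(struct mm_sketch *mm, unsigned long nr_pages)
	{
		mm->unmapped_since_reset += nr_pages;

		/*
		 * Reset the search pointer only after roughly 1/8th of
		 * the process's normal memory has been unmapped, rather
		 * than on every single munmap.  The trade-off is the one
		 * described above: searches stay cheap, at the cost of
		 * up to ~1/8th more address space and page tables.
		 */
		if (mm->unmapped_since_reset >= mm->total_vm / 8) {
			mm->free_area_cache = UNMAPPED_BASE;
			mm->unmapped_since_reset = 0;
		}
	}

Special mappings (device memory and the like) would simply never be
counted here, matching the exclusion described in the changelog.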

We've been playing whack-a-mole with this search for many years. What
about developing a proper data structure with which to locate a
suitable-sized hole in O(log(N)) time?
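
One shape such a structure could take is a balanced tree of VMAs
ordered by address, where each node also caches the largest free gap
anywhere in its subtree, so a search can discard whole subtrees that
cannot contain a big-enough hole. The sketch below illustrates the
lookup under those assumptions; it is not existing kernel code, and
every name in it is hypothetical.

	struct vma_node {
		unsigned long start, end;	/* this VMA's address range */
		unsigned long gap;		/* free space immediately below this VMA */
		unsigned long max_gap;		/* largest gap anywhere in this subtree */
		struct vma_node *left, *right;
	};

	/*
	 * Return the lowest start address of a free gap of at least
	 * @len bytes, or 0 if no such gap exists.  Because max_gap
	 * summarizes whole subtrees, each step discards the subtrees
	 * that cannot help, giving O(log N) on a balanced tree.
	 */
	static unsigned long find_hole(struct vma_node *node, unsigned long len)
	{
		while (node) {
			/*
			 * Lowest addresses first: descend left whenever
			 * that subtree can possibly hold a big-enough gap.
			 */
			if (node->left && node->left->max_gap >= len) {
				node = node->left;
				continue;
			}
			/* Next lowest candidate: the gap just below this VMA. */
			if (node->gap >= len)
				return node->start - node->gap;
			/* Otherwise the only remaining hope is the right subtree. */
			if (node->right && node->right->max_gap >= len)
				node = node->right;
			else
				return 0;
		}
		return 0;
	}

Keeping max_gap current costs O(log N) per insert or erase, since only
the nodes along the rebalancing path need recomputing, so both mmap and
the hole search stay logarithmic in the number of VMAs.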


