 
Subject: Re: [PATCH -mm 2/2] mm: do not reset mm->free_area_cache on every single munmap
On 02/23/2012 04:56 PM, Andrew Morton wrote:

> We've been playing whack-a-mole with this search for many years. What
> about developing a proper data structure with which to locate a
> suitable-sized hole in O(log(N)) time?

I got around to looking at this, and the more I look, the
worse things get. The obvious (and probably the highest
reasonable complexity) solution looks like this:

struct free_area {
	unsigned long address;	/* start of the free hole */
	struct rb_node rb_addr;	/* node in the address-ordered tree */
	unsigned long size;	/* length of the hole */
	struct rb_node rb_size;	/* node in the size-ordered tree */
};

This works in a fairly obvious way for normal mmap
and munmap calls, inserting the free area into the tree
at the desired location, or expanding one that is already
there.
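
To make that concrete, here is a minimal sketch of what the
double insertion would look like against the rbtree API. The
per-mm free_by_addr/free_by_size roots (and whatever locking
would protect them) are hypothetical names, not existing code:

static void free_area_insert(struct rb_root *by_addr,
			     struct rb_root *by_size,
			     struct free_area *fa)
{
	struct rb_node **link = &by_addr->rb_node, *parent = NULL;

	/* Insert into the address-ordered tree. */
	while (*link) {
		struct free_area *cur;

		parent = *link;
		cur = rb_entry(parent, struct free_area, rb_addr);
		if (fa->address < cur->address)
			link = &parent->rb_left;
		else
			link = &parent->rb_right;
	}
	rb_link_node(&fa->rb_addr, parent, link);
	rb_insert_color(&fa->rb_addr, by_addr);

	/* Repeat for the size-ordered tree. */
	link = &by_size->rb_node;
	parent = NULL;
	while (*link) {
		struct free_area *cur;

		parent = *link;
		cur = rb_entry(parent, struct free_area, rb_size);
		if (fa->size < cur->size)
			link = &parent->rb_left;
		else
			link = &parent->rb_right;
	}
	rb_link_node(&fa->rb_size, parent, link);
	rb_insert_color(&fa->rb_size, by_size);
}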

However, it totally falls apart when we need to get
aligned areas, e.g. for hugetlb, or for cache coloring on
architectures with virtually indexed caches.

For those kinds of allocations, we are back to tree
walking just like today, giving us a fairly large amount
of additional complexity for no obvious gain.
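
To illustrate, here is a sketch of what an aligned lookup
degenerates into (again with hypothetical names): the size
tree only orders holes by length, so alignment has to be
checked hole by hole, which is a linear walk in the worst
case:

static unsigned long
free_area_find_aligned(struct rb_root *by_size,
		       unsigned long len, unsigned long align)
{
	struct rb_node *node;

	/* Walk holes from smallest to largest; the size tree
	 * cannot answer "smallest hole with an aligned fit"
	 * directly, so every candidate must be inspected. */
	for (node = rb_first(by_size); node; node = rb_next(node)) {
		struct free_area *fa;
		unsigned long start;

		fa = rb_entry(node, struct free_area, rb_size);
		start = ALIGN(fa->address, align);
		if (start + len <= fa->address + fa->size)
			return start;	/* aligned fit found */
	}
	return -ENOMEM;
}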

Is this really the path we want to go down?

--
All rights reversed

