Subject: Re: [RFC PATCH v2 0/4] mm: reclaim zbud pages on migration and compaction
On 08/11/2013 07:25 PM, Minchan Kim wrote:
> +int set_pinned_page(struct pin_page_owner *owner,
> +			struct page *page, void *private)
> +{
> +	struct pin_page_info *pinfo = kmalloc(sizeof(*pinfo), GFP_KERNEL);
> +
> +	if (!pinfo)
> +		return -ENOMEM;
> +
> +	INIT_HLIST_NODE(&pinfo->hlist);
> +	pinfo->owner = owner;
> +
> +	pinfo->pfn = page_to_pfn(page);
> +	pinfo->private = private;
> +
> +	spin_lock(&hash_lock);
> +	hash_add(pin_page_hash, &pinfo->hlist, pinfo->pfn);
> +	spin_unlock(&hash_lock);
> +
> +	SetPinnedPage(page);
> +	return 0;
> +}

I definitely agree that we're getting to the point where we need to look
at this more generically. We've got at least four use-cases that need to
deterministically relocate memory:

1. CMA (many sub use cases)
2. Memory hot-remove
3. Memory power management
4. Runtime hugetlb-GB page allocations

Whatever we do, it _should_ be good enough to largely let us replace
PG_slab with this new bit.
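
To make that concrete, here's a rough sketch of the lookup side that
migration/compaction would need as a counterpart to set_pinned_page()
above. The struct layouts, the PinnedPage() test and the ->migrate
callback are my guesses at names, not anything from the patch:

#include <linux/hashtable.h>
#include <linux/spinlock.h>
#include <linux/mm.h>

/* Assumed shapes, inferred from the quoted code: */
struct pin_page_owner {
	/* asked to move the page's contents somewhere else */
	int (*migrate)(struct page *page, void *private);
};

struct pin_page_info {
	struct hlist_node hlist;
	struct pin_page_owner *owner;
	unsigned long pfn;
	void *private;
};

static DEFINE_SPINLOCK(hash_lock);
static DEFINE_HASHTABLE(pin_page_hash, 8);

/* Find the registered owner of a pinned page, if any. */
static struct pin_page_info *get_pinned_page(struct page *page)
{
	struct pin_page_info *pinfo;
	unsigned long pfn = page_to_pfn(page);

	spin_lock(&hash_lock);
	hash_for_each_possible(pin_page_hash, pinfo, hlist, pfn) {
		if (pinfo->pfn == pfn) {
			/* lifetime of pinfo vs. unpin is handwaved here */
			spin_unlock(&hash_lock);
			return pinfo;
		}
	}
	spin_unlock(&hash_lock);
	return NULL;
}

A compaction or offline scan, instead of giving up on an unmovable page
the way it does for PG_slab today, could then do something like:

	/* PinnedPage() assumed as the test for the SetPinnedPage() bit */
	if (PinnedPage(page)) {
		struct pin_page_info *pinfo = get_pinned_page(page);

		if (pinfo && pinfo->owner->migrate)
			pinfo->owner->migrate(page, pinfo->private);
	}

All four of the use-cases above could register an owner this way instead
of growing their own page-tracking schemes.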

