Subject: Re: [PATCH v2 4/4] mm: make unreserve highatomic functions reliable
From: Vlastimil Babka <vbabka@suse.cz>
Date: 2016-10-12
On 10/12/2016 07:33 AM, Minchan Kim wrote:
> Currently, unreserve_highatomic_pageblock bails out as soon as it
> finds a highatomic pageblock, regardless of whether any free pages
> were actually moved out of it, which can defeat the goal of the
> unreserve logic: saving a process from OOM.
>
> This patch makes the unreserve function bail out only once it has
> moved some pages to a !highatomic free list, to avoid such false
> positives.
>
> Another potential problem is that, due to a race between page freeing
> and the reserve highatomic function, pages can sit on the highatomic
> free list even though the pageblock's migratetype is !highatomic. In
> that case, unreserve_highatomic_pageblock can end up doing nothing if
> the highatomic reserve count is less than pageblock_nr_pages. We can
> solve this simply by draining all of the reserved pages before the
> OOM; this acts as a safeguard that exhausts the reserved pages before
> converging to OOM.
>
> Signed-off-by: Michal Hocko <mhocko@suse.com>

Ah, I think the first S-o-b has to match "From:" to form a valid chain
(also for 3/4).
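
For reference, the usual convention (as I read the SubmittingPatches
document; the exact ordering below is only illustrative for this patch)
is that the author's sign-off comes first and matches the From: header,
with everyone who handled the patch appended after it:

    From: Minchan Kim <minchan@kernel.org>
    ...
    Signed-off-by: Minchan Kim <minchan@kernel.org>
    Signed-off-by: Michal Hocko <mhocko@suse.com>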

> Signed-off-by: Minchan Kim <minchan@kernel.org>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

> ---
>  mm/page_alloc.c | 24 +++++++++++++++++-------
>  1 file changed, 17 insertions(+), 7 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index a7472426663f..565589eae6a2 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2079,8 +2079,12 @@ static void reserve_highatomic_pageblock(struct page *page, struct zone *zone,
>   * potentially hurts the reliability of high-order allocations when under
>   * intense memory pressure but failed atomic allocations should be easier
>   * to recover from than an OOM.
> + *
> + * If @drain is true, try to move all of reserved pages out of highatomic
> + * free list.
>   */
> -static bool unreserve_highatomic_pageblock(const struct alloc_context *ac)
> +static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
> +						bool drain)
>  {
>  	struct zonelist *zonelist = ac->zonelist;
>  	unsigned long flags;
> @@ -2092,8 +2096,12 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac)
>
>  	for_each_zone_zonelist_nodemask(zone, z, zonelist, ac->high_zoneidx,
>  							ac->nodemask) {
> -		/* Preserve at least one pageblock */
> -		if (zone->nr_reserved_highatomic <= pageblock_nr_pages)
> +		/*
> +		 * Preserve at least one pageblock unless memory pressure
> +		 * is really high.
> +		 */
> +		if (!drain && zone->nr_reserved_highatomic <=
> +					pageblock_nr_pages)
>  			continue;
>
>  		spin_lock_irqsave(&zone->lock, flags);
> @@ -2138,8 +2146,10 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac)
>  			 */
>  			set_pageblock_migratetype(page, ac->migratetype);
>  			ret = move_freepages_block(zone, page, ac->migratetype);
> -			spin_unlock_irqrestore(&zone->lock, flags);
> -			return ret;
> +			if (!drain && ret) {
> +				spin_unlock_irqrestore(&zone->lock, flags);
> +				return ret;
> +			}
>  		}
>  		spin_unlock_irqrestore(&zone->lock, flags);
>  	}
> @@ -3343,7 +3353,7 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
>  	 * Shrink them them and try again
>  	 */
>  	if (!page && !drained) {
> -		unreserve_highatomic_pageblock(ac);
> +		unreserve_highatomic_pageblock(ac, false);
>  		drain_all_pages(NULL);
>  		drained = true;
>  		goto retry;
> @@ -3462,7 +3472,7 @@ should_reclaim_retry(gfp_t gfp_mask, unsigned order,
>  	 */
>  	if (*no_progress_loops > MAX_RECLAIM_RETRIES) {
>  		/* Before OOM, exhaust highatomic_reserve */
> -		if (unreserve_highatomic_pageblock(ac))
> +		if (unreserve_highatomic_pageblock(ac, true))
>  			return true;
>  		return false;
>  	}
>
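
To make the control-flow change concrete, here is a standalone
userspace sketch (not kernel code; toy_unreserve, toy_block and friends
are made-up names, and having the drain pass report whether anything
moved is my reading of the intent): the old code returned after the
first pageblock it looked at, while the patched loop returns early only
when pages were really moved, and with drain == true it walks every
reserved block before reporting back:

#include <stdbool.h>
#include <stdio.h>

struct toy_block {
	int movable_pages;	/* free pages that could be moved away */
};

/* Stand-in for move_freepages_block(): returns how many pages moved. */
static int toy_move_freepages(struct toy_block *b)
{
	int moved = b->movable_pages;

	b->movable_pages = 0;
	return moved;
}

static bool toy_unreserve(struct toy_block *blocks, int nr, bool drain)
{
	bool any_moved = false;
	int i;

	for (i = 0; i < nr; i++) {
		int moved = toy_move_freepages(&blocks[i]);

		if (moved)
			any_moved = true;
		/*
		 * The old code returned here unconditionally, even when
		 * nothing moved (the false positive). The patched loop
		 * returns early only on real progress, and never while
		 * draining everything ahead of OOM.
		 */
		if (!drain && moved)
			return true;
	}
	return any_moved;
}

int main(void)
{
	struct toy_block blocks[] = { { 0 }, { 0 }, { 32 } };

	/* Normal path: stop at the first block that really yields pages. */
	printf("partial: %d\n", toy_unreserve(blocks, 3, false));

	blocks[2].movable_pages = 32;
	/* Pre-OOM path: drain every reserved block. */
	printf("drain: %d\n", toy_unreserve(blocks, 3, true));
	return 0;
}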
