Subject: Re: [PATCH] mm: delete duplicate order checking, when stealing whole pageblock
On Fri, 11 Jun 2021 14:38:34 +0800 chengkaitao <pilgrimtao@gmail.com> wrote:

> From: chengkaitao <pilgrimtao@gmail.com>
>
> 1. The (order >= pageblock_order / 2) check below already covers this
> case, so the (order >= pageblock_order) check is not needed.
> 2. Mark can_steal_fallback() inline.
>
> ...
>
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2619,18 +2619,8 @@ static void change_pageblock_range(struct page *pageblock_page,
> * is worse than movable allocations stealing from unmovable and reclaimable
> * pageblocks.
> */
> -static bool can_steal_fallback(unsigned int order, int start_mt)
> +static inline bool can_steal_fallback(unsigned int order, int start_mt)
> {
> - /*
> - * Leaving this order check is intended, although there is
> - * relaxed order check in next check. The reason is that
> - * we can actually steal whole pageblock if this condition met,
> - * but, below check doesn't guarantee it and that is just heuristic
> - * so could be changed anytime.
> - */
> - if (order >= pageblock_order)
> - return true;
> -
> if (order >= pageblock_order / 2 ||
> start_mt == MIGRATE_RECLAIMABLE ||
> start_mt == MIGRATE_UNMOVABLE ||

Well, that redundant check was put there deliberately, as the comment
explains.

The reasoning is perhaps a little dubious, but it seems that the
compiler has optimized away the redundant check anyway (your patch
doesn't alter code size).
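For the record, here is a quick user-space sketch (not the kernel function:
pageblock_order is hard-coded to an assumed 9, page_group_by_mobility_disabled
is dropped and only a simplified migratetype enum is used) which exercises both
versions and shows they are logically equivalent under today's heuristic; the
point of the explicit check is precisely that the heuristic below may change,
as the deleted comment says.

	/*
	 * User-space sketch only; constants and names are illustrative
	 * assumptions, not the real mm/page_alloc.c definitions.
	 */
	#include <assert.h>
	#include <stdbool.h>
	#include <stdio.h>

	#define PAGEBLOCK_ORDER 9	/* assumed typical value */

	enum { MIGRATE_UNMOVABLE, MIGRATE_MOVABLE, MIGRATE_RECLAIMABLE };

	/* Current logic: explicit whole-pageblock check first. */
	static bool can_steal_current(unsigned int order, int start_mt)
	{
		if (order >= PAGEBLOCK_ORDER)
			return true;

		if (order >= PAGEBLOCK_ORDER / 2 ||
		    start_mt == MIGRATE_RECLAIMABLE ||
		    start_mt == MIGRATE_UNMOVABLE)
			return true;

		return false;
	}

	/* Patched logic: relies on the heuristic check alone. */
	static bool can_steal_patched(unsigned int order, int start_mt)
	{
		if (order >= PAGEBLOCK_ORDER / 2 ||
		    start_mt == MIGRATE_RECLAIMABLE ||
		    start_mt == MIGRATE_UNMOVABLE)
			return true;

		return false;
	}

	int main(void)
	{
		/*
		 * order >= pageblock_order implies order >= pageblock_order / 2,
		 * so the two versions agree for every input today.
		 */
		for (unsigned int order = 0; order <= PAGEBLOCK_ORDER + 2; order++)
			for (int mt = MIGRATE_UNMOVABLE; mt <= MIGRATE_RECLAIMABLE; mt++)
				assert(can_steal_current(order, mt) ==
				       can_steal_patched(order, mt));

		printf("identical for all tested inputs\n");
		return 0;
	}

None of this changes the generated kernel code, of course; comparing the size
of mm/page_alloc.o before and after should show the compiler already folds the
two checks together.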
