Subject: Re: [PATCH] mm,page_alloc,cma: conditionally prefer cma pageblocks for movable allocations
From: Vlastimil Babka <vbabka@suse.cz>
On 3/11/20 11:58 PM, Roman Gushchin wrote:
>>
>> I agree it should be in the noise. But please do put it behind a CONFIG_CMA
>> #ifdef; my x86_64 desktop distro kernel doesn't have CONFIG_CMA. Even though
>> this is effectively a no-op there, with __rmqueue_cma_fallback() returning
>> NULL immediately, I think the compiler cannot eliminate the two
>> zone_page_state() calls: they are atomic_long_read(), which here ultimately
>> boils down to READ_ONCE(), i.e. a volatile cast, and AFAIK a volatile access
>> cannot be eliminated. Other architectures might be even more involved.
>
> I agree.
>
> Andrew,
> can you, please, squash the following diff into the patch?

Thanks! Then please add the following to the result:

Acked-by: Vlastimil Babka <vbabka@suse.cz>
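
To expand on the READ_ONCE() point quoted above, here is a minimal
userspace sketch (not kernel code, all names made up) of why the
compiler has to keep a volatile load even when its result is unused:

static long counter;

long read_and_ignore(void)
{
	/* analogous to READ_ONCE(): the cast makes the access volatile */
	long v = *(volatile long *)&counter;

	(void)v;	/* the value is unused, yet the load is still
			 * emitted, because dropping a volatile access
			 * would change observable behavior */
	return 0;
}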

> Thank you!
>
> --
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 7d9067b75dcb..bc65931b3901 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2767,6 +2767,7 @@ __rmqueue(struct zone *zone, unsigned int order, int migratetype,
>  {
>  	struct page *page;
>  
> +#ifdef CONFIG_CMA
>  	/*
>  	 * Balance movable allocations between regular and CMA areas by
>  	 * allocating from CMA when over half of the zone's free memory
> @@ -2779,6 +2780,7 @@ __rmqueue(struct zone *zone, unsigned int order, int migratetype,
>  		if (page)
>  			return page;
>  	}
> +#endif
>  retry:
>  	page = __rmqueue_smallest(zone, order, migratetype);
>  	if (unlikely(!page)) {
>
>
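
For completeness, a self-contained toy model (hypothetical names, not
the kernel source) of the pattern the #ifdef removes: the fallback is a
constant-NULL stub, but without the #ifdef the two guard reads would
still be emitted:

#include <stddef.h>

struct page;

static long nr_free_cma;	/* stands in for NR_FREE_CMA_PAGES */
static long nr_free;		/* stands in for NR_FREE_PAGES */

/* stands in for zone_page_state(): ultimately a volatile load */
static inline long state_read(const long *p)
{
	return *(const volatile long *)p;
}

/* stands in for the CONFIG_CMA=n stub of __rmqueue_cma_fallback() */
static inline struct page *cma_fallback(void)
{
	return NULL;
}

struct page *rmqueue_model(void)
{
#ifdef MODEL_CMA
	/*
	 * Without this #ifdef, both volatile loads below would be
	 * emitted even though cma_fallback() is a constant NULL.
	 */
	if (state_read(&nr_free_cma) > state_read(&nr_free) / 2) {
		struct page *page = cma_fallback();

		if (page)
			return page;
	}
#endif
	return NULL;	/* i.e. fall through to the regular path */
}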
