Subject: Re: [PATCH] mm: page_alloc: unreserve highatomic page blocks before oom


On 11/1/2023 12:16 PM, Pavan Kondeti wrote:
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index 2a2536d..41441ced 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -1886,7 +1886,9 @@ static void reserve_highatomic_pageblock(struct page *page, struct zone *zone)
>>  	 * Limit the number reserved to 1 pageblock or roughly 1% of a zone.
>>  	 * Check is race-prone but harmless.
>>  	 */
>> -	max_managed = (zone_managed_pages(zone) / 100) + pageblock_nr_pages;
>> +	max_managed = max_t(unsigned long,
>> +			    ALIGN(zone_managed_pages(zone) / 100, pageblock_nr_pages),
>> +			    pageblock_nr_pages);
>>  	if (zone->nr_reserved_highatomic >= max_managed)
>>  		return;
>>
> ALIGN() rounds up the value, so max_t() is not needed here. If you had
> used ALIGN_DOWN(), then max_t() could be used to keep at least
> pageblock_nr_pages pages.
>
>
Yeah, just ALIGN() is enough here.
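
For anyone reading along, a quick standalone illustration of the
difference (a userspace sketch with simplified power-of-two macros and
made-up numbers, not the kernel definitions themselves):

#include <stdio.h>

/*
 * Sketch only: simplified versions of the kernel's ALIGN()/ALIGN_DOWN()
 * for a power-of-two alignment, to show why ALIGN() alone is enough here
 * while ALIGN_DOWN() would need the max_t() clamp.
 */
#define ALIGN(x, a)       (((x) + ((a) - 1)) & ~((a) - 1))
#define ALIGN_DOWN(x, a)  ((x) & ~((a) - 1))

int main(void)
{
	/* Hypothetical numbers: 2MB pageblocks with 4K pages, small zone */
	unsigned long pageblock_nr_pages = 512;
	unsigned long one_percent = 300;  /* stand-in for zone_managed_pages(zone) / 100 */

	/* Rounds up to 512: already >= pageblock_nr_pages for any non-zero input */
	printf("ALIGN:      %lu\n", ALIGN(one_percent, pageblock_nr_pages));
	/* Rounds down to 0: would need max_t(..., pageblock_nr_pages) to keep one pageblock */
	printf("ALIGN_DOWN: %lu\n", ALIGN_DOWN(one_percent, pageblock_nr_pages));
	return 0;
}
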
>
> Also, add below Fixes tag if it makes sense.
>
> Fixes: 04c8716f7b00 ("mm: try to exhaust highatomic reserve before the OOM")
I should be adding this.
