Subject: [PATCH 0/3] vmalloc() vs bulk allocator v2
Hi.

There are three patches in this small series; this is the second iteration.
The first one was buggy and led to a kernel panic because NUMA_NO_NODE was
passed as "nid" to the bulk allocator (which expects a valid node ID).

Therefore patch [1] adds an extra helper that resolves a correct NUMA node ID
when one is not specified, i.e. when the helper is invoked with NUMA_NO_NODE.
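
Patch [1] itself is not reproduced here, but a minimal sketch of such a helper,
assuming the existing __alloc_pages_bulk() entry point and the numa_mem_id()
fallback already used by alloc_pages_node(), could look like this:

static inline unsigned long
alloc_pages_bulk_array_node(gfp_t gfp, int nid, unsigned long nr_pages,
			    struct page **page_array)
{
	/* Fall back to the closest memory node when none is given. */
	if (nid == NUMA_NO_NODE)
		nid = numa_mem_id();

	return __alloc_pages_bulk(gfp, nid, NULL, nr_pages, NULL, page_array);
}

This mirrors what alloc_pages_node() already does for single pages, so callers
such as vmalloc() can keep passing NUMA_NO_NODE unchanged.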

Patch [2] has been slightly updated; the change-log is below.

V1 -> V2:
- Switch to the alloc_pages_bulk_array_node() helper so that NUMA_NO_NODE
is correctly handled (similar to the alloc_pages_node() API function).
- Use a "while()" loop instead of "for()" for high-order pages and increase
the number of allocated pages only after the allocation actually succeeds
(see the sketch right after this list).
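
The high-order path is roughly the following. This is only an illustration:
the function name __vmalloc_high_order_pages() is hypothetical, and in the
patch the loop lives inside __vmalloc_area_node():

/*
 * Illustration only: fill "pages" with "nr_pages" page pointers using
 * blocks of 2^order pages allocated from node "nid". "nr_pages" is
 * assumed to be a multiple of (1U << order).
 */
static unsigned int
__vmalloc_high_order_pages(int nid, gfp_t gfp, unsigned int order,
			   unsigned int nr_pages, struct page **pages)
{
	unsigned int nr_allocated = 0;

	while (nr_allocated < nr_pages) {
		struct page *page;
		unsigned int i;

		page = alloc_pages_node(nid, gfp, order);
		if (unlikely(!page))
			break;

		/* Expose the block as individual order-0 page pointers. */
		for (i = 0; i < (1U << order); i++)
			pages[nr_allocated + i] = page + i;

		/* Count the pages only after the allocation succeeded. */
		nr_allocated += 1U << order;
	}

	return nr_allocated;
}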

Uladzislau Rezki (Sony) (3):
[1] mm/page_alloc: Add an alloc_pages_bulk_array_node() helper
[2] mm/vmalloc: Switch to bulk allocator in __vmalloc_area_node()
[3] mm/vmalloc: Print a warning message first on failure

include/linux/gfp.h | 9 ++++++
mm/vmalloc.c | 78 +++++++++++++++++++++++++--------------------
2 files changed, 52 insertions(+), 35 deletions(-)

--
2.20.1
