Subject: Re: [PATCH v2 4/4] hugetlbfs: don't retry when pool page allocations start to fail
From: Vlastimil Babka <>
Date: Tue, 6 Aug 2019 10:03:42 +0200
On 8/6/19 3:47 AM, Mike Kravetz wrote:
> When allocating hugetlbfs pool pages via /proc/sys/vm/nr_hugepages,
> the pages will be interleaved between all nodes of the system. If
> nodes are not equal, it is quite possible for one node to fill up
> before the others. When this happens, the code still attempts to
> allocate pages from the full node. This results in calls to direct
> reclaim and compaction which slow things down considerably.
>
> When allocating pool pages, note the state of the previous allocation
> for each node. If previous allocation failed, do not use the
> aggressive retry algorithm on successive attempts. The allocation
> will still succeed if there is memory available, but it will not try
> as hard to free up memory.
>
> Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
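For illustration, here is a minimal userspace sketch of the per-node retry
suppression described above. This is not the hugetlb code itself; the
node_alloc_failed array, alloc_page_on_node() and the try_hard flag are
made-up names standing in for the real per-node state and allocation path.

#include <stdbool.h>
#include <stdio.h>

#define NR_NODES 4

/* Stand-in for one pool page allocation on a given node. */
static bool alloc_page_on_node(int node, bool try_hard)
{
	/*
	 * Simulate node 2 being full: allocation fails there no matter
	 * how hard we try.  In the real allocator, try_hard would map to
	 * the expensive direct reclaim/compaction retry path.
	 */
	(void)try_hard;
	return node != 2;
}

int main(void)
{
	bool node_alloc_failed[NR_NODES] = { false };	/* per-node state */
	unsigned long wanted = 8, allocated = 0;
	int node = 0;

	while (allocated < wanted) {
		/* Do not retry aggressively on a node that just failed. */
		bool try_hard = !node_alloc_failed[node];

		if (alloc_page_on_node(node, try_hard)) {
			allocated++;
			node_alloc_failed[node] = false;	/* node recovered */
		} else {
			node_alloc_failed[node] = true;
			printf("node %d failed, suppressing retries\n", node);
		}

		node = (node + 1) % NR_NODES;	/* interleave across nodes */
	}
	printf("allocated %lu pages\n", allocated);
	return 0;
}

The gist matches the commit message: the allocation loop notes, per node,
whether the previous attempt failed, and a node that just failed is not
hammered with reclaim and compaction again on the very next pass, while
allocations still succeed wherever memory is actually available.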
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Thanks.