Subject: Re: [RFC PATCH 00/24] mm/hugetlb: Free some vmemmap pages of hugetlb page
From: Mike Kravetz
Date: 2020-10-07
On 9/29/20 2:58 PM, Mike Kravetz wrote:
> On 9/15/20 5:59 AM, Muchun Song wrote:
>> Hi all,
>>
>> This patch series frees some vmemmap pages (struct page structures)
>> associated with each hugetlb page when it is preallocated, to save memory.
> ...
>> The mapping of the first page (index 0) and the second page (index 1) is
>> unchanged. The remaining 6 pages are all mapped to the same page (index
>> 1). So we only need 2 pages for the vmemmap area and can free 6 pages to
>> the buddy system to save memory. Why can we do this? Because the contents
>> of the 7 pages after the first are usually identical.
>>
>> When a hugetlb page is freed to the buddy system, we must allocate 6
>> pages for the vmemmap and restore the previous mapping relationship.
>>
>> If we use 1G hugetlb pages, we can save 4095 pages each. This is a very
>> substantial gain. On our server, we run some SPDK applications which
>> use 300GB of hugetlb pages. With this feature enabled, we can save
>> 4797MB of memory.
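
For anyone checking the arithmetic, here is a small standalone C program
that reproduces the back-of-envelope numbers. It assumes x86-64 with 4KB
base pages and a 64 byte struct page; the exact page counts kept by the
series differ slightly from this estimate.

#include <stdio.h>

#define PAGE_SIZE      4096UL
#define STRUCT_PAGE_SZ 64UL   /* sizeof(struct page) on typical configs */

static void vmemmap_usage(const char *name, unsigned long hpage_size)
{
        unsigned long nr_struct_pages = hpage_size / PAGE_SIZE;
        unsigned long vmemmap_pages =
                nr_struct_pages * STRUCT_PAGE_SZ / PAGE_SIZE;

        /* The series keeps the first two vmemmap pages mapped and remaps
         * the rest to the second page, freeing them to the buddy system. */
        printf("%s: %lu vmemmap pages, %lu of them freeable\n",
               name, vmemmap_pages, vmemmap_pages - 2);
}

int main(void)
{
        vmemmap_usage("2MB hugetlb page", 2UL << 20);
        vmemmap_usage("1GB hugetlb page", 1UL << 30);
        return 0;
}

For 2MB pages this prints 8 vmemmap pages with 6 freeable, matching the
description above; for 1GB pages it prints 4096 with 4094 freeable, in the
same ballpark as the numbers quoted (the exact savings depend on
implementation details).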

I had a hard time going through the patch series as it is currently
structured, so I instead examined all the code together. Muchun has put
in much effort, and the code does reduce memory usage:
- For 2MB hugetlb pages, we save 5 pages of struct pages
- For 1GB hugetlb pages, we save 4086 pages of struct pages

Code is even in place to handle poisoned pages, although I have not looked
at this closely. The code survives the libhugetlbfs and ltp huge page tests.

To date, nobody has asked the important question "Is the added complexity
worth the memory savings?". I suppose it all depends on one's use case.
Obviously, the savings are more significant when one uses 1G huge pages, but
that may not be the common case today.

> At a high level this seems like a reasonable optimization for hugetlb
> pages. It is possible because hugetlb pages are 'special' and mostly
> handled differently than pages in normal mm paths.

Such an optimization only makes sense for something like hugetlb pages. One
reason is the 'special' nature of hugetlbfs as stated above. The other is
that this optimization mostly makes sense for huge pages that are created
once and stick around for a long time; hugetlb pool pages are a perfect
example. This is because the struct page mappings are manipulated only when
a huge page is created or destroyed, so long-lived pages amortize that cost.
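
To make that cost model concrete, here is a toy, compilable sketch
(userspace stubs; the function names are hypothetical, not the series'
actual ones). The point is simply that the expensive remap/restore work
runs only when a page enters or leaves the pool, never on hot paths:

#include <stdio.h>

struct hpage { int vmemmap_optimized; };

/* Done once, when a huge page is created / added to the pool. */
static void vmemmap_remap_and_free(struct hpage *p)
{
        p->vmemmap_optimized = 1;
        puts("remapped tail vmemmap, freed pages to buddy");
}

/* Done once, when a huge page is destroyed / removed from the pool. */
static void vmemmap_restore(struct hpage *p)
{
        p->vmemmap_optimized = 0;
        puts("reallocated pages, restored tail vmemmap");
}

/* Hot path: mapping/faulting the huge page never touches vmemmap. */
static void fault_in(struct hpage *p) { (void)p; }

int main(void)
{
        struct hpage hp = { 0 };
        int i;

        vmemmap_remap_and_free(&hp);    /* pool grow: pay the cost once */
        for (i = 0; i < 1000; i++)
                fault_in(&hp);          /* long-lived use: no extra cost */
        vmemmap_restore(&hp);           /* pool shrink: pay once more */
        return 0;
}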

> The majority of the new code is hugetlb specific, so it should not be
> of too much concern for the general mm code paths.

It is true that much of the code in this series was put in hugetlb.c. However,
I would argue that there is a bunch of code that only deals with remapping
the memmap, which should be made more generic and added to sparse-vmemmap.c.
This would at least allow for easier reuse.
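
To be clear about what I mean, something along these lines (the signatures
are hypothetical, just sketching the split rather than proposing a final
API): a generic remapping engine in sparse-vmemmap.c that hugetlb, and
possibly other users later, could call.

/*
 * Remap the vmemmap virtual range [start, end) so that every page in it
 * is backed by the page that currently backs 'reuse', then free the
 * now-unused pages to the buddy allocator.
 */
void vmemmap_remap_free(unsigned long start, unsigned long end,
                        unsigned long reuse);

/*
 * Allocate fresh pages for [start, end) and restore the one-to-one
 * mapping torn down by vmemmap_remap_free().
 */
int vmemmap_remap_alloc(unsigned long start, unsigned long end,
                        unsigned long reuse);

The hugetlb-specific policy (which pages to keep, when to trigger the
remap) would stay in hugetlb.c.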

Before Muchun and I put more effort into this series, I would really
like to get feedback on whether or not this should move forward.
Specifically, is the memory savings worth the added complexity? Is the
removal of struct pages going to come back and cause issues for future
features?
--
Mike Kravetz
