Subject: Re: [PATCH 3/5] mm: hugetlb: free the 2nd vmemmap page associated with each HugeTLB page
From: Mike Kravetz
On 7/26/21 2:16 PM, Matthew Wilcox wrote:
> On Wed, Jul 14, 2021 at 05:17:58PM +0800, Muchun Song wrote:
>> +static __always_inline struct page *page_head_if_fake(const struct page *page)
>> +{
>> + if (!hugetlb_free_vmemmap_enabled)
>> + return NULL;
>> +
>> + /*
>> + * Only addresses aligned with PAGE_SIZE of struct page may be a fake
>> + * head struct page. The alignment check avoids accessing the fields
>> + * (e.g. compound_head) of @page[1], so it can avoid touching a
>> + * (possibly) cold cacheline in some cases.
>> + */
>> + if (IS_ALIGNED((unsigned long)page, PAGE_SIZE) &&
>> + test_bit(PG_head, &page->flags)) {
>> + unsigned long head = READ_ONCE(page[1].compound_head);
>> +
>> + if (likely(head & 1))
>> + return (struct page *)(head - 1);
>> + }
>> +
>> + return NULL;
>> +}
>
> Why return 'NULL' instead of 'page'?
>
> This is going to significantly increase the cost of calling
> compound_page() (by whichever spelling it has). That will make
> the folio patchset more compelling ;-)
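
(For context, here is a rough sketch of how a caller such as compound_head()
would have to consume the helper as written above. This is an illustration
only, not code from the series; the name sketch_compound_head is made up.)

	static __always_inline struct page *sketch_compound_head(struct page *page)
	{
		/* Fake-head check must come first, before trusting compound_head. */
		struct page *head = page_head_if_fake(page);
		unsigned long tail;

		if (head)		/* @page is a (real or fake) head page */
			return head;

		tail = READ_ONCE(page->compound_head);
		if (tail & 1)		/* ordinary tail page */
			return (struct page *)(tail - 1);

		return page;		/* @page is not a tail page */
	}

Returning @page instead of NULL for the "not a fake head" case would let a
caller use the result unconditionally and drop the extra branch.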

Matthew, do you have any suggestions for benchmarks/workloads to measure the
increased overhead? I suspect you have some ideas based on the folio work.

My concern is that we are introducing overhead for code paths not
associated with this feature. The next patch even tries to minimize
this overhead a bit if hugetlb_free_vmemmap_enabled is not set.
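
For reference, one common way to keep such a check essentially free on paths
that never use the feature is a static key, so the test compiles down to a
patched-out jump when the feature is off at boot. A minimal sketch follows;
the key name is hypothetical and this is not necessarily what the next patch
does:

	/* Hypothetical key name, for illustration only. */
	DEFINE_STATIC_KEY_FALSE(hugetlb_free_vmemmap_key);

	static __always_inline struct page *page_head_if_fake(const struct page *page)
	{
		unsigned long head;

		/* Out-of-line branch; a NOP at the call site when the key stays false. */
		if (!static_branch_unlikely(&hugetlb_free_vmemmap_key))
			return NULL;

		if (!IS_ALIGNED((unsigned long)page, PAGE_SIZE) ||
		    !test_bit(PG_head, &page->flags))
			return NULL;

		head = READ_ONCE(page[1].compound_head);
		if (likely(head & 1))
			return (struct page *)(head - 1);

		return NULL;
	}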
--
Mike Kravetz
