From: Muchun Song
Date: Tue, 27 Jul 2021
Subject: Re: [PATCH 3/5] mm: hugetlb: free the 2nd vmemmap page associated with each HugeTLB page

On Tue, Jul 27, 2021 at 5:17 AM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Wed, Jul 14, 2021 at 05:17:58PM +0800, Muchun Song wrote:
> > +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
> > +extern bool hugetlb_free_vmemmap_enabled;
> > +
> > +/*
> > + * If the feature of freeing some vmemmap pages associated with each HugeTLB
> > + * page is enabled, the head vmemmap page frame is reused and all of the tail
> > + * vmemmap addresses map to the head vmemmap page frame (for further details,
> > + * see the figure at the head of mm/hugetlb_vmemmap.c). In other words, there
> > + * is more than one page struct with PG_head associated with each HugeTLB
> > + * page. We __know__ that there is only one real head page struct; the tail
> > + * page structs with PG_head are fake head page structs. We need an approach
> > + * to distinguish between those two different types of page structs so that
> > + * compound_head() can return the real head page struct when the parameter is
> > + * a tail page struct with PG_head set. This is what page_head_if_fake()
> > + * does.
> > + *
> > + * page_head_if_fake() returns the real head page struct when @page may be a
> > + * fake head page struct; otherwise, it returns NULL, meaning that @page
> > + * cannot be a fake head page struct. The following pseudocode describes how
> > + * to distinguish between real and fake head page structs.
> > + *
> > + * if (test_bit(PG_head, &page->flags)) {
> > + *         unsigned long head = READ_ONCE(page[1].compound_head);
> > + *
> > + *         if (head & 1) {
> > + *                 if (head == (unsigned long)page + 1)
> > + *                         ==> head page struct
> > + *                 else
> > + *                         ==> tail page struct
> > + *         } else
> > + *                 ==> head page struct
> > + * } else
> > + *         ==> cannot be fake head page struct
>
> I'm not sure we need the pseudocode when the code is right there ...

Maybe it is redundant. I'll remove this in the next version.

>
> > + * We can safely access the fields of @page[1] when @page has PG_head set,
> > + * because that means @page belongs to a compound page composed of at least
> > + * two contiguous pages.
> > + */
> > +static __always_inline struct page *page_head_if_fake(const struct page *page)
> > +{
> > +        if (!hugetlb_free_vmemmap_enabled)
> > +                return NULL;
> > +
> > +        /*
> > +         * Only struct pages whose addresses are aligned to PAGE_SIZE may be
> > +         * fake head struct pages. The alignment check aims to avoid
> > +         * accessing the fields (e.g. compound_head) of @page[1], which can
> > +         * avoid touching a (possibly) cold cacheline in some cases.
> > +         */
> > +        if (IS_ALIGNED((unsigned long)page, PAGE_SIZE) &&
> > +            test_bit(PG_head, &page->flags)) {
> > +                unsigned long head = READ_ONCE(page[1].compound_head);
> > +
> > +                if (likely(head & 1))
> > +                        return (struct page *)(head - 1);
> > +        }
> > +
> > +        return NULL;
> > +}
>
> Why return 'NULL' instead of 'page'?

Returning @page is also fine. Will do in the next version.
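
Something along these lines is what I have in mind for the next version
(completely untested sketch; the helper keeps its current name here, and the
way compound_head() picks it up is only my assumption):

static __always_inline struct page *page_head_if_fake(const struct page *page)
{
        if (!hugetlb_free_vmemmap_enabled)
                return (struct page *)page;

        /*
         * Only struct pages whose addresses are aligned to PAGE_SIZE may be
         * fake head struct pages. The alignment check avoids touching the
         * (possibly cold) cacheline of @page[1] in the common case.
         */
        if (IS_ALIGNED((unsigned long)page, PAGE_SIZE) &&
            test_bit(PG_head, &page->flags)) {
                unsigned long head = READ_ONCE(page[1].compound_head);

                if (likely(head & 1))
                        return (struct page *)(head - 1);
        }

        return (struct page *)page;
}

static inline struct page *compound_head(struct page *page)
{
        unsigned long head = READ_ONCE(page->compound_head);

        if (unlikely(head & 1))
                return (struct page *)(head - 1);
        /* Not a tail page: @page may still be a fake head, resolve it. */
        return page_head_if_fake(page);
}

With the helper returning @page instead of NULL, compound_head() can simply
return its result on the non-tail path.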

>
> This is going to significantly increase the cost of calling
> compound_page() (by whichever spelling it has). That will make
> the folio patchset more compelling ;-)

As Mike mentioned, do you have any recommended benchmark?
(I suspect you have a lot of experience in this area.)
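
In case it helps to be concrete, the naive thing I could run locally is a
throwaway module along these lines (rough, untested sketch; all names are
made up), timing compound_head() over the struct pages of one 2MB compound
page before and after the series:

#include <linux/module.h>
#include <linux/mm.h>
#include <linux/gfp.h>
#include <linux/ktime.h>

static int __init ch_bench_init(void)
{
        /* One 2MB (order-9 on x86-64) compound page as a stand-in for a HugeTLB page. */
        struct page *head = alloc_pages(GFP_KERNEL | __GFP_COMP, 9);
        unsigned long i, iters = 10 * 1000 * 1000;
        ktime_t start, end;

        if (!head)
                return -ENOMEM;

        start = ktime_get();
        for (i = 0; i < iters; i++) {
                /* Mix head and tail struct pages so both paths are timed. */
                struct page *p = compound_head(head + (i & 511));

                /* Keep the compiler from optimising the call away. */
                barrier_data(p);
        }
        end = ktime_get();

        pr_info("compound_head: %lld ns for %lu calls\n",
                ktime_to_ns(ktime_sub(end, start)), iters);

        __free_pages(head, 9);
        return 0;
}

static void __exit ch_bench_exit(void)
{
}

module_init(ch_bench_init);
module_exit(ch_bench_exit);
MODULE_LICENSE("GPL");

That only exercises a tight loop on one page, though, so a pointer to a more
realistic workload would be appreciated.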

Thanks.
