Subject: Re: [PATCH v7 06/15] mm/hugetlb: Disable freeing vmemmap if struct page size is not power of two
From: David Hildenbrand
Date: 2020-12-09

On 30.11.20 16:18, Muchun Song wrote:
> We can only free the tail vmemmap pages of HugeTLB to the buddy allocator
> when the size of struct page is a power of two.
>
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> ---
> mm/hugetlb_vmemmap.c | 5 +++++
> 1 file changed, 5 insertions(+)
>
> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> index 51152e258f39..ad8fc61ea273 100644
> --- a/mm/hugetlb_vmemmap.c
> +++ b/mm/hugetlb_vmemmap.c
> @@ -111,6 +111,11 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
>  	unsigned int nr_pages = pages_per_huge_page(h);
>  	unsigned int vmemmap_pages;
> 
> +	if (!is_power_of_2(sizeof(struct page))) {
> +		pr_info("disable freeing vmemmap pages for %s\n", h->name);

I'd just drop that pr_info(). Users are able to observe that the
optimization is working (below), so they are equally able to identify when
it's not working.

> +		return;
> +	}
> +
>  	vmemmap_pages = (nr_pages * sizeof(struct page)) >> PAGE_SHIFT;
>  	/*
>  	 * The head page and the first tail page are not to be freed to buddy
>

Please squash this patch into the enabling patch and add a comment
instead, like

/* We cannot optimize if a "struct page" crosses page boundaries. */
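e.g., in hugetlb_vmemmap_init(), something along these lines (just a
sketch of what I mean, not a final patch):

	/*
	 * We cannot optimize if a "struct page" crosses page boundaries.
	 * E.g., with 4 KiB base pages and sizeof(struct page) == 64,
	 * exactly 64 struct pages fit into each vmemmap page, so whole
	 * tail vmemmap pages can be remapped and freed; with a size that
	 * is not a power of two, some struct pages would straddle a page
	 * boundary and the optimization cannot be applied.
	 */
	if (!is_power_of_2(sizeof(struct page)))
		return;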

--
Thanks,

David / dhildenb
