Date: Tue, 17 Nov 2020
From: Oscar Salvador
Subject: Re: [PATCH v4 05/21] mm/hugetlb: Introduce pgtable allocation/freeing helpers
On Fri, Nov 13, 2020 at 06:59:36PM +0800, Muchun Song wrote:
> +#define page_huge_pte(page) ((page)->pmd_huge_pte)

Seems you do not need this one anymore.

> +void vmemmap_pgtable_free(struct page *page)
> +{
> +	struct page *pte_page, *t_page;
> +
> +	list_for_each_entry_safe(pte_page, t_page, &page->lru, lru) {
> +		list_del(&pte_page->lru);
> +		pte_free_kernel(&init_mm, page_to_virt(pte_page));
> +	}
> +}
> +
> +int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page)
> +{
> +	unsigned int nr = pgtable_pages_to_prealloc_per_hpage(h);
> +
> +	/* Store preallocated pages on huge page lru list */
> +	INIT_LIST_HEAD(&page->lru);
> +
> +	while (nr--) {
> +		pte_t *pte_p;
> +
> +		pte_p = pte_alloc_one_kernel(&init_mm);
> +		if (!pte_p)
> +			goto out;
> +		list_add(&virt_to_page(pte_p)->lru, &page->lru);
> +	}

Definitely this looks better and easier to handle.
Btw, did you explore Matthew's hint about using one of the pages you are
going to free to store the ptes, instead of allocating a new page?
I am not sure whether it is feasible at all, though.
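As a rough sketch of what that could look like (assuming a hypothetical
take_to_be_freed_vmemmap_page() helper that hands back one page from the
vmemmap range about to be freed for this huge page; how that range gets
tracked would be up to the series):

	/*
	 * Sketch only, not the actual patch: reuse a soon-to-be-freed
	 * vmemmap page as a PTE page table instead of allocating one.
	 * take_to_be_freed_vmemmap_page() is a made-up placeholder.
	 */
	static int vmemmap_pgtable_prealloc_reuse(struct hstate *h,
						  struct page *page)
	{
		unsigned int nr = pgtable_pages_to_prealloc_per_hpage(h);

		/* Store the reused pages on the huge page lru list */
		INIT_LIST_HEAD(&page->lru);

		while (nr--) {
			struct page *pte_page;

			pte_page = take_to_be_freed_vmemmap_page(page);
			if (!pte_page)
				return -ENOMEM;
			/*
			 * The page is still mapped in the vmemmap at
			 * this point, so it could only be handed over
			 * once nothing reads from it anymore.
			 */
			list_add(&pte_page->lru, &page->lru);
		}
		return 0;
	}

The tricky part, I guess, is that those pages are still in use until the
remapping is done, so they could not simply be repurposed up front.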


> --- a/mm/hugetlb_vmemmap.h
> +++ b/mm/hugetlb_vmemmap.h
> @@ -9,12 +9,24 @@
>  #ifndef _LINUX_HUGETLB_VMEMMAP_H
>  #define _LINUX_HUGETLB_VMEMMAP_H
>  #include <linux/hugetlb.h>
> +#include <linux/mm.h>

Why do we need this here?

--
Oscar Salvador
SUSE L3
