Subject: Re: [External] Re: [PATCH v3 09/21] mm/hugetlb: Free the vmemmap pages associated with each hugetlb page
On Tue, Nov 10, 2020 at 06:47:08PM +0800, Muchun Song wrote:
> > That only refers to gigantic pages, right?
>
> Yeah, now it only refers to gigantic pages. Originally, I also wanted to merge
> vmemmap PTEs into a PMD for normal 2MB HugeTLB pages. That is why I introduced
> those macros (e.g. freed_vmemmap_hpage). For 2MB HugeTLB pages, I
> haven't found an elegant solution yet. Hopefully, once you or someone else has
> read the whole patch series, we can come up with an elegant way to
> merge the PTEs.

Well, it is quite a lot of "tricky" code, so it takes some time.

> > > > > +static void free_huge_page_vmemmap(struct hstate *h, struct page *head)
> > > > > +{
> > > > > +	pmd_t *pmd;
> > > > > +	spinlock_t *ptl;
> > > > > +	LIST_HEAD(free_pages);
> > > > > +
> > > > > +	if (!free_vmemmap_pages_per_hpage(h))
> > > > > +		return;
> > > > > +
> > > > > +	pmd = vmemmap_to_pmd(head);
> > > > > +	ptl = vmemmap_pmd_lock(pmd);

I forgot about this one.
You might want to check whether vmemmap_to_pmd() returns NULL.
If it does, something has already gone wrong, but we should still handle
that case (and print a fat warning or something like that).
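Something along these lines, as a rough sketch only (the helpers
vmemmap_to_pmd(), vmemmap_pmd_lock() and free_vmemmap_pages_per_hpage()
are the ones from your series; the WARN_ON_ONCE placement and early
return are just an illustration of what I mean, not a required form):

static void free_huge_page_vmemmap(struct hstate *h, struct page *head)
{
	pmd_t *pmd;
	spinlock_t *ptl;
	LIST_HEAD(free_pages);

	if (!free_vmemmap_pages_per_hpage(h))
		return;

	pmd = vmemmap_to_pmd(head);
	/*
	 * A NULL pmd means the vmemmap of this HugeTLB page is not
	 * PMD-mapped as expected. Warn loudly and bail out instead of
	 * dereferencing it below.
	 */
	if (WARN_ON_ONCE(!pmd))
		return;

	ptl = vmemmap_pmd_lock(pmd);
	/* ... rest of the function as in the patch ... */
}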


--
Oscar Salvador
SUSE L3
