Subject: Re: [PATCH 21/46] hugetlb: use struct hugetlb_pte for walk_hugetlb_range
James,

On Thu, Jan 05, 2023 at 10:18:19AM +0000, James Houghton wrote:
> @@ -751,9 +761,9 @@ static int smaps_hugetlb_range(pte_t *pte, unsigned long hmask,
> 		int mapcount = page_mapcount(page);
>
> 		if (mapcount >= 2)
> -			mss->shared_hugetlb += huge_page_size(hstate_vma(vma));
> +			mss->shared_hugetlb += hugetlb_pte_size(hpte);
> 		else
> -			mss->private_hugetlb += huge_page_size(hstate_vma(vma));
> +			mss->private_hugetlb += hugetlb_pte_size(hpte);
> 	}
> 	return 0;

One interesting thing I found with HGM right now is that mostly everything
gets counted as "shared" here. I think that's because the mapcount is
always accounted to the huge page even when it is mapped at smaller sizes,
so page_mapcount() on a small page is huge as well, since the head page's
mapcount is huge. I'm curious about the reasons behind that mapcount
decision.
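For reference, this is roughly the page_mapcount() logic I have in mind,
paraphrased from ~v6.1 include/linux/mm.h (the exact form may differ in
your tree):

static inline int page_mapcount(struct page *page)
{
	int mapcount = atomic_read(&page->_mapcount) + 1;

	/*
	 * For a compound page, every subpage also reports the head's
	 * compound mapcount, so a 4K-mapped subpage looks "huge" too.
	 */
	if (unlikely(PageCompound(page)))
		mapcount += compound_mapcount(page);

	return mapcount;
}

If each 4K HGM mapping bumps that same compound counter, then
smaps_hugetlb_range() will see mapcount >= 2 almost immediately.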

For example, would that risk overflowing head_compound_mapcount? Mapping
one 1G page entirely with 4K PTEs takes 0.25M (2^18) counts per full
mapping, while the limit for an atomic_t is 2G (2^31), so roughly 8k such
full mappings would overflow it. Looks like it's possible.
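A quick userspace sanity check of that arithmetic (assuming the usual
32-bit atomic_t, i.e. a limit of INT_MAX = 2^31 - 1):

#include <stdio.h>
#include <limits.h>

int main(void)
{
	/* 4K PTEs needed to map one 1G page in full: 2^30 / 2^12 = 2^18 */
	long per_map = (1L << 30) / (1L << 12);
	/* Full 1G-in-4K mappings before an atomic_t mapcount overflows */
	long full_maps = (long)INT_MAX / per_map;

	printf("counts per full mapping:  %ld\n", per_map);   /* 262144 */
	printf("mappings before overflow: %ld\n", full_maps); /* 8191 */
	return 0;
}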

Btw, are the small page* pointers still needed in the latest HGM design?
And is there code taking care of disabling the hugetlb vmemmap
optimization for HGM, or is that not needed anymore in the current design?

Thanks,

--
Peter Xu
