Subject: Re: [PATCH v5 2/2] mm: memory_hotplug: make hugetlb_optimize_vmemmap compatible with memmap_on_memory
On Mon, Jun 20, 2022 at 03:19:33PM +0200, Oscar Salvador wrote:
> On Mon, Jun 20, 2022 at 07:06:16PM +0800, Muchun Song wrote:
> > + /*
> > + * The READ_ONCE() is used to stabilize *pmdp in a register or
> > + * on the stack so that it will stop changing under the code.
> > + * The only concurrent operation where it can be changed is
> > + * split_vmemmap_huge_pmd() (*pmdp will be stable after this
> > + * operation).
> > + */
> > + pmd = READ_ONCE(*pmdp);
> > + if (pmd_leaf(pmd))
> > + vmemmap_page = pmd_page(pmd) + pte_index(vaddr);
> > + else
> > + vmemmap_page = pte_page(*pte_offset_kernel(pmdp, vaddr));
>
> I was about to suggest to get rid of the else branch because on x86_64
> we can only allocate PMD_SIZE chunks when using an alternative allocator,
> meaning that anything which is not a pmd_leaf can't be a PageVmemmapSelfHosted.
>

You are right. However, I think relying on this condition is fragile and
not as straightforward as checking PageVmemmapSelfHosted() directly. The
else branch is not in a hot path, so I'd like to keep it. Does this make
sense to you?
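
For reference, this is roughly the shape I have in mind (a sketch only,
not the exact patch: the helper name vmemmap_self_hosted() is made up
for illustration, and I'm assuming pmdp comes from pmd_off_k() on the
vmemmap address):

#include <linux/mm.h>
#include <linux/pgtable.h>

/*
 * Sketch only: walk the vmemmap page table for @vaddr and test the
 * backing page directly.  vmemmap_self_hosted() is an illustrative
 * name, not taken from the patch.
 */
static bool vmemmap_self_hosted(unsigned long vaddr)
{
	pmd_t *pmdp, pmd;
	struct page *vmemmap_page;

	pmdp = pmd_off_k(vaddr);
	/* Stabilize *pmdp against a concurrent split_vmemmap_huge_pmd(). */
	pmd = READ_ONCE(*pmdp);

	if (pmd_leaf(pmd))
		vmemmap_page = pmd_page(pmd) + pte_index(vaddr);
	else
		/*
		 * Keep the else branch: even when the PMD has been split,
		 * checking the flag on the base page is still correct, and
		 * this is not a hot path.
		 */
		vmemmap_page = pte_page(*pte_offset_kernel(pmdp, vaddr));

	return PageVmemmapSelfHosted(vmemmap_page);
}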

> But then I went to check the other platform that supports memmap_on_memory (arm64),
> and in there I can see that we fallback to populate basepages with altmap should
> we fail to allocate a PMD_SIZE chunk.
>

I think it cannot fail in the memmap_on_memory case.
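
To spell out why: the arm64 fallback you mention looks roughly like the
below (simplified sketch, not the exact upstream code). With
memmap_on_memory, the altmap covers the vmemmap of the hot-added range
up front, so vmemmap_alloc_block_buf() is served from that reserved
space and the basepage fallback should not be taken.

/*
 * Simplified sketch of the arm64-style PMD populate step, not the
 * exact upstream code.  sketch_populate_pmd() is an illustrative name.
 */
static int __meminit sketch_populate_pmd(unsigned long addr,
					 unsigned long next, int node,
					 struct vmem_altmap *altmap,
					 pmd_t *pmdp)
{
	void *p = vmemmap_alloc_block_buf(PMD_SIZE, node, altmap);

	if (!p)
		/*
		 * In the memmap_on_memory case the altmap reserve is sized
		 * for the whole range, so this path should not be taken.
		 */
		return vmemmap_populate_basepages(addr, next, node, altmap);

	pmd_set_huge(pmdp, __pa(p), __pgprot(PROT_SECT_NORMAL));
	return 0;
}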

Thanks.
