From: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>
Subject: RE: [PATCH 2/3] arm64: mm: reserve hugetlb CMA after numa_init


> -----Original Message-----
> From: Matthias Brugger [mailto:matthias.bgg@gmail.com]
> Sent: Monday, June 8, 2020 8:15 AM
> To: Roman Gushchin <guro@fb.com>; Song Bao Hua (Barry Song)
> <song.bao.hua@hisilicon.com>
> Cc: catalin.marinas@arm.com; John Garry <john.garry@huawei.com>;
> linux-kernel@vger.kernel.org; Linuxarm <linuxarm@huawei.com>;
> iommu@lists.linux-foundation.org; Zengtao (B) <prime.zeng@hisilicon.com>;
> Jonathan Cameron <jonathan.cameron@huawei.com>;
> robin.murphy@arm.com; hch@lst.de; linux-arm-kernel@lists.infradead.org;
> m.szyprowski@samsung.com
> Subject: Re: [PATCH 2/3] arm64: mm: reserve hugetlb CMA after numa_init
>
>
>
> On 03/06/2020 05:22, Roman Gushchin wrote:
> > On Wed, Jun 03, 2020 at 02:42:30PM +1200, Barry Song wrote:
> >> hugetlb_cma_reserve() is called at the wrong place: numa_init has not
> >> been done yet, so all reserved memory will be located on node 0.
> >>
> >> Cc: Roman Gushchin <guro@fb.com>
> >> Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
> >
> > Acked-by: Roman Gushchin <guro@fb.com>
> >
>
> When did this break or was it broken since the beginning?
> In any case, could you provide a "Fixes" tag for it, so that it can easily be
> backported to older releases.

I guess it was broken from the very beginning:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=cf11e85fc08cc
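
For reference, the reservation logic added by that commit splits the requested CMA size across the nodes that are online at the time it runs; before arm64_numa_init() only node 0 is online, so the whole area ends up declared against node 0. Roughly, a simplified sketch based on mm/hugetlb.c in that commit (size checks and error handling trimmed):

void __init hugetlb_cma_reserve(int order)
{
	unsigned long per_node, size, reserved = 0;
	int nid;

	if (!hugetlb_cma_size)
		return;

	/* Split the requested size across the nodes online right now. */
	per_node = DIV_ROUND_UP(hugetlb_cma_size, nr_online_nodes);

	for_each_node_state(nid, N_ONLINE) {
		/*
		 * When this runs before arm64_numa_init(), only node 0
		 * is online, so the full reservation lands on node 0.
		 */
		size = round_up(min(per_node, hugetlb_cma_size - reserved),
				PAGE_SIZE << order);
		cma_declare_contiguous_nid(0, size, 0, PAGE_SIZE << order,
					   0, false, "hugetlb",
					   &hugetlb_cma[nid], nid);
		reserved += size;
	}
}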

Fixes: cf11e85fc08c ("mm: hugetlb: optionally allocate gigantic hugepages using cma")

Do you think it would be better for me to send a v2 of this patch separately with this tag, and take it out of my original per-NUMA CMA patch set?
Please let me know what you would suggest.

Best Regards
Barry

>
> Regards,
> Matthias
>
> > Thanks!
> >
> >> ---
> >> arch/arm64/mm/init.c | 10 +++++-----
> >> 1 file changed, 5 insertions(+), 5 deletions(-)
> >>
> >> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> >> index e42727e3568e..8f0e70ebb49d 100644
> >> --- a/arch/arm64/mm/init.c
> >> +++ b/arch/arm64/mm/init.c
> >> @@ -458,11 +458,6 @@ void __init arm64_memblock_init(void)
> >>  	high_memory = __va(memblock_end_of_DRAM() - 1) + 1;
> >>  
> >>  	dma_contiguous_reserve(arm64_dma32_phys_limit);
> >> -
> >> -#ifdef CONFIG_ARM64_4K_PAGES
> >> -	hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
> >> -#endif
> >> -
> >>  }
> >>  
> >>  void __init bootmem_init(void)
> >> @@ -478,6 +473,11 @@ void __init bootmem_init(void)
> >>  	min_low_pfn = min;
> >>  
> >>  	arm64_numa_init();
> >> +
> >> +#ifdef CONFIG_ARM64_4K_PAGES
> >> +	hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
> >> +#endif
> >> +
> >>  	/*
> >>  	 * Sparsemem tries to allocate bootmem in memory_present(), so must be
> >>  	 * done after the fixed reservations.
> >> --
> >> 2.23.0

\
 
 \ /
  Last update: 2020-06-08 02:51    [W:0.108 / U:1.008 seconds]
©2003-2020 Jasper Spaans|hosted at Digital Ocean and TransIP|Read the blog|Advertise on this site