    Date: Mon, 15 Feb 2021
    From: Michal Hocko <mhocko@suse.com>
    Subject: Re: [External] Re: [PATCH v15 4/8] mm: hugetlb: alloc the vmemmap pages associated with each HugeTLB page
    On Mon 15-02-21 20:00:07, Muchun Song wrote:
    > On Mon, Feb 15, 2021 at 7:51 PM Muchun Song <songmuchun@bytedance.com> wrote:
    > >
    > > On Mon, Feb 15, 2021 at 6:33 PM Michal Hocko <mhocko@suse.com> wrote:
    > > >
    > > > On Mon 15-02-21 18:05:06, Muchun Song wrote:
    > > > > On Fri, Feb 12, 2021 at 11:32 PM Michal Hocko <mhocko@suse.com> wrote:
    > > > [...]
    > > > > > > +int alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
    > > > > > > +{
    > > > > > > +	int ret;
    > > > > > > +	unsigned long vmemmap_addr = (unsigned long)head;
    > > > > > > +	unsigned long vmemmap_end, vmemmap_reuse;
    > > > > > > +
    > > > > > > +	if (!free_vmemmap_pages_per_hpage(h))
    > > > > > > +		return 0;
    > > > > > > +
    > > > > > > +	vmemmap_addr += RESERVE_VMEMMAP_SIZE;
    > > > > > > +	vmemmap_end = vmemmap_addr + free_vmemmap_pages_size_per_hpage(h);
    > > > > > > +	vmemmap_reuse = vmemmap_addr - PAGE_SIZE;
    > > > > > > +
    > > > > > > +	/*
    > > > > > > +	 * The pages which the vmemmap virtual address range [@vmemmap_addr,
    > > > > > > +	 * @vmemmap_end) are mapped to are freed to the buddy allocator, and
    > > > > > > +	 * the range is mapped to the page which @vmemmap_reuse is mapped to.
    > > > > > > +	 * When a HugeTLB page is freed to the buddy allocator, the previously
    > > > > > > +	 * discarded vmemmap pages must be allocated and remapped.
    > > > > > > +	 */
    > > > > > > +	ret = vmemmap_remap_alloc(vmemmap_addr, vmemmap_end, vmemmap_reuse,
    > > > > > > +				  GFP_ATOMIC | __GFP_NOWARN | __GFP_THISNODE);
    > > > > >
    > > > > > I do not think that this is a good allocation mode. GFP_ATOMIC is a
    > > > > > non-sleeping allocation and medium memory pressure might cause it to
    > > > > > fail prematurely. I do not think this is really an atomic context which
    > > > > > couldn't afford memory reclaim. I also do not think we want to grant
    > > > >
    > > > > Because alloc_huge_page_vmemmap is called under hugetlb_lock
    > > > > now, using GFP_ATOMIC indeed makes the code simpler.
    > > >
    > > > You can have a preallocated list of pages prior to taking the lock.
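    > > > Roughly like the following (just a sketch of the pattern; helper and
    > > > variable names here are made up for illustration, not from the patch):
    > > >
    > > > static int alloc_vmemmap_pages_list(struct list_head *list,
    > > > 				    unsigned int nr_pages, int nid)
    > > > {
    > > > 	struct page *page;
    > > > 	unsigned int i;
    > > >
    > > > 	/* GFP_KERNEL can reclaim here because no lock is held yet. */
    > > > 	for (i = 0; i < nr_pages; i++) {
    > > > 		page = alloc_pages_node(nid, GFP_KERNEL | __GFP_NOWARN, 0);
    > > > 		if (!page)
    > > > 			return -ENOMEM; /* caller frees what is on the list */
    > > > 		list_add(&page->lru, list);
    > > > 	}
    > > > 	return 0;
    > > > }
    > > >
    > > > /* Caller side: allocate first, only then take hugetlb_lock. */
    > > > LIST_HEAD(vmemmap_pages);
    > > >
    > > > if (alloc_vmemmap_pages_list(&vmemmap_pages, nr, nid))
    > > > 	return -ENOMEM;
    > > > spin_lock(&hugetlb_lock);
    > > > /* remap using pages from the list; nothing is allocated under the lock */
    > > > spin_unlock(&hugetlb_lock);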
    > >
    > > There was a discussion about this here:
    > >
    > > https://patchwork.kernel.org/project/linux-mm/patch/20210117151053.24600-5-songmuchun@bytedance.com/
    > >
    > > > Moreover, do we want to manipulate vmemmaps from under a spinlock in
    > > > general? I have to say I missed that detail when reviewing. Need to
    > > > think about it more.
    > > >
    > > > > From the kernel documentation, I learned that __GFP_NOMEMALLOC
    > > > > can be used to explicitly forbid access to emergency reserves. So if
    > > > > we do not want to use the reserve memory, how about replacing it with
    > > > >
    > > > > GFP_ATOMIC | __GFP_NOMEMALLOC | __GFP_NOWARN | __GFP_THISNODE
    > > >
    > > > The whole point of GFP_ATOMIC is to grant access to memory reserves,
    > > > so the above is quite dubious. If you do not want access to memory reserves
    > >
    > > Look at the code of gfp_to_alloc_flags():
    > >
    > > static inline unsigned int gfp_to_alloc_flags(gfp_t gfp_mask)
    > > {
    > > [...]
    > > 	if (gfp_mask & __GFP_ATOMIC) {
    > > 		/*
    > > 		 * Not worth trying to allocate harder for __GFP_NOMEMALLOC even
    > > 		 * if it can't schedule.
    > > 		 */
    > > 		if (!(gfp_mask & __GFP_NOMEMALLOC))
    > > 			alloc_flags |= ALLOC_HARDER;
    > > [...]
    > > }
    > >
    > > So the combination (GFP_ATOMIC | __GFP_NOMEMALLOC) seems to be allowed.
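    > >
    > > Spelling out what that combination asks for (my reading of the gfp-flags
    > > documentation; the annotations are mine):
    > >
    > > 	gfp_t gfp_mask = GFP_ATOMIC |		/* non-sleeping, high priority */
    > > 			 __GFP_NOMEMALLOC |	/* forbid the emergency reserves */
    > > 			 __GFP_NOWARN |		/* suppress failure warnings */
    > > 			 __GFP_THISNODE;	/* no fallback to other nodes */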

    Please read my response again more carefully. I am not claiming that
    combination is not allowed. I have said it doesn't make any sense in
    this context.

    --
    Michal Hocko
    SUSE Labs
