    Subject: Re: [External] Re: [PATCH v9 03/11] mm/hugetlb: Free the vmemmap pages associated with each HugeTLB page

    On Thu, Dec 17, 2020 at 6:08 AM Mike Kravetz <mike.kravetz@oracle.com> wrote:
    >
    > On 12/13/20 7:45 AM, Muchun Song wrote:
    > > Every HugeTLB has more than one struct page structure. We __know__ that
    > > we only use the first 4 (HUGETLB_CGROUP_MIN_ORDER) struct page structures
    > > to store metadata associated with each HugeTLB page.
    > >
    > > There are a lot of struct page structures associated with each HugeTLB
    > > page. For the tail pages, the value of compound_head is the same, so we
    > > can reuse the first page of the tail page structures. We map the virtual
    > > addresses of the remaining tail page structures to that first tail page
    > > struct and then free the underlying page frames. Therefore, we only need
    > > to reserve two pages as vmemmap areas.
    > >
    > > When we allocate a HugeTLB page from the buddy allocator, we can free
    > > some of the vmemmap pages associated with it. It is most appropriate to
    > > do this in prep_new_huge_page().
    > >
    > > The free_vmemmap_pages_per_hpage(), which indicates how many vmemmap
    > > pages associated with a HugeTLB page can be freed, returns zero for
    > > now, which means the feature is disabled. We will enable it once all
    > > the infrastructure is there.
    > >
    > > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
    > > ---
    > > include/linux/bootmem_info.h | 27 +++++-
    > > include/linux/mm.h | 2 +
    > > mm/Makefile | 1 +
    > > mm/hugetlb.c | 3 +
    > > mm/hugetlb_vmemmap.c | 209 +++++++++++++++++++++++++++++++++++++++++++
    > > mm/hugetlb_vmemmap.h | 20 +++++
    > > mm/sparse-vmemmap.c | 170 +++++++++++++++++++++++++++++++++++
    > > 7 files changed, 431 insertions(+), 1 deletion(-)
    > > create mode 100644 mm/hugetlb_vmemmap.c
    > > create mode 100644 mm/hugetlb_vmemmap.h
    >
    > > diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
    > > index 16183d85a7d5..78c527617e8d 100644
    > > --- a/mm/sparse-vmemmap.c
    > > +++ b/mm/sparse-vmemmap.c
    > > @@ -27,8 +27,178 @@
    > > #include <linux/spinlock.h>
    > > #include <linux/vmalloc.h>
    > > #include <linux/sched.h>
    > > +#include <linux/pgtable.h>
    > > +#include <linux/bootmem_info.h>
    > > +
    > > #include <asm/dma.h>
    > > #include <asm/pgalloc.h>
    > > +#include <asm/tlbflush.h>
    > > +
    > > +/*
    > > + * vmemmap_rmap_walk - walk vmemmap page table
    >
    > I am not sure if 'rmap' should be part of these names. rmap today is mostly
    > about reverse mapping lookup. Did you use rmap for 'remap', or because this
    > code is patterned after the page table walking rmap code? I just think the
    > naming could cause some confusion.

    Yeah. I should use "remap" to avoid confusion.
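
    Roughly like this (a naming sketch only; the members themselves stay as
    they are in this patch):

        struct vmemmap_remap_walk {
                /* called for each non-empty PTE (lowest-level) entry */
                void (*remap_pte)(pte_t *pte, unsigned long addr,
                                  struct vmemmap_remap_walk *walk);
                /* the page which is reused for the tail vmemmap pages */
                struct page *reuse;
                /* the list head of the vmemmap pages that can be freed */
                struct list_head *vmemmap_pages;
        };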

    >
    > > + *
    > > + * @rmap_pte: called for each non-empty PTE (lowest-level) entry.
    > > + * @reuse: the page which is reused for the tail vmemmap pages.
    > > + * @vmemmap_pages: the list head of the vmemmap pages that can be freed.
    > > + */
    > > +struct vmemmap_rmap_walk {
    > > +        void (*rmap_pte)(pte_t *pte, unsigned long addr,
    > > +                         struct vmemmap_rmap_walk *walk);
    > > +        struct page *reuse;
    > > +        struct list_head *vmemmap_pages;
    > > +};
    > > +
    > > +/*
    > > + * The index of the pte page table which is mapped to the tail of the
    > > + * vmemmap page.
    > > + */
    > > +#define VMEMMAP_TAIL_PAGE_REUSE -1
    >
    > That is the index/offset from the range to be remapped. See comments below.

    You are right. I need to update the comment.
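
    Maybe something like this (comment wording only, the value stays the same):

        /*
         * The pte index, relative to the start of the range being remapped,
         * of the page that the tail vmemmap pages are remapped to, i.e. the
         * page just before the remapped range.
         */
        #define VMEMMAP_TAIL_PAGE_REUSE -1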

    >
    > > +
    > > +static void vmemmap_pte_range(pmd_t *pmd, unsigned long addr,
    > > +                              unsigned long end, struct vmemmap_rmap_walk *walk)
    > > +{
    > > +        pte_t *pte;
    > > +
    > > +        pte = pte_offset_kernel(pmd, addr);
    > > +        do {
    > > +                BUG_ON(pte_none(*pte));
    > > +
    > > +                if (!walk->reuse)
    > > +                        walk->reuse = pte_page(pte[VMEMMAP_TAIL_PAGE_REUSE]);
    >
    > It may be just me, but I don't like the pte[-1] here. It certainly does work
    > as designed because we want to remap all pages in the range to the page before
    > the range (at offset -1). But, we do not really validate this 'reuse' page.
    > There is the BUG_ON(pte_none(*pte)) as a sanity check, but we do nothing similar
    > for pte[-1]. Based on the usage for HugeTLB pages, we can be confident that
    > pte[-1] is actually a pte. In discussions with Oscar, you mentioned another
    > possible use for these routines.

    Yeah, we should add a BUG_ON for pte[-1].
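
    Something along these lines on top of this patch (an untested sketch; it
    only adds the sanity check for the reuse page):

        if (!walk->reuse) {
                /*
                 * The page before the remapped range must also be a valid
                 * vmemmap pte, since everything in the range is remapped
                 * to it.
                 */
                BUG_ON(pte_none(pte[VMEMMAP_TAIL_PAGE_REUSE]));
                walk->reuse = pte_page(pte[VMEMMAP_TAIL_PAGE_REUSE]);
        }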

    >
    > Don't change anything based on my opinion only. I would like to see what
    > others think as well.
    >
    > > +
    > > +                if (walk->rmap_pte)
    > > +                        walk->rmap_pte(pte, addr, walk);
    > > +        } while (pte++, addr += PAGE_SIZE, addr != end);
    > > +}
    > > +
    > > +static void vmemmap_pmd_range(pud_t *pud, unsigned long addr,
    > > +                              unsigned long end, struct vmemmap_rmap_walk *walk)
    > > +{
    > > +        pmd_t *pmd;
    > > +        unsigned long next;
    > > +
    > > +        pmd = pmd_offset(pud, addr);
    > > +        do {
    > > +                BUG_ON(pmd_none(*pmd));
    > > +
    > > +                next = pmd_addr_end(addr, end);
    > > +                vmemmap_pte_range(pmd, addr, next, walk);
    > > +        } while (pmd++, addr = next, addr != end);
    > > +}
    > > +
    > > +static void vmemmap_pud_range(p4d_t *p4d, unsigned long addr,
    > > +                              unsigned long end, struct vmemmap_rmap_walk *walk)
    > > +{
    > > +        pud_t *pud;
    > > +        unsigned long next;
    > > +
    > > +        pud = pud_offset(p4d, addr);
    > > +        do {
    > > +                BUG_ON(pud_none(*pud));
    > > +
    > > +                next = pud_addr_end(addr, end);
    > > +                vmemmap_pmd_range(pud, addr, next, walk);
    > > +        } while (pud++, addr = next, addr != end);
    > > +}
    > > +
    > > +static void vmemmap_p4d_range(pgd_t *pgd, unsigned long addr,
    > > +                              unsigned long end, struct vmemmap_rmap_walk *walk)
    > > +{
    > > +        p4d_t *p4d;
    > > +        unsigned long next;
    > > +
    > > +        p4d = p4d_offset(pgd, addr);
    > > +        do {
    > > +                BUG_ON(p4d_none(*p4d));
    > > +
    > > +                next = p4d_addr_end(addr, end);
    > > +                vmemmap_pud_range(p4d, addr, next, walk);
    > > +        } while (p4d++, addr = next, addr != end);
    > > +}
    > > +
    > > +static void vmemmap_remap_range(unsigned long start, unsigned long end,
    > > +                                struct vmemmap_rmap_walk *walk)
    > > +{
    > > +        unsigned long addr = start;
    > > +        unsigned long next;
    > > +        pgd_t *pgd;
    > > +
    > > +        VM_BUG_ON(!IS_ALIGNED(start, PAGE_SIZE));
    > > +        VM_BUG_ON(!IS_ALIGNED(end, PAGE_SIZE));
    > > +
    > > +        pgd = pgd_offset_k(addr);
    > > +        do {
    > > +                BUG_ON(pgd_none(*pgd));
    > > +
    > > +                next = pgd_addr_end(addr, end);
    > > +                vmemmap_p4d_range(pgd, addr, next, walk);
    > > +        } while (pgd++, addr = next, addr != end);
    > > +
    > > +        flush_tlb_kernel_range(start, end);
    > > +}
    > > +
    > > +/*
    > > + * Free a vmemmap page. A vmemmap page can be allocated from the memblock
    > > + * allocator or the buddy allocator. If the PG_reserved flag is set, it
    > > + * means that it was allocated from the memblock allocator, so free it via
    > > + * free_bootmem_page(). Otherwise, use __free_page().
    > > + */
    > > +static inline void free_vmemmap_page(struct page *page)
    > > +{
    > > +        if (PageReserved(page))
    > > +                free_bootmem_page(page);
    > > +        else
    > > +                __free_page(page);
    > > +}
    > > +
    > > +/* Free a list of the vmemmap pages */
    > > +static void free_vmemmap_page_list(struct list_head *list)
    > > +{
    > > +        struct page *page, *next;
    > > +
    > > +        list_for_each_entry_safe(page, next, list, lru) {
    > > +                list_del(&page->lru);
    > > +                free_vmemmap_page(page);
    > > +        }
    > > +}
    > > +
    > > +static void vmemmap_remap_reuse_pte(pte_t *pte, unsigned long addr,
    > > +                                    struct vmemmap_rmap_walk *walk)
    >
    > See the vmemmap_remap_reuse rename suggestion below. I would suggest 'reuse'
    > be dropped from the name here and the function just be called 'vmemmap_remap_pte'.

    OK. Will do that.
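
    i.e. (rename only, the body stays unchanged):

        static void vmemmap_remap_pte(pte_t *pte, unsigned long addr,
                                      struct vmemmap_remap_walk *walk)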

    >
    > > +{
    > > +        /*
    > > +         * Make sure the tail pages are mapped read-only to catch
    > > +         * illegal write operations to the tail pages.
    > > +         */
    > > +        pgprot_t pgprot = PAGE_KERNEL_RO;
    > > +        pte_t entry = mk_pte(walk->reuse, pgprot);
    > > +        struct page *page;
    > > +
    > > +        page = pte_page(*pte);
    > > +        list_add(&page->lru, walk->vmemmap_pages);
    > > +
    > > +        set_pte_at(&init_mm, addr, pte, entry);
    > > +}
    > > +
    > > +/**
    > > + * vmemmap_remap_reuse - remap the vmemmap virtual address range
    >
    > My original comment here was:
    >
    > Not sure if the word '_reuse' is best in this function name. To me, the name
    > implies this routine will reuse vmemmap pages. Perhaps, it makes more sense
    > to rename as 'vmemmap_remap_free'? It will first remap, then free vmemmap.

    vmemmap_remap_free also sounds like a good name to me.
    In the next patch, we can use vmemmap_remap_alloc for
    allocating vmemmap pages. The two look nicely symmetrical. :-)
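
    So the two entry points would end up looking something like this (the
    arguments of the alloc side are just a guess here; the next patch will
    define them):

        /* remap the range to the reuse page, then free its vmemmap pages */
        void vmemmap_remap_free(unsigned long start, unsigned long size);
        /* the symmetric allocation side, added by the next patch */
        void vmemmap_remap_alloc(unsigned long start, unsigned long size);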

    Thanks Mike.

    >
    > But, then I looked at the code above and perhaps you are using the word
    > '_reuse' because the page before the range will be reused? The vmemmap

    Yeah. You are right.

    > page at offset VMEMMAP_TAIL_PAGE_REUSE (-1).
    >
    > > + * [start, start + size) to the page which
    > > + * [start - PAGE_SIZE, start) is mapped.
    > > + * @start: start address of the vmemmap virtual address range
    > > + * @end: size of the vmemmap virtual address range
    >
    > ^^^^ should be @size:

    Oh, yeah. I forgot to update it. Thanks.
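
    With the rename above, the kernel-doc would become something like (a sketch):

        /**
         * vmemmap_remap_free - remap the vmemmap virtual address range
         *                      [@start, @start + @size) to the page which
         *                      [@start - PAGE_SIZE, @start) is mapped, then
         *                      free the vmemmap pages of the range.
         * @start: start address of the vmemmap virtual address range.
         * @size:  size of the vmemmap virtual address range.
         */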

    >
    > --
    > Mike Kravetz
    >
    > > + */
    > > +void vmemmap_remap_reuse(unsigned long start, unsigned long size)
    > > +{
    > > +        unsigned long end = start + size;
    > > +        LIST_HEAD(vmemmap_pages);
    > > +
    > > +        struct vmemmap_rmap_walk walk = {
    > > +                .rmap_pte       = vmemmap_remap_reuse_pte,
    > > +                .vmemmap_pages  = &vmemmap_pages,
    > > +        };
    > > +
    > > +        vmemmap_remap_range(start, end, &walk);
    > > +        free_vmemmap_page_list(&vmemmap_pages);
    > > +}
    > >
    > > /*
    > > * Allocate a block of memory to be used to back the virtual memory map
    > >



    --
    Yours,
    Muchun
