Subject: Re: [PATCH v3 09/21] mm/hugetlb: Free the vmemmap pages associated with each hugetlb page

On Sun, Nov 08, 2020 at 10:11:01PM +0800, Muchun Song wrote:
> +static inline int freed_vmemmap_hpage(struct page *page)
> +{
> +	return atomic_read(&page->_mapcount) + 1;
> +}
> +
> +static inline int freed_vmemmap_hpage_inc(struct page *page)
> +{
> +	return atomic_inc_return_relaxed(&page->_mapcount) + 1;
> +}
> +
> +static inline int freed_vmemmap_hpage_dec(struct page *page)
> +{
> +	return atomic_dec_return_relaxed(&page->_mapcount) + 1;
> +}

Are these relaxed variants any different than the normal ones on x86_64?
I got confused following the macros.
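
FWIW, my tentative reading (a sketch of the fallback pattern, not the
literal file contents): x86_64 does not define arch-level _relaxed
variants for these operations, so the generic fallback in
include/linux/atomic-fallback.h aliases them to the fully ordered ops:

	/*
	 * Sketch: when the architecture provides no _relaxed variant
	 * (x86_64 does not), the relaxed name resolves to the fully
	 * ordered operation.
	 */
	#ifndef atomic_inc_return_relaxed
	#define atomic_inc_return_relaxed	atomic_inc_return
	#endif
	#ifndef atomic_dec_return_relaxed
	#define atomic_dec_return_relaxed	atomic_dec_return
	#endif

If that is right, both spellings generate the same code here.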

> +static void __free_huge_page_pte_vmemmap(struct page *reuse, pte_t *ptep,
> +					 unsigned long start,
> +					 unsigned int nr_free,
> +					 struct list_head *free_pages)
> +{
> +	/* Make sure the tail pages are mapped read-only. */
> +	pgprot_t pgprot = PAGE_KERNEL_RO;
> +	pte_t entry = mk_pte(reuse, pgprot);
> +	unsigned long addr;
> +	unsigned long end = start + (nr_free << PAGE_SHIFT);

See below.

> +static void __free_huge_page_pmd_vmemmap(struct hstate *h, pmd_t *pmd,
> +					 unsigned long addr,
> +					 struct list_head *free_pages)
> +{
> +	unsigned long next;
> +	unsigned long start = addr + RESERVE_VMEMMAP_NR * PAGE_SIZE;
> +	unsigned long end = addr + vmemmap_pages_size_per_hpage(h);
> +	struct page *reuse = NULL;
> +
> +	addr = start;
> +	do {
> +		unsigned int nr_pages;
> +		pte_t *ptep;
> +
> +		ptep = pte_offset_kernel(pmd, addr);
> +		if (!reuse)
> +			reuse = pte_page(ptep[-1]);

Can we define a proper name for that instead of -1?

E.g.: TAIL_PAGE_REUSE or something like that.
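
Something like this (TAIL_PAGE_REUSE is just a made-up name, of course):

	/*
	 * The pte just before the range we are freeing maps the page
	 * that all tail vmemmap pages get remapped to.
	 */
	#define TAIL_PAGE_REUSE	-1

and then:

		if (!reuse)
			reuse = pte_page(ptep[TAIL_PAGE_REUSE]);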

> +
> +		next = vmemmap_hpage_addr_end(addr, end);
> +		nr_pages = (next - addr) >> PAGE_SHIFT;
> +		__free_huge_page_pte_vmemmap(reuse, ptep, addr, nr_pages,
> +					     free_pages);

Why not pass next instead of nr_pages? I think it makes more sense.
As a bonus we can kill the variable.
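
IOW, something like this (untested sketch; the signature change is mine):

	static void __free_huge_page_pte_vmemmap(struct page *reuse,
						 pte_t *ptep,
						 unsigned long start,
						 unsigned long end,
						 struct list_head *free_pages)

and at the call site:

		next = vmemmap_hpage_addr_end(addr, end);
		__free_huge_page_pte_vmemmap(reuse, ptep, addr, next,
					     free_pages);

The callee then no longer needs to compute "end" from nr_free either.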

> +static void split_vmemmap_huge_page(struct hstate *h, struct page *head,
> +				    pmd_t *pmd)
> +{
> +	pgtable_t pgtable;
> +	unsigned long start = (unsigned long)head & VMEMMAP_HPAGE_MASK;
> +	unsigned long addr = start;
> +	unsigned int nr = pgtable_pages_to_prealloc_per_hpage(h);
> +
> +	while (nr-- && (pgtable = vmemmap_pgtable_withdraw(head))) {

Same as with the previous patches: I would scrap "nr" and its use.
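
E.g. (untested), assuming the preallocation and withdraw counts always
match:

	/*
	 * vmemmap_pgtable_withdraw() returns NULL once the stash is
	 * empty, so it can bound the loop on its own.
	 */
	while ((pgtable = vmemmap_pgtable_withdraw(head))) {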

> +		VM_BUG_ON(freed_vmemmap_hpage(pgtable));

I guess here we want to check whether we have already called
free_huge_page_vmemmap on this range?
For that to have happened, the locking would have had to fail, right?

> +static void free_huge_page_vmemmap(struct hstate *h, struct page *head)
> +{
> +	pmd_t *pmd;
> +	spinlock_t *ptl;
> +	LIST_HEAD(free_pages);
> +
> +	if (!free_vmemmap_pages_per_hpage(h))
> +		return;
> +
> +	pmd = vmemmap_to_pmd(head);
> +	ptl = vmemmap_pmd_lock(pmd);
> +	if (vmemmap_pmd_huge(pmd)) {
> +		VM_BUG_ON(!pgtable_pages_to_prealloc_per_hpage(h));

I think that checking for free_vmemmap_pages_per_hpage is enough.
In the end, pgtable_pages_to_prealloc_per_hpage uses free_vmemmap_pages_per_hpage.
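
IOW (sketch):

	if (vmemmap_pmd_huge(pmd)) {
		/*
		 * A non-zero free_vmemmap_pages_per_hpage() already
		 * implies a non-zero pgtable preallocation, and we
		 * bailed out on the former above anyway.
		 */
		VM_BUG_ON(!free_vmemmap_pages_per_hpage(h));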


--
Oscar Salvador
SUSE L3
