    Subject: Re: [syzbot] kernel BUG in __page_mapcount
    On Wed, Jan 5, 2022 at 11:05 AM Yang Shi <shy828301@gmail.com> wrote:
    >
    > On Tue, Dec 21, 2021 at 10:40 AM Matthew Wilcox <willy@infradead.org> wrote:
    > >
    > > On Tue, Dec 21, 2021 at 10:24:27AM -0800, Yang Shi wrote:
    > > > It seems the THP is split during smaps walk. The reproducer does call
    > > > MADV_FREE on partial THP which may split the huge page.
    > > >
    > > > The below fix (untested) should be able to fix it.
    > >
    > > Did you read the rest of the thread on this? If the page is being
    >
    > I just revisited this. Now I see what you mean about "the rest of the
    > thread". My gmail client doesn't put them in the same thread, sigh...
    >
    > Yeah, try_get_compound_head() seems like the right way.
    >
    > Or we just simply treat migration entries as mapcount == 1 as Kirill
    > suggested or just skip migration entries since they are transient or
    > show migration entries separately.

    I think Kirill's suggestion makes some sense. The migration entry's
    mapcount is 0, so "pss /= mapcount" is never executed and the entry is
    already effectively treated as mapcount == 1; this doesn't change the
    behavior. Unlike a swap entry, we can't actually tell how many
    references a migration entry has.
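
    To make that arithmetic concrete, here is a tiny standalone model of
    the per-subpage accounting as I read it (my own sketch, not the kernel
    code; the PSS_SHIFT and page size values are assumptions for the demo):

/* Sketch only: "pss /= mapcount" runs just for mapcount >= 2, so a
 * migration entry whose mapcount reads as 0 is charged the same full
 * PSS as a page mapped exactly once. */
#include <stdio.h>

#define PSS_SHIFT 12            /* assumed, mirrors fs/proc/task_mmu.c */
#define DEMO_PAGE_SIZE 4096UL   /* assumed 4K pages */

static unsigned long pss_for_subpage(int mapcount)
{
        unsigned long pss = DEMO_PAGE_SIZE << PSS_SHIFT;

        if (mapcount >= 2)      /* shared: split the cost across mappers */
                pss /= mapcount;
        return pss;             /* mapcount 0 or 1: full cost either way */
}

int main(void)
{
        printf("mapcount 0 (migration entry): %lu\n", pss_for_subpage(0));
        printf("mapcount 1 (exclusive page):  %lu\n", pss_for_subpage(1));
        printf("mapcount 3 (shared page):     %lu\n", pss_for_subpage(3));
        return 0;
}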

    But we should handle the device private entry differently, since its
    mapcount is incremented when it is shared between processes. A regular
    migration entry can easily be identified by is_migration_entry().
    Using try_get_compound_head() seems like overkill IMHO.
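
    As a rough sketch of that distinction (a kernel-style fragment, not a
    runnable example and not part of the patch below; it just shows the
    existing swapops.h helpers I have in mind):

        /* Sketch: classify a non-present PTE's swap entry. A migration
         * entry is transient, so treat it as mapcount == 1; a device
         * private entry keeps a real mapcount and should go through the
         * normal accounting path. */
        swp_entry_t swpent = pte_to_swp_entry(*pte);

        if (is_pfn_swap_entry(swpent)) {
                page = pfn_swap_entry_to_page(swpent);
                if (is_migration_entry(swpent))
                        migration = true;
                else if (is_device_private_entry(swpent))
                        ;       /* mapcount is maintained, account normally */
        }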

    I just came up with the below patch (it builds, and running the
    reproducer hasn't triggered the bug for me so far). If it looks fine,
    I will submit it as a formal patch with more comments.

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index ad667dbc96f5..6a48bbb51efa 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -429,7 +429,8 @@ static void smaps_page_accumulate(struct mem_size_stats *mss,
 }
 
 static void smaps_account(struct mem_size_stats *mss, struct page *page,
-                bool compound, bool young, bool dirty, bool locked)
+                bool compound, bool young, bool dirty, bool locked,
+                bool migration)
 {
         int i, nr = compound ? compound_nr(page) : 1;
         unsigned long size = nr * PAGE_SIZE;
@@ -457,7 +458,7 @@ static void smaps_account(struct mem_size_stats *mss, struct page *page,
          * If any subpage of the compound page mapped with PTE it would elevate
          * page_count().
          */
-        if (page_count(page) == 1) {
+        if ((page_count(page) == 1) || migration) {
                 smaps_page_accumulate(mss, page, size, size << PSS_SHIFT, dirty,
                         locked, true);
                 return;
@@ -506,6 +507,7 @@ static void smaps_pte_entry(pte_t *pte, unsigned long addr,
         struct vm_area_struct *vma = walk->vma;
         bool locked = !!(vma->vm_flags & VM_LOCKED);
         struct page *page = NULL;
+        bool migration = false;
 
         if (pte_present(*pte)) {
                 page = vm_normal_page(vma, addr, *pte);
@@ -525,8 +527,11 @@ static void smaps_pte_entry(pte_t *pte, unsigned long addr,
                         } else {
                                 mss->swap_pss += (u64)PAGE_SIZE << PSS_SHIFT;
                         }
-                } else if (is_pfn_swap_entry(swpent))
+                } else if (is_pfn_swap_entry(swpent)) {
+                        if (is_migration_entry(swpent))
+                                migration = true;
                         page = pfn_swap_entry_to_page(swpent);
+                }
         } else {
                 smaps_pte_hole_lookup(addr, walk);
                 return;
@@ -535,7 +540,8 @@ static void smaps_pte_entry(pte_t *pte, unsigned long addr,
         if (!page)
                 return;
 
-        smaps_account(mss, page, false, pte_young(*pte), pte_dirty(*pte), locked);
+        smaps_account(mss, page, false, pte_young(*pte), pte_dirty(*pte),
+                      locked, migration);
 }
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
@@ -546,6 +552,7 @@ static void smaps_pmd_entry(pmd_t *pmd, unsigned long addr,
         struct vm_area_struct *vma = walk->vma;
         bool locked = !!(vma->vm_flags & VM_LOCKED);
         struct page *page = NULL;
+        bool migration = false;
 
         if (pmd_present(*pmd)) {
                 /* FOLL_DUMP will return -EFAULT on huge zero page */
@@ -553,8 +560,10 @@ static void smaps_pmd_entry(pmd_t *pmd, unsigned long addr,
         } else if (unlikely(thp_migration_supported() && is_swap_pmd(*pmd))) {
                 swp_entry_t entry = pmd_to_swp_entry(*pmd);
 
-                if (is_migration_entry(entry))
+                if (is_migration_entry(entry)) {
+                        migration = true;
                         page = pfn_swap_entry_to_page(entry);
+                }
         }
         if (IS_ERR_OR_NULL(page))
                 return;
@@ -566,7 +575,9 @@ static void smaps_pmd_entry(pmd_t *pmd, unsigned long addr,
                 /* pass */;
         else
                 mss->file_thp += HPAGE_PMD_SIZE;
-        smaps_account(mss, page, true, pmd_young(*pmd), pmd_dirty(*pmd), locked);
+
+        smaps_account(mss, page, true, pmd_young(*pmd), pmd_dirty(*pmd),
+                      locked, migration);
 }
 #else
 static void smaps_pmd_entry(pmd_t *pmd, unsigned long addr,
    >
    >
    > > migrated, we should still account it ... also, you've changed the
    > > refcount, so this:
    > >
    > > if (page_count(page) == 1) {
    > > smaps_page_accumulate(mss, page, size, size << PSS_SHIFT, dirty,
    > > locked, true);
    > > return;
    > > }
    > >
    > > will never trigger.
