From: Stephen Rothwell
Date: 2021-03-18
Subject: linux-next: manual merge of the akpm-current tree with the block tree

Hi all,

Today's linux-next merge of the akpm-current tree got a conflict in:

  mm/memcontrol.c

between commit:

  06d69d4c8669 ("mm: Charge active memcg when no mm is set")

from the block tree and commit:

  674788258a66 ("memcg: charge before adding to swapcache on swapin")

from the akpm-current tree.

I fixed it up (I think - see below) and can carry the fix as necessary.
This is now fixed as far as linux-next is concerned, but any non-trivial
conflicts should be mentioned to your upstream maintainer when your tree
is submitted for merging. You may also want to consider cooperating
with the maintainer of the conflicting tree to minimise any particularly
complex conflicts.

--
Cheers,
Stephen Rothwell

diff --cc mm/memcontrol.c
index f05501669e29,668d1d7c2645..000000000000
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@@ -6691,65 -6549,73 +6550,80 @@@ out
   * @gfp_mask: reclaim mode
   *
   * Try to charge @page to the memcg that @mm belongs to, reclaiming
 - * pages according to @gfp_mask if necessary.
 + * pages according to @gfp_mask if necessary. if @mm is NULL, try to
 + * charge to the active memcg.
   *
+  * Do not use this for pages allocated for swapin.
+  *
   * Returns 0 on success. Otherwise, an error code is returned.
   */
  int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask)
  {
- 	unsigned int nr_pages = thp_nr_pages(page);
- 	struct mem_cgroup *memcg = NULL;
- 	int ret = 0;
+ 	struct mem_cgroup *memcg;
+ 	int ret;

  	if (mem_cgroup_disabled())
- 		goto out;
+ 		return 0;

- 	if (PageSwapCache(page)) {
- 		swp_entry_t ent = { .val = page_private(page), };
- 		unsigned short id;
 -	memcg = get_mem_cgroup_from_mm(mm);
++	if (!mm) {
++		memcg = get_mem_cgroup_from_current();
++		if (!memcg)
++			memcg = get_mem_cgroup_from_mm(current->mm);
++	} else {
++		memcg = get_mem_cgroup_from_mm(mm);
++	}
+ 	ret = __mem_cgroup_charge(page, memcg, gfp_mask);
+ 	css_put(&memcg->css);

- 		/*
- 		 * Every swap fault against a single page tries to charge the
- 		 * page, bail as early as possible. shmem_unuse() encounters
- 		 * already charged pages, too. page and memcg binding is
- 		 * protected by the page lock, which serializes swap cache
- 		 * removal, which in turn serializes uncharging.
- 		 */
- 		VM_BUG_ON_PAGE(!PageLocked(page), page);
- 		if (page_memcg(compound_head(page)))
- 			goto out;
+ 	return ret;
+ }

- 		id = lookup_swap_cgroup_id(ent);
- 		rcu_read_lock();
- 		memcg = mem_cgroup_from_id(id);
- 		if (memcg && !css_tryget_online(&memcg->css))
- 			memcg = NULL;
- 		rcu_read_unlock();
- 	}
+ /**
+  * mem_cgroup_swapin_charge_page - charge a newly allocated page for swapin
+  * @page: page to charge
+  * @mm: mm context of the victim
+  * @gfp: reclaim mode
+  * @entry: swap entry for which the page is allocated
+  *
+  * This function charges a page allocated for swapin. Please call this before
+  * adding the page to the swapcache.
+  *
+  * Returns 0 on success. Otherwise, an error code is returned.
+  */
+ int mem_cgroup_swapin_charge_page(struct page *page, struct mm_struct *mm,
+ 				  gfp_t gfp, swp_entry_t entry)
+ {
+ 	struct mem_cgroup *memcg;
+ 	unsigned short id;
+ 	int ret;

- 	if (!memcg) {
- 		if (!mm) {
- 			memcg = get_mem_cgroup_from_current();
- 			if (!memcg)
- 				memcg = get_mem_cgroup_from_mm(current->mm);
- 		} else {
- 			memcg = get_mem_cgroup_from_mm(mm);
- 		}
- 	}
+ 	if (mem_cgroup_disabled())
+ 		return 0;

- 	ret = try_charge(memcg, gfp_mask, nr_pages);
- 	if (ret)
- 		goto out_put;
+ 	id = lookup_swap_cgroup_id(entry);
+ 	rcu_read_lock();
+ 	memcg = mem_cgroup_from_id(id);
+ 	if (!memcg || !css_tryget_online(&memcg->css))
+ 		memcg = get_mem_cgroup_from_mm(mm);
+ 	rcu_read_unlock();

- 	css_get(&memcg->css);
- 	commit_charge(page, memcg);
+ 	ret = __mem_cgroup_charge(page, memcg, gfp);

- 	local_irq_disable();
- 	mem_cgroup_charge_statistics(memcg, page, nr_pages);
- 	memcg_check_events(memcg, page);
- 	local_irq_enable();
+ 	css_put(&memcg->css);
+ 	return ret;
+ }

+ /*
+  * mem_cgroup_swapin_uncharge_swap - uncharge swap slot
+  * @entry: swap entry for which the page is charged
+  *
+  * Call this function after successfully adding the charged page to swapcache.
+  *
+  * Note: This function assumes the page for which swap slot is being uncharged
+  * is order 0 page.
+  */
+ void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry)
+ {
  	/*
  	 * Cgroup1's unified memory+swap counter has been charged with the
  	 * new swapcache page, finish the transfer by uncharging the swap
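
The kernel-doc above pins down a calling convention for the two new helpers:
charge the page before it becomes visible in the swapcache, then uncharge the
swap slot once insertion has succeeded. Below is a minimal sketch of that
sequence, assuming the usual swap-cache helpers; it is not code from either
tree. swapin_alloc_and_charge() is a hypothetical caller (the real consumer
would be the swapin path in mm/swap_state.c), and shadow entries, swap-slot
bookkeeping and races with a concurrent swapin of the same entry are
simplified away.

/*
 * Hypothetical swapin caller -- an illustration only, not from either
 * tree. Shadow entries and races with a concurrent swapin of the same
 * entry are ignored for brevity.
 */
static struct page *swapin_alloc_and_charge(swp_entry_t entry,
					    struct vm_area_struct *vma,
					    unsigned long addr, gfp_t gfp)
{
	struct page *page = alloc_page_vma(gfp, vma, addr);

	if (!page)
		return NULL;

	__SetPageLocked(page);
	__SetPageSwapBacked(page);

	/* Charge while the page is still invisible to other swapins. */
	if (mem_cgroup_swapin_charge_page(page, vma->vm_mm, gfp, entry))
		goto fail;

	if (add_to_swap_cache(page, entry, gfp, NULL))
		goto fail;

	/*
	 * The page is charged and in the swapcache; drop the swap
	 * slot's memory+swap charge so the data is not accounted twice.
	 */
	mem_cgroup_swapin_uncharge_swap(entry);
	return page;

fail:
	__ClearPageLocked(page);
	put_page(page);	/* freeing the page reverses any charge taken */
	return NULL;
}

The ordering matters because, per the updated comment on mem_cgroup_charge(),
that function must no longer be used for swapin pages; a swapin path has to
pair mem_cgroup_swapin_charge_page() with mem_cgroup_swapin_uncharge_swap()
itself.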