Subject: Re: [PATCH stable 4.14,4.19 1/1] mm: Fix page counter mismatch in shmem_mfill_atomic_pte
From:
Cc: Greg

On 2022/8/2 9:32, Wupeng Ma wrote:
> From: Ma Wupeng <mawupeng1@huawei.com>
>
> shmem_mfill_atomic_pte() wrongly calls mem_cgroup_cancel_charge() on the
> "success" path; it should call mem_cgroup_uncharge() to decrease the memory
> counter instead. mem_cgroup_cancel_charge() should only be used while the
> transaction is still unsuccessful, whereas mem_cgroup_uncharge() is the way
> to undo a transaction that has already succeeded.
>
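For reference, these kernels use the try/commit/cancel charge protocol: a
charge reserved with mem_cgroup_try_charge() is not bound to the page until
mem_cgroup_commit_charge() runs, and only a committed charge may be released
with mem_cgroup_uncharge() (usually via put_page()). Below is a minimal sketch
of the intended usage, not the actual shmem code; do_something_that_may_fail()
is a hypothetical stand-in:

        struct mem_cgroup *memcg;
        int ret;

        ret = mem_cgroup_try_charge(page, mm, gfp, &memcg, false);
        if (ret)
                return ret;                             /* nothing was charged */

        ret = do_something_that_may_fail(page);         /* hypothetical step */
        if (ret) {
                /* reserved but never committed: cancel is correct here */
                mem_cgroup_cancel_charge(page, memcg, false);
                return ret;
        }

        /* binds the charge to the page: sets page->mem_cgroup */
        mem_cgroup_commit_charge(page, memcg, false, false);

        /*
         * From here on, cleanup must go through mem_cgroup_uncharge()
         * (typically via put_page()), never mem_cgroup_cancel_charge().
         */
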
> This leaves page->mem_cgroup non-NULL, so put_page() performs one extra
> uncharge. The page counter then underflows, wrapping to a huge value, which
> triggers the OOM killer to kill every process (including sshd) and leaves the
> system inaccessible.
>
> page->mem_cgroup is set in the following path:
>   mem_cgroup_commit_charge
>     commit_charge
>       page->mem_cgroup = memcg;
>
> The extra uncharge is then done in the following path:
>   put_page
>     __put_page
>       __put_single_page
>         mem_cgroup_uncharge
>           if (!page->mem_cgroup) <-- should return here
>             return
>           uncharge_page
>           uncharge_batch
>
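The second uncharge is what drives the counter negative. Simplified from the
page counter code of that era (not verbatim; names differ slightly between
stable branches), the usage counter is decremented roughly like this, so a
double uncharge takes it below zero and it then reads back as a huge unsigned
value:

        /* simplified sketch of page_counter_cancel() */
        new = atomic_long_sub_return(nr_pages, &counter->count);
        WARN_ON_ONCE(new < 0);  /* more uncharges than charges */
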
> To fix this, call mem_cgroup_commit_charge() at the end of the transaction,
> so that the charge is only committed once the transaction can no longer fail.
>
> Fixes: 4c27fe4c4c84 ("userfaultfd: shmem: add shmem_mcopy_atomic_pte for userfaultfd support")
> Signed-off-by: Ma Wupeng <mawupeng1@huawei.com>
> ---
> mm/shmem.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 0788616696dc..0b06724c189e 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -2339,8 +2339,6 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
>  	if (ret)
>  		goto out_release_uncharge;
>  
> -	mem_cgroup_commit_charge(page, memcg, false, false);
> -
>  	_dst_pte = mk_pte(page, dst_vma->vm_page_prot);
>  	if (dst_vma->vm_flags & VM_WRITE)
>  		_dst_pte = pte_mkwrite(pte_mkdirty(_dst_pte));
> @@ -2366,6 +2364,8 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
>  	if (!pte_none(*dst_pte))
>  		goto out_release_uncharge_unlock;
>  
> +	mem_cgroup_commit_charge(page, memcg, false, false);
> +
>  	lru_cache_add_anon(page);
>  
>  	spin_lock_irq(&info->lock);
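With commit_charge() moved after the last check that can fail, every path that
reaches mem_cgroup_cancel_charge() now runs before the charge is committed,
and no failure path remains after the commit. Roughly, the resulting ordering
is as follows (a sketch, not the full function; the try-charge helper and the
labels differ slightly between the 4.14 and 4.19 branches):

        ret = mem_cgroup_try_charge(page, dst_mm, gfp, &memcg, false);
        if (ret)
                goto out_release;               /* nothing charged yet */

        /* ... add to page cache, build the pte, take the pte lock ... */

        if (!pte_none(*dst_pte))
                goto out_release_uncharge_unlock;  /* uncommitted: cancel_charge() is safe */

        mem_cgroup_commit_charge(page, memcg, false, false);
        /* no failure path follows, so the committed charge is never cancelled */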
