Subject: Re: [PATCH] mm: fork: fix kernel_stack memcg stats for various stack implementations

On Tue, 3 Mar 2020 15:35:50 -0800 Roman Gushchin <guro@fb.com> wrote:

> Depending on CONFIG_VMAP_STACK and the THREAD_SIZE / PAGE_SIZE ratio,
> the space for task stacks can be allocated using __vmalloc_node_range(),
> alloc_pages_node() or kmem_cache_alloc_node(). In the first and the
> second cases the page->mem_cgroup pointer is set, but in the third it's
> not: memcg membership of a slab page should be determined using the
> memcg_from_slab_page() function, which looks at
> page->slab_cache->memcg_params.memcg. In this case, using
> mod_memcg_page_state() (as account_kernel_stack() does) is incorrect:
> the page->mem_cgroup pointer is NULL even for pages charged to a
> non-root memory cgroup.
>
> It can lead to kernel_stack per-memcg counters permanently showing 0
> on some architectures (depending on the configuration).
>
> To fix this, let's introduce a mod_memcg_obj_state() helper, which
> takes a pointer to a kernel object as its first argument, uses
> mem_cgroup_from_obj() to get an RCU-protected memcg pointer and
> calls mod_memcg_state(). This allows handling all possible
> configurations (CONFIG_VMAP_STACK and various THREAD_SIZE/PAGE_SIZE
> values) without spilling any memcg/kmem specifics into fork.c.
>
> Note: this patch was first posted as a part of the new slab
> controller patchset. This is a slightly updated version: the Fixes
> tag has been added and the commit log has been extended on the
> advice of Johannes Weiner. Because it's a fix that makes sense by
> itself, I'm re-posting it as a standalone patch.

Actually, it isn't a standalone patch.

> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -776,6 +776,17 @@ void __mod_lruvec_slab_state(void *p, enum node_stat_item idx, int val)
>  	rcu_read_unlock();
>  }
>  
> +void mod_memcg_obj_state(void *p, int idx, int val)
> +{
> +	struct mem_cgroup *memcg;
> +
> +	rcu_read_lock();
> +	memcg = mem_cgroup_from_obj(p);
> +	if (memcg)
> +		mod_memcg_state(memcg, idx, val);
> +	rcu_read_unlock();
> +}

mem_cgroup_from_obj() is later added by
http://lkml.kernel.org/r/20200117203609.3146239-1-guro@fb.com
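
For reference, the helper added there is roughly the following -
paraphrased from memory here, so check the actual patch at that link:

	struct mem_cgroup *mem_cgroup_from_obj(void *p)
	{
		struct page *page;

		if (mem_cgroup_disabled())
			return NULL;

		page = virt_to_head_page(p);

		/*
		 * Slab pages don't set page->mem_cgroup; their memcg is
		 * reachable via page->slab_cache->memcg_params.memcg.
		 */
		if (PageSlab(page))
			return memcg_from_slab_page(page);

		/* All other kernel pages use page->mem_cgroup directly. */
		return page->mem_cgroup;
	}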

We could merge both mm-memcg-slab-introduce-mem_cgroup_from_obj.patch
and this patch, but that's a whole lot of stuff to backport into
-stable.

Are you able to come up with a simpler suitable-for-stable fix?
