Subject: Re: [PATCH v3 12/19] mm: memcg/slab: use a single set of kmem_caches for all accounted allocations
On 4/22/20 10:47 PM, Roman Gushchin wrote:
> This is a fairly big but mostly red patch, which makes all accounted
> slab allocations use a single set of kmem_caches instead of
> creating a separate set for each memory cgroup.
>
> Because the number of non-root kmem_caches is now capped by the number
> of root kmem_caches, there is no need to shrink or destroy them
> prematurely. They can simply be destroyed together with their
> root counterparts. This allows us to dramatically simplify the
> management of non-root kmem_caches and delete a ton of code.
>
> This patch performs the following changes:
> 1) introduces memcg_params.memcg_cache pointer to represent the
> kmem_cache which will be used for all non-root allocations
> 2) reuses the existing memcg kmem_cache creation mechanism
> to create memcg kmem_cache on the first allocation attempt
> 3) memcg kmem_caches are named <kmemcache_name>-memcg,
> e.g. dentry-memcg
> 4) simplifies memcg_kmem_get_cache() to just return the memcg kmem_cache,
> or schedule its creation and return the root cache
> 5) removes almost all non-root kmem_cache management code
> (separate refcounter, reparenting, shrinking, etc)
> 6) makes slab debugfs display the root_mem_cgroup css id and never
> show the :dead and :deact flags in the memcg_slabinfo attribute.
>
> Following patches in the series will simplify the kmem_cache creation.
>
> Signed-off-by: Roman Gushchin <guro@fb.com>
> ---
> include/linux/memcontrol.h | 5 +-
> include/linux/slab.h | 5 +-
> mm/memcontrol.c | 163 +++-----------
> mm/slab.c | 16 +-
> mm/slab.h | 145 ++++---------
> mm/slab_common.c | 426 ++++---------------------------------
> mm/slub.c | 38 +---
> 7 files changed, 128 insertions(+), 670 deletions(-)

Nice stats.

Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
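
For anyone skimming the series: per the description above, the lookup
fast path now boils down to roughly the below. This is my paraphrase,
not the literal hunk; memcg_schedule_kmem_cache_create() is the name of
the pre-existing creation helper, and I haven't double-checked its
post-patch signature.

struct kmem_cache *memcg_kmem_get_cache(struct kmem_cache *cachep)
{
	struct kmem_cache *memcg_cachep;

	/* One shared memcg cache per root cache, created lazily. */
	memcg_cachep = READ_ONCE(cachep->memcg_params.memcg_cache);
	if (unlikely(!memcg_cachep)) {
		/* Schedule asynchronous creation of the memcg cache... */
		memcg_schedule_kmem_cache_create(cachep);
		/* ...and serve this allocation from the root cache. */
		return cachep;
	}
	return memcg_cachep;
}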

> @@ -548,17 +502,14 @@ static __always_inline int charge_slab_page(struct page *page,
> gfp_t gfp, int order,
> struct kmem_cache *s)
> {
> -#ifdef CONFIG_MEMCG_KMEM

Ah, indeed. Still, wouldn't there be less churn if the refcount
manipulation was done in memcg_alloc/free_page_obj()? Something like
the sketch below.
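
Untested sketch of what I mean, with a hypothetical signature change so
the helper can see the cache and order (the vec setup is recalled from
the earlier patch in the series, so take it with a grain of salt):

static inline int memcg_alloc_page_obj_cgroups(struct page *page,
					       struct kmem_cache *s,
					       gfp_t gfp, int order)
{
	unsigned int objects = objs_per_slab(s);
	void *vec;

	vec = kcalloc(objects, sizeof(struct obj_cgroup *), gfp);
	if (!vec)
		return -ENOMEM;
	page->obj_cgroups = (struct obj_cgroup **)((unsigned long)vec | 0x1UL);

	/*
	 * With the ref manipulation here (and the matching put in
	 * memcg_free_page_obj_cgroups()), charge_slab_page() would never
	 * have needed the #ifdef block, and this patch would only have
	 * to touch the helpers.
	 */
	percpu_ref_get_many(&s->memcg_params.refcnt, 1 << order);
	return 0;
}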

> if (!is_root_cache(s)) {
> int ret;
>
> ret = memcg_alloc_page_obj_cgroups(page, gfp, objs_per_slab(s));
> if (ret)
> return ret;
> -
> - percpu_ref_get_many(&s->memcg_params.refcnt, 1 << order);
> }
> -#endif
> +
> mod_node_page_state(page_pgdat(page), cache_vmstat_idx(s),
> PAGE_SIZE << order);
> return 0;
