Subject: Re: [PATCH 3/3] mm: oom: show unreclaimable slab info when unreclaimable slabs > user memory


On 10/4/17 7:27 AM, Michal Hocko wrote:
> On Wed 04-10-17 02:06:17, Yang Shi wrote:
>> +static bool is_dump_unreclaim_slabs(void)
>> +{
>> +        unsigned long nr_lru;
>> +
>> +        nr_lru = global_node_page_state(NR_ACTIVE_ANON) +
>> +                 global_node_page_state(NR_INACTIVE_ANON) +
>> +                 global_node_page_state(NR_ACTIVE_FILE) +
>> +                 global_node_page_state(NR_INACTIVE_FILE) +
>> +                 global_node_page_state(NR_ISOLATED_ANON) +
>> +                 global_node_page_state(NR_ISOLATED_FILE) +
>> +                 global_node_page_state(NR_UNEVICTABLE);
>> +
>> +        return (global_node_page_state(NR_SLAB_UNRECLAIMABLE) > nr_lru);
>> +}
>
> I am sorry I haven't pointed this out earlier (I was only half
> following) but this should really be memcg aware. You are checking only
> global counters. I do not think it is an absolute must to provide
> per-memcg data but you should at least check !is_memcg_oom(oc).

OK, sure.
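
Something like this untested sketch, assuming the caller's struct
oom_control is threaded down so the existing is_memcg_oom() helper can
be consulted (all of the counters here are global, so a memcg-constrained
OOM should not trigger the dump):

static bool is_dump_unreclaim_slabs(struct oom_control *oc)
{
        unsigned long nr_lru;

        /* The counters below are global, not per-memcg, so the
         * comparison is only meaningful for a global OOM. */
        if (is_memcg_oom(oc))
                return false;

        nr_lru = global_node_page_state(NR_ACTIVE_ANON) +
                 global_node_page_state(NR_INACTIVE_ANON) +
                 global_node_page_state(NR_ACTIVE_FILE) +
                 global_node_page_state(NR_INACTIVE_FILE) +
                 global_node_page_state(NR_ISOLATED_ANON) +
                 global_node_page_state(NR_ISOLATED_FILE) +
                 global_node_page_state(NR_UNEVICTABLE);

        return global_node_page_state(NR_SLAB_UNRECLAIMABLE) > nr_lru;
}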

>
> [...]
>> +void dump_unreclaimable_slab(void)
>> +{
>> +        struct kmem_cache *s, *s2;
>> +        struct slabinfo sinfo;
>> +
>> +        pr_info("Unreclaimable slab info:\n");
>> +        pr_info("Name                      Used          Total\n");
>> +
>> +        /*
>> +         * Acquiring slab_mutex here is risky since we don't want to
>> +         * sleep in the oom path, but traversing the cache list
>> +         * without the mutex risks a crash.
>> +         * Use mutex_trylock to protect the list traversal, and dump
>> +         * nothing if the mutex cannot be acquired.
>> +         */
>> +        if (!mutex_trylock(&slab_mutex))
>> +                return;
>
> I would move the trylock up so that we do not get an empty and
> confusing "Unreclaimable slab info:" header, and add a note that we
> are not dumping anything due to lock contention:
> pr_warn("excessive unreclaimable slab memory but cannot dump stats to give you more details\n");

Thanks for pointing this out. Will fix in the new version.
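
Probably something like this untested sketch, with the trylock moved
ahead of the header and the warning you suggested:

        if (!mutex_trylock(&slab_mutex)) {
                pr_warn("excessive unreclaimable slab but cannot dump stats\n");
                return;
        }

        pr_info("Unreclaimable slab info:\n");
        pr_info("Name                      Used          Total\n");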

Yang

>
> Other than that this looks sensible to me.
>
>> +        list_for_each_entry_safe(s, s2, &slab_caches, list) {
>> +                if (!is_root_cache(s) || (s->flags & SLAB_RECLAIM_ACCOUNT))
>> +                        continue;
>> +
>> +                memset(&sinfo, 0, sizeof(sinfo));
>> +                get_slabinfo(s, &sinfo);
>> +
>> +                if (sinfo.num_objs > 0)
>> +                        pr_info("%-17s %10luKB %10luKB\n", cache_name(s),
>> +                                (sinfo.active_objs * s->size) / 1024,
>> +                                (sinfo.num_objs * s->size) / 1024);
>> +        }
>> +        mutex_unlock(&slab_mutex);
>> +}
>> +
>>  #if defined(CONFIG_MEMCG) && !defined(CONFIG_SLOB)
>>  void *memcg_slab_start(struct seq_file *m, loff_t *pos)
>>  {
>> --
>> 1.8.3.1
>
