Subject: Re: [PATCH 1/7] mm: introduce common struct mm_slot

On 2022/8/30 03:51, Andrew Morton wrote:
> On Mon, 29 Aug 2022 22:30:49 +0800 Qi Zheng <zhengqi.arch@bytedance.com> wrote:
>
>> At present, both the THP and KSM modules have a similar mm_slot
>> structure for organizing and recording the information required
>> for scanning the mm, and each defines exactly the same set of
>> helper functions:
>>
>> - alloc_mm_slot
>> - free_mm_slot
>> - get_mm_slot
>> - insert_to_mm_slots_hash
>>
>> In order to deduplicate this code, this patch introduces a
>> common struct mm_slot, and subsequent patches will let THP and
>> KSM use it.
>
> Seems like a good idea.
>
>> --- /dev/null
>> +++ b/mm/mm_slot.h
>> @@ -0,0 +1,55 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +
>> +#ifndef _LINUX_MM_SLOT_H
>> +#define _LINUX_MM_SLOT_H
>> +
>> +#include <linux/hashtable.h>
>> +#include <linux/slab.h>
>> +
>> +/*
>> + * struct mm_slot - hash lookup from mm to mm_slot
>> + * @hash: link to the mm_slots hash list
>> + * @mm_node: link into the mm_slots list
>> + * @mm: the mm that this information is valid for
>> + */
>> +struct mm_slot {
>> +	struct hlist_node hash;
>> +	struct list_head mm_node;
>> +	struct mm_struct *mm;
>> +};
>
> It appears that the presence of an mm_struct in the hash list does not
> contribute to the mm_struct's refcount? That's somewhat unexpected.

Hi,

The reason is that khugepaged_exit()/ksm_exit() are called from
__mmput() before the mm is torn down, and they remove the mm from the
linked list. That is what prevents the mm_struct from being freed
while it is still on the list.
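
For reference, __mmput() in kernel/fork.c currently does roughly the
following (abridged sketch, unrelated teardown omitted):

	static inline void __mmput(struct mm_struct *mm)
	{
		VM_BUG_ON(atomic_read(&mm->mm_users));

		uprobe_clear_state(mm);
		exit_aio(mm);
		ksm_exit(mm);
		khugepaged_exit(mm); /* must run before exit_mmap */
		exit_mmap(mm);
		/* ... */
		mmdrop(mm);
	}

So by the time the final mmput() tears the mm down, it has already been
taken off the KSM/khugepaged lists and hash tables.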

>
> It would be helpful to add some words here describing the means by
> which a user of mm_slot would prevent the mm_struct from getting freed
> while on the list. I assume "caller must maintain a reference on the
> mm_struct while it remains on an mm_slot hash list"?
>
>> +#define mm_slot_entry(ptr, type, member) \
>> +	container_of(ptr, type, member)
>> +
>> +static inline void *alloc_mm_slot(struct kmem_cache *cache)
>> +{
>> +	if (!cache)	/* initialization failed */
>> +		return NULL;
>> +	return kmem_cache_zalloc(cache, GFP_KERNEL);
>> +}
>> +
>> +static inline void free_mm_slot(struct kmem_cache *cache, void *objp)
>> +{
>> +	kmem_cache_free(cache, objp);
>> +}
>> +
>> +#define get_mm_slot(_hashtable, _mm) \
>> +({ \
>> +	struct mm_slot *tmp_slot, *mm_slot = NULL; \
>> + \
>> +	hash_for_each_possible(_hashtable, tmp_slot, hash, (unsigned long)_mm) \
>> +		if (_mm == tmp_slot->mm) { \
>> +			mm_slot = tmp_slot; \
>> +			break; \
>> +		} \
>> + \
>> +	mm_slot; \
>> +})
>
> Is there a reason why this must be implemented as a macro? That's
> preferable, although this may be overly large for inlining. mm/util.c
> might suit.

Since _hashtable is an array name, IIUC, this cannot be passed as a
function parameter, so I chose to implement it as a macro.
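
Just to illustrate the constraint (a sketch only, not part of this
series; mm_slots_hash and MM_SLOTS_HASH_BITS mirror what KSM and
khugepaged already define for themselves): DEFINE_HASHTABLE() creates a
plain array of struct hlist_head, and hash_for_each_possible() derives
the bucket count from the array type via HASH_BITS()/ARRAY_SIZE(), so
the array name has to appear literally at the expansion site:

	#define MM_SLOTS_HASH_BITS 10
	static DEFINE_HASHTABLE(mm_slots_hash, MM_SLOTS_HASH_BITS);

	static struct mm_slot *lookup_mm_slot(struct mm_struct *mm)
	{
		/* the macro sees the array itself, so ARRAY_SIZE() still works */
		return get_mm_slot(mm_slots_hash, mm);
	}

Once the table decays to a struct hlist_head * function parameter, the
size information is lost, so a plain C function would also need the
number of buckets passed in.
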
>
>> +#define insert_to_mm_slots_hash(_hashtable, _mm, _mm_slot) \
>> +({ \
>> +	_mm_slot->mm = _mm; \
>> +	hash_add(_hashtable, &_mm_slot->hash, (unsigned long)_mm); \
>> +})
>
> Does this need to be a macro?

Ditto.
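
(hash_add() is itself a macro as well; roughly, from
include/linux/hashtable.h:

	#define hash_add(hashtable, node, key) \
		hlist_add_head(node, &hashtable[hash_min(key, HASH_BITS(hashtable))])

so the table again has to be visible as an array at the expansion site.)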

>
>
> And the naming. Can we please have
>
> mm_slot_entry
> mm_slot_alloc
> mm_slot_free
> mm_slot_get
> mm_slot_insert
>
> Also, "get" usually implies that a refcount is taken on the obtained
> object, so mm_slot_lookup() would be more appropriate.

These names are better; I will switch to them in the next version.

Thanks,
Qi

