Date:    Sat, 17 Dec 2022 18:55:35 -0000
From:    "tip-bot2 for Peter Zijlstra" <>
Subject: [tip: x86/mm] mm: Move mm_cachep initialization to mm_init()
The following commit has been merged into the x86/mm branch of tip:
Commit-ID:     af80602799681c78f14fbe20b6185a56020dedee
Gitweb:        https://git.kernel.org/tip/af80602799681c78f14fbe20b6185a56020dedee
Author:        Peter Zijlstra <peterz@infradead.org>
AuthorDate:    Tue, 25 Oct 2022 21:38:18 +02:00
Committer:     Dave Hansen <dave.hansen@linux.intel.com>
CommitterDate: Thu, 15 Dec 2022 10:37:26 -08:00
mm: Move mm_cachep initialization to mm_init()
In order to allow using mm_alloc() much earlier, move initializing mm_cachep into mm_init().
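For context, mm_alloc() hands out mm_struct objects straight from mm_cachep, which is why the cache has to exist before any early-boot caller can use it. The sketch below paraphrases the relevant kernel/fork.c code from memory (includes omitted), so treat it as approximate rather than a quote of the tree this patch applies to; note that the mm_init() it tail-calls is fork.c's per-mm initializer, not init/main.c's boot-time mm_init().

/*
 * Paraphrased sketch of kernel/fork.c (not part of this patch):
 * mm_alloc() allocates straight out of mm_cachep, so the cache must
 * already exist by the time mm_alloc() is first called.
 */
static struct kmem_cache *mm_cachep;

#define allocate_mm()	(kmem_cache_alloc(mm_cachep, GFP_KERNEL))

struct mm_struct *mm_alloc(void)
{
	struct mm_struct *mm;

	mm = allocate_mm();	/* only safe once mm_cachep has been created */
	if (!mm)
		return NULL;

	memset(mm, 0, sizeof(*mm));
	/* fork.c's mm_init(mm, ...), not init/main.c's mm_init(void) */
	return mm_init(mm, current, current_user_ns());
}

Before this patch mm_cachep was only created from proc_caches_init(), which start_kernel() reaches well after mm_init(); afterwards the cache is ready as soon as mm_init() has run.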
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20221025201057.751153381@infradead.org
---
 include/linux/sched/task.h |  1 +
 init/main.c                |  1 +
 kernel/fork.c              | 32 ++++++++++++++++++--------------
 3 files changed, 20 insertions(+), 14 deletions(-)
diff --git a/include/linux/sched/task.h b/include/linux/sched/task.h
index d6c4816..8431558 100644
--- a/include/linux/sched/task.h
+++ b/include/linux/sched/task.h
@@ -65,6 +65,7 @@ extern void sched_dead(struct task_struct *p);
 void __noreturn do_task_dead(void);
 void __noreturn make_task_dead(int signr);
 
+extern void mm_cache_init(void);
 extern void proc_caches_init(void);
 extern void fork_init(void);
 
diff --git a/init/main.c b/init/main.c
index aa21add..f1d1a54 100644
--- a/init/main.c
+++ b/init/main.c
@@ -860,6 +860,7 @@ static void __init mm_init(void)
 	/* Should be run after espfix64 is set up. */
 	pti_init();
 	kmsan_init_runtime();
+	mm_cache_init();
 }
 
 #ifdef CONFIG_RANDOMIZE_KSTACK_OFFSET
diff --git a/kernel/fork.c b/kernel/fork.c
index 08969f5..451ce80 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -3015,10 +3015,27 @@ static void sighand_ctor(void *data)
 	init_waitqueue_head(&sighand->signalfd_wqh);
 }
 
-void __init proc_caches_init(void)
+void __init mm_cache_init(void)
 {
 	unsigned int mm_size;
 
+	/*
+	 * The mm_cpumask is located at the end of mm_struct, and is
+	 * dynamically sized based on the maximum CPU number this system
+	 * can have, taking hotplug into account (nr_cpu_ids).
+	 */
+	mm_size = sizeof(struct mm_struct) + cpumask_size();
+
+	mm_cachep = kmem_cache_create_usercopy("mm_struct",
+			mm_size, ARCH_MIN_MMSTRUCT_ALIGN,
+			SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_ACCOUNT,
+			offsetof(struct mm_struct, saved_auxv),
+			sizeof_field(struct mm_struct, saved_auxv),
+			NULL);
+}
+
+void __init proc_caches_init(void)
+{
 	sighand_cachep = kmem_cache_create("sighand_cache",
 			sizeof(struct sighand_struct), 0,
 			SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_TYPESAFE_BY_RCU|
@@ -3036,19 +3053,6 @@ void __init proc_caches_init(void)
 			SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_ACCOUNT,
 			NULL);
 
-	/*
-	 * The mm_cpumask is located at the end of mm_struct, and is
-	 * dynamically sized based on the maximum CPU number this system
-	 * can have, taking hotplug into account (nr_cpu_ids).
-	 */
-	mm_size = sizeof(struct mm_struct) + cpumask_size();
-
-	mm_cachep = kmem_cache_create_usercopy("mm_struct",
-			mm_size, ARCH_MIN_MMSTRUCT_ALIGN,
-			SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_ACCOUNT,
-			offsetof(struct mm_struct, saved_auxv),
-			sizeof_field(struct mm_struct, saved_auxv),
-			NULL);
 	vm_area_cachep = KMEM_CACHE(vm_area_struct, SLAB_PANIC|SLAB_ACCOUNT);
 	mmap_init();
 	nsproxy_cache_init();
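Putting the pieces together: start_kernel() runs mm_init() long before it gets to proc_caches_init(), so with mm_cache_init() hooked into mm_init() anything in that window may already call mm_alloc(). Below is a minimal, hypothetical illustration of such an early user; early_mm and early_mm_setup_example() are invented names, and the real consumer is presumably a later patch in this series (e.g. switching poking_init() over to mm_alloc()).

#include <linux/cache.h>	/* __ro_after_init */
#include <linux/init.h>
#include <linux/kernel.h>	/* panic() */
#include <linux/mm_types.h>
#include <linux/sched/mm.h>	/* mm_alloc() */

/* Hypothetical early-boot user; name and placement are illustrative only. */
static struct mm_struct *early_mm __ro_after_init;

void __init early_mm_setup_example(void)
{
	/*
	 * Only valid once mm_init() has run mm_cache_init(); before this
	 * patch it would also have had to wait for proc_caches_init().
	 */
	early_mm = mm_alloc();
	if (!early_mm)
		panic("early_mm_setup_example: mm_alloc() failed");
}

Such a function would be called from start_kernel() somewhere after mm_init(), which is exactly the window this patch opens up.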