Subject: Re: [PATCH v5 1/1] rcu: Simplify the code logic of rcu_init_nohz()
On Thu, Aug 25, 2022 at 05:23:11PM +0800, Zhen Lei wrote:
> When CONFIG_RCU_NOCB_CPU_DEFAULT_ALL=y or CONFIG_NO_HZ_FULL=y, additional
> CPUs need to be added to 'rcu_nocb_mask'. But 'rcu_nocb_mask' may not be
> available at that point because 'rcu_nocbs' was not specified, so check
> and initialize 'rcu_nocb_mask' before using it. The simplified code
> follows this logic directly; compared with the old implementation, it
> avoids unnecessary crossovers and is easier to understand.
>
> Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>

Much nicer, thank you!

As usual, I could not resist the urge to wordsmith. Could you please
check to make sure that I did not mess anything up?

Thanx, Paul

------------------------------------------------------------------------

commit 4ac3b3d1a19943b1522c0b1d0895aefbb80ec294
Author: Zhen Lei <thunder.leizhen@huawei.com>
Date: Thu Aug 25 17:23:11 2022 +0800

rcu: Simplify rcu_init_nohz() cpumask handling

In kernels built with either CONFIG_RCU_NOCB_CPU_DEFAULT_ALL=y or
CONFIG_NO_HZ_FULL=y, additional CPUs must be added to rcu_nocb_mask.
Except that kernels booted without the rcu_nocbs= kernel boot parameter
will not have allocated rcu_nocb_mask. And the current rcu_init_nohz()
function uses its need_rcu_nocb_mask and offload_all local variables to
track the rcu_nocb and nohz_full state.

But there is a much simpler approach, namely creating a cpumask pointer
to track the default and then using cpumask_available() to check the
rcu_nocb_mask state. This commit takes this approach, thereby simplifying
and shortening the rcu_init_nohz() function.

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>

diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
index 0a5f0ef414845..c8167be2288fa 100644
--- a/kernel/rcu/tree_nocb.h
+++ b/kernel/rcu/tree_nocb.h
@@ -1210,45 +1210,31 @@ EXPORT_SYMBOL_GPL(rcu_nocb_cpu_offload);
 void __init rcu_init_nohz(void)
 {
 	int cpu;
-	bool need_rcu_nocb_mask = false;
-	bool offload_all = false;
 	struct rcu_data *rdp;
+	const struct cpumask *cpumask = NULL;
 
 #if defined(CONFIG_RCU_NOCB_CPU_DEFAULT_ALL)
-	if (!rcu_state.nocb_is_setup) {
-		need_rcu_nocb_mask = true;
-		offload_all = true;
-	}
-#endif /* #if defined(CONFIG_RCU_NOCB_CPU_DEFAULT_ALL) */
-
-#if defined(CONFIG_NO_HZ_FULL)
-	if (tick_nohz_full_running && !cpumask_empty(tick_nohz_full_mask)) {
-		need_rcu_nocb_mask = true;
-		offload_all = false; /* NO_HZ_FULL has its own mask. */
-	}
-#endif /* #if defined(CONFIG_NO_HZ_FULL) */
+	cpumask = cpu_possible_mask;
+#elif defined(CONFIG_NO_HZ_FULL)
+	if (tick_nohz_full_running && !cpumask_empty(tick_nohz_full_mask))
+		cpumask = tick_nohz_full_mask;
+#endif
 
-	if (need_rcu_nocb_mask) {
+	if (cpumask) {
 		if (!cpumask_available(rcu_nocb_mask)) {
 			if (!zalloc_cpumask_var(&rcu_nocb_mask, GFP_KERNEL)) {
 				pr_info("rcu_nocb_mask allocation failed, callback offloading disabled.\n");
 				return;
 			}
 		}
+
+		cpumask_or(rcu_nocb_mask, rcu_nocb_mask, cpumask);
 		rcu_state.nocb_is_setup = true;
 	}
 
 	if (!rcu_state.nocb_is_setup)
 		return;
 
-#if defined(CONFIG_NO_HZ_FULL)
-	if (tick_nohz_full_running)
-		cpumask_or(rcu_nocb_mask, rcu_nocb_mask, tick_nohz_full_mask);
-#endif /* #if defined(CONFIG_NO_HZ_FULL) */
-
-	if (offload_all)
-		cpumask_setall(rcu_nocb_mask);
-
 	if (!cpumask_subset(rcu_nocb_mask, cpu_possible_mask)) {
 		pr_info("\tNote: kernel parameter 'rcu_nocbs=', 'nohz_full', or 'isolcpus=' contains nonexistent CPUs.\n");
 		cpumask_and(rcu_nocb_mask, cpu_possible_mask,
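
For anyone who would rather read the result than the diff, here is a
hand-condensed (and compile-untested) sketch of rcu_init_nohz() with this
patch applied. It is a readability aid only, not the verbatim post-patch
function: declarations used only by the unchanged tail of the function are
omitted, and that tail is summarized in the final comment rather than
reproduced.

/* Sketch only -- see the patch above for the authoritative change. */
void __init rcu_init_nohz(void)
{
	const struct cpumask *cpumask = NULL;

	/* Pick the default set of CPUs to offload, based on the kernel config. */
#if defined(CONFIG_RCU_NOCB_CPU_DEFAULT_ALL)
	cpumask = cpu_possible_mask;
#elif defined(CONFIG_NO_HZ_FULL)
	if (tick_nohz_full_running && !cpumask_empty(tick_nohz_full_mask))
		cpumask = tick_nohz_full_mask;	/* NO_HZ_FULL has its own mask. */
#endif

	if (cpumask) {
		/* Allocate rcu_nocb_mask unless "rcu_nocbs=" already did so. */
		if (!cpumask_available(rcu_nocb_mask)) {
			if (!zalloc_cpumask_var(&rcu_nocb_mask, GFP_KERNEL)) {
				pr_info("rcu_nocb_mask allocation failed, callback offloading disabled.\n");
				return;
			}
		}

		/* Fold the default CPUs into whatever "rcu_nocbs=" specified. */
		cpumask_or(rcu_nocb_mask, rcu_nocb_mask, cpumask);
		rcu_state.nocb_is_setup = true;
	}

	if (!rcu_state.nocb_is_setup)
		return;

	/*
	 * The remainder is unchanged by this patch: trim rcu_nocb_mask down
	 * to cpu_possible_mask if need be, then set up the per-CPU no-CBs
	 * state for each CPU in the resulting mask.
	 */
}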