Subject: Re: [PATCH v2] rcu: Rework tick dependency setting into rcu_exp_handler()
On Fri, Jan 06, 2023 at 02:42:59AM +0000, Zhang, Qiang1 wrote:
> On Thu, Jan 05, 2023 at 11:40:00AM +0800, Zqiang wrote:
> > Currently, when we first find that the expedited grace period has not
> > ended and a timeout has occurred, we set the tick dependency for CPUs
> > that have not yet reported a quiescent state in the rcu_node
> > structure's ->expmask. But to eliminate races between setting and
> > clearing the tick dependency, setting a CPU's tick dependency requires
> > holding the rcu_node structure's ->lock.
> >
> > This commit moves the tick dependency setting into rcu_exp_handler():
> > set the tick dependency when we fail to report a quiescent state, and
> > clear it in rcu_report_exp_rdp(). [Based on Frederic Weisbecker's
> > suggestion.]
> >
> > Signed-off-by: Zqiang <qiang1.zhang@intel.com>
> >
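As a point of reference, here is a minimal sketch of the approach that commit
message describes -- illustrative only, not the actual patch: rcu_exp_can_report_qs()
is a hypothetical stand-in for the real "can this CPU report a quiescent state
right now?" check, and all locking and ordering details are omitted.

	/* IPI handler for the expedited grace period (sketch). */
	static void rcu_exp_handler(void *unused)
	{
		struct rcu_data *rdp = this_cpu_ptr(&rcu_data);

		if (!rcu_exp_can_report_qs(rdp)) {	/* hypothetical helper */
			/*
			 * No quiescent state can be reported yet, so keep
			 * the scheduling-clock tick running on this CPU
			 * until one shows up.
			 */
			tick_dep_set_cpu(smp_processor_id(),
					 TICK_DEP_BIT_RCU_EXP);
			return;
		}
		rcu_report_exp_rdp(rdp);
	}

	/* Called once this CPU's expedited QS is finally reported (sketch). */
	static void rcu_report_exp_rdp(struct rcu_data *rdp)
	{
		/* This CPU no longer needs the tick on RCU-exp's behalf. */
		tick_dep_clear_cpu(rdp->cpu, TICK_DEP_BIT_RCU_EXP);
		/* ... then propagate the quiescent state up the rcu_node tree ... */
	}
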
> >First, a big "thank you" to you and Frederic for investigating this approach!
> >
> >So which is better, this patch or the one that I already have queued?
> >
> >The advantage of the patch of yours that I already have queued is that CPUs that respond in some other way within a millisecond do not get hit with an additional scheduling-clock interrupt.
> >
> >On the other hand, if the CPU goes through a quiescent state before the next scheduling-clock interrupt arrives, rcu_report_exp_cpu_mult() will shut down the tick before it happens. Plus if the CPU waits a full tick before reaching a quiescent state, then the tick_dep_set_cpu() called from
> >synchronize_rcu_expedited_wait() is going to send along an IPI anyway.
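For reference, the timeout-driven path mentioned above lives in
synchronize_rcu_expedited_wait(); heavily simplified (locking and error handling
omitted), it only forces the tick on the holdout CPUs after the initial wait has
timed out:

	/* After the first wait for the expedited GP has timed out ... */
	rcu_for_each_leaf_node(rnp) {
		unsigned long mask = READ_ONCE(rnp->expmask);

		for_each_leaf_node_cpu_mask(rnp, cpu, mask) {
			struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);

			if (rdp->rcu_forced_tick_exp)
				continue;	/* Tick already forced here. */
			rdp->rcu_forced_tick_exp = true;
			/* Force the tick on this slow-to-respond CPU. */
			tick_dep_set_cpu(cpu, TICK_DEP_BIT_RCU_EXP);
		}
	}
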
>
> Agreed, this new patch sets the tick dependency immediately when we can't report a quiescent state
> in rcu_exp_handler(), which seems a little too aggressive.
>
>
> >
> >Except that invoking tick_dep_set_cpu() on the CPU itself will also do an IPI from tick_dep_set_cpu() because of IRQ_WORK_INIT_HARD(), right?
> >Which means that the patch below gets us an extra self-IPI, right?
> >Or am I misreading the code?
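The extra self-IPI being asked about comes from the local-CPU kick inside
tick_nohz_dep_set_cpu(); roughly the following, paraphrased from
kernel/time/tick-sched.c with the preemption and NMI checks trimmed:

	void tick_nohz_dep_set_cpu(int cpu, enum tick_dep_bits bit)
	{
		struct tick_sched *ts = per_cpu_ptr(&tick_cpu_sched, cpu);

		/* Only the first setter of the bit needs to kick the CPU. */
		if (!atomic_fetch_or(BIT(bit), &ts->tick_dep_mask)) {
			if (cpu == smp_processor_id())
				tick_nohz_full_kick();		/* local irq_work */
			else
				tick_nohz_full_kick_cpu(cpu);	/* remote kick */
		}
	}

	/* The local kick queues hard irq_work, hence the self-IPI: */
	static DEFINE_PER_CPU(struct irq_work, nohz_full_kick_work) =
		IRQ_WORK_INIT_HARD(nohz_full_kick_func);
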
>
> Yes, this looks like it will trigger an additional IPI.
>
> >
> >In addition, doesn't tick_dep_clear_cpu() just clear a bit? Won't that mean that the next scheduling-clock interrupt will happen, just that the one after that won't? (Give or take kernel-to-user or kernel-to-idle transitions that might happen in the meantime.)
>
> Yes, tick_dep_clear_cpu() only clears a bit; the next scheduling-clock interrupt will still happen.
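The clear side really is just an atomic bit clear; roughly (again from
kernel/time/tick-sched.c, simplified):

	void tick_nohz_dep_clear_cpu(int cpu, enum tick_dep_bits bit)
	{
		struct tick_sched *ts = per_cpu_ptr(&tick_cpu_sched, cpu);

		/* No kick here: the tick simply stops re-arming itself. */
		atomic_andnot(BIT(bit), &ts->tick_dep_mask);
	}
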
>
> So I also want to know which one is better 😊?

Right, I may have misled you with this change. I missed the fact that nohz_full
CPUs are given a chance, for one jiffy, to report a QS before the tick is forced
there.

Sorry about that. Your first patch is still a good fix though!

Thanks!
