From: Paul E. McKenney <paulmck@kernel.org>
Date: Sun, 15 Dec 2019
Subject: Re: "rcu: React to callback overload by aggressively seeking quiescent states" hangs on boot
On Sun, Dec 15, 2019 at 11:18:43AM -0800, Dexuan-Linux Cui wrote:
> On Fri, Dec 13, 2019 at 10:41 PM Paul E. McKenney <paulmck@kernel.org> wrote:
> >
> > On Fri, Dec 13, 2019 at 06:11:16PM -0500, Qian Cai wrote:
> > >
> > >
> > > > On Dec 13, 2019, at 5:46 PM, Paul E. McKenney <paulmck@kernel.org> wrote:
> > > >
> > > > I am running this on a number of x86 systems, but will try it on a
> > >
> > > The config to reproduce includes several debugging options that might
> > > be required to recreate it.
> >
> > If you run without those debugging options, do you still see the hangs?
> > If not, please let me know which debugging options are involved.
> >
> > > > wider variety. If I cannot reproduce it, would you be willing to
> > > > run diagnostics?
> > >
> > > Yes.
> >
> > Very good! Let me see what I can put together. (No luck reproducing
> > at my end thus far.)
> >
> > > > Just to double-check... Are you running rcutorture built into the kernel?
> > > > (My guess is "no", but figured that I should ask.)
> > >
> > > No, as you can see from the config I linked in the original email.
> >
> > Fair point, and please accept my apologies for the pointless question.
> >
> > Thanx, Paul
>
> Hi,
> We're seeing the same hang issue with a recent Linux next-20191213
> kernel. If we revert the same commit 82150cb53dcb ("rcu: React to
> callback overload by aggressively seeking quiescent states"), the
> issue goes away.
>
> Note: we're running an x86-64 Linux VM on Hyper-V, and the torture
> test is not used:
>
> $ grep -i torture .config
> CONFIG_LOCK_TORTURE_TEST=m
> CONFIG_TORTURE_TEST=m
> # CONFIG_RCU_TORTURE_TEST is not set
>
> (FYI: the kernel config and the serial console log are attached).
>
> When the issue happens, I force a kernel panic via NMI several times,
> and I can see that rcu_gp_kthread hangs at various places, but it
> looks like all of them are in the loop below:
>
> (The first panic log is in the attachment)
> (gdb) l *(rcu_gp_kthread+0x703)
> 0xffffffff811128c3 is in rcu_gp_kthread (kernel/rcu/tree.c:1763).
> 1758             if (rnp == rdp->mynode)
> 1759                     needgp = __note_gp_changes(rnp, rdp) || needgp;
> 1760             /* smp_mb() provided by prior unlock-lock pair. */
> 1761             needgp = rcu_future_gp_cleanup(rnp) || needgp;
> 1762             // Reset overload indication for CPUs no longer overloaded
> 1763             for_each_leaf_node_cpu_mask(rnp, cpu, rnp->cbovldmask) {
> 1764                     rdp = per_cpu_ptr(&rcu_data, cpu);
> 1765                     check_cb_ovld_locked(rdp, rnp);
> 1766             }
> 1767             sq = rcu_nocb_gp_get(rnp);

This is consistent with what I saw in Qian Cai's report, FYI. So I
am very interested in learning whether the first patch in my reply [1]
helps you.

Thanx, Paul

[1] https://lore.kernel.org/lkml/20191215201646.GK2889@paulmck-ThinkPad-P72/
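For readers without the tree.c context: the loop at tree.c:1763 above walks
only those CPUs of a single leaf rcu_node structure whose bits are set in
that node's ->cbovldmask, rechecking each one's callback-overload state.
Below is a minimal user-space model of that iteration pattern, with
hypothetical names standing in for the kernel's
for_each_leaf_node_cpu_mask() and check_cb_ovld_locked(); it is a sketch of
the pattern, not the kernel code itself:

    #include <stdio.h>

    /* Model of a leaf node: it covers CPUs [grplo, grphi] and keeps a
     * bitmask of which of those CPUs are currently callback-overloaded. */
    struct leaf_node {
            int grplo;
            int grphi;
            unsigned long cbovldmask;   /* bit i set => CPU i overloaded */
    };

    /* Hypothetical stand-in for check_cb_ovld_locked(): re-evaluate one
     * CPU and clear its bit once it is no longer overloaded. */
    static void recheck_cb_ovld(struct leaf_node *rnp, int cpu)
    {
            int still_overloaded = 0;   /* assume the backlog has drained */

            if (!still_overloaded)
                    rnp->cbovldmask &= ~(1UL << cpu);
    }

    int main(void)
    {
            /* CPUs 1 and 3 of this 4-CPU leaf node start out overloaded. */
            struct leaf_node rnp = { .grplo = 0, .grphi = 3,
                                     .cbovldmask = 0xaUL };
            int cpu;

            /* The pattern from tree.c:1763: visit only the set bits in
             * the node's CPU range. */
            for (cpu = rnp.grplo; cpu <= rnp.grphi; cpu++) {
                    if (!(rnp.cbovldmask & (1UL << cpu)))
                            continue;
                    recheck_cb_ovld(&rnp, cpu);
                    printf("rechecked CPU %d, mask now 0x%lx\n",
                           cpu, rnp.cbovldmask);
            }
            return 0;
    }

A grace-period kthread that never exits this loop, as the NMI backtraces
suggest, would stall every subsequent grace period, which would match the
boot hang being reported.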
