    Subject: Re: [RFC 06/10] rcu/hotplug: Make rcutree_dead_cpu() parallel
    On Wed, Aug 24, 2022 at 3:21 PM Paul E. McKenney <paulmck@kernel.org> wrote:
    >
    > On Wed, Aug 24, 2022 at 01:26:01PM -0400, Joel Fernandes wrote:
    > >
    > >
    > > On 8/24/2022 12:20 PM, Paul E. McKenney wrote:
    > > > On Wed, Aug 24, 2022 at 09:53:11PM +0800, Pingfan Liu wrote:
    > > >> On Tue, Aug 23, 2022 at 11:01 AM Paul E. McKenney <paulmck@kernel.org> wrote:
    > > >>>
    > > >>> On Tue, Aug 23, 2022 at 09:50:56AM +0800, Pingfan Liu wrote:
    > > >>>> On Sun, Aug 21, 2022 at 07:45:28PM -0700, Paul E. McKenney wrote:
    > > >>>>> On Mon, Aug 22, 2022 at 10:15:16AM +0800, Pingfan Liu wrote:
    > > >>>>>> In order to support parallel CPU offlining,
    > > >>>>>> rcu_state.n_online_cpus should be decremented with atomic_dec().
    > > >>>>>>
    > > >>>>>> Signed-off-by: Pingfan Liu <kernelfans@gmail.com>
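
    [ For context, the change in question is roughly of the following
      shape; this is an illustrative reconstruction, not the actual diff,
      and it assumes n_online_cpus is converted from int to atomic_t: ]

        /* kernel/rcu/tree.h */
        -       int n_online_cpus;      /* # CPUs online for RCU. */
        +       atomic_t n_online_cpus; /* # CPUs online for RCU. */

        /* kernel/rcu/tree.c, rcutree_dead_cpu(): a plain read-modify-write
         * is racy once several CPUs can go offline concurrently, so the
         * decrement becomes atomic: */
        -       WRITE_ONCE(rcu_state.n_online_cpus, rcu_state.n_online_cpus - 1);
        +       atomic_dec(&rcu_state.n_online_cpus);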
    > > >>>>>
    > > >>>>> I have to ask... What testing have you subjected this patch to?
    > > >>>>>
    > > >>>>
    > > >>>> This patch is part of [1]. The series aims to enable kexec-reboot in
    > > >>>> parallel on all CPUs. As a result, the involved RCU code is expected
    > > >>>> to support parallel CPU offlining.
    > > >>>
    > > >>> I understand (and even sympathize with) the expectation. But results
    > > >>> sometimes diverge from expectations. There have been implicit assumptions
    > > >>> in RCU about only one CPU going offline at a time, and I am not sure
    > > >>> that all of them have been addressed. Concurrent CPU onlining has
    > > >>> been looked at recently here:
    > > >>>
    > > >>> https://docs.google.com/document/d/1jymsaCPQ1PUDcfjIKm0UIbVdrJAaGX-6cXrmcfm0PRU/edit?usp=sharing
    > > >>>
    > > >>> You did use atomic_dec() to make the decrement of
    > > >>> rcu_state.n_online_cpus atomic, which is good. Did you look through
    > > >>> the rest of RCU's CPU-offline code paths and related code paths?
    > > >>
    > > >> I went through that code at a shallow level, especially each
    > > >> cpuhp_step hook in the RCU subsystem.
    > > >
    > > > And that is fine, at least as a first step.
    > > >
    > > >> But as you pointed out, there are implicit assumptions about only one
    > > >> CPU going offline at a time. I will chew over the Google doc you
    > > >> shared, and then come back with a final result.
    > > >
    > > > Boqun Feng, Neeraj Upadhyay, Uladzislau Rezki, and I took a quick look,
    > > > and rcu_boost_kthread_setaffinity() seems to need some help. As it
    > > > stands, it appears that concurrent invocations of this function from the
    > > > CPU-offline path will cause all but the last outgoing CPU's bit to be
    > > > (incorrectly) set in the cpumask_var_t passed to set_cpus_allowed_ptr().
    > > >
    > > > This should not be difficult to fix, for example, by maintaining a
    > > > separate per-leaf-rcu_node-structure bitmask of the concurrently outgoing
    > > > CPUs for that rcu_node structure. (Similar in structure to the
    > > > ->qsmask field.)
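
    [ A rough sketch of that suggestion; untested, and the field name
      (->offlmask) is made up here: ]

        /* Each outgoing CPU publishes itself, under the leaf rcu_node
         * structure's ->lock, before the boost-kthread affinity is
         * recomputed: */
        raw_spin_lock_irqsave_rcu_node(rnp, flags);
        rnp->offlmask |= rdp->grpmask;
        raw_spin_unlock_irqrestore_rcu_node(rnp, flags);

        /* rcu_boost_kthread_setaffinity() then excludes *all* concurrently
         * outgoing CPUs of this leaf, not just the caller's: */
        mask = rcu_rnp_online_cpus(rnp) & ~READ_ONCE(rnp->offlmask);
        for_each_leaf_node_possible_cpu(rnp, cpu)
                if (mask & leaf_node_cpu_bit(rnp, cpu))
                        cpumask_set_cpu(cpu, cm);

        /* The bit would be cleared again once the CPU is fully dead. */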
    > > >
    > > > There are probably more where that one came from. ;-)
    > >
    > > Should rcutree_dying_cpu()'s access to rnp->qsmask have a READ_ONCE()?
    > > I was thinking of grace-period initialization or QS-reporting paths
    > > racing with it. It's just tracing, still :)
    >
    > Looks like it should be regardless of Pingfan's patches, given that
    > the grace-period kthread might report a quiescent state concurrently.

    Thanks for confirming; I'll queue it into my next revision of the series.
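
    That is, something like this (untested):

        -       blkd = !!(rnp->qsmask & rdp->grpmask);
        +       blkd = !!(READ_ONCE(rnp->qsmask) & rdp->grpmask);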

    - Joel
