Subject: Re: [PATCH v2 3/7] cpu/hotplug: Add dynamic parallel bringup states before CPUHP_BRINGUP_CPU

On Wed, 2021-12-15 at 11:10 +0000, Mark Rutland wrote:
> On Tue, Dec 14, 2021 at 08:32:29PM +0000, David Woodhouse wrote:
> > On Tue, 2021-12-14 at 14:24 +0000, Mark Rutland wrote:
> > > On Tue, Dec 14, 2021 at 12:32:46PM +0000, David Woodhouse wrote:
> > > > From: David Woodhouse <dwmw@amazon.co.uk>
> > > >
> > > > If the platform registers these states, bring all CPUs to each registered
> > > > state in turn, before the final bringup to CPUHP_BRINGUP_CPU. This allows
> > > > the architecture to parallelise the slow asynchronous tasks like sending
> > > > INIT/SIPI and waiting for the AP to come to life.
> > > >
> > > > There is a subtlety here: even with an empty CPUHP_BP_PARALLEL_DYN step,
> > > > this means that *all* CPUs are brought through the prepare states and to
> > > > CPUHP_BP_PREPARE_DYN before any of them are taken to CPUHP_BRINGUP_CPU
> > > > and then are allowed to run for themselves to CPUHP_ONLINE.
> > > >
> > > > So any combination of prepare/start calls which depend on A-B ordering
> > > > for each CPU in turn, such as the X2APIC code which used to allocate a
> > > > cluster mask 'just in case' and store it in a global variable in the
> > > > prep stage, then potentially consume that preallocated structure from
> > > > the AP and set the global pointer to NULL to be reallocated in
> > > > CPUHP_X2APIC_PREPARE for the next CPU... would explode horribly.
> > > >
> > > > We believe that X2APIC was the only such case, for x86. But this is why
> > > > it remains an architecture opt-in. For now.
> > >
> > > It might be worth elaborating with a non-x86 example, e.g.
> > >
> > > > We believe that X2APIC was the only such case, for x86. Other architectures
> > > > have similar requirements with global variables used during bringup (e.g.
> > > > `secondary_data` on arm/arm64), so architectures must opt-in for now.
> > >
> > > ... so that we have a specific example of how unconditionally enabling this for
> > > all architectures would definitely break things today.
> >
> > I do not have such an example, and I do not know that it would
> > definitely break things to turn it on for all architectures today.
> >
> > The x2apic one is an example of why it *might* break random
> > architectures and thus why it needs to be an architecture opt-in.
>
> Ah; I had thought we did the `secondary_data` setup in a PREPARE step, and
> hence it was a comparable example, but I was mistaken. Sorry for the noise!
>

Right, that's entirely within your __cpu_up(). You can stare at
Thomas's patch for inspiration on how to cope with that one.

In arch/arm64/kernel/smp.c you have a comment saying

* as from 2.5, kernels no longer have an init_tasks structure
* so we need some other way of telling a new secondary core
* where to place its SVC stack

On x86, the idle task pointer is in the per_cpu data. The real-mode
bringup now starts with the CPU's APIC ID (which it can get from CPUID),
looks that up in the cpuid_to_apicid[] array to find the CPU#, then
finds its own per_cpu data and gets everything else it needs
(including the initial stack) from there.
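
To illustrate the lookup (just a sketch, not the actual trampoline or
startup code; cpuid_to_apicid[] and nr_cpu_ids are real, the wrapper
function and its name are made up here):

	/*
	 * Illustrative only: find our logical CPU number from the
	 * APIC ID we read out of CPUID.
	 */
	static int find_cpunr_by_apicid(u32 apicid)
	{
		unsigned int cpu;

		for (cpu = 0; cpu < nr_cpu_ids; cpu++) {
			if (cpuid_to_apicid[cpu] == apicid)
				return cpu;	/* our logical CPU number */
		}
		return -1;
	}

With the CPU number in hand, the AP can locate its own per_cpu area and
pull the idle task / initial stack out of it, instead of relying on a
global variable handed over by the BP for each CPU in turn.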

> > > FWIW, that's something I would like to cleanup for arm64 for general
> > > robustness, and if that would make it possible for us to have parallel bringup
> > > in future that would be a nice bonus.
> >
> > Yes. But although I lay the groundwork here, the arch can't *actually*
> > do parallel bringup without some arch-specific work, so auditing the
> > pre-bringup states is the easy part. :)
>
> Sure; that was trying to be a combination of:
>
> * This looks nice, I'd like to use this (eventually) on arm64.
>
> * I'm aware of some arm64-specific groundwork we need to do before arm64 can
> use this.
>
> So I think we're agreed. :)

I'd love to have at least one more architecture come along for the ride
as I do the next step. After this series, the largest chunk of time
seems to be spent waiting for each AP as they transition to
CPUHP_AP_ONLINE_IDLE and then all the way to CPUHP_ONLINE.

So I'm going to look at making bringup_nonboot_cpus() prod *all* the
APs to move to CPUHP_AP_ONLINE_IDLE without waiting for them to get
there. Then do another pass waiting for that and prodding them to move
to CPUHP_ONLINE. And then do a final pass of waiting for them to have
got *there*.
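
Roughly something like this (very much a sketch; cpuhp_kick_ap_nowait()
and cpuhp_wait_for_state() are made-up names standing in for whatever
the real mechanism ends up being):

	static void __init bringup_nonboot_cpus_in_passes(unsigned int max_cpus)
	{
		static cpumask_t kicked;	/* APs we started in pass 1 */
		unsigned int cpu;

		/* Pass 1: prod each AP towards CPUHP_AP_ONLINE_IDLE, don't wait */
		for_each_present_cpu(cpu) {
			if (num_online_cpus() + cpumask_weight(&kicked) >= max_cpus)
				break;
			if (!cpu_online(cpu)) {
				cpuhp_kick_ap_nowait(cpu, CPUHP_AP_ONLINE_IDLE);
				cpumask_set_cpu(cpu, &kicked);
			}
		}

		/* Pass 2: wait for ONLINE_IDLE, then prod each on to CPUHP_ONLINE */
		for_each_cpu(cpu, &kicked) {
			cpuhp_wait_for_state(cpu, CPUHP_AP_ONLINE_IDLE);
			cpuhp_kick_ap_nowait(cpu, CPUHP_ONLINE);
		}

		/* Pass 3: finally wait for all of them to actually get there */
		for_each_cpu(cpu, &kicked)
			cpuhp_wait_for_state(cpu, CPUHP_ONLINE);
	}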


> > + int n = setup_max_cpus - num_online_cpus();
> > +
> > + /* ∀ parallel pre-bringup state, bring N CPUs to it */
>
> I see you have a fancy maths keyboard. ;)

Nah, standard UK layout keyboard. I just happen to remember U+2200 as
it's *right* at the beginning of the mathematical symbols block and is
fairly easy to type ;)

> It might be worth using a few more words here for clarity, e.g.
>
> /*
> * Bring all nonboot CPUs through each pre-bringup state in turn
> */

But it isn't *all* nonboot CPUs; it really is only up to N of them.
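
i.e. something along these lines (paraphrased rather than quoted from
the patch; CPUHP_BP_PARALLEL_DYN_END is assumed as the name of the last
dynamic parallel state):

	int n = setup_max_cpus - num_online_cpus();
	unsigned int cpu;
	int i;

	/* ∀ parallel pre-bringup state, bring (at most) N CPUs to it */
	for (i = CPUHP_BP_PARALLEL_DYN; i <= CPUHP_BP_PARALLEL_DYN_END; i++) {
		int remaining = n;

		for_each_present_cpu(cpu) {
			if (cpu_online(cpu))
				continue;
			if (remaining-- <= 0)
				break;
			cpu_up(cpu, i);		/* take this CPU to state i */
		}
	}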