Subject: Re: [patch V6 12/37] x86/entry: Provide idtentry_entry/exit_cond_rcu()
On Tue, May 19, 2020 at 05:26:58PM -0700, Andy Lutomirski wrote:
> On Tue, May 19, 2020 at 2:20 PM Thomas Gleixner <tglx@linutronix.de> wrote:
> >
> > Andy Lutomirski <luto@kernel.org> writes:
> > > On Tue, May 19, 2020 at 1:20 PM Thomas Gleixner <tglx@linutronix.de> wrote:
> > >> Thomas Gleixner <tglx@linutronix.de> writes:
> > >> It's about this:
> > >>
> > >> rcu_nmi_enter()
> > >> {
> > >>         if (!rcu_is_watching()) {
> > >>                 make it watch;
> > >>         } else if (!in_nmi()) {
> > >>                 do_magic_nohz_dyntick_muck();
> > >>         }
> > >>
> > >> So if we make all irq/system vector entries conditional, then the
> > >> do_magic() never gets executed. After that I got lost...
> > >
> > > I'm also baffled by that magic, but I'm also not suggesting doing this
> > > to *all* entries -- just the not-super-magic ones that use
> > > idtentry_enter().
> > >
> > > Paul, what is this code actually trying to do?
> >
> > Citing Paul from IRC:
> >
> > "The way things are right now, you can leave out the rcu_irq_enter()
> > if this is not a nohz_full CPU.
> >
> > Or if this is a nohz_full CPU, and the tick is already
> > enabled, in that case you could also leave out the rcu_irq_enter().
> >
> > Or even if this is a nohz_full CPU and it does not have the tick
> > enabled, if it has been in the kernel less than a few tens of
> > milliseconds, still OK to avoid invoking rcu_irq_enter()
> >
> > But my guess is that it would be a lot simpler to just always call
> > it.
> >
> > Hope that helps.
>
> Maybe?
>
> Unless I've missed something, the effect here is that #PF hitting in
> an RCU-watching context will skip rcu_irq_enter(), whereas all IRQs
> (because you converted them) as well as other faults and traps will
> call rcu_irq_enter().
>
> Once upon a time, we did this horrible thing where, on entry from user
> mode, we would turn on interrupts while still in CONTEXT_USER, which
> means we could get an IRQ in an extended quiescent state. This means
> that the IRQ code had to end the EQS so that IRQ handlers could use
> RCU. But I killed this a few years ago -- x86 Linux now has a rule
> that, if IF=1, we are *not* in an EQS with the sole exception of the
> idle code.
>
> In my dream world, we would never ever get IRQs while in an EQS -- we
> would do MWAIT with IF=0 and we would exit the EQS before taking the
> interrupt. But I guess we still need to support HLT, which means we
> have this mess.
>
> But I still think we can plausibly get rid of the conditional.

You mean the conditional in rcu_nmi_enter()? In a NO_HZ_FULL=n system,
this becomes:

        if (!rcu_is_watching()) {
                make it watch;
        } else if (!in_nmi()) {
                instrumentation_begin();
                if (tick_nohz_full_cpu(rdp->cpu) && ...) {
                        do stuff
                }
                instrumentation_end();
        }

But tick_nohz_full_cpu() is compile-time-known false, so as long as the
compiler can ditch the instrumentation_begin() and instrumentation_end(),
the entire "else if" clause evaporates.

> If we
> get an IRQ or (egads!) a fault in idle context, we'll have
> !__rcu_is_watching(), but, AFAICT, we also have preemption off.

Or we could be early in the kernel-entry code or late in the kernel-exit
code, but as far as I know, preemption is disabled on those code paths.
As are interrupts, right? And interrupts are disabled on the portions
of the CPU-hotplug code where RCU is not watching, if I recall correctly.

I am guessing that interrupts from userspace are not at issue here, but
completeness and all that.

> So it
> should be okay to do rcu_irq_enter(). OTOH, if we get an IRQ or a
> fault anywhere else, then we either have a severe bug in the RCU code
> itself and the RCU code faulted (in which case we get what we deserve)
> or RCU is watching and all is well. This means that the rule will be
> that, if preemption is on, it's fine to schedule inside an
> idtentry_begin()/idtentry_end() pair.

On this, I must defer to you guys.

> The remaining bit is just the urgent thing, and I don't understand
> what's going on. Paul, could we split out the urgent logic all by
> itself so that the IRQ handlers could do rcu_poke_urgent()? Or am I
> entirely misunderstanding its purpose?

A nohz_full CPU does not enable the scheduling-clock interrupt upon
entry to the kernel. Normally, this is fine because that CPU will very
quickly exit back to nohz_full userspace execution, so that RCU will
see the quiescent state, either by sampling it directly or by deducing
the CPU's passage through that quiescent state by comparing with state
that was captured earlier. The grace-period kthread notices the lack
of a quiescent state and will eventually set ->rcu_urgent_qs to
trigger this code.

But if the nohz_full CPU stays in the kernel for an extended time,
perhaps due to OOM handling or due to processing of some huge I/O that
hits in-memory buffers/cache, then RCU needs some way of detecting
quiescent states on that CPU. This requires the scheduling-clock
interrupt to be alive and well.
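
For concreteness, the check under discussion looks roughly like the following
(a sketch based on the 5.7-era kernel/rcu/tree.c; the exact condition and
field names such as ->rcu_forced_tick are from that era and may differ):

        if (!rcu_is_watching()) {
                /* ... exit the extended quiescent state ... */
        } else if (!in_nmi()) {
                instrumentation_begin();
                if (tick_nohz_full_cpu(rdp->cpu) &&
                    READ_ONCE(rdp->rcu_urgent_qs) &&
                    !READ_ONCE(rdp->rcu_forced_tick)) {
                        /* The grace-period kthread flagged this CPU as
                         * holding up a grace period, so force the tick on
                         * to let RCU observe quiescent states while the
                         * CPU stays in the kernel. */
                        raw_spin_lock_rcu_node(rdp->mynode);
                        if (rdp->rcu_urgent_qs && !rdp->rcu_forced_tick) {
                                WRITE_ONCE(rdp->rcu_forced_tick, true);
                                tick_dep_set_cpu(rdp->cpu, TICK_DEP_BIT_RCU);
                        }
                        raw_spin_unlock_rcu_node(rdp->mynode);
                }
                instrumentation_end();
        }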

Are there other ways to get this done? But of course! RCU could
for example use smp_call_function_single() or use workqueues to force
execution onto that CPU and enable the tick that way. This gets a
little involved in order to avoid deadlock, but if the added check
in rcu_nmi_enter() is causing trouble, something can be arranged.
Though that something would cause more latency excursions than
does the current code.
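
As a rough sketch of that alternative (rcu_force_tick_ipi() below is a
hypothetical helper, not existing kernel code; smp_call_function_single()
and tick_dep_set_cpu() are the real interfaces it would lean on):

        /* Hypothetical helper: runs on the target CPU via IPI and turns its
         * scheduling-clock tick on so RCU can observe quiescent states there. */
        static void rcu_force_tick_ipi(void *unused)
        {
                tick_dep_set_cpu(smp_processor_id(), TICK_DEP_BIT_RCU);
        }

        /* The caller, say the grace-period kthread, would then do roughly:
         *
         *      smp_call_function_single(cpu, rcu_force_tick_ipi, NULL, 1);
         *
         * The deadlock care mentioned above is about the context this is
         * called from: locks held, interrupts disabled, and CPUs that may
         * be idle or offline. */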

Or did you have something else in mind?

Thanx, Paul
