From: Thomas Gleixner <>
Subject: Re: [for-next][PATCH 13/25] x86/mm/kmmio: Use rcu_read_lock_sched_notrace()
Date: Sun, 11 Dec 2022 00:30:36 +0100
On Sat, Dec 10 2022 at 13:34, Steven Rostedt wrote:
> On Sat, 10 Dec 2022 09:47:53 -0800
> "Paul E. McKenney" <paulmck@kernel.org> wrote:
>> This does mess with preempt_count() redundantly, but the overhead from
>> that should be way down in the noise.
>
> I was going to remove it, but then I realized that it would be a
> functional change, as from the comment above, it uses
> preempt_enable_no_resched(), for which there is no
> rcu_read_unlock_sched() variant.
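For reference, the sched-notrace RCU readers are thin wrappers around the
preempt count; roughly (simplified from include/linux/rcupdate.h, sparse
annotations omitted, details vary by kernel version):

        static inline void rcu_read_lock_sched_notrace(void)
        {
                preempt_disable_notrace();
        }

        static inline void rcu_read_unlock_sched_notrace(void)
        {
                /* Does the resched check, unlike preempt_enable_no_resched() */
                preempt_enable_notrace();
        }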
preempt_enable_no_resched() in this context is simply garbage.
preempt_enable_no_resched() tries to avoid the overhead of checking whether rescheduling is due after decrementing preempt_count(), because the code which uses it claims to know that it is _not_ the outermost one which brings the preempt count back to preemptible state.
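That check on the final decrement is all the two variants differ in; a
simplified sketch of the CONFIG_PREEMPTION case in include/linux/preempt.h
(exact form varies by version and config):

        #define preempt_enable()                                \
        do {                                                    \
                barrier();                                      \
                /* Decrement; if count hits zero, reschedule */ \
                if (unlikely(preempt_count_dec_and_test()))     \
                        __preempt_schedule();                   \
        } while (0)

        #define preempt_enable_no_resched()                     \
        do {                                                    \
                barrier();                                      \
                /* Decrement only, no resched check */          \
                preempt_count_dec();                            \
        } while (0)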
I concede that there are hot paths which actually can benefit, but this code has exactly _ZERO_ benefit from that. Taking that tracing exception and handling it is orders of magnitude more expensive than a regular preempt_enable().
So just get rid of it and don't proliferate cargo cult programming.
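I.e. pair the lock with its matching unlock and eat the check. An
illustrative sketch only, not the actual arch/x86/mm/kmmio.c hunk:

        rcu_read_lock_sched_notrace();
        /* ... handle the MMIO trace fault ... */
        rcu_read_unlock_sched_notrace();  /* resched check included */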
Thanks,
tglx