From: Ankur Arora <>
Subject: [PATCH 18/30] sched: prepare for lazy rescheduling in resched_curr()
Date: Mon, 12 Feb 2024 21:55:42 -0800
Handle NR_lazy in resched_curr() by registering an intent to reschedule at exit-to-user. Since lazy rescheduling is not imminent, skip the preempt folding and the resched IPI.
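To make the "preempt folding" above concrete, here is a minimal userspace model, not kernel code; it assumes the x86-style scheme where PREEMPT_NEED_RESCHED is kept as an inverted bit in the preempt count. It shows why skipping the fold for NR_lazy means the next preempt_enable() does not preempt; the lazy bit is instead acted on at exit-to-user (handled elsewhere in this series):

#include <stdbool.h>
#include <stdio.h>

#define PREEMPT_NEED_RESCHED	0x80000000u

/* Inverted encoding: bit clear == reschedule needed. */
static unsigned int preempt_count_raw = PREEMPT_NEED_RESCHED;

static void set_preempt_need_resched(void)
{
	preempt_count_raw &= ~PREEMPT_NEED_RESCHED;
}

static void preempt_disable(void)
{
	preempt_count_raw++;
}

/* True if this enable would preempt: count and fold both zero. */
static bool preempt_enable(void)
{
	return --preempt_count_raw == 0;
}

int main(void)
{
	preempt_disable();
	/* NR_lazy path: fold skipped, so no preemption at enable. */
	printf("lazy: preempt at enable? %d\n", preempt_enable());

	preempt_disable();
	/* NR_now path: fold done, so the very next enable preempts. */
	set_preempt_need_resched();
	printf("now:  preempt at enable? %d\n", preempt_enable());
	return 0;
}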
Also, update set_nr_and_not_polling() to handle NR_lazy. Note that there are no changes to set_nr_if_polling(), since lazy rescheduling is not meaningful for idle.
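For illustration, a minimal userspace model of the fetch_or() pattern used by set_nr_and_not_polling(); the flag values and the C11 atomic are stand-ins, not the kernel's definitions. A single atomic fetch-or both sets the need-resched bit and samples the polling state, so a CPU polling on its flags word is never sent a spurious IPI:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define _TIF_NEED_RESCHED	0x1u
#define _TIF_NEED_RESCHED_LAZY	0x2u
#define _TIF_POLLING_NRFLAG	0x4u

static atomic_uint ti_flags;	/* stands in for thread_info::flags */

static bool set_nr_and_not_polling(unsigned int nr_bit)
{
	return !(atomic_fetch_or(&ti_flags, nr_bit) & _TIF_POLLING_NRFLAG);
}

int main(void)
{
	/* Not polling: for NR_now the caller must send a resched IPI. */
	printf("IPI needed? %d\n", set_nr_and_not_polling(_TIF_NEED_RESCHED));

	/* Polling on the flags word: the target sees the bit, no IPI. */
	atomic_fetch_or(&ti_flags, _TIF_POLLING_NRFLAG);
	printf("IPI needed? %d\n", set_nr_and_not_polling(_TIF_NEED_RESCHED_LAZY));
	return 0;
}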
And finally, now that there are two need-resched bits, enforce a priority order when setting them: TIF_NEED_RESCHED always takes precedence over TIF_NEED_RESCHED_LAZY.
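The resulting state machine is small enough to model in userspace C; the sketch below mirrors the guard added to resched_curr() in the diff (types and bit values are illustrative, and the rq lock that serializes the real flag updates is elided):

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

#define TIF_NEED_RESCHED	0x1u
#define TIF_NEED_RESCHED_LAZY	0x2u

typedef enum { NR_now, NR_lazy } resched_t;

static unsigned int tif_flags;	/* stands in for thread_info::flags */

static unsigned int tif_bit(resched_t rs)
{
	return rs == NR_now ? TIF_NEED_RESCHED : TIF_NEED_RESCHED_LAZY;
}

static bool test_nr(resched_t rs)
{
	return tif_flags & tif_bit(rs);
}

/* Mirrors the early return added to resched_curr(). */
static bool want_resched(resched_t rs)
{
	/*
	 * NR_now already set: nothing more to do for either request.
	 * NR_lazy already set: only NR_now may go through, leaving
	 * both bits set simultaneously.
	 */
	if (test_nr(NR_now) || (rs == NR_lazy && test_nr(NR_lazy)))
		return false;

	tif_flags |= tif_bit(rs);
	return true;
}

int main(void)
{
	assert(want_resched(NR_lazy));	/* .       + NR_lazy		*/
	assert(!want_resched(NR_lazy));	/* NR_lazy + NR_lazy: no-op	*/
	assert(want_resched(NR_now));	/* NR_lazy + NR_now: both set	*/
	assert(!want_resched(NR_now));	/* NR_now set: nothing more	*/
	printf("priority order holds\n");
	return 0;
}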
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Originally-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/lkml/87jzshhexi.ffs@tglx/
Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
---
 kernel/sched/core.c | 41 +++++++++++++++++++++++++++++------------
 1 file changed, 29 insertions(+), 12 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 425e4f03e0af..7248d1dbdc14 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -899,14 +899,14 @@ static inline void hrtick_rq_init(struct rq *rq)
 
 #if defined(CONFIG_SMP) && defined(TIF_POLLING_NRFLAG)
 /*
- * Atomically set TIF_NEED_RESCHED and test for TIF_POLLING_NRFLAG,
+ * Atomically set TIF_NEED_RESCHED[_LAZY] and test for TIF_POLLING_NRFLAG,
  * this avoids any races wrt polling state changes and thereby avoids
  * spurious IPIs.
  */
-static inline bool set_nr_and_not_polling(struct task_struct *p)
+static inline bool set_nr_and_not_polling(struct task_struct *p, resched_t rs)
 {
 	struct thread_info *ti = task_thread_info(p);
-	return !(fetch_or(&ti->flags, _TIF_NEED_RESCHED) & _TIF_POLLING_NRFLAG);
+	return !(fetch_or(&ti->flags, _tif_resched(rs)) & _TIF_POLLING_NRFLAG);
 }
 
 /*
@@ -931,9 +931,9 @@ static bool set_nr_if_polling(struct task_struct *p)
 }
 
 #else
-static inline bool set_nr_and_not_polling(struct task_struct *p)
+static inline bool set_nr_and_not_polling(struct task_struct *p, resched_t rs)
 {
-	set_tsk_need_resched(p, NR_now);
+	set_tsk_need_resched(p, rs);
 	return true;
 }
 
@@ -1041,25 +1041,40 @@ void wake_up_q(struct wake_q_head *head)
 void resched_curr(struct rq *rq)
 {
 	struct task_struct *curr = rq->curr;
+	resched_t rs = NR_now;
 	int cpu;
 
 	lockdep_assert_rq_held(rq);
 
-	if (test_tsk_need_resched(curr, NR_now))
+	/*
+	 * TIF_NEED_RESCHED is the higher priority bit, so if it is already
+	 * set, nothing more to be done. So, the only combinations we want to
+	 * let in are:
+	 *
+	 *  - .       + (NR_now | NR_lazy)
+	 *  - NR_lazy + NR_now
+	 *
+	 * In the second case both flags would be set simultaneously.
+	 */
+	if (test_tsk_need_resched(curr, NR_now) ||
+	    (rs == NR_lazy && test_tsk_need_resched(curr, NR_lazy)))
 		return;
 
 	cpu = cpu_of(rq);
 
 	if (cpu == smp_processor_id()) {
-		set_tsk_need_resched(curr, NR_now);
-		set_preempt_need_resched();
+		set_tsk_need_resched(curr, rs);
+		if (rs == NR_now)
+			set_preempt_need_resched();
 		return;
 	}
 
-	if (set_nr_and_not_polling(curr))
-		smp_send_reschedule(cpu);
-	else
+	if (set_nr_and_not_polling(curr, rs)) {
+		if (rs == NR_now)
+			smp_send_reschedule(cpu);
+	} else {
 		trace_sched_wake_idle_without_ipi(cpu);
+	}
 }
 
 void resched_cpu(int cpu)
@@ -1154,7 +1169,7 @@ static void wake_up_idle_cpu(int cpu)
 	 * and testing of the above solutions didn't appear to report
 	 * much benefits.
 	 */
-	if (set_nr_and_not_polling(rq->idle))
+	if (set_nr_and_not_polling(rq->idle, NR_now))
 		smp_send_reschedule(cpu);
 	else
 		trace_sched_wake_idle_without_ipi(cpu);
@@ -6693,6 +6708,8 @@ static void __sched notrace __schedule(unsigned int sched_mode)
 	}
 
 	next = pick_next_task(rq, prev, &rf);
+
+	/* Clear both TIF_NEED_RESCHED, TIF_NEED_RESCHED_LAZY */
 	clear_tsk_need_resched(prev);
 	clear_preempt_need_resched();
 #ifdef CONFIG_SCHED_DEBUG
-- 
2.31.1