Subject: Re: [PATCH 00/30] PREEMPT_AUTO: support lazy rescheduling

Shrikanth Hegde <sshegde@linux.ibm.com> writes:

> On 4/23/24 9:43 PM, Linus Torvalds wrote:
>> On Tue, 23 Apr 2024 at 08:23, Shrikanth Hegde <sshegde@linux.ibm.com> wrote:
>>>
>>>
>>> Are these the only arch bits that need to be defined? Am I missing something very
>>> basic here? I will try to debug this further. Any inputs?
>>
>> I don't think powerpc uses the generic *_exit_to_user_mode() helper
>> functions, so you'll need to also add that logic to the low-level
>> powerpc code.
>>
>> IOW, on x86, with this patch series, patch 06/30 did this:
>>
>> -	if (ti_work & _TIF_NEED_RESCHED)
>> +	if (ti_work & (_TIF_NEED_RESCHED | _TIF_NEED_RESCHED_LAZY))
>> 		schedule();
>>
>> in kernel/entry/common.c exit_to_user_mode_loop().
>>
>> But that works on x86 because it uses the irqentry_exit_to_user_mode().
>>
>> On PowerPC, I think you need to at least fix up
>>
>> interrupt_exit_user_prepare_main()
>>
>> similarly (and any other paths like that - I used to know the powerpc
>> code, but that was long long LOOONG ago).
>>
>> Linus
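
[ For anyone following along: with the series applied, the generic loop
  Linus is referring to ends up looking roughly like the sketch below.
  This is a heavily trimmed version of exit_to_user_mode_loop() in
  kernel/entry/common.c, with everything except the resched handling
  elided. ]

static unsigned long exit_to_user_mode_loop(struct pt_regs *regs,
					    unsigned long ti_work)
{
	/* Keep looping until no exit work (resched, signals, ...) is pending. */
	while (ti_work & EXIT_TO_USER_MODE_WORK) {
		local_irq_enable_exit_to_user(ti_work);

		/*
		 * Patch 06/30: the lazy bit now also forces a reschedule
		 * here, i.e. at the latest before returning to userspace.
		 */
		if (ti_work & (_TIF_NEED_RESCHED | _TIF_NEED_RESCHED_LAZY))
			schedule();

		/* ... signal delivery and other notify-resume work elided ... */

		local_irq_disable_exit_to_user();
		ti_work = read_thread_flags();
	}

	return ti_work;
}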
>
> Thank you Linus for the pointers. That indeed did the trick.
>
> diff --git a/arch/powerpc/kernel/interrupt.c b/arch/powerpc/kernel/interrupt.c
> index eca293794a1e..f0f38bf5cea9 100644
> --- a/arch/powerpc/kernel/interrupt.c
> +++ b/arch/powerpc/kernel/interrupt.c
> @@ -185,7 +185,7 @@ interrupt_exit_user_prepare_main(unsigned long ret, struct pt_regs *regs)
> 	ti_flags = read_thread_flags();
> 	while (unlikely(ti_flags & (_TIF_USER_WORK_MASK & ~_TIF_RESTORE_TM))) {
> 		local_irq_enable();
> -		if (ti_flags & _TIF_NEED_RESCHED) {
> +		if (ti_flags & (_TIF_NEED_RESCHED | _TIF_NEED_RESCHED_LAZY)) {
> 			schedule();
> 		} else {
>
>
> By adding the LAZY check in interrupt_exit_user_prepare_main(), the softlockup is no longer
> seen, and hackbench results are more or less the same on the smaller system (96 CPUs).

Great. I'm guessing these tests were run in voluntary preemption mode
(under PREEMPT_AUTO).

If you haven't, could you also try full preemption? There you should see
identical results unless something is horribly wrong.

> However, I still see a 20-50%
> regression on the larger system (320 CPUs). I will continue to debug why.

Could you try this patch? This is needed because PREEMPT_AUTO turns on
CONFIG_PREEMPTION, but not CONFIG_PREEMPT:

diff --git a/arch/powerpc/kernel/interrupt.c b/arch/powerpc/kernel/interrupt.c
index eca293794a1e..599410050f6b 100644
--- a/arch/powerpc/kernel/interrupt.c
+++ b/arch/powerpc/kernel/interrupt.c
@@ -396,7 +396,7 @@ notrace unsigned long interrupt_exit_kernel_prepare(struct pt_regs *regs)
 		/* Returning to a kernel context with local irqs enabled. */
 		WARN_ON_ONCE(!(regs->msr & MSR_EE));
 again:
-		if (IS_ENABLED(CONFIG_PREEMPT)) {
+		if (IS_ENABLED(CONFIG_PREEMPTION)) {
 			/* Return to preemptible kernel context */
 			if (unlikely(read_thread_flags() & _TIF_NEED_RESCHED)) {
 				if (preempt_count() == 0)
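
To spell out why this matters: with PREEMPT_AUTO the kernel is built with
CONFIG_PREEMPTION=y but CONFIG_PREEMPT=n, so in the unpatched code the whole
preemption block on the return-to-kernel path compiles out. Roughly (an
illustration of the IS_ENABLED() behaviour, not part of the patch):

	if (IS_ENABLED(CONFIG_PREEMPT)) {	/* 0 under PREEMPT_AUTO: dead code */
		/* preempt_schedule_irq() never runs here */
	}

	if (IS_ENABLED(CONFIG_PREEMPTION)) {	/* 1 under PREEMPT_AUTO */
		/* kernel preemption on interrupt exit actually happens */
	}

So without the change, a PREEMPT_AUTO kernel never preempts when an interrupt
returns to kernel context, which could plausibly account for part of the
regression you see on the larger machine.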

--
ankur
