Subject: Re: Perf regression from scheduler load_balance rework in 5.5?

Hi,
On 2022/6/24 16:22, Vincent Guittot wrote:
> On Thu, 23 Jun 2022 at 21:50, David Chen <david.chen@nutanix.com> wrote:
>>
>> Hi,
>>
>> I'm working on upgrading our kernel from 4.14 to 5.10.
>> However, I'm seeing a performance regression when doing random reads from a Windows client
>> through smbd with a well-cached file.
>>
>> One thing I noticed is that on the new kernel, the smbd thread doing socket I/O tends to stay on
>> the same cpu core as the net_rx softirq, whereas in the old kernel it tends to be moved around
>> more randomly. And when they are on the same cpu, it tends to saturate the cpu and causes
>> performance to drop.
>>
>> For example, here's the duration (ns) the thread spent on each cpu, captured using bpftrace.
>> On 4.14:
>> @cputime[7]: 20741458382
>> @cputime[0]: 25219285005
>> @cputime[6]: 30892418441
>> @cputime[5]: 31032404613
>> @cputime[3]: 33511324691
>> @cputime[1]: 35564174562
>> @cputime[4]: 39313421965
>> @cputime[2]: 55779811909 (net_rx cpu)
>>
>> On 5.10:
>> @cputime[3]: 2150554823
>> @cputime[5]: 3294276626
>> @cputime[7]: 4277890448
>> @cputime[4]: 5094586003
>> @cputime[1]: 6058168291
>> @cputime[0]: 14688093441
>> @cputime[6]: 17578229533
>> @cputime[2]: 223473400411 (net_rx cpu)
>>
>> I also tried setting the cpu affinity of the smbd thread away from the net_rx cpu, and indeed
>> that seems to bring the performance on par with the old kernel.

I have been seeing the same problem for the past two weeks.

>>
>> I noticed that there was a scheduler load_balance rework in 5.5, so I ran the test on both 5.4
>> and 5.5, and it did show the behavior changed between them.
>
> Have you tested v5.18? Several improvements have happened since v5.5.
>
>>
>> Anyone know how to work around this?
>
> Have you enabled IRQ_TIME_ACCOUNTING?


CONFIG_IRQ_TIME_ACCOUNTING=y.

>
> When the time spent under interrupt becomes significant, the scheduler
> migrates the task to another cpu.


My board has two cpus, and I used iperf3 to test upload bandwidth; I saw the same situation:
the iperf3 thread runs on the same cpu as the NET_RX softirq.
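
If I understand Vincent's point correctly, with CONFIG_IRQ_TIME_ACCOUNTING the scheduler discounts
irq/softirq time from a cpu's remaining capacity, so the net_rx cpu should already look less
capable to load balancing. Here is a minimal, self-contained sketch of that arithmetic, modelled
on the v5.10 scheduler's scale_rt_capacity()/scale_irq_capacity(); all the utilization numbers
below are made up for illustration:

#include <stdio.h>

/*
 * Sketch of the capacity discount applied when IRQ_TIME_ACCOUNTING is on:
 * the capacity left after rt/dl utilization is scaled by (max - irq) / max,
 * as in the kernel's scale_irq_capacity().  Numbers are placeholders.
 */
static unsigned long scale_irq_capacity(unsigned long free,
					unsigned long irq,
					unsigned long max)
{
	return free * (max - irq) / max;
}

int main(void)
{
	unsigned long max = 1024;	/* arch_scale_cpu_capacity() */
	unsigned long rt_dl = 0;	/* rt/dl utilization: assume none */
	unsigned long irq = 400;	/* ~40% of the cpu in net_rx softirq */
	unsigned long free = max - rt_dl;

	printf("capacity left for CFS: %lu of %lu\n",
	       scale_irq_capacity(free, irq, max), max);
	return 0;
}

So a cpu spending ~40% of its time in softirq is reported at roughly 60% capacity, yet the task
still is not migrated away; the find_busiest_group() path below seems to explain why.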

After debugging in find_busiest_group(), I noticed that when a cpu (env->idle is CPU_IDLE or CPU_NEWLY_IDLE) tries to pull a task,
busiest->group_type == group_fully_busy, busiest->sum_h_nr_running == 1, and local->group_type == group_has_spare,
so the load balance fails in find_busiest_group(), as follows:

find_busiest_group():
	...
	if (busiest->group_type != group_overloaded) {
		...
		if (busiest->sum_h_nr_running == 1)
			goto out_balanced;	/* <---- load balance returns here */
		...
	}

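For anyone needing a stopgap, the affinity workaround David mentioned can be scripted with
sched_setaffinity(). A minimal sketch; the tid and cpu numbers are placeholders (cpu 2 taken from
David's figures), not values from this thread:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <sys/types.h>

/* Sketch: allow a thread to run on every cpu except the net_rx cpu. */
int main(void)
{
	pid_t tid = 1234;		/* smbd worker tid (placeholder) */
	int nr_cpus = 8;		/* placeholder cpu count */
	int net_rx_cpu = 2;		/* cpu handling NET_RX, per David */
	cpu_set_t set;

	CPU_ZERO(&set);
	for (int cpu = 0; cpu < nr_cpus; cpu++)
		if (cpu != net_rx_cpu)
			CPU_SET(cpu, &set);

	if (sched_setaffinity(tid, sizeof(set), &set))
		perror("sched_setaffinity");
	return 0;
}

The same thing can be done from the shell with taskset, e.g. taskset -pc 0,1,3-7 <tid>.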

Thanks,
Qiao


> Vincent
>>
>> Thanks,
>> David
