Subject: Re: [PATCH] sched: consider WF_SYNC to find idle siblings
On Mon, Oct 31, 2022 at 5:57 AM Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Thu, Oct 27, 2022 at 01:26:03PM -0700, Andrei Vagin wrote:
> > From: Andrei Vagin <avagin@gmail.com>
> >
> > WF_SYNC means that the waker goes to sleep after wakeup, so the current
> > cpu can be considered idle if the waker is the only process that is
> > running on it.
> >
> > The perf pipe benchmark shows that this change reduces the average time
> > per operation from 8.8 usecs/op to 3.7 usecs/op.
> >
> > Before:
> > $ ./tools/perf/perf bench sched pipe
> > # Running 'sched/pipe' benchmark:
> > # Executed 1000000 pipe operations between two processes
> >
> > Total time: 8.813 [sec]
> >
> > 8.813985 usecs/op
> > 113456 ops/sec
> >
> > After:
> > $ ./tools/perf/perf bench sched pipe
> > # Running 'sched/pipe' benchmark:
> > # Executed 1000000 pipe operations between two processes
> >
> > Total time: 3.743 [sec]
> >
> > 3.743971 usecs/op
> > 267096 ops/sec
>
> But what, if anything, does it do for the myriad of other benchmarks we
> run?
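
To make the idea concrete, here is a small, runnable user-space sketch of
the decision described in the commit message above. It is only a toy model:
the struct, field, and function names are illustrative and do not correspond
to the kernel's actual data structures or to the patch itself.

/*
 * Toy model of the wakeup decision the commit message describes:
 * with a WF_SYNC-style wakeup, if the waker is the only runnable task
 * on the current CPU, that CPU can be treated as idle when placing
 * the wakee. All names here are hypothetical.
 */
#include <stdbool.h>
#include <stdio.h>

struct cpu_state {
	int id;
	int nr_running;	/* runnable tasks currently on this CPU */
};

/* Would this CPU be considered effectively idle for the wakee? */
static bool sync_waker_cpu_is_idle(const struct cpu_state *cur, bool wf_sync)
{
	/*
	 * A sync wakeup means the waker goes to sleep right after waking
	 * the wakee, so if it is the only task running here, the CPU is
	 * effectively idle from the wakee's point of view.
	 */
	return wf_sync && cur->nr_running == 1;
}

int main(void)
{
	struct cpu_state cur = { .id = 0, .nr_running = 1 };

	printf("sync wakeup, waker alone: %d\n",
	       sync_waker_cpu_is_idle(&cur, true));	/* 1: place wakee here */

	cur.nr_running = 3;
	printf("sync wakeup, busy CPU:    %d\n",
	       sync_waker_cpu_is_idle(&cur, true));	/* 0: look elsewhere */

	cur.nr_running = 1;
	printf("non-sync wakeup:          %d\n",
	       sync_waker_cpu_is_idle(&cur, false));	/* 0: no sync hint */
	return 0;
}
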

I've run this set of benchmarks:
* perf bench sched messaging
* perf bench epoll all
* perf bench futex all
* schbench
* tbench
* kernel compilation

Results look the same with and without this change for all benchmarks except
tbench. tbench shows improvements when the number of processes is less than
the number of CPUs.

Here are the results from my test host with 8 CPUs.

$ tbench_srv & tbench -t 15 1 127.0.0.1
Before: Throughput 260.498 MB/sec 1 clients 1 procs max_latency=1.301 ms
After: Throughput 462.047 MB/sec 1 clients 1 procs max_latency=1.066 ms

$ tbench_srv & tbench -t 15 4 127.0.0.1
Before: Throughput 733.44 MB/sec 4 clients 4 procs max_latency=0.935 ms
After: Throughput 1778.94 MB/sec 4 clients 4 procs max_latency=0.882 ms

$ tbench_srv & tbench -t 15 8 127.0.0.1
Before: Throughput 1965.41 MB/sec 8 clients 8 procs max_latency=2.145 ms
After: Throughput 2002.96 MB/sec 8 clients 8 procs max_latency=1.881 ms

$ tbench_srv & tbench -t 15 32 127.0.0.1
Before: Throughput 1881.79 MB/sec 32 clients 32 procs max_latency=16.365 ms
After: Throughput 1891.87 MB/sec 32 clients 32 procs max_latency=4.050 ms

Let me know if you want to see results for any other specific benchmark.

Thanks,
Andrei
