From: Andrei Vagin <avagin@gmail.com>
Subject: Re: [PATCH] sched: consider WF_SYNC to find idle siblings
On Tue, Nov 1, 2022 at 2:42 AM Mel Gorman <mgorman@suse.de> wrote:
>
> On Thu, Oct 27, 2022 at 01:26:03PM -0700, Andrei Vagin wrote:
> > From: Andrei Vagin <avagin@gmail.com>
> >
> > WF_SYNC means that the waker goes to sleep after wakeup, so the current
> > cpu can be considered idle if the waker is the only process that is
> > running on it.
> >
> > The perf pipe benchmark shows that this change reduces the average time
> > per operation from 8.8 usecs/op to 3.7 usecs/op.
> >
> > Before:
> > $ ./tools/perf/perf bench sched pipe
> > # Running 'sched/pipe' benchmark:
> > # Executed 1000000 pipe operations between two processes
> >
> > Total time: 8.813 [sec]
> >
> > 8.813985 usecs/op
> > 113456 ops/sec
> >
> > After:
> > $ ./tools/perf/perf bench sched pipe
> > # Running 'sched/pipe' benchmark:
> > # Executed 1000000 pipe operations between two processes
> >
> > Total time: 3.743 [sec]
> >
> > 3.743971 usecs/op
> > 267096 ops/sec
> >
>
> The WF_SYNC hint is unreliable, as the waking process does not always
> go to sleep immediately. It works well for a strictly synchronous
> benchmark like the pipe benchmark, but it does not work out as well for
> networking, which can use WF_SYNC for wakeups even when multiple tasks
> are being woken up or the waker does not go to sleep because there is
> sufficient inbound traffic to keep it awake.
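
For background, the sync hint comes from the *_sync wakeup variants. A
simplified sketch of the pipe case (based on fs/pipe.c and the scheduler
wakeup path; not the exact upstream call chain):

	/* pipe_write(): the writer wakes the reader with a sync hint,
	 * suggesting that it is about to block itself: */
	wake_up_interruptible_sync_poll(&pipe->rd_wait, EPOLLIN | EPOLLRDNORM);

	/* ... the _sync variant eventually reaches the scheduler as ... */
	try_to_wake_up(reader, TASK_INTERRUPTIBLE, WF_SYNC);

Networking uses the same _sync helpers even when the waker keeps
running, which is where the hint becomes over-optimistic.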

This change should work fine when we wake up multiple tasks. If the waker
doesn't go to sleep, that sounds like a misuse of WF_SYNC. For example,
wake_affine_idle contains the same check as the one introduced in this
patch. At first glance, wake_affine_weight handles WF_SYNC incorrectly in
this case too.
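
For reference, the existing check in wake_affine_idle() in
kernel/sched/fair.c looks roughly like this (simplified, not the exact
upstream code; this patch applies the same idea on the idle-sibling
search path):

	static int wake_affine_idle(int this_cpu, int prev_cpu, int sync)
	{
		...
		/*
		 * If the waker is the only runnable task on this_cpu and has
		 * hinted via WF_SYNC that it will sleep right after the wakeup,
		 * this_cpu can be treated as idle and the wakee placed on it.
		 */
		if (sync && cpu_rq(this_cpu)->nr_running == 1)
			return this_cpu;
		...
	}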

As for benchmarks, tbench shows much better numbers with this change:

$ tbench_srv & tbench -t 15 4 127.0.0.1
Before: Throughput 733.44 MB/sec 4 clients 4 procs max_latency=0.935 ms
After: Throughput 1778.94 MB/sec 4 clients 4 procs max_latency=0.882 ms

I know it is just another synchronous benchmark...

I am working on the synchronous mode of seccomp user notify [1]. In the
first two versions, I used the WF_CURRENT_CPU [2] flag that was borrowed
from the umcg patchset [3]. But when I was preparing the third version of
the patchset, I wondered why WF_SYNC didn't work in this case and ended
up with this patch. For the seccomp patchset, fast synchronous context
switches are the most critical part, so any advice on how to do that
properly is welcome.

[1] https://lore.kernel.org/lkml/20221020011048.156415-1-avagin@gmail.com/T/
[2] https://lore.kernel.org/lkml/20221020011048.156415-1-avagin@gmail.com/T/#m8a597d43764aa8ded2788ea7ce4276f9045668d1
[3] https://lkml.iu.edu/hypermail/linux/kernel/2111.0/04473.html

Thanks,
Andrei
> There used to be
> an attempt to track how accurate WF_SYNC was, using avg_overlap I think,
> but it was ultimately removed.
>
> --
> Mel Gorman
> SUSE Labs
