Subject: Re: [PATCH v2] tools/memory-model: Add extra ordering for locks and remove it for ordinary release/acquire
Linus Torvalds <torvalds@linux-foundation.org> writes:
> On Mon, Jul 16, 2018 at 7:40 AM Michael Ellerman <mpe@ellerman.id.au> wrote:
...
>> I guess arguably it's not a very macro benchmark, but we have a
>> context_switch benchmark in the tree[1] which we often use to tune
>> things, and it degrades badly. It just spins up two threads and has them
>> ping-pong using yield.
>
> I hacked that up to run on x86, and it only is about 5% locking
> overhead in my profiles. It's about 18% __switch_to, and a lot of
> system call entry/exit, but not a lot of locking.
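
(For anyone following along, the benchmark is roughly the shape of the
sketch below. This is not the actual in-tree selftest, just a minimal
illustration; the choice of CPU 0 and the racy counter are made up for
the example.)

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

/* Racy shared counter; close enough for a sketch. */
static volatile unsigned long iterations;

static void pin_to_cpu(int cpu)
{
        cpu_set_t set;

        CPU_ZERO(&set);
        CPU_SET(cpu, &set);
        if (sched_setaffinity(0, sizeof(set), &set))
                perror("sched_setaffinity");
}

static void *yielder(void *arg)
{
        /* Both threads on one CPU, so every sched_yield() is a switch. */
        pin_to_cpu(0);

        for (;;) {
                sched_yield();
                iterations++;
        }
        return NULL;
}

int main(void)
{
        pthread_t a, b;

        pthread_create(&a, NULL, yielder, NULL);
        pthread_create(&b, NULL, yielder, NULL);

        for (;;) {
                unsigned long prev = iterations;

                sleep(1);
                printf("%lu yields/sec\n", iterations - prev);
        }
        return 0;
}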

Interesting. I don't see anything as high as 18%; it's more spread out:

  7.81%  context_switch  [kernel.kallsyms]  [k] cgroup_rstat_updated
  7.60%  context_switch  [kernel.kallsyms]  [k] system_call_exit
  5.91%  context_switch  [kernel.kallsyms]  [k] __switch_to
  5.69%  context_switch  [kernel.kallsyms]  [k] __sched_text_start
  5.61%  context_switch  [kernel.kallsyms]  [k] _raw_spin_lock
  4.15%  context_switch  [kernel.kallsyms]  [k] system_call
  3.76%  context_switch  [kernel.kallsyms]  [k] finish_task_switch

And it doesn't change much before/after the spinlock change.

(I should work out how to turn that cgroup stuff off.)

I tried uninlining spin_unlock() and that makes it a bit clearer.
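
(Purely as an illustration of what I mean by uninlining, not the actual
arch code: the unlock helper is normally a static inline, so its cycles
get charged to whoever called spin_unlock(); forcing it out of line
gives it its own symbol in the profile.)

static noinline void arch_spin_unlock(arch_spinlock_t *lock)
{
        /*
         * Stand-in body: a release-ordered store of 0 to the lock word.
         * The lwsync vs sync comparison further down is about the
         * barrier used in this path.
         */
        smp_store_release(&lock->slock, 0);
}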

Before:
  9.67%  context_switch  [kernel.kallsyms]  [k] _raw_spin_lock
  7.74%  context_switch  [kernel.kallsyms]  [k] cgroup_rstat_updated
  7.39%  context_switch  [kernel.kallsyms]  [k] system_call_exit
  5.84%  context_switch  [kernel.kallsyms]  [k] __sched_text_start
  4.83%  context_switch  [kernel.kallsyms]  [k] __switch_to
  4.08%  context_switch  [kernel.kallsyms]  [k] system_call
  <snip 16 lines>
  1.24%  context_switch  [kernel.kallsyms]  [k] arch_spin_unlock   <--

After:
  8.69%  context_switch  [kernel.kallsyms]  [k] _raw_spin_lock
  7.01%  context_switch  [kernel.kallsyms]  [k] cgroup_rstat_updated
  6.76%  context_switch  [kernel.kallsyms]  [k] system_call_exit
  5.59%  context_switch  [kernel.kallsyms]  [k] arch_spin_unlock   <--
  5.10%  context_switch  [kernel.kallsyms]  [k] __sched_text_start
  4.36%  context_switch  [kernel.kallsyms]  [k] __switch_to
  3.80%  context_switch  [kernel.kallsyms]  [k] system_call


I was worried the spectre/meltdown mitigations might be confusing things,
but they don't seem to be: with the mitigations off the raw numbers are
higher, but the delta is about the same in percentage terms:

        | lwsync/lwsync | lwsync/sync |     Change | Change %
--------+---------------+-------------+------------+----------
Average |    47,938,888 |  43,655,184 | -4,283,703 |   -9.00%


> I'm actually surprised it is even that much locking, since it seems to
> be single-cpu, so there should be no contention and the lock (which
> seems to be
>
>         rq = this_rq();
>         rq_lock(rq, &rf);
>
> in do_sched_yield()) should stay local to the cpu.
>
> And for you the locking is apparently even _more_ noticeable.

> But yes, a 10% regression on that context switch thing is huge. You
> shouldn't do ping-pong stuff, but people kind of do.

Yeah.

There also seem to be folks who have optimised the rest of their stack pretty
hard, and therefore care about context switch performance because it's pure
overhead and they're searching for every cycle.

So although this test is not a real workload, it's a proxy for something
people do complain to us about.

cheers
