Subject: Re: [PATCH 0/7] preempt_count rework -v2
On Tue, Sep 10, 2013 at 03:56:36PM +0200, Ingo Molnar wrote:
> * Ingo Molnar <mingo@kernel.org> wrote:

> > > * ffffffff8106f42a: 65 ff 0c 25 e0 b7 00 decl %gs:0xb7e0
> > > ffffffff8106f431: 00
> > > * ffffffff8106f432: 0f 94 c0 sete %al
> > > * ffffffff8106f435: 84 c0 test %al,%al
> > > * ffffffff8106f437: 75 02 jne ffffffff8106f43b <kick_process+0x4b>
>
> Correction, so this comes from the new x86-specific optimization:
>
> +static __always_inline bool __preempt_count_dec_and_test(void)
> +{
> +	unsigned char c;
> +
> +	asm ("decl " __percpu_arg(0) "; sete %1"
> +		: "+m" (__preempt_count), "=qm" (c));
> +
> +	return c != 0;
> +}
>
> And that's where the sete and test originates from.

Correct, used in:

#define preempt_enable() \
do { \
	barrier(); \
	if (unlikely(preempt_count_dec_and_test())) \
		__preempt_schedule(); \
} while (0)

> Couldn't it be improved by merging the preempt_schedule() call into a new
> primitive, keeping the call in the regular flow, or using section tricks
> to move it out of line? The scheduling case is a slowpath in most cases.

Not if we want to keep using the GCC unlikely thing afaik. That said,
all this inline asm stuff isn't my strong point, so maybe someone
else has a good idea.
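
One direction somebody could try -- this is only a sketch, and it assumes
we're willing to rely on asm goto (gcc >= 4.5) here; it is not part of the
series as posted -- is to branch inside the asm itself, so the compiler
never has to materialize the flag in a register at all:

static __always_inline bool __preempt_count_dec_and_test(void)
{
	/*
	 * Sketch only: decl sets ZF when the count hits zero, and the
	 * "je" jumps straight to the label, so no sete/test pair is
	 * needed to communicate the result back to C.
	 */
	asm goto ("decl " __percpu_arg(0) "; je %l[resched]"
		  : /* asm goto cannot have outputs */
		  : "m" (__preempt_count)
		  : "memory"
		  : resched);
	return false;
resched:
	return true;
}

After inlining, the if () in preempt_enable() should fold into that je, so
the sete/test/jne triplet disappears and the __preempt_schedule() call ends
up behind a single conditional jump.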

But I really think fixing GCC would be good, as we have the same pattern
with all *_and_test() functions.
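
For reference, the other x86 *_and_test() implementations use the same
sete-into-a-bool idiom -- e.g. atomic_dec_and_test(), quoted roughly from
memory here, so the exact constraints may differ from the current tree:

static inline int atomic_dec_and_test(atomic_t *v)
{
	unsigned char c;

	/* same pattern: set a byte from ZF, then let C branch on it */
	asm volatile(LOCK_PREFIX "decl %0; sete %1"
		     : "+m" (v->counter), "=qm" (c)
		     : : "memory");
	return c != 0;
}

so every call site that branches on the result gets the same redundant
sete/test pair.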


