Subject: Re: [PATCH 3/3] context_tracking,x86: remove extraneous irq disable & enable from context tracking on syscall entry

* riel@redhat.com <riel@redhat.com> wrote:

> From: Rik van Riel <riel@redhat.com>
>
> On syscall entry with nohz_full on, we enable interrupts, call user_exit,
> disable interrupts, do something, re-enable interrupts, and go on our
> merry way.
>
> Profiling shows that a large amount of the nohz_full overhead comes
> from the extraneous disabling and re-enabling of interrupts. Andy
> suggested simply not enabling interrupts until after the context
> tracking code has done its thing, which allows us to skip a whole
> interrupt disable & re-enable cycle.
>
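As a conceptual sketch only (this just writes out the ordering described
above using the kernel's local_irq_enable()/local_irq_disable() and
user_exit(); it is not the actual entry code or the patch itself):

	/* Old syscall entry path on a nohz_full CPU: */
	local_irq_enable();	/* enable interrupts */
	user_exit();		/* context tracking */
	local_irq_disable();	/* disable interrupts again */
	/* ... do something ... */
	local_irq_enable();	/* re-enable interrupts */

	/* With the change suggested above: interrupts stay off until
	 * context tracking is done, skipping one disable/re-enable cycle: */
	user_exit();		/* runs with interrupts still off */
	/* ... do something ... */
	local_irq_enable();	/* enable interrupts once */
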
> This patch builds on top of these patches by Paolo:
> https://lkml.org/lkml/2015/4/28/188
> https://lkml.org/lkml/2015/4/29/139
>
> Together with this patch I posted earlier this week, the syscall path
> on a nohz_full cpu seems to be about 10% faster.
> https://lkml.org/lkml/2015/4/24/394
>
> My test is a simple microbenchmark that calls getpriority() in a loop
> 10 million times:
>
>                  run time    system time
> vanilla             5.49s          2.08s
> __acct patch        5.21s          1.92s
> both patches        4.88s          1.71s
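
For reference, a minimal userspace sketch of the kind of microbenchmark
described above (the getpriority() call and the 10-million iteration count
come from the description; the rest of the harness is an assumption):

	#include <sys/resource.h>

	int main(void)
	{
		long i;

		/* 10 million getpriority() syscalls in a tight loop */
		for (i = 0; i < 10 * 1000 * 1000; i++)
			getpriority(PRIO_PROCESS, 0);

		return 0;
	}

Running it under time(1), pinned to a nohz_full CPU (e.g.
"taskset -c <cpu> time ./bench"), gives run time and system time figures
like the ones in the table.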

Just curious, what are the numbers if you don't have context tracking
enabled, i.e. without nohz_full?

I.e. what's the baseline we are talking about?

Thanks,

Ingo

