Subject: Re: [RFC][PATCH v2 00/21] x86/pti: Defer CR3 switch to C code

On 11/18/20 10:30 AM, David Laight wrote:
> From: Alexandre Chartre
>> Sent: 18 November 2020 07:42
>>
>>
>> On 11/17/20 10:26 PM, Borislav Petkov wrote:
>>> On Tue, Nov 17, 2020 at 07:12:07PM +0100, Alexandre Chartre wrote:
>>>> Some benchmarks are available, in particular from phoronix:
>>>
>>> What I was expecting was benchmarks *you* have run which show that
>>> perf penalty, not something one can find quickly on the internet and
>>> something one cannot always reproduce her-/himself.
>>>
>>> You do know that presenting convincing numbers with a patchset greatly
>>> improves its chances of getting it upstreamed, right?
>>>
>>
>> Well, it looks like I wrongly assumed that KPTI's performance overhead
>> was well known since its introduction (because it adds extra page-table
>> switches), but you are right, I should be presenting my own numbers.
>
> IIRC the penalty comes from the page table switch.
> Doing it at a different time is unlikely to make much difference.
>

Correct, this RFC does not change the overhead. However, it is a step toward
being able to execute selected syscalls or interrupt handlers without
switching to the kernel page-table. The next step would be to identify and add
the necessary mappings to the user page-table so that specified syscalls can
be executed without switching page-tables.
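To make that concrete, here is a minimal sketch of what a deferred,
conditional switch could look like once the CR3 write is done from C. The
syscall_mapped_in_user_pgtable() policy check and the kernel_cr3 value are
hypothetical names for illustration only; they are not part of this series:

	/*
	 * Hypothetical sketch, not the actual patch: with the CR3
	 * switch done from C, the entry code can decide per-syscall
	 * whether the kernel page-table is really needed.
	 */
	static void pti_switch_cr3_if_needed(int sysnr)
	{
		/* PTI disabled (e.g. booted with nopti): nothing to do. */
		if (!boot_cpu_has(X86_FEATURE_PTI))
			return;

		/*
		 * Made-up policy check: true if everything this syscall
		 * touches is already mapped in the user page-table.
		 */
		if (syscall_mapped_in_user_pgtable(sysnr))
			return;	/* skip the expensive page-table switch */

		/* Otherwise switch to the kernel page-table as usual. */
		native_write_cr3(kernel_cr3);	/* kernel_cr3: hypothetical */
	}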


> For some workloads the penalty is massive - getting on for 50%.
> We are still using old kernels on AWS.
>

Here are some micro-benchmarks of the getppid and getpid syscalls which
highlight the PTI overhead. They use the kernel's tools/perf command, and the
getpid benchmark from libMicro (https://github.com/redhat-performance/libMicro):

system running 5.10-rc4 booted with nopti:
------------------------------------------

# perf bench syscall basic
# Running 'syscall/basic' benchmark:
# Executed 10000000 getppid() calls
Total time: 0.792 [sec]

0.079223 usecs/op
12622549 ops/sec

# getpid -B 100000
           prc  thr  usecs/call  samples  errors  cnt/samp
getpid       1    1     0.08029      102       0    100000


We can see that the getpid and getppid syscalls have about the same execution
time, around 0.08 usecs. These syscalls do almost no work and just return a
value, so the time is mostly spent entering/exiting the kernel.
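
If you want to reproduce this without perf or libMicro, a trivial standalone
loop gives similar numbers (my own sketch, compile with "gcc -O2"):

	#include <stdio.h>
	#include <time.h>
	#include <unistd.h>

	int main(void)
	{
		const long iters = 10000000;	/* same count as perf bench */
		struct timespec t0, t1;
		long i;

		clock_gettime(CLOCK_MONOTONIC, &t0);
		for (i = 0; i < iters; i++)
			getppid();	/* one round-trip into the kernel */
		clock_gettime(CLOCK_MONOTONIC, &t1);

		double sec = (t1.tv_sec - t0.tv_sec) +
			     (t1.tv_nsec - t0.tv_nsec) / 1e9;
		printf("%.6f usecs/op\n", sec * 1e6 / iters);
		return 0;
	}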


same system booted with pti:
----------------------------

# perf bench syscall basic
# Running 'syscall/basic' benchmark:
# Executed 10000000 getppid() calls
Total time: 2.025 [sec]

0.202527 usecs/op
4937605 ops/sec

# getpid -B 100000
           prc  thr  usecs/call  samples  errors  cnt/samp
getpid       1    1     0.20241      102       0    100000


With PTI, the execution time jumps to 0.20 usecs (+0.12 usecs = +150%).

That's an extreme case: these syscalls are so small that the overhead of
switching page-tables is large compared to the execution time of the syscall
itself.

So with an overhead of +0.12 usecs per syscall, the PTI impact is significant
for workloads which issue a lot of short syscalls. But with longer syscalls,
for example with an average execution time of 2.0 usecs per syscall, the
overhead drops to about 6%.
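
Spelled out with the numbers above:

    0.12 / 0.08 usecs = +150%  (tiny syscall, page-table switch dominates)
    0.12 / 2.00 usecs =   +6%  (longer syscall, switch mostly amortized)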

alex.
