From: Masami Hiramatsu (Linaro) <mhiramat@kernel.org>
Subject: [RFC PATCH v2 0/3] kprobes: Support nested kprobes
Date: 2020-05-08
Hi,

Here is the 2nd version of the series to add nested-kprobes support
to x86, arm64 and arm. This makes kprobes accept one level of nesting
instead of incrementing the missed count.

In this version, I fixed a mistake in kprobes on ftrace on x86, and
made the nested probes be dumped when an unrecoverable kprobe is
detected.

Nested Kprobes
--------------

A kprobe that hits in another kprobe's pre/post handler context can
now be nested. If yet another kprobe hits in the nested pre/post
handler context, or in the single-stepping context, it is still
counted as missed.

The nest level could easily be extended, but too many nest levels
can overflow the kernel stack, because each nest consumes stack
space for saving registers, handling the kprobe, and running the
pre/post handlers. Thus, at this moment only one level of nesting
is allowed.
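
To illustrate the mechanism (a minimal sketch, not the code in these
patches), the single per-CPU "current kprobe" slot can be generalized
into a small per-CPU stack that is pushed when a probe starts being
handled and popped when its handlers finish; a probe that would
exceed the depth limit is counted as missed, as before. All names
here (nested_kprobe_ctx, KPROBE_NEST_MAX, and the push/pop helpers)
are hypothetical.

#include <linux/kprobes.h>
#include <linux/percpu.h>

#define KPROBE_NEST_MAX	2	/* the outermost probe plus one nested probe */

/* Hypothetical per-CPU context: a tiny stack of in-flight kprobes. */
struct nested_kprobe_ctx {
	struct kprobe *kp[KPROBE_NEST_MAX];
	int nest_idx;		/* -1 while no kprobe is being handled */
};

static DEFINE_PER_CPU(struct nested_kprobe_ctx, nk_ctx) = { .nest_idx = -1 };

/* Called (with preemption disabled) when a kprobe breakpoint hits. */
static bool nested_kprobe_push(struct kprobe *p)
{
	struct nested_kprobe_ctx *ctx = this_cpu_ptr(&nk_ctx);

	if (ctx->nest_idx >= KPROBE_NEST_MAX - 1) {
		p->nmissed++;	/* too deep: count it as missed, as before */
		return false;
	}
	ctx->kp[++ctx->nest_idx] = p;
	return true;
}

/* Called when the handlers and single-stepping for a probe are done. */
static void nested_kprobe_pop(void)
{
	struct nested_kprobe_ctx *ctx = this_cpu_ptr(&nk_ctx);

	if (ctx->nest_idx >= 0)
		ctx->nest_idx--;
}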

This feature allows a BPF or ftrace user to put a kprobe on BPF
JITed code, or on ftrace internal code running in kprobe context,
for debugging.

We can test this feature on a kernel built with
CONFIG_KPROBE_EVENTS_ON_NOTRACE=y as below.

# cd /sys/kernel/debug/tracing
# echo p ring_buffer_lock_reserve > kprobe_events
# echo p vfs_read >> kprobe_events
# echo stacktrace > events/kprobes/p_ring_buffer_lock_reserve_0/trigger
# echo 1 > events/kprobes/enable
# cat trace
...
cat-151 [000] ...1 48.669190: p_vfs_read_0: (vfs_read+0x0/0x160)
cat-151 [000] ...2 48.669276: p_ring_buffer_lock_reserve_0: (ring_buffer_lock_reserve+0x0/0x400)
cat-151 [000] ...2 48.669288: <stack trace>
=> kprobe_dispatcher
=> opt_pre_handler
=> optimized_callback
=> 0xffffffffa0002331
=> ring_buffer_lock_reserve
=> kprobe_trace_func
=> kprobe_dispatcher
=> opt_pre_handler
=> optimized_callback
=> 0xffffffffa00023b0
=> vfs_read
=> load_elf_phdrs
=> load_elf_binary
=> search_binary_handler.part.0
=> __do_execve_file.isra.0
=> __x64_sys_execve
=> do_syscall_64
=> entry_SYSCALL_64_after_hwframe

In the trace above, the optimized_callback/opt_pre_handler frames
show that both probes were hit via optimized probes. To check the
unoptimized (int3 breakpoint) code path, disable optprobes and dump
the log again.

# echo 0 > /proc/sys/debug/kprobes-optimization
# echo > trace
# cat trace
cat-153 [000] d..1 140.581433: p_vfs_read_0: (vfs_read+0x0/0x160)
cat-153 [000] d..2 140.581780: p_ring_buffer_lock_reserve_0: (ring_buffer_lock_reserve+0x0/0x400)
cat-153 [000] d..2 140.581811: <stack trace>
=> kprobe_dispatcher
=> aggr_pre_handler
=> kprobe_int3_handler
=> do_int3
=> int3
=> ring_buffer_lock_reserve
=> kprobe_trace_func
=> kprobe_dispatcher
=> aggr_pre_handler
=> kprobe_int3_handler
=> do_int3
=> int3
=> vfs_read
=> load_elf_phdrs
=> load_elf_binary
=> search_binary_handler.part.0
=> __do_execve_file.isra.0
=> __x64_sys_execve
=> do_syscall_64
=> entry_SYSCALL_64_after_hwframe

So we can see that kprobes can be nested: the probe on
ring_buffer_lock_reserve fires from kprobe_trace_func, which runs
inside the handler context of the vfs_read probe.

Thank you,

---

Masami Hiramatsu (3):
x86/kprobes: Support nested kprobes
arm64: kprobes: Support nested kprobes
arm: kprobes: Support nested kprobes


arch/arm/include/asm/kprobes.h | 5 +-
arch/arm/probes/kprobes/core.c | 83 +++++++++++++++---------------
arch/arm/probes/kprobes/core.h | 30 +++++++++++
arch/arm/probes/kprobes/opt-arm.c | 6 +-
arch/arm64/include/asm/kprobes.h | 5 +-
arch/arm64/kernel/probes/kprobes.c | 79 +++++++++++++++++-----------
arch/x86/include/asm/kprobes.h | 5 +-
arch/x86/kernel/kprobes/common.h | 39 +++++++++++++-
arch/x86/kernel/kprobes/core.c | 100 ++++++++++++++++--------------------
arch/x86/kernel/kprobes/ftrace.c | 6 +-
arch/x86/kernel/kprobes/opt.c | 13 +++--
kernel/kprobes.c | 1
12 files changed, 226 insertions(+), 146 deletions(-)

--
Masami Hiramatsu (Linaro) <mhiramat@kernel.org>
