Subject: Re: [PATCH v7 3/3] arm64: implement KPROBES_ON_FTRACE
On Thu, 26 Dec 2019 03:18:07 +0000 Jisheng Zhang wrote:


>
>
> Hi
>
> On Thu, 26 Dec 2019 11:57:07 +0900 Masami Hiramatsu wrote:
>
> >
> > Hi Jisheng,
> >
> > On Wed, 25 Dec 2019 09:44:21 +0000
> > Jisheng Zhang <Jisheng.Zhang@synaptics.com> wrote:
> >
> > > KPROBES_ON_FTRACE avoids much of the overhead of regular kprobes as it
> > > eliminates the need for a trap, as well as the need to emulate or
> > > single-step instructions.
> > >
> > > Tested on berlin arm64 platform.
> > >
> > > ~ # mount -t debugfs debugfs /sys/kernel/debug/
> > > ~ # cd /sys/kernel/debug/
> > > /sys/kernel/debug # echo 'p _do_fork' > tracing/kprobe_events
> > >
> > > before the patch:
> > >
> > > /sys/kernel/debug # cat kprobes/list
> > > ffffff801009fe28 k _do_fork+0x0 [DISABLED]
> > >
> > > after the patch:
> > >
> > > /sys/kernel/debug # cat kprobes/list
> > > ffffff801009ff54 k _do_fork+0x0 [DISABLED][FTRACE]
> >
> > What happens if a user puts a probe on _do_fork+4?
> > Does that correctly return -EILSEQ?
>
> _do_fork+4 can be probed successfully.
>
> >
> > >
> > > Signed-off-by: Jisheng Zhang <Jisheng.Zhang@synaptics.com>
> > > ---
> > > .../debug/kprobes-on-ftrace/arch-support.txt | 2 +-
> > > arch/arm64/Kconfig | 1 +
> > > arch/arm64/include/asm/ftrace.h | 1 +
> > > arch/arm64/kernel/probes/Makefile | 1 +
> > > arch/arm64/kernel/probes/ftrace.c | 78 +++++++++++++++++++
> > > 5 files changed, 82 insertions(+), 1 deletion(-)
> > > create mode 100644 arch/arm64/kernel/probes/ftrace.c
> > >
> > > diff --git a/Documentation/features/debug/kprobes-on-ftrace/arch-support.txt b/Documentation/features/debug/kprobes-on-ftrace/arch-support.txt
> > > index 4fae0464ddff..f9dd9dd91e0c 100644
> > > --- a/Documentation/features/debug/kprobes-on-ftrace/arch-support.txt
> > > +++ b/Documentation/features/debug/kprobes-on-ftrace/arch-support.txt
> > > @@ -9,7 +9,7 @@
> > > | alpha: | TODO |
> > > | arc: | TODO |
> > > | arm: | TODO |
> > > - | arm64: | TODO |
> > > + | arm64: | ok |
> > > | c6x: | TODO |
> > > | csky: | TODO |
> > > | h8300: | TODO |
> > > diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> > > index b1b4476ddb83..92b9882889ac 100644
> > > --- a/arch/arm64/Kconfig
> > > +++ b/arch/arm64/Kconfig
> > > @@ -166,6 +166,7 @@ config ARM64
> > > select HAVE_STACKPROTECTOR
> > > select HAVE_SYSCALL_TRACEPOINTS
> > > select HAVE_KPROBES
> > > + select HAVE_KPROBES_ON_FTRACE
> > > select HAVE_KRETPROBES
> > > select HAVE_GENERIC_VDSO
> > > select IOMMU_DMA if IOMMU_SUPPORT
> > > diff --git a/arch/arm64/include/asm/ftrace.h b/arch/arm64/include/asm/ftrace.h
> > > index 91fa4baa1a93..875aeb839654 100644
> > > --- a/arch/arm64/include/asm/ftrace.h
> > > +++ b/arch/arm64/include/asm/ftrace.h
> > > @@ -20,6 +20,7 @@
> > >
> > > /* The BL at the callsite's adjusted rec->ip */
> > > #define MCOUNT_INSN_SIZE AARCH64_INSN_SIZE
> > > +#define FTRACE_IP_EXTENSION MCOUNT_INSN_SIZE
> > >
> > > #define FTRACE_PLT_IDX 0
> > > #define FTRACE_REGS_PLT_IDX 1
> > > diff --git a/arch/arm64/kernel/probes/Makefile b/arch/arm64/kernel/probes/Makefile
> > > index 8e4be92e25b1..4020cfc66564 100644
> > > --- a/arch/arm64/kernel/probes/Makefile
> > > +++ b/arch/arm64/kernel/probes/Makefile
> > > @@ -4,3 +4,4 @@ obj-$(CONFIG_KPROBES) += kprobes.o decode-insn.o \
> > > simulate-insn.o
> > > obj-$(CONFIG_UPROBES) += uprobes.o decode-insn.o \
> > > simulate-insn.o
> > > +obj-$(CONFIG_KPROBES_ON_FTRACE) += ftrace.o
> > > diff --git a/arch/arm64/kernel/probes/ftrace.c b/arch/arm64/kernel/probes/ftrace.c
> > > new file mode 100644
> > > index 000000000000..0643aa2dacdb
> > > --- /dev/null
> > > +++ b/arch/arm64/kernel/probes/ftrace.c
> > > @@ -0,0 +1,78 @@
> > > +// SPDX-License-Identifier: GPL-2.0-or-later
> > > +/*
> > > + * Dynamic Ftrace based Kprobes Optimization
> > > + *
> > > + * Copyright (C) Hitachi Ltd., 2012
> > > + * Copyright (C) 2019 Jisheng Zhang <jszhang@kernel.org>
> > > + * Synaptics Incorporated
> > > + */
> > > +
> > > +#include <linux/kprobes.h>
> > > +
> > > +/*
> > > + * In arm64 FTRACE_WITH_REGS implementation, we patch two nop instructions:
> > > + * the lr saver and bl ftrace-entry. Both these instructions are claimed
> > > + * by ftrace and we should allow probing on either instruction.
> > No, the 2nd bl ftrace-entry must not be probed.
> > The lr saver and the bl ftrace-entry form a tightly coupled pair. You cannot
> > decouple them.
>
> This is the key point; different views of this lead to different implementations.
> I'm just wondering why the two instructions are considered coupled. I think
> we face a similar situation here as powerpc did: https://lkml.org/lkml/2019/6/18/646
> The "mflr r0" there corresponds to the lr saver here, and the branch to _mcount
> corresponds to the bl ftrace-entry. Could you please comment further?
>
> Thanks in advance
>

hmm, I think I now see part of your point. In the v7 implementation:

if the probe is on func+4, that is the bl ftrace-entry, similar to the mcount
call on other architectures, and we allow the probe as normal.

if the probe is on func+0, the first parameter ip of kprobe_ftrace_handler()
points to func+4 (this is adjusted by ftrace) and regs->ip points to func+8,
so in kprobe_ftrace_handler() we set regs->ip to func+0 to call the kprobe
pre handler, then set regs->ip to func+8 to call the kprobe post handler.
As can be seen, the first two instructions are treated as a virtual
mcount call. From this point of view, the lr saver and the bl <ftrace-entry>
are coupled.
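
To make this concrete, here is a rough sketch of the handler flow I have in
mind (illustration only, not the exact v7 code: the probe lookup for func+0
vs func+4 and the recursion/bookkeeping details are simplified):

#include <linux/kprobes.h>
#include <asm/ptrace.h>

/*
 * With FTRACE_WITH_REGS the patched function entry looks roughly like:
 *
 *   func+0: mov x9, x30         // lr saver (1st patched nop)
 *   func+4: bl  <ftrace entry>  // 2nd patched nop
 *   func+8: <first original instruction>
 *
 * ftrace calls us with ip == func+4 and regs->pc == func+8.
 */
void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
			   struct ftrace_ops *ops, struct pt_regs *regs)
{
	struct kprobe *p;
	struct kprobe_ctlblk *kcb;

	/* A probe may sit on func+4 (== ip) or on func+0. */
	p = get_kprobe((kprobe_opcode_t *)ip);
	if (!p)
		p = get_kprobe((kprobe_opcode_t *)(ip - AARCH64_INSN_SIZE));
	if (unlikely(!p) || kprobe_disabled(p))
		return;

	kcb = get_kprobe_ctlblk();
	if (kprobe_running()) {
		kprobes_inc_nmissed_count(p);
	} else {
		unsigned long orig_pc = instruction_pointer(regs);

		/* Pretend we stopped at the probed address itself. */
		instruction_pointer_set(regs, (unsigned long)p->addr);

		__this_cpu_write(current_kprobe, p);
		kcb->kprobe_status = KPROBE_HIT_ACTIVE;
		if (!p->pre_handler || !p->pre_handler(p, regs)) {
			/*
			 * Resume after the virtual mcount call (func+8)
			 * and run the post handler there.
			 */
			instruction_pointer_set(regs, orig_pc);
			if (unlikely(p->post_handler)) {
				kcb->kprobe_status = KPROBE_HIT_SSDONE;
				p->post_handler(p, regs, 0);
			}
		}
		__this_cpu_write(current_kprobe, NULL);
	}
}
NOKPROBE_SYMBOL(kprobe_ftrace_handler);

Either way the post handler runs with regs->pc at func+8, which is why I view
the two patched instructions as one virtual mcount call here.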

If we split patch 3 into two:
one to support kprobes on func+4,
the second to support kprobes on func+0,
it would be much clearer.

Then the key question here is whether we should allow kprobes on both func+0 and func+4.

Thanks
