Date: Mon, 28 Feb 2022 15:07:05 +0900
From: Masami Hiramatsu <mhiramat@kernel.org>
Subject: Re: [PATCH v2 15/39] x86/ibt,kprobes: Fix more +0 assumptions
Hi Peter,
So, instead of this change, can you try the patch below? It introduces arch_adjust_kprobe_addr() and uses it in _kprobe_addr() so that it can handle the case where the user passes the probe address in the _text+OFFSET format.
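(For reference, this is the kind of usage I mean. The snippet is a minimal sketch and not part of the patch; the symbol name, the 0x12345 offset and the module boilerplate are arbitrary example values.)

#include <linux/module.h>
#include <linux/kprobes.h>

/* Example only: a probe specified as symbol + offset. */
static struct kprobe kp = {
	.symbol_name	= "_text",
	.offset		= 0x12345,
};

static int __init example_init(void)
{
	/*
	 * _kprobe_addr() resolves "_text", adds .offset, and with this
	 * patch arch_adjust_kprobe_addr() moves the probe past an ENDBR
	 * if the resolved address happens to land on one.
	 */
	return register_kprobe(&kp);
}

static void __exit example_exit(void)
{
	unregister_kprobe(&kp);
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");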
From: Masami Hiramatsu <mhiramat@kernel.org>
Date: Mon, 28 Feb 2022 15:01:48 +0900
Subject: [PATCH] x86: kprobes: Skip ENDBR instruction probing
This adjusts the kprobe probe address to skip the ENDBR instruction and places the kprobe right after the ENDBR, so that the kprobe doesn't disturb IBT.
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
---
 arch/x86/kernel/kprobes/core.c |  7 +++++++
 include/linux/kprobes.h        |  2 ++
 kernel/kprobes.c               | 11 ++++++++++-
 3 files changed, 19 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
index 745f42cf82dc..a90cfe50d800 100644
--- a/arch/x86/kernel/kprobes/core.c
+++ b/arch/x86/kernel/kprobes/core.c
@@ -52,6 +52,7 @@
 #include <asm/insn.h>
 #include <asm/debugreg.h>
 #include <asm/set_memory.h>
+#include <asm/ibt.h>
 
 #include "common.h"
 
@@ -301,6 +302,12 @@ static int can_probe(unsigned long paddr)
 	return (addr == paddr);
 }
 
+/* If the x86 supports IBT (ENDBR), it must be skipped. */
+kprobe_opcode_t *arch_adjust_kprobe_addr(unsigned long addr)
+{
+	return (kprobe_opcode_t *)skip_endbr((void *)addr);
+}
+
 /*
  * Copy an instruction with recovering modified instruction by kprobes
  * and adjust the displacement if the instruction uses the %rip-relative
diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
index 19b884353b15..485d7832a613 100644
--- a/include/linux/kprobes.h
+++ b/include/linux/kprobes.h
@@ -384,6 +384,8 @@ static inline struct kprobe_ctlblk *get_kprobe_ctlblk(void)
 }
 
 kprobe_opcode_t *kprobe_lookup_name(const char *name, unsigned int offset);
+kprobe_opcode_t *arch_adjust_kprobe_addr(unsigned long addr);
+
 int register_kprobe(struct kprobe *p);
 void unregister_kprobe(struct kprobe *p);
 int register_kprobes(struct kprobe **kps, int num);
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index 94cab8c9ce56..312f10e85c93 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -1488,6 +1488,15 @@ bool within_kprobe_blacklist(unsigned long addr)
 	return false;
 }
 
+/*
+ * If the arch supports a feature like IBT which will put a trap at
+ * the entry of the symbol, it must be adjusted in this function.
+ */
+kprobe_opcode_t *__weak arch_adjust_kprobe_addr(unsigned long addr)
+{
+	return (kprobe_opcode_t *)addr;
+}
+
 /*
  * If 'symbol_name' is specified, look it up and add the 'offset'
  * to it. This way, we can specify a relative address to a symbol.
@@ -1506,7 +1515,7 @@ static kprobe_opcode_t *_kprobe_addr(kprobe_opcode_t *addr,
 		return ERR_PTR(-ENOENT);
 	}
 
-	addr = (kprobe_opcode_t *)(((char *)addr) + offset);
+	addr = arch_adjust_kprobe_addr((unsigned long)addr + offset);
 	if (addr)
 		return addr;
-- 
2.25.1
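(Side note, purely for illustration: skip_endbr() above is the helper expected from the <asm/ibt.h> changes earlier in this series. The sketch below only shows the idea, namely checking whether the first 4 bytes at the address are ENDBR64 (f3 0f 1e fa) and stepping over them; it is not the actual implementation, which would also have to handle IBT being disabled, ENDBR32, and so on.)

#include <linux/types.h>

/* Sketch only; the real skip_endbr() in <asm/ibt.h> may differ. */
#define ENDBR64_INSN	0xfa1e0ff3U	/* bytes f3 0f 1e fa, little endian */

static inline void *skip_endbr_sketch(void *addr)
{
	if (*(u32 *)addr == ENDBR64_INSN)
		return addr + 4;	/* put the probe right after ENDBR */
	return addr;
}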
On Thu, 24 Feb 2022 15:51:53 +0100 Peter Zijlstra <peterz@infradead.org> wrote:
> With IBT on, sym+0 is no longer the __fentry__ site.
> 
> NOTE: the architecture has a special case and *does* allow placing an
> INT3 breakpoint over ENDBR in which case #BP has precedence over #CP
> and as such we don't need to disallow probing these instructions.
> 
> NOTE: irrespective of the above; there is a complication in that
> direct branches to functions are rewritten to not execute ENDBR, so
> any breakpoint thereon might miss lots of actual function executions.
> 
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> ---
>  arch/x86/kernel/kprobes/core.c |   11 +++++++++++
>  kernel/kprobes.c               |   15 ++++++++++++---
>  2 files changed, 23 insertions(+), 3 deletions(-)
> 
> --- a/arch/x86/kernel/kprobes/core.c
> +++ b/arch/x86/kernel/kprobes/core.c
> @@ -1156,3 +1162,8 @@ int arch_trampoline_kprobe(struct kprobe
>  {
>  	return 0;
>  }
> +
> +bool arch_kprobe_on_func_entry(unsigned long offset)
> +{
> +	return offset <= 4*HAS_KERNEL_IBT;
> +}
> --- a/kernel/kprobes.c
> +++ b/kernel/kprobes.c
> @@ -67,10 +67,19 @@ static bool kprobes_all_disarmed;
>  static DEFINE_MUTEX(kprobe_mutex);
>  static DEFINE_PER_CPU(struct kprobe *, kprobe_instance);
> 
> -kprobe_opcode_t * __weak kprobe_lookup_name(const char *name,
> -					    unsigned int __unused)
> +kprobe_opcode_t * __weak kprobe_lookup_name(const char *name, unsigned int offset)
>  {
> -	return ((kprobe_opcode_t *)(kallsyms_lookup_name(name)));
> +	kprobe_opcode_t *addr = NULL;
> +
> +	addr = ((kprobe_opcode_t *)(kallsyms_lookup_name(name)));
> +#ifdef CONFIG_KPROBES_ON_FTRACE
> +	if (addr && !offset) {
> +		unsigned long faddr = ftrace_location((unsigned long)addr);
> +		if (faddr)
> +			addr = (kprobe_opcode_t *)faddr;
> +	}
> +#endif
> +	return addr;
> }
> 
>  /*
> 
> 
-- 
Masami Hiramatsu <mhiramat@kernel.org>