Date:	Fri, 25 Feb 2022 10:32:15 +0900
From:	Masami Hiramatsu <>
Subject: Re: [PATCH v2 15/39] x86/ibt,kprobes: Fix more +0 assumptions
Hi Peter,
On Thu, 24 Feb 2022 15:51:53 +0100 Peter Zijlstra <peterz@infradead.org> wrote:
> With IBT on, sym+0 is no longer the __fentry__ site.
>
> NOTE: the architecture has a special case and *does* allow placing an
> INT3 breakpoint over ENDBR in which case #BP has precedence over #CP
> and as such we don't need to disallow probing these instructions.
Does this mean we can still put a probe on sym+0?
If so, NAK this patch, since KPROBES_ON_FTRACE is not meant to accelerate function-entry probes; it just allows the user to put a probe on the 'call _mcount' (which can be modified by ftrace).
func:
	endbr	<- sym+0  : INT3 is used. (kp->addr = func+0)
	nop5	<- sym+4? : ftrace is used. (kp->addr = func+4?)
	...
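
For clarity, here is a minimal sketch (my illustration, not code from
the patch) of how the two probes in the layout above would be
registered; 'func' is a placeholder symbol, and the +4 offset assumes
a 4-byte ENDBR (HAS_KERNEL_IBT == 1):

	#include <linux/kprobes.h>
	#include <linux/module.h>

	static struct kprobe kp_endbr = {
		.symbol_name	= "func",	/* placeholder symbol */
		.offset		= 0,	/* ENDBR: INT3, since #BP beats #CP */
	};

	static struct kprobe kp_fentry = {
		.symbol_name	= "func",
		.offset		= 4,	/* nop5: KPROBES_ON_FTRACE path */
	};

	static int __init demo_init(void)
	{
		/* Register both probes; each one is armed by the
		 * mechanism noted in its comment above. */
		int ret = register_kprobe(&kp_endbr);

		if (!ret)
			ret = register_kprobe(&kp_fentry);
		return ret;
	}
	module_init(demo_init);
	MODULE_LICENSE("GPL");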
And anyway, in some cases (e.g. perf probe) the symbol will be a base symbol like '_text', and @offset will be the function address minus the _text address, so that we can put a probe on a local-scope function.
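
To illustrate that case, a sketch (my example; 'local_func_addr' is a
hypothetical address of a local-scope function resolved from
debuginfo, not an existing variable):

	/* perf-probe style registration: probe a local (static)
	 * function via the base symbol '_text' plus an offset.
	 * 'local_func_addr' is hypothetical. */
	extern char _text[];

	struct kprobe kp = {
		.symbol_name	= "_text",
		.offset		= local_func_addr - (unsigned long)_text,
	};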
If you think we should not probe on the endbr, we should treat the pair of endbr and nop5 (or 'call _mcount') instructions as a single virtual instruction. This means kp->addr should point to sym+0, but ftrace should be used to probe it (sketched below the diagram).
func:
	endbr	<- sym+0  : ftrace is used. (kp->addr = func+0)
	nop5	<- sym+4? : This can not be probed.
	...
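
Roughly, the arming decision could then look like this (an untested
sketch; is_endbr()/ftrace_location()/ENDBR_INSN_SIZE used as I
understand them, and arm_via_ftrace() is a hypothetical helper):

	/* Treat ENDBR + nop5 as one virtual instruction: a probe whose
	 * kp->addr is sym+0 is transparently backed by ftrace on the
	 * following nop5.  Untested sketch. */
	unsigned long faddr;

	faddr = ftrace_location((unsigned long)kp->addr + ENDBR_INSN_SIZE);
	if (is_endbr(*(u32 *)kp->addr) && faddr)
		arm_via_ftrace(kp, faddr);	/* hypothetical helper */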
Thank you,
> NOTE: irrespective of the above; there is a complication in that
> direct branches to functions are rewritten to not execute ENDBR, so
> any breakpoint thereon might miss lots of actual function executions.
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> ---
>  arch/x86/kernel/kprobes/core.c |   11 +++++++++++
>  kernel/kprobes.c               |   15 ++++++++++++---
>  2 files changed, 23 insertions(+), 3 deletions(-)
>
> --- a/arch/x86/kernel/kprobes/core.c
> +++ b/arch/x86/kernel/kprobes/core.c
> @@ -1156,3 +1162,8 @@ int arch_trampoline_kprobe(struct kprobe
>  {
>  	return 0;
>  }
> +
> +bool arch_kprobe_on_func_entry(unsigned long offset)
> +{
> +	return offset <= 4*HAS_KERNEL_IBT;
> +}
> --- a/kernel/kprobes.c
> +++ b/kernel/kprobes.c
> @@ -67,10 +67,19 @@ static bool kprobes_all_disarmed;
>  static DEFINE_MUTEX(kprobe_mutex);
>  static DEFINE_PER_CPU(struct kprobe *, kprobe_instance);
>
> -kprobe_opcode_t * __weak kprobe_lookup_name(const char *name,
> -					    unsigned int __unused)
> +kprobe_opcode_t * __weak kprobe_lookup_name(const char *name, unsigned int offset)
>  {
> -	return ((kprobe_opcode_t *)(kallsyms_lookup_name(name)));
> +	kprobe_opcode_t *addr = NULL;
> +
> +	addr = ((kprobe_opcode_t *)(kallsyms_lookup_name(name)));
> +#ifdef CONFIG_KPROBES_ON_FTRACE
> +	if (addr && !offset) {
> +		unsigned long faddr = ftrace_location((unsigned long)addr);
> +		if (faddr)
> +			addr = (kprobe_opcode_t *)faddr;
> +	}
> +#endif
> +	return addr;
>  }
>
>  /*
>
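
(BTW, if I read the kprobe_lookup_name() change right, its effect is
e.g. the following; 'vfs_read' is just an illustrative symbol:)

	/* With CONFIG_KPROBES_ON_FTRACE, a zero-offset lookup now
	 * resolves to the ftrace location (e.g. sym+4 under IBT)
	 * instead of sym+0. */
	kprobe_opcode_t *addr = kprobe_lookup_name("vfs_read", 0);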
-- 
Masami Hiramatsu <mhiramat@kernel.org>