Date: Mon, 24 Oct 2022 22:00:08 +0900
From: Masami Hiramatsu (Google) <>
Subject: Re: [PATCH bpf-next v2 0/4] Add ftrace direct call for arm64
On Fri, 21 Oct 2022 18:49:38 +0200 Florent Revest <revest@chromium.org> wrote:
> On Fri, Oct 21, 2022 at 1:32 PM Masami Hiramatsu <mhiramat@kernel.org> wrote:
> > On Mon, 17 Oct 2022 19:55:06 +0200
> > Florent Revest <revest@chromium.org> wrote:
> > > Mark finished an implementation of his per-callsite-ops and min-args
> > > branches (meaning that we can now skip ftrace's expensive saving of
> > > all registers and its iteration over all ops if only one is attached):
> > > - https://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git/log/?h=arm64-ftrace-call-ops-20221017
> > >
> > > And Masami wrote patches similar to what I had originally done to
> > > fprobe in my branch:
> > > - https://github.com/mhiramat/linux/commits/kprobes/fprobe-update
> > >
> > > So I could rebase my previous "bpf on fprobe" branch on top of these
> > > (as before, it's just good enough for benchmarking and to give a
> > > general sense of the idea, not for a thorough code review):
> > > - https://github.com/FlorentRevest/linux/commits/fprobe-min-args-3
> > >
> > > And I could run the benchmarks against my rpi4. I have different
> > > baseline numbers than Xu, so I ran everything again and tried to keep
> > > the format the same. "indirect call" refers to the branch I just
> > > linked and "direct call" refers to the series this is a reply to
> > > (Xu's work).
> >
> > Thanks for sharing the measurement results. Yes, the fprobe/rethook
> > implementation is just a port of the kretprobes implementation, so
> > it may not be well optimized.
> >
> > BTW, I remember Wuqiang's patch for kretprobes:
> >
> > https://lore.kernel.org/all/20210830173324.32507-1-wuqiang.matt@bytedance.com/T/#u
>
> Oh, that's a great idea, thanks for pointing it out Masami!
>
> > This is a scalability fix, but it may also improve performance a bit.
> > It is not hard to port to a recent kernel. Can you try it too?
>
> I rebased it on my branch
> https://github.com/FlorentRevest/linux/commits/fprobe-min-args-3
>
> and took measurements again. Unfortunately, it looks like this does not help :/
>
> New benchmark results: https://paste.debian.net/1257856/
> New perf report: https://paste.debian.net/1257859/
Hmm, OK. So that patch only addresses scalability.
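[To picture why a scalability fix alone doesn't close the gap, here is a toy model in userspace C11 with invented names; it is not the kernel's actual rethook or kretprobes code. Every get/put on a single shared freelist is an atomic RMW on one shared cache line. Splitting the pool per-CPU removes the cross-CPU contention (the scalability fix), but each operation still pays for the atomics, so single-thread latency barely moves.]

/*
 * Toy shared object pool: all CPUs CAS on the same `pool_head`
 * cache line. Real lock-free stacks also need ABA protection,
 * omitted here for brevity.
 */
#include <stdatomic.h>

struct hook_node {
	struct hook_node *next;
	/* per-invocation hook state would live here */
};

static _Atomic(struct hook_node *) pool_head;

static struct hook_node *pool_get(void)
{
	struct hook_node *n = atomic_load(&pool_head);

	/* Contended CAS: every CPU retries on the same location. */
	while (n && !atomic_compare_exchange_weak(&pool_head, &n, n->next))
		;
	return n;	/* NULL: pool exhausted, the hook must be skipped */
}

static void pool_put(struct hook_node *n)
{
	n->next = atomic_load(&pool_head);
	while (!atomic_compare_exchange_weak(&pool_head, &n->next, n))
		;
}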
>
> The fprobe-based approach is still significantly slower than the
> direct call approach.
>
> > Anyway, eventually, I would like to remove the current kretprobe-
> > based implementation and unify the fexit hook with the function-graph
> > tracer. It should give better performance.
>
> That makes sense. :)
>
> How do you imagine the unified solution? Would both the fgraph and
> fprobe APIs keep existing, but under the hood one would be implemented
> on top of the other? (Or would one be gone?) Would we replace the
> rethook freelist with the function graph's per-task shadow stacks?
> (Or the other way around?)
Yes, that's right. As long as we use a global object pool, there is a performance bottleneck in picking an object from the pool and returning it afterwards. A per-CPU pool may give better performance, but it is more complicated to keep the pools balanced. A per-task shadow stack solves this, so I plan to expand the fgraph API and use it in fprobe instead of rethook. (I had planned to re-implement rethook, but I realized it has more issues than I thought.)
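[A minimal sketch of what the per-task shadow-stack model buys, again in userspace C with invented names; the real fgraph API looks different. Reserving a frame is a plain index bump into memory the task already owns: no shared head pointer, no CAS, no pool to balance, and frames free themselves in LIFO order as functions return.]

/* Toy per-task shadow stack (all names hypothetical). */
#define SHADOW_STACK_DEPTH 64

struct hook_frame {
	unsigned long ret_addr;		/* saved return address */
	unsigned long entry_cookie;	/* example per-call state */
};

struct shadow_stack {
	int top;			/* next free slot */
	struct hook_frame frames[SHADOW_STACK_DEPTH];
};

/* Function entry: "allocation" is an index bump on task-local memory. */
static struct hook_frame *shadow_push(struct shadow_stack *ss)
{
	if (ss->top >= SHADOW_STACK_DEPTH)
		return NULL;		/* too deep: skip this hook */
	return &ss->frames[ss->top++];
}

/* Function return: "free" is the matching decrement. */
static struct hook_frame *shadow_pop(struct shadow_stack *ss)
{
	return ss->top ? &ss->frames[--ss->top] : NULL;
}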
> > > Note that I can't really make sense of the perf report with indirect
> > > calls. It always reports that 12% of the time is spent in
> > > rethook_trampoline_handler, but I verified with both a WARN in that
> > > function and a breakpoint in a debugger that this function does *not*
> > > get called when running this "bench trig-fentry" benchmark. It also
> > > wouldn't make sense for fprobe_handler to call it, so I'm quite
> > > confused why perf would report this call and so much time spent
> > > there. Anyone know what I could be missing here?
>
> I made slight progress on this. If I put the vmlinux file in the cwd
> where I run perf report, the reports no longer contain references to
> rethook_trampoline_handler. Instead, they have a few
> 0xffff800008xxxxxx addresses under fprobe_handler (like in the
> pastebin I just linked).
>
> It's still pretty weird, because that range is the vmalloc area on
> arm64 and I don't understand why anything under fprobe_handler would
> execute there. However, I'm also quite sure that these 12% are
> actually spent getting buffers from the rethook memory pool, because
> if I replace the rethook_try_get and rethook_recycle calls with a
> dummy static BSS buffer (for the sake of benchmarking the "theoretical
> best case scenario"), these weird perf report traces are gone and the
> 12% are saved: https://paste.debian.net/1257862/
Yeah, I understand that. Rethook (and kretprobes) was not designed for such a heavy workload.
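[The measurement trick Florent describes above can be pictured like this; a compilable userspace toy with invented names, while the real experiment swaps the rethook_try_get()/rethook_recycle() calls inside the kernel. Replacing the pool with one static BSS object makes "allocation" free. This is only valid as a ceiling measurement: a single static buffer is not safe for concurrent or reentrant hooks.]

/*
 * Toy version of the "theoretical best case" benchmark hack.
 * Build with -DBENCH_BEST_CASE to replace the pool with a static
 * buffer, as in the experiment described above.
 */
#include <stddef.h>

struct hook_ctx {
	unsigned long ret_addr;
};

/* Stand-ins for the pool get/recycle pair being benchmarked. */
static struct hook_ctx pool_slot;
static int pool_used;

static struct hook_ctx *pool_get(void)
{
	if (pool_used)
		return NULL;
	pool_used = 1;
	return &pool_slot;
}

static void pool_recycle(struct hook_ctx *ctx)
{
	(void)ctx;
	pool_used = 0;
}

static void on_function_entry(unsigned long ret_addr)
{
#ifdef BENCH_BEST_CASE
	/*
	 * Best case: a static BSS buffer, allocation cost ~zero.
	 * Single-context only -- fine in a tight benchmark loop,
	 * unusable as a real implementation.
	 */
	static struct hook_ctx best_case_ctx;
	struct hook_ctx *ctx = &best_case_ctx;
#else
	struct hook_ctx *ctx = pool_get();
	if (!ctx)
		return;		/* pool exhausted: hook is dropped */
#endif
	ctx->ret_addr = ret_addr;
	/* ... hook body runs; the return trampoline would later call
	 * pool_recycle(ctx) in the normal build ... */
#ifndef BENCH_BEST_CASE
	pool_recycle(ctx);
#endif
}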
> This is why I would be interested in seeing rethook's memory pool
> reimplemented on top of something like
> https://lwn.net/Articles/788923/
>
> If we get closer to the performance of the theoretical best case
> scenario, where getting a blob of memory is ~free (and I think that
> could be the case with a per-task shadow stack like fgraph's), then a
> bpf-on-fprobe implementation would start to approach the performance
> of a direct-called trampoline on arm64:
> https://paste.debian.net/1257863/
OK, I think we are on the same page and same direction.
Thank you,
-- 
Masami Hiramatsu (Google) <mhiramat@kernel.org>