Date: Thu, 10 Nov 2022 12:58:54 +0800
Subject: Re: [PATCH bpf-next v2 0/4] Add ftrace direct call for arm64
From: wuqiang <>
On 2022/10/22 00:49, Florent Revest wrote:
> On Fri, Oct 21, 2022 at 1:32 PM Masami Hiramatsu <mhiramat@kernel.org> wrote:
>> On Mon, 17 Oct 2022 19:55:06 +0200
>> Florent Revest <revest@chromium.org> wrote:
>>> Mark finished an implementation of his per-callsite-ops and min-args
>>> branches (meaning that we can now skip ftrace's expensive saving of
>>> all registers and iteration over all ops if only one is attached):
>>> - https://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git/log/?h=arm64-ftrace-call-ops-20221017
>>>
>>> And Masami wrote patches similar to what I had originally done to
>>> fprobe in my branch:
>>> - https://github.com/mhiramat/linux/commits/kprobes/fprobe-update
>>>
>>> So I could rebase my previous "bpf on fprobe" branch on top of these
>>> (as before, it's just good enough for benchmarking and to give a
>>> general sense of the idea, not for a thorough code review):
>>> - https://github.com/FlorentRevest/linux/commits/fprobe-min-args-3
>>>
>>> And I could run the benchmarks against my rpi4. I have different
>>> baseline numbers than Xu, so I ran everything again and tried to
>>> keep the format the same. "indirect call" refers to the branch I
>>> just linked and "direct call" refers to the series this is a reply
>>> to (Xu's work).
>>
>> Thanks for sharing the measurement results. Yes, the fprobe/rethook
>> implementation is just a port of the kretprobes implementation, thus
>> it may not be so optimized.
>>
>> BTW, I remember Wuqiang's patch for kretprobes:
>>
>> https://lore.kernel.org/all/20210830173324.32507-1-wuqiang.matt@bytedance.com/T/#u
>
> Oh, that's a great idea, thanks for pointing it out Masami!
>
>> This is for fixing scalability, but it may also improve the
>> performance a bit. It is not hard to port to a recent kernel.
>> Can you try it too?
>
> I rebased it on my branch
> https://github.com/FlorentRevest/linux/commits/fprobe-min-args-3
>
> And I got measurements again. Unfortunately it looks like this does
> not help :/
>
> New benchmark results: https://paste.debian.net/1257856/
> New perf report: https://paste.debian.net/1257859/
>
> The fprobe-based approach is still significantly slower than the
> direct call approach.
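To illustrate why the per-callsite-ops work matters, here is a minimal
sketch (hypothetical names, not the actual ftrace internals) contrasting
the generic multi-ops dispatch with the single-ops fast path:

struct ops_sketch {
	void (*func)(unsigned long ip, void *priv);
	void *priv;
	struct ops_sketch *next;
};

/* Generic path: after a full register save (expensive), walk the
 * whole list of registered ops for this callsite. */
static void dispatch_list(struct ops_sketch *list, unsigned long ip)
{
	struct ops_sketch *ops;

	for (ops = list; ops; ops = ops->next)
		ops->func(ip, ops->priv);
}

/* Per-callsite fast path: exactly one ops attached, so the trampoline
 * can save only the argument registers and call it directly. */
static void dispatch_single(struct ops_sketch *ops, unsigned long ip)
{
	ops->func(ip, ops->priv);
}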
FYI, a new version was released, based on a ring array, which brings a
6.96% throughput increase in the 1-thread case on ARM64.
https://lore.kernel.org/all/20221108071443.258794-1-wuqiang.matt@bytedance.com/
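For reference, the core idea of a ring-array pool looks roughly like
this (an illustrative single-threaded sketch with made-up names; the
real objpool in the patch above uses per-CPU rings and atomic
operations, and the pool starts out filled with preallocated objects):

#define RING_SIZE 256	/* power of two so indices can wrap with a mask */

struct ring_pool {
	void *slots[RING_SIZE];
	unsigned int head;	/* next available object */
	unsigned int tail;	/* next free slot for a returned object */
};

/* Pop an available object; empty when head catches up with tail. */
static void *ring_get(struct ring_pool *rp)
{
	if (rp->head == rp->tail)
		return NULL;
	return rp->slots[rp->head++ & (RING_SIZE - 1)];
}

/* Return an object; index arithmetic instead of linked-list pointer
 * chasing. Never overflows since we only put back what we got. */
static void ring_put(struct ring_pool *rp, void *obj)
{
	rp->slots[rp->tail++ & (RING_SIZE - 1)] = obj;
}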
Could you share more details of the test? I'll give it a try.
>> Anyway, eventually, I would like to remove the current kretprobe
>> based implementation and unify the fexit hook with the function-graph
>> tracer. It should give better performance.
>
> That makes sense. :) How do you imagine the unified solution?
> Would both the fgraph and fprobe APIs keep existing, but under the
> hood one would be implemented on top of the other? (or would one be
> gone?) Would we replace the rethook freelist with the function
> graph's per-task shadow stacks? (or the other way around?)
How about a private pool designated for the local CPU? If the fprobed
routine returns on the same CPU, object allocation and reclaim can take
a quick path, which should deliver the same performance as a shadow
stack. Otherwise, returning an object takes a slow path (as slow as the
current freelist or objpool).
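A minimal sketch of that idea (all names hypothetical, preemption
protection omitted; the slow path stands in for the current
freelist/objpool):

#define POOL_SIZE 32	/* arbitrary for the sketch */

static void slow_path_reclaim(void *obj);	/* hypothetical: freelist/objpool */

struct percpu_pool {
	void *slots[POOL_SIZE];
	unsigned int top;	/* only touched by the owning CPU */
};

/* Fast path: pop an object from the local CPU's private pool. */
static void *pool_get(struct percpu_pool *p)
{
	return p->top ? p->slots[--p->top] : NULL; /* NULL: take slow path */
}

/* Reclaim: quick push if we are still on the allocating CPU. */
static void pool_put(struct percpu_pool *p, void *obj, int alloc_cpu)
{
	if (alloc_cpu == smp_processor_id() && p->top < POOL_SIZE)
		p->slots[p->top++] = obj;	/* same-CPU quick path */
	else
		slow_path_reclaim(obj);		/* cross-CPU return */
}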
>>> Note that I can't really make sense of the perf report with indirect
>>> calls. It always reports it spent 12% of the time in
>>> rethook_trampoline_handler but I verified with both a WARN in that
>>> function and a breakpoint with a debugger: this function does *not*
>>> get called when running this "bench trig-fentry" benchmark. Also, it
>>> wouldn't make sense for fprobe_handler to call it, so I'm quite
>>> confused why perf would report this call and such a long time spent
>>> there. Does anyone know what I could be missing here?
>
> I made slight progress on this. If I put the vmlinux file in the cwd
> where I run perf report, the reports no longer contain references to
> rethook_trampoline_handler. Instead, they have a few
> 0xffff800008xxxxxx addresses under fprobe_handler (like in the
> pastebin I just linked).
>
> It's still pretty weird because that range is the vmalloc area on
> arm64 and I don't understand why anything under fprobe_handler would
> execute there. However, I'm also definitely sure that these 12% are
> actually spent getting buffers from the rethook memory pool, because
> if I replace the rethook_try_get and rethook_recycle calls with a
> dummy static bss buffer (for the sake of benchmarking the
> "theoretical best case scenario"), these weird perf report traces are
> gone and the 12% are saved: https://paste.debian.net/1257862/
>
> This is why I would be interested in seeing rethook's memory pool
> reimplemented on top of something like
> https://lwn.net/Articles/788923/ If we get closer to the performance
> of the theoretical best case scenario where getting a blob of memory
> is ~free (and I think it could be the case with a per-task shadow
> stack like fgraph's), then a bpf on fprobe implementation would start
> to approach the performance of a direct-called trampoline on arm64:
> https://paste.debian.net/1257863/
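For illustration, a sketch of why a per-task shadow stack makes getting
a blob of memory nearly free (loosely modelled on fgraph's per-task
ret_stack; all names are made up): the buffer is preallocated per task,
so reserving a return-hook frame is just an index increment on
task-local memory, with no shared pool, no atomics and no cross-CPU
traffic.

#define MAX_DEPTH 64	/* arbitrary for the sketch */

struct shadow_frame {
	unsigned long ret_addr;
	unsigned long frame_pointer;
};

struct shadow_stack {
	struct shadow_frame frames[MAX_DEPTH];
	int depth;		/* task-local, no locking needed */
};

/* "Allocation" on function entry is just an index increment. */
static struct shadow_frame *shadow_push(struct shadow_stack *ss)
{
	if (ss->depth >= MAX_DEPTH)
		return NULL;	/* overflow: skip the hook */
	return &ss->frames[ss->depth++];
}

/* Reclaim on function return is a single decrement. */
static void shadow_pop(struct shadow_stack *ss)
{
	ss->depth--;
}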