Subject: Re: [PATCH bpf] bpf: Don't WARN_ON_ONCE in bpf_bprintf_prepare
From: Daniel Borkmann <>
Date: Wed, 5 May 2021 22:00:35 +0200
On 5/5/21 8:55 PM, Andrii Nakryiko wrote:
> On Wed, May 5, 2021 at 9:23 AM Florent Revest <revest@chromium.org> wrote:
>>
>> The bpf_seq_printf, bpf_trace_printk and bpf_snprintf helpers share one
>> per-cpu buffer that they use to store temporary data (arguments to
>> bprintf). They "get" that buffer with try_get_fmt_tmp_buf and "put" it
>> by the end of their scope with bpf_bprintf_cleanup.
>>
>> If one of these helpers gets called within the scope of one of these
>> helpers, for example: a first bpf program gets called, uses
>
> Can we afford having few struct bpf_printf_bufs? They are just 512
> bytes, so can we have 3-5 of them? Tracing low-level stuff isn't the
> only situation where this can occur, right? If someone is doing
> bpf_snprintf() and interrupt occurs and we run another BPF program, it
> will be impossible to do bpf_snprintf() or bpf_trace_printk() from the
> second BPF program, etc. We can't eliminate the probability, but
> having a small stack of buffers would make the probability so
> miniscule as to not worry about it at all.
>
> Good thing is that try_get_fmt_tmp_buf() abstracts all the details, so
> the changes are minimal. Nestedness property is preserved for
> non-sleepable BPF programs, right? If we want this to work for
> sleepable we'd need to either: 1) disable migration or 2) instead of
> assuming a stack of buffers, do a loop to find unused one. Should be
> acceptable performance-wise, as it's not the fastest code anyway
> (printf'ing in general).
>
> In any case, re-using the same buffer for sort-of-optional-to-work
> bpf_trace_printk() and probably-important-to-work bpf_snprintf() is
> suboptimal, so seems worth fixing this.
>
> Thoughts?
Yes, agree, it would otherwise be really hard to debug. I had the same thought: why not allow nesting here, given that users very likely expect these helpers to just work in all contexts.
Thanks,
Daniel
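
For illustration, here is a minimal sketch of the kind of per-CPU stack of
bprintf buffers Andrii describes above. The identifiers (bpf_bprintf_bufs,
bpf_bprintf_nest_level, MAX_BPRINTF_NEST_LEVEL) and the depth of three
buffers are assumptions made for this sketch, not necessarily what the
final patch uses:

#include <linux/percpu.h>
#include <linux/preempt.h>
#include <linux/errno.h>

#define MAX_BPRINTF_BUF_LEN	512
/* Allow a few nested bprintf helper calls per CPU (illustrative depth). */
#define MAX_BPRINTF_NEST_LEVEL	3

struct bpf_bprintf_buffers {
	char bufs[MAX_BPRINTF_NEST_LEVEL][MAX_BPRINTF_BUF_LEN];
};

static DEFINE_PER_CPU(struct bpf_bprintf_buffers, bpf_bprintf_bufs);
static DEFINE_PER_CPU(int, bpf_bprintf_nest_level);

static int try_get_fmt_tmp_buf(char **tmp_buf)
{
	struct bpf_bprintf_buffers *bufs;
	int nest_level;

	preempt_disable();
	nest_level = this_cpu_inc_return(bpf_bprintf_nest_level);
	if (nest_level > MAX_BPRINTF_NEST_LEVEL) {
		/* Too many nested calls on this CPU: fail gracefully,
		 * without a WARN_ON_ONCE.
		 */
		this_cpu_dec(bpf_bprintf_nest_level);
		preempt_enable();
		return -EBUSY;
	}
	bufs = this_cpu_ptr(&bpf_bprintf_bufs);
	*tmp_buf = bufs->bufs[nest_level - 1];

	return 0;
}

static void bpf_bprintf_cleanup(void)
{
	if (this_cpu_read(bpf_bprintf_nest_level)) {
		this_cpu_dec(bpf_bprintf_nest_level);
		preempt_enable();
	}
}

Each nested call on a CPU simply takes the next slot of the stack, and
preemption stays disabled between the "get" and the "put" so the nesting
counter and the buffer stay consistent. As Andrii notes, sleepable programs
would additionally need either migration disabled or a loop that searches
for an unused slot.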