From:    Thomas Gleixner
Subject: Re: [RFC patch 14/19] bpf: Use migrate_disable() in hashtab code
Date:    2020-02-14
Mathieu Desnoyers <mathieu.desnoyers@efficios.com> writes:
> On 14-Feb-2020 02:39:31 PM, Thomas Gleixner wrote:
>> Replace the preempt_disable/enable() pairs with migrate_disable/enable()
>> pairs to prepare BPF to work on PREEMPT_RT enabled kernels. On a non-RT
>> kernel this maps to preempt_disable/enable(), i.e. no functional change.

...
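
For reference, the change follows this pattern (a sketch of the idea,
not the actual hashtab hunk; BPF_PROG_RUN() stands in for whatever the
critical section actually does):

  /* Before: pin the task and forbid preemption */
  preempt_disable();
  ret = BPF_PROG_RUN(prog, ctx);
  preempt_enable();

  /* After: only pin the task to the CPU. On !RT this still maps to
   * preempt_disable(); on RT the section stays preemptible, so
   * sleeping locks taken inside it do not blow up. */
  migrate_disable();
  ret = BPF_PROG_RUN(prog, ctx);
  migrate_enable();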

> Having all those events randomly and silently discarded might be quite
> unexpected from a user standpoint. This turns the deadlock prevention
> mechanism into a random tracepoint-dropping facility, which is
> unsettling.

Well, it already randomly drops events which might be unrelated to the
syscall target; this will just drop some more. Shrug.
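
For readers without the context: the deadlock prevention in question is
the per-CPU bpf_prog_active counter. Simplified from the tracing path,
it works roughly like this:

  /* If a BPF program is already running on this CPU, do not recurse
   * into another one (it might deadlock on locks the first one
   * holds); drop the event instead. */
  if (unlikely(__this_cpu_inc_return(bpf_prog_active) != 1)) {
          ret = 0;                /* event silently discarded */
          goto out;
  }
  ret = BPF_PROG_RUN(prog, ctx);
 out:
  __this_cpu_dec(bpf_prog_active);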

> One alternative approach we could consider to solve this is to make
> this deadlock prevention nesting counter per-thread rather than
> per-cpu.

That should work both on !RT and RT.
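
A minimal sketch of that alternative, assuming a new counter in struct
task_struct (the field name bpf_prog_active_nest is made up here):

  /* Per-task recursion guard: only drop the event when this very
   * task is already running a BPF program, instead of whenever any
   * program runs on the current CPU. */
  if (unlikely(++current->bpf_prog_active_nest != 1)) {
          current->bpf_prog_active_nest--;
          return 0;               /* true recursion, drop */
  }
  ret = BPF_PROG_RUN(prog, ctx);
  current->bpf_prog_active_nest--;

Since the counter lives in the task, neither preemption nor migration
can corrupt it; interrupts nest and unnest on the same task, much like
preempt_count does.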

> Also, I don't think using __this_cpu_inc() without preempt-disable or
> irq off is safe. You'll probably want to move to this_cpu_inc/dec
> instead, which can be heavier on some architectures.

Good catch.
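
The distinction, for the record: __this_cpu_inc() assumes the caller
has already disabled preemption, while this_cpu_inc() must be safe to
call from preemptible context:

  DEFINE_PER_CPU(int, bpf_prog_active);

  /* __this_cpu_inc(): plain read-modify-write on the current CPU's
   * slot; if the task is preempted and migrated mid-update, the
   * increment can land on the wrong CPU or be lost. */
  __this_cpu_inc(bpf_prog_active);

  /* this_cpu_inc(): preemption- and IRQ-safe. A single instruction
   * on x86; other architectures disable preemption or interrupts
   * internally, which is the extra cost mentioned above. */
  this_cpu_inc(bpf_prog_active);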

Thanks,

tglx


