Subject: Re: call_rcu from trace_preempt
On 6/17/15 2:36 PM, Paul E. McKenney wrote:
> Well, you do need to have something in each element to allow them to be
> tracked. You could indeed use llist_add() to maintain the per-CPU list,
> and then use llist_del_all() to bulk-remove all the elements from the per-CPU
> list. You can then pass each element in turn to kfree_rcu(). And yes,
> I am suggesting that you open-code this, as it is going to be easier to
> handle your special case than to provide a fully general solution. For
> one thing, the general solution would require a full rcu_head to track
> offset and next. In contrast, you can special-case the offset. And
> ignore the overload special cases.

yes. all makes sense.
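
Roughly what I have in mind for the element itself (struct and field
names below are made up just for this sketch, not final): an llist_node
to sit on the per-cpu list plus an rcu_head that kfree_rcu() consumes in
phase two. Since every element has the same layout, the rcu_head offset
is a compile-time constant, which is the "special-case the offset" part.

#include <linux/llist.h>
#include <linux/rcupdate.h>

/* illustrative element layout, not final */
struct pending_free {
	struct llist_node llnode;	/* links the element onto a per-cpu llist */
	struct rcu_head rcu;		/* used by kfree_rcu() in phase two */
	/* ... payload ... */
};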

> Locklessly enqueue onto a per-CPU list, but yes. The freeing is up to

yes. per-cpu llist indeed.
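
The enqueue side could then be as simple as this (per-cpu head and
helper name are placeholders for the sketch); llist_add() is a lockless
cmpxchg push, so calling it from the trace_preempt context should be
safe:

#include <linux/percpu.h>

static DEFINE_PER_CPU(struct llist_head, free_llist);

/* called from the restricted trace context instead of call_rcu() */
static void defer_free(struct pending_free *e)
{
	llist_add(&e->llnode, this_cpu_ptr(&free_llist));
}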

> you -- you get called just before exit from __call_rcu(), and get to
> figure out what to do.
>
> My guess would be if not in interrupt and not recursively invoked,
> atomically remove all the elements from the list, then pass each to
> kfree_rcu(), and finally let things take their course from there.
> The llist APIs look like they would work.

I still don't understand the above and the 'just before the exit from
__call_rcu()' part of the suggestion.
To avoid reentry into call_rcu I could either create one or more new
kthreads or a workqueue and do manual wakeups, but that's very
specialized and I don't want to waste them permanently, so I'm thinking
of doing llist_add into per-cpu llists and llist_del_all in
rcu_process_callbacks() to take the elements off these llists and call
kfree_rcu on them.
The llist_add part will also do:
if (!rcu_is_watching()) invoke_rcu_core();
to raise softirq when necessary.
So in the end it will look like a two-phase kfree_rcu.
I'll try to code it up and see if it explodes :)
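
Something like this for the drain side, to go with the sketch above
(the exact hook point in rcu_process_callbacks() and the helper name
are placeholders); the rcu_is_watching()/invoke_rcu_core() check would
go into the enqueue helper right after the llist_add():

/* called from rcu_process_callbacks() in softirq context */
static void drain_deferred_frees(void)
{
	struct llist_node *head = llist_del_all(this_cpu_ptr(&free_llist));
	struct pending_free *e, *tmp;

	/* in softirq now, so entering __call_rcu() via kfree_rcu() is fine */
	llist_for_each_entry_safe(e, tmp, head, llnode)
		kfree_rcu(e, rcu);
}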


