 
Subject: Re: [PATCH 09/24] rcu/tree: cache specified number of objects
Hi Paul,

On Mon, May 4, 2020 at 3:01 PM Paul E. McKenney <paulmck@kernel.org> wrote:
>
> On Mon, May 04, 2020 at 02:08:05PM -0400, Joel Fernandes wrote:
> > On Mon, May 04, 2020 at 07:48:22PM +0200, Uladzislau Rezki wrote:
> > > On Mon, May 04, 2020 at 08:24:37AM -0700, Paul E. McKenney wrote:
> > [..]
> > > > > > Presumably the list can also be accessed without holding this lock,
> > > > > > because otherwise we shouldn't need llist...
> > > > > >
> > > > > Hm... We increase the number of elements in the cache, therefore it
> > > > > is not lockless. On the other hand, I used llist_head to maintain
> > > > > the cache because it is a singly linked list: we do not need a
> > > > > "*prev" link, and we do not need to init the list.
> > > > >
> > > > > But I can change it to list_head. Please let me know if I need to :)
> > > >
> > > > Hmmm... Maybe it is time for a non-atomic singly linked list? In the RCU
> > > > callback processing, the operations were open-coded, but they have been
> > > > pushed into include/linux/rcu_segcblist.h and kernel/rcu/rcu_segcblist.*.
> > > >
> > > > Maybe some non-atomic/protected/whatever macros in the llist.h file?
> > > > Or maybe just open-code the singly linked list? (Probably not the
> > > > best choice, though.) Add comments stating that the atomic properties
> > > > of the llist functions aren't needed? Something else?
> > > >
> > > In order to keep it simple, I can replace llist_head with list_head?
> >
> > Just to clarify for me, what is the disadvantage of using llist here?
>
> Are there some llist APIs that are not set up for concurrency? I am
> not seeing any.

An llist deletion racing with another llist deletion needs locking. So
strictly speaking, some locking can still be required with llist usage?

As I understand it, the locklessness applies when additions and deletions
race with each other: for that, no lock is needed. But the current patch
takes the lock anyway, to avoid a lost update of the size of the list.
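
Concretely, the pattern is something like this minimal sketch (the names
krc_cache, nr_cached and cache_lock are made up for illustration; they are
not the identifiers from the patch):

	#include <linux/llist.h>
	#include <linux/spinlock.h>

	struct krc_cache {
		struct llist_head head;	/* cached objects */
		int nr_cached;		/* current length of the list */
		spinlock_t cache_lock;
	};

	static void cache_push(struct krc_cache *krc, struct llist_node *obj)
	{
		/*
		 * llist_add() alone is safe against concurrent adders, but
		 * nr_cached++ is a plain read-modify-write: two CPUs can both
		 * read the same old value and one increment is lost. So the
		 * lock has to cover both the add and the counter update.
		 */
		spin_lock(&krc->cache_lock);
		llist_add(obj, &krc->head);
		krc->nr_cached++;
		spin_unlock(&krc->cache_lock);
	}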

> The overhead isn't that much of a concern, given that these are not on the
> hotpath, but people reading the code and seeing the cmpxchg operations
> might be forgiven for believing that there is some concurrency involved
> somewhere.
>
> Or am I confused and there are now single-threaded add/delete operations
> for llist?

I do see some examples of llist usage with locking in the kernel code.
One case is do_init_module() calling llist_add() to add to
init_free_list under module_mutex.
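
For reference, the shape of that call site is roughly this (abbreviated
from memory, so treat it as a sketch rather than an exact quote of
kernel/module.c):

	mutex_lock(&module_mutex);
	...
	/*
	 * llist_add() returns true if the list was empty beforehand,
	 * so the freeing worker is scheduled only once per batch.
	 */
	if (llist_add(&freeinit->node, &init_free_list))
		schedule_work(&init_free_wq);
	mutex_unlock(&module_mutex);

The llist_add() itself should be safe without the mutex; module_mutex is
held for the surrounding module state anyway, so the add simply happens
under it.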

> > Since we don't care about traversing backwards, isn't it better to use llist
> > for this usecase?
> >
> > I think Vlad is using locking because we're also tracking the size of the
> > llist to know when to free pages. This tracking could suffer from the
> > lost-update problem without any locking, if 2 lockless llist_add calls
> > happened simultaneously.
> >
> > Also if list_head is used, it will take more space and still use locking.
>
> Indeed, it would be best to use a non-concurrent singly linked list.

Ok cool :-)

Is it safe to say something like the following is ruled out? ;-) :-D
#define kfree_rcu_list_add llist_add
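
And if llist.h did grow the non-atomic helpers you suggested, I would
imagine something along these lines (entirely hypothetical; the
double-underscore name is chosen only by analogy with __list_add()):

	/*
	 * Hypothetical non-atomic variant, meant to live in llist.h: the
	 * caller must provide mutual exclusion, so a plain store replaces
	 * the cmpxchg() loop. Returns true if the list was empty
	 * beforehand, like llist_add().
	 */
	static inline bool __llist_add(struct llist_node *new,
				       struct llist_head *head)
	{
		new->next = head->first;
		head->first = new;
		return new->next == NULL;
	}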

Thanks,

- Joel
