Subject: Re: [PATCH 09/24] rcu/tree: cache specified number of objects
On Mon, May 04, 2020 at 02:08:05PM -0400, Joel Fernandes wrote:
> On Mon, May 04, 2020 at 07:48:22PM +0200, Uladzislau Rezki wrote:
> > On Mon, May 04, 2020 at 08:24:37AM -0700, Paul E. McKenney wrote:
> [..]
> > > > > Presumably the list can also be accessed without holding this lock,
> > > > > because otherwise we shouldn't need llist...
> > > > >
> > > > Hm... We increase the number of elements in the cache, therefore it is not
> > > > lockless. On the other hand, I used llist_head to maintain the cache
> > > > because it is a singly linked list, so we do not need a "*prev" link. Also
> > > > we do not need to init the list.
> > > >
> > > > But I can change it to list_head. Please let me know if I need to :)
> > >
> > > Hmmm... Maybe it is time for a non-atomic singly linked list? In the RCU
> > > callback processing, the operations were open-coded, but they have been
> > > pushed into include/linux/rcu_segcblist.h and kernel/rcu/rcu_segcblist.*.
> > >
> > > Maybe some non-atomic/protected/whatever macros in the llist.h file?
> > > Or maybe just open-code the singly linked list? (Probably not the
> > > best choice, though.) Add comments stating that the atomic properties
> > > of the llist functions aren't needed? Something else?
> > >
> > In order to keep it simple, I can replace llist_head with list_head?
>
> Just to clarify for me, what is the disadvantage of using llist here?

Are there some llist APIs that are not set up for concurrency? I am
not seeing any.

The overhead isn't that much of a concern, given that these are not on the
hotpath, but people reading the code and seeing the cmpxchg operations
might be forgiven for believing that there is some concurrency involved
somewhere.

Or am I confused and there are now single-threaded add/delete operations
for llist?
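
For illustration, here is a minimal sketch of the kind of lock-protected
llist-based cache being discussed; the struct and field names (obj_cache,
nr_objs, and so on) are made up for the example, not taken from the patch.
Even with the spinlock serializing every caller, llist_add() and
llist_del_first() still go through their cmpxchg-based machinery, which is
where the apparent concurrency comes from:

#include <linux/llist.h>
#include <linux/spinlock.h>

struct obj_cache {
	spinlock_t lock;
	struct llist_head objs;		/* cached objects/pages */
	int nr_objs;			/* current number of cached objects */
};

static bool cache_put(struct obj_cache *c, struct llist_node *n, int max)
{
	bool cached = false;

	spin_lock(&c->lock);
	if (c->nr_objs < max) {
		/* Serialized by c->lock, yet llist_add() still does a cmpxchg. */
		llist_add(n, &c->objs);
		c->nr_objs++;
		cached = true;
	}
	spin_unlock(&c->lock);

	return cached;
}

static struct llist_node *cache_get(struct obj_cache *c)
{
	struct llist_node *n;

	spin_lock(&c->lock);
	n = llist_del_first(&c->objs);	/* another cmpxchg under the lock */
	if (n)
		c->nr_objs--;
	spin_unlock(&c->lock);

	return n;
}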

> Since we don't care about traversing backwards, isn't it better to use llist
> for this use case?
>
> I think Vlad is using locking as we're also tracking the size of the llist to
> know when to free pages. This tracking could suffer from the lost-update
> problem without any locking, if 2 lockless llist_add calls happened simultaneously.
>
> Also if list_head is used, it will take more space and still use locking.
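
To illustrate the lost-update point, using the same hypothetical obj_cache
as in the sketch above (this variant exists only to show the race, it is not
in the patch): llist_add() itself copes with concurrent producers, but the
plain counter next to it does not, so dropping the lock would force the
counter to become an atomic_t or similar.

static void cache_put_racy(struct obj_cache *c, struct llist_node *n)
{
	llist_add(n, &c->objs);	/* safe: the cmpxchg handles concurrent adds */
	c->nr_objs++;		/*
				 * NOT safe: two CPUs can both read the old
				 * value and both store old + 1, losing one
				 * increment, unless nr_objs becomes an
				 * atomic_t updated with atomic_inc().
				 */
}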

Indeed, it would be best to use a non-concurrent singly linked list.
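
A rough sketch of what such a non-concurrent singly linked list could look
like for this cache, again with made-up names: plain pointer assignments
under the caller's existing lock, no atomics at all, and no initialization
needed beyond zeroing.

struct obj_node {
	struct obj_node *next;
};

struct obj_cache_plain {
	spinlock_t lock;
	struct obj_node *head;
	int nr_objs;
};

/* Caller holds c->lock. */
static void cache_put_plain(struct obj_cache_plain *c, struct obj_node *n)
{
	n->next = c->head;	/* plain stores, no cmpxchg */
	c->head = n;
	c->nr_objs++;
}

/* Caller holds c->lock. */
static struct obj_node *cache_get_plain(struct obj_cache_plain *c)
{
	struct obj_node *n = c->head;

	if (n) {
		c->head = n->next;
		c->nr_objs--;
	}
	return n;
}

The element count stays exact here because everything is already serialized
by the lock, which was the whole point of the lock in the first place.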

Thanx, Paul

> Thoughts?
>
> thanks,
>
> - Joel
>
> > >
> > > The comments would be a good start. Just to take pity on people seeing
> > > the potential for concurrency and wondering how the concurrent accesses
> > > actually happen. ;-)
> > >
> > Sounds like you are kidding me :)
> >
> > --
> > Vlad Rezki
