Date: Mon, 15 Oct 2018 09:36:06 -0700
From: "Paul E. McKenney" <>
Subject: Re: [PATCH] rcu: Use cpus_read_lock() while looking at cpu_online_mask
On Mon, Oct 15, 2018 at 11:33:48PM +0800, Boqun Feng wrote:
> On Mon, Oct 15, 2018 at 05:09:03PM +0200, Sebastian Andrzej Siewior wrote:
> > On 2018-10-15 23:07:15 [+0800], Boqun Feng wrote:
> > > Hi, Sebastian
> >
> > Hi Boqun,
> >
> > > On Mon, Oct 15, 2018 at 04:42:17PM +0200, Sebastian Andrzej Siewior wrote:
> > > > On 2018-10-13 06:48:13 [-0700], Paul E. McKenney wrote:
> > > > >
> > > > > My concern would be that it would queue it by default for the current
> > > > > CPU, which would serialize the processing, losing the concurrency of
> > > > > grace-period initialization.  But that was a long time ago, and perhaps
> > > > > workqueues have changed.
> > > >
> > > > but the code here is always using the first CPU of a NUMA node or did I
> > > > miss something?
> > >
> > > The thing is the original way is to pick one CPU for a *RCU* node to
> > > run the grace-period work, but with your proposal, if a RCU node is
> > > smaller than a NUMA node (having fewer CPUs), we could end up having two
> > > grace-period works running on one CPU. I think that's Paul's concern.
> >
> > Ah. Okay. From what I observed, the RCU nodes and NUMA nodes were 1:1
> > here. Noted.
>
> Ok, in that case, there should be no significant performance difference.
>
> > Given that I can enqueue a work item on an offlined CPU I don't see why
> > commit fcc6354365015 ("rcu: Make expedited GPs handle CPU 0 being
> > offline") should make a difference. Any objections to just revert it?
>
> Well, that commit is trying to avoid queue a work on an offlined CPU,
> because according to workqueue API, it's the users' responsibility to
> make sure the CPU is online when a work item enqueued. So there is a
> difference ;-)
>
> But I don't have any objection to revert it with your proposal, since
> yours is more simple and straight-forward, and doesn't perform worse if
> NUMA nodes and RCU nodes have one-to-one corresponding.
>
> Besides, I think even if we observe some performance difference in the
> future, the best way to solve that is to make workqueue have a more
> fine-grained affine group than a NUMA node.
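For concreteness, the idea under discussion is roughly the following
(untested sketch only; the helper name, workqueue, and work item are
placeholders, not the actual RCU code):

#include <linux/cpu.h>
#include <linux/cpumask.h>
#include <linux/topology.h>
#include <linux/workqueue.h>

/*
 * Sketch only: queue @work on the first online CPU of NUMA node @node,
 * falling back to an unbound CPU if the whole node happens to be offline.
 * The cpus_read_lock() keeps the chosen CPU from being offlined between
 * the cpu_online_mask check and the queue_work_on() call.
 */
static void queue_gp_work_on_node(struct workqueue_struct *wq,
				  struct work_struct *work, int node)
{
	int cpu;

	cpus_read_lock();
	cpu = cpumask_first_and(cpumask_of_node(node), cpu_online_mask);
	if (cpu >= nr_cpu_ids)
		cpu = WORK_CPU_UNBOUND;	/* every CPU of the node is offline */
	queue_work_on(cpu, wq, work);
	cpus_read_unlock();
}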
Please keep in mind that there are computer systems out there with NUMA topologies that are completely incompatible with RCU's rcu_node tree structure. According to Rik van Riel (CCed), there are even systems out there where CPU 0 is on socket 0, CPU 1 on socket 1, and so on, round-robining across the sockets.
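To make the round-robin case concrete, here is a toy userspace illustration.
The topology is purely hypothetical (32 CPUs, two sockets, two 16-CPU leaf
rcu_node structures), and the "first CPU of the NUMA node containing the
rcu_node's first CPU" rule is just one plausible reading of the node-based
approach:

#include <stdio.h>

/*
 * Hypothetical topology, for illustration only: 32 CPUs, 2 sockets,
 * round-robin numbering (CPU 0 -> socket 0, CPU 1 -> socket 1, ...),
 * and two 16-CPU leaf rcu_node structures covering CPUs 0-15 and 16-31.
 */
#define NR_CPUS		32
#define NR_SOCKETS	2
#define RNP_WIDTH	16

static int cpu_to_socket(int cpu)
{
	return cpu % NR_SOCKETS;	/* the round-robin numbering */
}

/* First CPU of the socket containing @cpu; sockets start at CPUs 0 and 1. */
static int first_cpu_of_socket_containing(int cpu)
{
	return cpu_to_socket(cpu);
}

int main(void)
{
	for (int lo = 0; lo < NR_CPUS; lo += RNP_WIDTH)
		printf("rcu_node %2d-%2d: node-based rule picks CPU %d\n",
		       lo, lo + RNP_WIDTH - 1,
		       first_cpu_of_socket_containing(lo));
	/* Both leaf rcu_nodes pick CPU 0, serializing their GP work. */
	return 0;
}

On such a layout both leaf rcu_node structures end up queueing on CPU 0,
which is exactly the "two grace-period works running on one CPU" situation
Boqun describes above.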
The system that convinced me that the additional constraints on the workqueue's CPU were needed had CPUs 0-7 on one socket and CPUs 8-15 on the second, with CPUs 0-15 sharing the same leaf rcu_node structure. Unfortunately, I no longer have useful access to this system (dead disk drive, apparently).
I am not saying that Sebastian's approach is bad, but rather that it does need to be tested on a variety of systems.
Thanx, Paul
> Regards,
> Boqun
>
> >
> > > Regards,
> > > Boqun
> >
> > Sebastian