From: Dexuan Cui <>
Subject: RE: [PATCH -tip V2 00/10] workqueue: break affinity initiatively
Date: Wed, 23 Dec 2020 20:27:18 +0000
> From: Lai Jiangshan <jiangshanlai@gmail.com>
> Sent: Wednesday, December 23, 2020 7:02 AM
> >
> > Hi,
> > I tested this patchset on today's tip.git's master branch
> > (981316394e35 ("Merge branch 'locking/urgent'")).
> >
> > Every time the kernel boots with 32 CPUs (I'm running the Linux VM on
> > Hyper-V), I get the below warning.
> > (BTW, with 8 or 16 CPUs, I don't see the warning).
> > By printing the cpumasks with "%*pbl", I know the warning happens
> > because:
> > new_mask = 16-31
> > cpu_online_mask= 0-16
> > cpu_active_mask= 0-15
> > p->nr_cpus_allowed=16
> >
>
> Hello, Dexuan
>
> Could you omit patch4 of the patchset and test it again, please?
> ("workqueue: don't set the worker's cpumask when kthread_bind_mask()")
>
> kthread_bind_mask() set the worker task to the pool's cpumask without
> any check. And set_cpus_allowed_ptr() finds that the task's cpumask
> is unchanged (already set by kthread_bind_mask()) and skips all the checks.
>
> And I found that numa=fake=2U seems broken on cpumask_of_node() in my
> box.
>
> Thanks,
> Lai
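If I'm reading your explanation right, the interaction is roughly the below (a minimal userspace C simulation, NOT the actual kernel code -- the helper names only mirror the kernel functions, and the mask values mirror my %*pbl output):

#include <stdint.h>
#include <stdio.h>

/* Simulated 32-CPU masks: bit n == CPU n. */
typedef uint32_t cpumask_t;

struct task { cpumask_t cpus_mask; };

static cpumask_t cpu_active_mask = 0x0000ffff;	/* CPUs 0-15 active */

/* Like kthread_bind_mask(): stores the mask with no validity check. */
static void kthread_bind_mask(struct task *p, cpumask_t mask)
{
	p->cpus_mask = mask;
}

/*
 * Like set_cpus_allowed_ptr(): bails out early when the mask is
 * unchanged, so the cpu_active_mask check below is never reached.
 */
static int set_cpus_allowed_ptr(struct task *p, cpumask_t new_mask)
{
	if (p->cpus_mask == new_mask)
		return 0;			/* all checks skipped */

	if (!(new_mask & cpu_active_mask))
		return -1;			/* would have failed here */

	p->cpus_mask = new_mask;
	return 0;
}

int main(void)
{
	struct task worker = { 0 };
	cpumask_t node1_mask = 0xffff0000;	/* new_mask: CPUs 16-31 */

	kthread_bind_mask(&worker, node1_mask);	/* no check here... */

	/* ...and none here either, because the mask is unchanged: */
	printf("ret = %d\n", set_cpus_allowed_ptr(&worker, node1_mask));
	return 0;
}

This prints "ret = 0" even though none of the CPUs in the mask is active, which is exactly the skipped check you point at.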
Looks like your analysis is correct: the warning doesn't repro if I configure all 32 vCPUs into a single virtual NUMA node (and I don't see the message "smpboot: CPU 16 Converting physical 0 to logical die 1"):
[ 1.495440] smp: Bringing up secondary CPUs ...
[ 1.499207] x86: Booting SMP configuration:
[ 1.503038] .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 #16 #17 #18 #19 #20 #21 #22 #23 #24 #25 #26 #27 #28 #29 #30 #31
[ 1.531930] smp: Brought up 1 node, 32 CPUs
[ 1.538779] smpboot: Max logical packages: 1
[ 1.539041] smpboot: Total of 32 processors activated (146859.90 BogoMIPS)
The warning only repros when there is more than one node, and it prints only once, for the first vCPU of the second node (i.e. node #1).
With more than one node, if I drop patch4, the warning does not repro.
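The mask values also hint at why it prints only once: while CPU 16 (the first vCPU of node #1) is coming up, node #1's cpumask (16-31) has no overlap with cpu_active_mask (0-15); once CPU 16 turns active, the two masks intersect, so the later vCPUs of node #1 shouldn't trip it. A quick arithmetic check (plain C; the 32-bit masks just illustrate the %*pbl values above):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t node1_mask  = 0xffff0000;	/* new_mask:        CPUs 16-31 */
	uint32_t active_mask = 0x0000ffff;	/* cpu_active_mask: CPUs 0-15  */

	/* While CPU 16 is coming up: no intersection -> warning fires. */
	printf("before: intersects = %d\n", (node1_mask & active_mask) != 0);

	active_mask |= 1u << 16;		/* CPU 16 becomes active */

	/* Now the masks intersect, so later CPUs of node #1 stay quiet. */
	printf("after:  intersects = %d\n", (node1_mask & active_mask) != 0);
	return 0;
}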
Thanks,
-- Dexuan