From: Valentin Schneider <>
Subject: Re: [PATCH 3/4] workqueue: Tag bound workers with KTHREAD_IS_PER_CPU
Date: Thu, 14 Jan 2021 13:21:26 +0000
On 14/01/21 14:12, Peter Zijlstra wrote:
> On Wed, Jan 13, 2021 at 09:28:13PM +0800, Lai Jiangshan wrote:
>> On Tue, Jan 12, 2021 at 10:51 PM Peter Zijlstra <peterz@infradead.org> wrote:
>> > @@ -4972,9 +4977,11 @@ static void rebind_workers(struct worker
>> >  	 * of all workers first and then clear UNBOUND. As we're called
>> >  	 * from CPU_ONLINE, the following shouldn't fail.
>> >  	 */
>> > -	for_each_pool_worker(worker, pool)
>> > +	for_each_pool_worker(worker, pool) {
>> >  		WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
>> >  						  pool->attrs->cpumask) < 0);
>> > +		kthread_set_per_cpu(worker->task, true);
>>
>> Will the schedule break affinity in the middle of these two lines due to
>> patch4 allowing it and result in Paul's reported splat.
>
> So something like the below _should_ work, except i'm seeing odd WARNs.
> I'll prod at it some more.
>
> --- a/kernel/workqueue.c
> +++ b/kernel/workqueue.c
> @@ -2371,6 +2371,7 @@ static int worker_thread(void *__worker)
>  	/* tell the scheduler that this is a workqueue worker */
>  	set_pf_worker(true);
>  woke_up:
> +	kthread_parkme();
>  	raw_spin_lock_irq(&pool->lock);
>
>  	/* am I supposed to die? */
> @@ -2428,6 +2429,7 @@ static int worker_thread(void *__worker)
>  			move_linked_works(work, &worker->scheduled, NULL);
>  			process_scheduled_works(worker);
>  		}
> +		kthread_parkme();
>  	} while (keep_working(pool));
>
>  	worker_set_flags(worker, WORKER_PREP);
> @@ -4978,9 +4980,9 @@ static void rebind_workers(struct worker
>  	 * from CPU_ONLINE, the following shouldn't fail.
>  	 */
>  	for_each_pool_worker(worker, pool) {
> -		WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
> -						  pool->attrs->cpumask) < 0);
> +		kthread_park(worker->task);
Don't we still need an affinity change here, to undo what was done in unbind_workers()?
Would something like

  __kthread_bind_mask(worker->task, pool->attrs->cpumask, TASK_PARKED)

even work?
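
i.e. a rough, untested sketch of the rebind loop - assuming
__kthread_bind_mask() (AFAICT internal to kernel/kthread.c today) were made
callable from workqueue.c, and that TASK_PARKED is an acceptable wait state
for it; kthread_set_per_cpu() here is the bool-flavoured one from patch 3:

	for_each_pool_worker(worker, pool) {
		/* Get the worker stopped in kthread_parkme() */
		kthread_park(worker->task);
		/*
		 * Restore the pool affinity while the task sits in
		 * TASK_PARKED and thus cannot run or migrate.
		 */
		__kthread_bind_mask(worker->task, pool->attrs->cpumask,
				    TASK_PARKED);
		/* Re-mark it as a per-CPU kthread before it resumes */
		kthread_set_per_cpu(worker->task, true);
		kthread_unpark(worker->task);
	}

That way both the affinity change and the KTHREAD_IS_PER_CPU flip happen
before the worker can run again.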
>  		kthread_set_per_cpu(worker->task, true);
> +		kthread_unpark(worker->task);
>  	}
>
>  	raw_spin_lock_irq(&pool->lock);
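
For context, the affinity change I mean above is undoing what
unbind_workers() does on the way down, which is roughly (from memory,
modulo the exact mask in the tree this applies to):

	for_each_pool_worker(worker, pool)
		/*
		 * Break per-CPU affinity: cpu_possible_mask in recent
		 * trees (was cpu_active_mask)
		 */
		WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
						  cpu_possible_mask) < 0);

Without a matching set_cpus_allowed_ptr() (or __kthread_bind_mask()) in
rebind_workers(), the workers would keep that wide affinity after the CPU
comes back up.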