Subject: Re: [RFC] workqueue: avoiding unbounded wq on isolated CPUs by default
From: Mike Galbraith <>
Date: Tue, 21 Jul 2015 10:55:25 +0200
On Sun, 2015-07-19 at 10:02 +0200, Mike Galbraith wrote:
> Why do we do nothing about these allegedly unbound work items?
My box seems to think the answer is: no reason other than nobody having asked the source to please not do that. Guess I'll go ask a NUMA box.
workqueue: RR schedule unbound work to CPUs in wq_unbound_cpumask
WORK_CPU_UNBOUND work items queued to a bound workqueue always run locally. This is a good thing normally, as it keeps us from bouncing work all over the place like ping-pong balls in a nuclear fission demo. When the user has asked us to keep unbound work away from certain CPUs, however, honor that request by selecting a CPU from wq_unbound_cpumask instead.
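For context on which callers this affects: plain queue_work() passes WORK_CPU_UNBOUND to queue_work_on(), and schedule_work() does the same via system_wq, so both take the round-robin path, while an explicit queue_work_on(cpu, ...) still runs where it was told to. A minimal sketch (the work item and function names below are invented for illustration, not part of the patch):

	#include <linux/workqueue.h>

	static void my_work_fn(struct work_struct *work)
	{
		/* Runs on whichever CPU __queue_work() picked. */
	}
	static DECLARE_WORK(my_work, my_work_fn);

	static void kick_example(void)
	{
		/*
		 * Expands to queue_work_on(WORK_CPU_UNBOUND, system_wq, ...),
		 * so with this patch the executing CPU is chosen round robin
		 * from wq_unbound_cpumask rather than always being local.
		 */
		schedule_work(&my_work);
	}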
Not-signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com>
---
 kernel/workqueue.c | 27 +++++++++++++++++++++++++--
 1 file changed, 25 insertions(+), 2 deletions(-)
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -301,6 +301,9 @@ static bool workqueue_freezing;	/* PL:
 static cpumask_var_t wq_unbound_cpumask; /* PL: low level cpumask for all unbound wqs */
 
+/* CPU where WORK_CPU_UNBOUND work was last round robin scheduled from this CPU */
+static DEFINE_PER_CPU(unsigned int, wq_unbound_rr_cpu_last);
+
 /* the per-cpu worker pools */
 static DEFINE_PER_CPU_SHARED_ALIGNED(struct worker_pool [NR_STD_WORKER_POOLS],
 				     cpu_worker_pools);
@@ -1294,6 +1297,24 @@ static bool is_chained_work(struct workq
 	return worker && worker->current_pwq->wq == wq;
 }
 
+/*
+ * When queueing WORK_CPU_UNBOUND work to a !WQ_UNBOUND queue, round
+ * robin among wq_unbound_cpumask to avoid perturbing sensitive tasks.
+ */
+static unsigned int select_round_robin_cpu(unsigned int cpu)
+{
+	if (cpumask_test_cpu(cpu, wq_unbound_cpumask))
+		return cpu;
+	if (cpumask_empty(wq_unbound_cpumask))
+		return cpu;
+	cpu = __this_cpu_read(wq_unbound_rr_cpu_last);
+	cpu = cpumask_next_and(cpu, wq_unbound_cpumask, cpu_online_mask);
+	if (cpu >= nr_cpu_ids)
+		cpu = 0;
+	__this_cpu_write(wq_unbound_rr_cpu_last, cpu);
+	return cpu;
+}
+
 static void __queue_work(int cpu, struct workqueue_struct *wq,
 			 struct work_struct *work)
 {
@@ -1322,9 +1343,11 @@ static void __queue_work(int cpu, struct
 		cpu = raw_smp_processor_id();
 
 	/* pwq which will be used unless @work is executing elsewhere */
-	if (!(wq->flags & WQ_UNBOUND))
+	if (!(wq->flags & WQ_UNBOUND)) {
+		if (req_cpu == WORK_CPU_UNBOUND)
+			cpu = select_round_robin_cpu(cpu);
 		pwq = per_cpu_ptr(wq->cpu_pwqs, cpu);
-	else
+	} else
 		pwq = unbound_pwq_by_node(wq, cpu_to_node(cpu));
 
 	/*
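An editorial aside on the wrap-around, not part of the posted patch: when cpumask_next_and() runs off the end of the mask, the RFC resets to CPU 0, but CPU 0 is not guaranteed to be in wq_unbound_cpumask (or even online), so a wrap can land work on exactly the kind of CPU the user asked us to leave alone. A minimal sketch of a more careful wrap, reusing the patch's own helpers and per-cpu variable:

	static unsigned int select_round_robin_cpu(unsigned int cpu)
	{
		if (cpumask_test_cpu(cpu, wq_unbound_cpumask))
			return cpu;
		if (cpumask_empty(wq_unbound_cpumask))
			return cpu;

		cpu = __this_cpu_read(wq_unbound_rr_cpu_last);
		cpu = cpumask_next_and(cpu, wq_unbound_cpumask, cpu_online_mask);
		if (cpu >= nr_cpu_ids) {
			/* Wrap to the first online CPU in the unbound mask. */
			cpu = cpumask_first_and(wq_unbound_cpumask, cpu_online_mask);
			/* The mask may hold no online CPUs; stay local then. */
			if (cpu >= nr_cpu_ids)
				return raw_smp_processor_id();
		}
		__this_cpu_write(wq_unbound_rr_cpu_last, cpu);
		return cpu;
	}

For testing, the target set should be adjustable at runtime through the low-level unbound cpumask attribute (assuming the tree carries the writable knob added alongside wq_unbound_cpumask), e.g. echo 3 > /sys/devices/virtual/workqueue/cpumask to confine unbound work to CPUs 0-1.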