Subject: [PATCH v3 3/6] workqueue: change value of lcpu in __queue_delayed_work_on()
We assign a CPU id to the work struct's data field in
__queue_delayed_work_on().  In the current implementation, when a work
item is queued for the first time, the id of the CPU we are currently
running on is assigned.  So if we call __queue_delayed_work_on() for
CPU A while running on CPU B, the __queue_work() invoked from
delayed_work_timer_fn() goes down the following sub-optimal path in the
WQ_NON_REENTRANT case:

	gcwq = get_gcwq(cpu);
	if (wq->flags & WQ_NON_REENTRANT &&
	    (last_gcwq = get_work_gcwq(work)) && last_gcwq != gcwq) {
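
To make the scenario concrete, here is a rough stand-alone userspace
model of why that check fires; struct model_work, takes_slow_path() and
the numeric value used for WORK_CPU_UNBOUND are invented for
illustration and are not kernel code:

#include <stdbool.h>
#include <stdio.h>

#define WORK_CPU_UNBOUND	(-1)	/* stand-in for the kernel constant */

struct model_work {
	int last_cpu;	/* CPU recorded in the work's data field */
};

/*
 * Models the WQ_NON_REENTRANT branch above: when the CPU recorded in
 * the work differs from the CPU we are queueing on, __queue_work()
 * must inspect the other gcwq before it can fall back to the
 * requested one.
 */
static bool takes_slow_path(const struct model_work *work, int target_cpu)
{
	return work->last_cpu != WORK_CPU_UNBOUND &&
	       work->last_cpu != target_cpu;
}

int main(void)
{
	/*
	 * __queue_delayed_work_on(CPU 0, ...) is called while running
	 * on CPU 1; the current code records the local CPU (1).
	 */
	struct model_work work = { .last_cpu = 1 };

	/* The timer later fires and __queue_work() runs for CPU 0. */
	printf("slow path taken: %s\n",
	       takes_slow_path(&work, 0) ? "yes" : "no");
	return 0;
}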

Change lcpu to @cpu, and then change lcpu to the local CPU if lcpu is
WORK_CPU_UNBOUND.  This is sufficient to keep us off the sub-optimal
path.
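
As a quick sanity check of that claim, the following stand-alone sketch
contrasts the old and the new lcpu selection for the bound-workqueue
case; old_lcpu(), new_lcpu() and the NO_GCWQ sentinel are invented for
this illustration and do not exist in the kernel:

#include <stdio.h>

#define WORK_CPU_UNBOUND	(-1)	/* stand-in for the kernel constant */
#define NO_GCWQ			(-2)	/* models get_work_gcwq() == NULL */

/* Old behaviour: with no usable gcwq recorded, always fall back to the
 * local CPU, ignoring the CPU the caller asked for. */
static int old_lcpu(int requested_cpu, int gcwq_cpu, int local_cpu)
{
	(void)requested_cpu;	/* the old code never looks at it */

	if (gcwq_cpu != NO_GCWQ && gcwq_cpu != WORK_CPU_UNBOUND)
		return gcwq_cpu;
	return local_cpu;
}

/* New behaviour: start from the requested CPU, keep a recorded gcwq if
 * there is one, and use the local CPU only for WORK_CPU_UNBOUND. */
static int new_lcpu(int requested_cpu, int gcwq_cpu, int local_cpu)
{
	int lcpu = requested_cpu;

	if (gcwq_cpu != NO_GCWQ)
		lcpu = gcwq_cpu;
	if (lcpu == WORK_CPU_UNBOUND)
		lcpu = local_cpu;
	return lcpu;
}

int main(void)
{
	/* First queueing: nothing recorded yet, caller asks for CPU 0
	 * while running on CPU 1 (the case described above). */
	printf("old: lcpu=%d\n", old_lcpu(0, NO_GCWQ, 1));	/* prints 1 */
	printf("new: lcpu=%d\n", new_lcpu(0, NO_GCWQ, 1));	/* prints 0 */
	return 0;
}

With the new selection, the CPU recorded in the work matches the gcwq
that __queue_work() will use, so the non-reentrancy check no longer
sees a mismatch.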

Signed-off-by: Joonsoo Kim <js1304@gmail.com>

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index c29f2dc..32c4f79 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1356,9 +1356,16 @@ static void __queue_delayed_work(int cpu, struct workqueue_struct *wq,
 	if (!(wq->flags & WQ_UNBOUND)) {
 		struct global_cwq *gcwq = get_work_gcwq(work);
 
-		if (gcwq && gcwq->cpu != WORK_CPU_UNBOUND)
+		/*
+		 * If we cannot get the gcwq from the work directly,
+		 * deliberately select the last CPU so as not to go down the
+		 * sub-optimal reentrance-detection path for delayed work.  In
+		 * this case, assign @cpu to lcpu, except WORK_CPU_UNBOUND.
+		 */
+		lcpu = cpu;
+		if (gcwq)
 			lcpu = gcwq->cpu;
-		else
+		if (lcpu == WORK_CPU_UNBOUND)
 			lcpu = raw_smp_processor_id();
 	} else {
 		lcpu = WORK_CPU_UNBOUND;
--
1.7.9.5

