From: Lai Jiangshan <laijs@cn.fujitsu.com>
Subject: [PATCH 05/13] workqueue: change queued detection and remove *mb()s
Date: 31 Jan 2013
We now have this invariant:

	CWQ bit is set and cwq->pool == pool
		<==> the work is queued on the pool

So we can simplify the queued-work detection and locking, and remove the
*mb()s.

(Although smp_rmb()/smp_wmb() are nops on x86, they are expensive on other
architectures.)
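
To illustrate the change (a condensed sketch, not code from the patch;
"queued_here" is a stand-in variable), the detection goes from a
barrier-based re-check to a plain comparison under pool->lock:

	/* before: list check plus a read barrier to re-read work->data */
	spin_lock(&pool->lock);
	if (!list_empty(&work->entry)) {
		smp_rmb();	/* paired with smp_wmb() in insert_work() */
		if (pool == get_work_pool(work))
			queued_here = true;	/* we locked the right pool */
	}
	spin_unlock(&pool->lock);

	/* after: CWQ bit and cwq->pool only change under pool->lock */
	spin_lock(&pool->lock);
	cwq = get_work_cwq(work);
	if (cwq && pool == cwq->pool)
		queued_here = true;	/* we locked the right pool */
	spin_unlock(&pool->lock);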

Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
---
kernel/workqueue.c | 43 +++++++++++++++++++------------------------
1 file changed, 19 insertions(+), 24 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 50d3dd5..b7cfaa1 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1067,6 +1067,7 @@ static int try_to_grab_pending(struct work_struct *work, bool is_dwork,
 					       unsigned long *flags)
 {
 	struct worker_pool *pool;
+	struct cpu_workqueue_struct *cwq;
 
 	local_irq_save(*flags);

@@ -1096,14 +1097,20 @@ static int try_to_grab_pending(struct work_struct *work, bool is_dwork,
 		goto fail;
 
 	spin_lock(&pool->lock);
-	if (!list_empty(&work->entry)) {
-		/*
-		 * This work is queued, but perhaps we locked the wrong
-		 * pool.  In that case we must see the new value after
-		 * rmb(), see insert_work()->wmb().
-		 */
-		smp_rmb();
-		if (pool == get_work_pool(work)) {
+	/*
+	 * The CWQ bit is set/cleared only when we enqueue/dequeue the work.
+	 * When a work is enqueued (insert_work()) to a pool,
+	 *	we set cwq (the CWQ bit) with pool->lock held;
+	 * when a work is dequeued (process_one_work(), try_to_grab_pending()),
+	 *	we clear cwq (the CWQ bit) with pool->lock held.
+	 *
+	 * So while pool->lock is held, we can determine:
+	 *	CWQ bit is set and cwq->pool == pool
+	 *		<==> the work is queued on the pool
+	 */
+	cwq = get_work_cwq(work);
+	if (cwq) {
+		if (pool == cwq->pool) {
 			debug_work_deactivate(work);
 
 			/*
@@ -1156,13 +1163,6 @@ static void insert_work(struct cpu_workqueue_struct *cwq,

 	/* we own @work, set data and link */
 	set_work_cwq(work, cwq, extra_flags);
-
-	/*
-	 * Ensure that we get the right work->data if we see the
-	 * result of list_add() below, see try_to_grab_pending().
-	 */
-	smp_wmb();
-
 	list_add_tail(&work->entry, head);
 
 	/*
@@ -2796,15 +2796,10 @@ static bool start_flush_work(struct work_struct *work, struct wq_barrier *barr)
 		return false;
 
 	spin_lock_irq(&pool->lock);
-	if (!list_empty(&work->entry)) {
-		/*
-		 * See the comment near try_to_grab_pending()->smp_rmb().
-		 * If it was re-queued to a different pool under us, we
-		 * are not going to wait.
-		 */
-		smp_rmb();
-		cwq = get_work_cwq(work);
-		if (unlikely(!cwq || pool != cwq->pool))
+	/* See the comment near try_to_grab_pending() with the same code */
+	cwq = get_work_cwq(work);
+	if (cwq) {
+		if (unlikely(pool != cwq->pool))
 			goto already_gone;
 	} else {
 		worker = find_worker_executing_work(pool, work);
--
1.7.7.6

