Date:    2008-06-13
From:    Oleg Nesterov <oleg@tv-sign.ru>
Subject: [PATCH 1/2] workqueues: implement flush_work()
(on top of [PATCH] workqueues: insert_work: use "list_head *" instead of "int tail"
http://marc.info/?l=linux-kernel&m=121328944230175)

Most users of flush_workqueue() can be changed to use cancel_work_sync(),
but sometimes we really need to wait for completion, and cancelling is not
an option. schedule_on_each_cpu() is a good example.

Add the new helper, flush_work(work), which waits for the completion of the
specific work_struct.
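
Inside kernel/workqueue.c this lets schedule_on_each_cpu() wait for each of
its per-cpu works instead of flushing the whole of keventd. A rough sketch of
such a conversion (not part of this patch; like the current implementation it
leans on the file's internals, keventd_wq and __queue_work()):

	int schedule_on_each_cpu(work_func_t func)
	{
		int cpu;
		struct work_struct *works;

		works = alloc_percpu(struct work_struct);
		if (!works)
			return -ENOMEM;

		get_online_cpus();
		for_each_online_cpu(cpu) {
			struct work_struct *work = per_cpu_ptr(works, cpu);

			INIT_WORK(work, func);
			set_bit(WORK_STRUCT_PENDING, work_data_bits(work));
			__queue_work(per_cpu_ptr(keventd_wq->cpu_wq, cpu), work);
		}
		/* wait for each work individually, no flush_workqueue() */
		for_each_online_cpu(cpu)
			flush_work(per_cpu_ptr(works, cpu));
		put_online_cpus();

		free_percpu(works);
		return 0;
	}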

By its nature it requires that this work not be re-queued, and thus its
usage is limited. For example, this code

	queue_work(wq, work);
	/* WINDOW */
	queue_work(wq, work);

	flush_work(work);

is not right. What can happen in the WINDOW above is

- wq starts the execution of work->func()

- the caller migrates to another CPU

now, after the 2nd queue_work() this work is active on the previous CPU and,
at the same time, queued on another. We could fix this limitation, but then
we would have to iterate over all CPUs like wait_on_work() does, and that
would negate the advantage of this helper.
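
The intended usage is thus the case where the caller owns the work and nobody
else can queue it. A minimal sketch of a correct pattern:

	/*
	 * Only this thread ever queues @work, so it cannot be
	 * re-queued under us and flush_work() is meaningful.
	 */
	queue_work(wq, work);
	...
	flush_work(work);	/* on return, work->func() has completed */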

Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>

--- 26-rc2/include/linux/workqueue.h~WQ_2_FLUSH_WORK	2008-05-18 15:42:34.000000000 +0400
+++ 26-rc2/include/linux/workqueue.h	2008-05-18 15:42:34.000000000 +0400
@@ -198,6 +198,8 @@ extern int keventd_up(void);
 extern void init_workqueues(void);
 int execute_in_process_context(work_func_t fn, struct execute_work *);
 
+extern int flush_work(struct work_struct *work);
+
 extern int cancel_work_sync(struct work_struct *work);
 
 /*
--- 26-rc2/kernel/workqueue.c~WQ_2_FLUSH_WORK	2008-06-12 21:28:13.000000000 +0400
+++ 26-rc2/kernel/workqueue.c	2008-06-13 17:31:54.000000000 +0400
@@ -399,6 +399,52 @@ void flush_workqueue(struct workqueue_st
 }
 EXPORT_SYMBOL_GPL(flush_workqueue);
 
+/**
+ * flush_work - block until a work_struct's callback has terminated
+ * @work: the work which is to be flushed
+ *
+ * It is expected that, prior to calling flush_work(), the caller has
+ * arranged for the work to not be requeued, otherwise it doesn't make
+ * sense to use this function.
+ */
+int flush_work(struct work_struct *work)
+{
+	struct cpu_workqueue_struct *cwq;
+	struct list_head *prev;
+	struct wq_barrier barr;
+
+	might_sleep();
+	cwq = get_wq_data(work);
+	if (!cwq)
+		return 0;
+
+	prev = NULL;
+	spin_lock_irq(&cwq->lock);
+	if (unlikely(cwq->current_work == work)) {
+		prev = &cwq->worklist;
+	} else {
+		if (list_empty(&work->entry))
+			goto out;
+		/*
+		 * See the comment near try_to_grab_pending()->smp_rmb().
+		 * If it was re-queued under us we are not going to wait.
+		 */
+		smp_rmb();
+		if (cwq != get_wq_data(work))
+			goto out;
+		prev = &work->entry;
+	}
+	insert_wq_barrier(cwq, &barr, prev->next);
+out:
+	spin_unlock_irq(&cwq->lock);
+	if (!prev)
+		return 0;
+
+	wait_for_completion(&barr.done);
+	return 1;
+}
+EXPORT_SYMBOL_GPL(flush_work);
+
 /*
  * Upon a successful return (>= 0), the caller "owns" WORK_STRUCT_PENDING bit,
  * so this work can't be re-armed in any way.
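
For reference, the wq_barrier machinery used above is pre-existing code in
kernel/workqueue.c (shown here approximately, as it looks after the
prerequisite insert_work() change): a barrier is simply a special work item
whose callback fires a completion, so inserting it right after @work and
sleeping on the completion is enough.

	struct wq_barrier {
		struct work_struct	work;
		struct completion	done;
	};

	static void wq_barrier_func(struct work_struct *work)
	{
		struct wq_barrier *barr = container_of(work, struct wq_barrier, work);

		complete(&barr->done);
	}

	static void insert_wq_barrier(struct cpu_workqueue_struct *cwq,
					struct wq_barrier *barr, struct list_head *head)
	{
		INIT_WORK(&barr->work, wq_barrier_func);
		__set_bit(WORK_STRUCT_PENDING, work_data_bits(&barr->work));

		init_completion(&barr->done);

		/* queue the barrier at @head, i.e. right after the flushed work */
		insert_work(cwq, &barr->work, head);
	}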

