    Subject: Re: [PATCH 08/13] workqueue: add lock_pool_executing_work()
    Hello,

    On Fri, Feb 01, 2013 at 02:41:31AM +0800, Lai Jiangshan wrote:
    > +static struct worker_pool *lock_pool_executing_work(struct work_struct *work,
    > +						     struct worker **worker)
    > +{
    > +	unsigned long pool_id = offq_work_pool_id(work);
    > +	struct worker_pool *pool;
    > +	struct worker *exec;
    > +
    > +	if (pool_id == WORK_OFFQ_POOL_NONE)
    > +		return NULL;
    > +
    > +	pool = worker_pool_by_id(pool_id);
    > +	if (!pool)
    > +		return NULL;
    > +
    > +	spin_lock(&pool->lock);
    > +	exec = find_worker_executing_work(pool, work);
    > +	if (exec) {
    > +		BUG_ON(pool != exec->pool);
    > +		*worker = exec;
    > +		return pool;
    > +	}
    > +	spin_unlock(&pool->lock);
    > +
    > +	return NULL;
    > +}

    So, if a work item is queued on the same CPU and it isn't being
    executed, it will lock, look up the hash, unlock and then lock again?
    If this is something improved by a later patch, please say so.
    There gotta be a better way to do this, right?
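    To spell out the round trip being questioned, here is a minimal userspace
    model (not kernel code and not from this patch series; a pthread mutex
    stands in for pool->lock, a flag stands in for the busy-hash lookup, and
    all names and the caller shape are made up):

    /*
     * Sketch of the lock/unlock/relock pattern vs. doing the lookup and
     * keeping the lock either way.  Build with: cc -pthread sketch.c
     */
    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    struct pool {
    	pthread_mutex_t lock;		/* stand-in for pool->lock */
    	bool work_is_executing;		/* stand-in for the busy-hash lookup */
    };

    /* Pattern in the patch: lock, look up, drop the lock on a miss. */
    static struct pool *lock_pool_if_executing(struct pool *pool)
    {
    	pthread_mutex_lock(&pool->lock);
    	if (pool->work_is_executing)
    		return pool;		/* returned with the lock held */
    	pthread_mutex_unlock(&pool->lock);
    	return NULL;
    }

    /* Caller that needs the same lock right after a miss. */
    static void queue_item_roundtrip(struct pool *pool)
    {
    	if (lock_pool_if_executing(pool)) {
    		/* requeue behind the running instance */
    		pthread_mutex_unlock(&pool->lock);
    		return;
    	}
    	/* miss: immediately re-acquire the lock we just dropped */
    	pthread_mutex_lock(&pool->lock);
    	/* queue normally */
    	pthread_mutex_unlock(&pool->lock);
    }

    /* One possible alternative: look up under the lock and keep it held. */
    static void queue_item_single_lock(struct pool *pool)
    {
    	pthread_mutex_lock(&pool->lock);
    	if (pool->work_is_executing) {
    		/* requeue behind the running instance */
    	} else {
    		/* queue normally */
    	}
    	pthread_mutex_unlock(&pool->lock);
    }

    int main(void)
    {
    	struct pool pool = { PTHREAD_MUTEX_INITIALIZER, false };

    	queue_item_roundtrip(&pool);	/* two lock/unlock pairs on a miss */
    	queue_item_single_lock(&pool);	/* one lock/unlock pair either way */
    	printf("done\n");
    	return 0;
    }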

    Thanks.

    --
    tejun

