Date: Mon, 5 Oct 2015
From: Petr Mladek
Subject: Re: [RFC v2 07/18] kthread: Allow to cancel kthread work
On Mon 2015-10-05 12:07:58, Petr Mladek wrote:
> On Fri 2015-10-02 15:24:53, Tejun Heo wrote:
> > Hello,
> >
> > On Fri, Oct 02, 2015 at 05:43:36PM +0200, Petr Mladek wrote:
> > > IMHO, we need both locks. The worker manipulates several works and
> > > needs its own lock. We need a work-specific lock because the work
> > > might be assigned to different workers, and we need to be sure
> > > that the operations, e.g. queuing, are really serialized.
> >
> > I don't think we need a per-work lock. Do we have such usage in the
> > kernel at all? If you're worried, let the first queueing record the
> > worker and trigger a warning if someone tries to queue it anywhere
> > else. This doesn't need to be full-on general like workqueues.
> > Let's make reasonable trade-offs where possible.
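For illustration, the check could look something like this (just a
sketch against the current kthread worker API; where exactly the
binding and the WARN_ON_ONCE() would live is to be decided):

	/*
	 * Sketch: the first queueing binds the work to the worker;
	 * trying to queue it to any other worker later triggers
	 * a warning.
	 */
	bool queue_kthread_work(struct kthread_worker *worker,
				struct kthread_work *work)
	{
		unsigned long flags;
		bool ret = false;

		spin_lock_irqsave(&worker->lock, flags);
		if (list_empty(&work->node)) {
			/* The first queueing records the worker ... */
			if (!work->worker)
				work->worker = worker;
			/* ... and any other worker later is a bug. */
			WARN_ON_ONCE(work->worker != worker);

			list_add_tail(&work->node, &worker->work_list);
			if (likely(worker->task))
				wake_up_process(worker->task);
			ret = true;
		}
		spin_unlock_irqrestore(&worker->lock, flags);
		return ret;
	}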
>
> I actually thought about this simplification as well. But then I have
> doubts about the API. It would make sense to assign the worker when
> the work is initialized and avoid duplicating the information when
> the work is queued:
>
> init_kthread_work(work, fn, worker);
> queue_kthread_work(work);
>
> Or would you prefer to keep the API similar to workqueues even when
> it makes less sense here?
>
>
> In either case, we need a way to switch the worker if the old one
> is destroyed and a new one is started later. We would need
> something like:
>
> reset_work(work, worker)
> or
> reinit_work(work, fn, worker)
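
Just for reference, the init-time binding from above would be something
like this (signatures are only illustrative; in the current code
init_kthread_work() is a macro and takes no worker argument):

	/* Sketch: bind the worker when the work is initialized ... */
	void init_kthread_work(struct kthread_work *work,
			       kthread_work_func_t fn,
			       struct kthread_worker *worker)
	{
		INIT_LIST_HEAD(&work->node);
		work->func = fn;
		work->worker = worker;
	}

	/* ... so that queueing does not need a worker parameter. */
	bool queue_kthread_work(struct kthread_work *work)
	{
		struct kthread_worker *worker = work->worker;
		unsigned long flags;
		bool ret = false;

		spin_lock_irqsave(&worker->lock, flags);
		if (list_empty(&work->node)) {
			list_add_tail(&work->node, &worker->work_list);
			if (likely(worker->task))
				wake_up_process(worker->task);
			ret = true;
		}
		spin_unlock_irqrestore(&worker->lock, flags);
		return ret;
	}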

I was too fast. We could set "work->worker = NULL" when the work
finishes and is not pending. It means that the work would be connected
to a particular worker only while it is being used. Then we could keep
the workqueue-like API and would not need reset_work().
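
Roughly like this (a sketch only; the helper name is made up and it
would be called from the worker main loop after work->func() returns):

	/*
	 * Sketch: after the callback has returned, disconnect the
	 * work from the worker unless it was re-queued in the
	 * meantime. The work can then be queued to any other worker
	 * later, so no reset_work() is needed.
	 */
	static void kthread_work_finished(struct kthread_worker *worker,
					  struct kthread_work *work)
	{
		spin_lock_irq(&worker->lock);
		worker->current_work = NULL;
		/* Not pending anymore? Then drop the binding. */
		if (list_empty(&work->node))
			work->worker = NULL;
		spin_unlock_irq(&worker->lock);
	}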

I am going to play with this. I feel that it might work.

Best Regards,
Petr

