Date: Fri, 30 Jan 2009
From: Andrew Morton
Subject: Re: [PATCH 2/3] work_on_cpu: Use our own workqueue.

On Fri, 30 Jan 2009 16:33:53 +1030 Rusty Russell <rusty@rustcorp.com.au> wrote:

> On Thursday 29 January 2009 12:42:05 Andrew Morton wrote:
> > On Thu, 29 Jan 2009 12:13:32 +1030 Rusty Russell <rusty@rustcorp.com.au> wrote:
> >
> > > On Thursday 29 January 2009 06:14:40 Andrew Morton wrote:
> > > > It's vulnerable to the same deadlock, I think? Suppose we have:
> > > ...
> > > > - A calls work_on_cpu() and takes woc_mutex.
> > > >
> > > > - Before function_which_takes_L() has started to execute, task B takes L
> > > > then calls work_on_cpu() and task B blocks on woc_mutex.
> > > >
> > > > - Now function_which_takes_L() runs, and blocks on L
> > >
> > > Agreed, but now it's a fairly simple case. Both sides have to take lock L, and both have to call work_on_cpu.
> > >
> > > Workqueues are more generic and widespread, and an amazing amount of stuff gets called from them. That's why I felt uncomfortable with removing the one known problematic caller.
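In code form, the scenario quoted above looks roughly like this (a hypothetical
sketch: lock_L, function_which_takes_L, task_a, task_b and some_other_fn are
illustrative names; woc_mutex is the single mutex serializing work_on_cpu() in
this patch):

#include <linux/mutex.h>
#include <linux/workqueue.h>

static DEFINE_MUTEX(lock_L);

static long function_which_takes_L(void *unused)
{
	mutex_lock(&lock_L);		/* step 3: blocks, B holds L */
	/* ... */
	mutex_unlock(&lock_L);
	return 0;
}

static long some_other_fn(void *unused)
{
	return 0;
}

/* Task A: work_on_cpu() takes woc_mutex, queues the work, then
 * waits for function_which_takes_L() to complete. */
static void task_a(void)
{
	work_on_cpu(1, function_which_takes_L, NULL);	/* step 1 */
}

/* Task B: holds L, then blocks on woc_mutex, which A holds. */
static void task_b(void)
{
	mutex_lock(&lock_L);
	work_on_cpu(2, some_other_fn, NULL);		/* step 2 */
	mutex_unlock(&lock_L);
}

/* Cycle: A holds woc_mutex and waits for its work; the work waits
 * for L; B holds L and waits for woc_mutex. Nobody makes progress. */
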
> > >
> >
> > hm. it's a bit of a timebomb.
> >
> > y'know, the original way in which acpi-cpufreq did this is starting to
> > look attractive. Migrate self to that CPU then just call the dang
> > function. Slow, but no deadlocks (I think)?
>
> Just buggy. What random thread was it mugging? If there's any path where
> it's not a kthread, what if userspace does the same thing at the same time?
> We risk running on the wrong cpu, *then* overriding userspace when we restore
> it.

hm, OK, not unfixable but not pleasant.
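
For reference, a rough sketch of that "migrate self" pattern (not the actual
acpi-cpufreq code; run_on_cpu is a made-up name), with the restore race Rusty
describes marked:

#include <linux/cpumask.h>
#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/sched.h>

static long run_on_cpu(unsigned int cpu, long (*fn)(void *), void *arg)
{
	cpumask_var_t saved_mask;
	long ret;

	if (!alloc_cpumask_var(&saved_mask, GFP_KERNEL))
		return -ENOMEM;

	cpumask_copy(saved_mask, &current->cpus_allowed);
	set_cpus_allowed_ptr(current, cpumask_of(cpu));

	ret = fn(arg);	/* hopefully now on 'cpu', if the migration stuck */

	/*
	 * The objection: if userspace changed our affinity meanwhile
	 * (e.g. via sched_setaffinity), this restore silently overrides
	 * it, and nothing guaranteed fn() actually ran on 'cpu' if the
	 * mask games raced.
	 */
	set_cpus_allowed_ptr(current, saved_mask);

	free_cpumask_var(saved_mask);
	return ret;
}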

> In general these cpumask games are a bad idea.

So we still don't have any non-buggy proposal.

