Date: Sat, 3 May 2008
From: Paul E. McKenney
Subject: Re: [PATCH 1/10] Add generic helpers for arch IPI function calls
On Sat, May 03, 2008 at 07:49:30AM +0200, Nick Piggin wrote:
> On Fri, May 02, 2008 at 02:59:29PM +0200, Peter Zijlstra wrote:
> > On Fri, 2008-05-02 at 05:42 -0700, Paul E. McKenney wrote:
> >
> > > And here is one scenario that makes me doubt that my imagination is
> > > faulty:
> > >
> > > 1. CPU 0 disables irqs.
> > >
> > > 2. CPU 1 disables irqs.
> > >
> > > 3. CPU 0 invokes smp_call_function(). But CPU 1 will never respond
> > > because its irqs are disabled.
> > >
> > > 4. CPU 1 invokes smp_call_function(). But CPU 0 will never respond
> > > because its irqs are disabled.
> > >
> > > Looks like inherent deadlock to me, requiring that smp_call_function()
> > > be invoked with irqs enabled.
> > >
> > > So, what am I missing here?
> >
> > The wish to do it anyway ;-)
> >
> > I can imagine some situations where I'd like to try anyway and fall back
> > to a slower path when failing.
> >
> > With the initial design we would simply allocate data, stick it on the
> > queue and call the ipi (when needed).
> >
> > This is perfectly deadlock free when wait=0 and it just returns -ENOMEM
> > on allocation failure.
>
> Yeah, I'm just talking about the wait=0 case. (btw. I'd rather the core
> API takes some data rather than allocates some itself, eg because you
> might want to have it on the stack).

But taking data on the stack is safe only in the wait=1 case, right?
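
(For concreteness, here is the sort of thing I have in mind -- purely an
illustrative sketch, assuming a smp_call_function_single(cpu, func, info,
wait) style interface along the lines of the helpers under discussion,
with made-up caller-side names:)

struct my_args {			/* hypothetical caller-private struct */
	int	val;
};

static void remote_handler(void *info)
{
	struct my_args *args = info;

	printk(KERN_INFO "remote handler sees %d\n", args->val);
}

static void caller_with_stack_data(int cpu)
{
	struct my_args args = { .val = 42 };	/* lives in this stack frame */

	/*
	 * wait=1: we do not return (and hence do not pop "args" off the
	 * stack) until the target CPU has finished with it.
	 */
	smp_call_function_single(cpu, remote_handler, &args, 1);

	/*
	 * With wait=0, this function could return -- and the stack frame
	 * be reused -- before the target CPU ever looked at "args".
	 */
}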

> For the wait=1 case, something very clever such as processing pending
> requests in a polling loop might be cool... however I'd rather not add
> such complexity until someone needs it (you could stick a comment in
> there outlining your algorithm). But I'd just rather not have people rely
> on it yet.

In that case we may need to go back to the global lock with only one
request being processed at a time. Otherwise, if two wait=1 requests
happen at the same time, they deadlock waiting for each other to process
their request. (See Keith Owens: http://lkml.org/lkml/2008/5/2/183).

In other words, if you want to allow parallel calls to
smp_call_function(), the simplest way to do it seems to be to do the
polling loop. The other ways I have come up with thus far are uglier
and less effective (see http://lkml.org/lkml/2008/4/30/164).
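
(Roughly like the following, in case it helps -- just a sketch, with a
made-up structure and completion flag; generic_smp_call_function_interrupt()
stands in here for whatever the IPI handler does to run queued requests:)

struct call_entry {			/* made-up name */
	int	done;
	/* ... function, info, list linkage, ... */
};

static void csd_wait_polling(struct call_entry *entry)
{
	while (!entry->done) {
		/* Run any function-call requests queued for this CPU. */
		generic_smp_call_function_interrupt();
		cpu_relax();
	}
	smp_rmb();	/* pair with the remote CPU's write setting ->done */
}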

Now, what I -could- do would be to prevent the wait=1 case, when called
from irq-disabled state, from polling -- that would make sense, as the
caller probably had a reason to mask irqs, and might not take kindly to
having them faked behind its back. ;-)
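
(Something like the following at the entry point, say -- a hypothetical
wrapper, not the actual helper code:)

static int checked_call_single(int cpu, void (*func)(void *), void *info,
			       int wait)
{
	/* wait=1 with irqs off would need polling (or deadlock), so flag it. */
	WARN_ON_ONCE(wait && irqs_disabled());

	return smp_call_function_single(cpu, func, info, wait);
}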

> > If it doesn't return -ENOMEM I know it's been queued and will be
> > processed at some point, if it does fail, I can deal with it in another
> > way.
>
> At least with IPIs I think we can guarantee they will be processed on
> the target after we queue them.

OK, so let me make sure I understand what is needed. One example might be
some code called from scheduler_tick(), which runs with irqs disabled.
Without the ability to call smp_call_function() directly, you have
to fire off a work queue or something. Now, if smp_call_function()
can hand you an -ENOMEM or (maybe) an -EBUSY, then you still have to
fire off the work queue, but you probably only have to do it rarely,
minimizing the performance impact.
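
(A sketch of what I am picturing, with invented names and with the
error-return convention assumed rather than taken from the actual patch:)

static void kick_remote(void *info)
{
	/* ... whatever needs doing on the target CPU ... */
}

static int fallback_cpu;
static struct work_struct fallback_work;	/* INIT_WORK()ed at init time */

static void fallback_work_fn(struct work_struct *work)
{
	/* Process context, irqs enabled: this call is not expected to fail. */
	smp_call_function_single(fallback_cpu, kick_remote, NULL, 0);
}

static void kick_from_scheduler_tick(int cpu)
{
	/* irqs are disabled here, so be prepared for a failure return. */
	if (smp_call_function_single(cpu, kick_remote, NULL, 0) < 0) {
		fallback_cpu = cpu;
		schedule_work(&fallback_work);	/* rare slow path */
	}
}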

Another possibility is when it is -nice- to call smp_call_function(),
but the caller can just try again on the next scheduler_tick() --
ignoring dynticks idle for the moment. In this case, you might still
test the error return to set a flag that you will check on the next
scheduler_tick() call.
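
(Or, in the try-again-next-tick style, something equally sketchy like
this, reusing the invented kick_remote() from above:)

static DEFINE_PER_CPU(int, kick_pending);	/* target cpu + 1, 0 = none */

static void try_kick(int cpu)
{
	if (smp_call_function_single(cpu, kick_remote, NULL, 0) < 0)
		__get_cpu_var(kick_pending) = cpu + 1;	/* retry later */
	else
		__get_cpu_var(kick_pending) = 0;
}

/* Called from scheduler_tick(), irqs disabled. */
static void retry_pending_kick(void)
{
	int pending = __get_cpu_var(kick_pending);

	if (pending)
		try_kick(pending - 1);
}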

Is this where you guys are coming from?

And you are all OK with smp_call_function() called with irqs enabled
never being able to fail, right? (Speaking of spaghetti code, why
foist unnecessary failure checks on the caller...)

> > I know I'd like to do that and I suspect Nick has a few use cases up his
> > sleeve as well.
>
> It would be handy. The "quickly kick something off on another CPU" is
> pretty nice in mm/ when you have per-cpu queues or caches that might
> want to be flushed.
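
(Something like asynchronously asking another CPU to drain one of its
per-CPU lists, I am guessing -- an invented example, not anything taken
from mm/:)

static void drain_local_stuff(void *unused)
{
	/* Runs on the target CPU; drain its private queue/cache here. */
}

static void kick_remote_drain(int cpu)
{
	/* wait=0: fire and forget, no need to block the sender. */
	smp_call_function_single(cpu, drain_local_stuff, NULL, 0);
}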

OK, I think I might be seeing what you guys are getting at. Here is
what I believe you guys need:

1. No deadlocks, ever, not even theoretical "low probability"
deadlocks.

2. No failure returns when called with irqs enabled. On the other
hand, when irqs are disabled, failure is possible, though hopefully
unlikely.

3. Parallel execution of multiple smp_call_function() requests
is required, even when called with irqs disabled.

4. The wait=1 case with irqs disabled is prohibited.

5. If you call smp_call_function() with irqs disabled, then you
are guaranteed that no other CPU's smp_call_function() handler
will be invoked while smp_call_function() is executing.

Anything I am missing?
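
(In code terms, the calling convention I am imagining from the above
list would be roughly the following -- a sketch of the intended
semantics only, with the signature and error values assumed:)

static int example_usage(void (*func)(void *), void *info, int wait)
{
	if (irqs_disabled()) {
		WARN_ON_ONCE(wait);	/* item 4: no wait=1 with irqs off */

		/* item 2: may fail (e.g. -ENOMEM/-EBUSY), caller checks. */
		return smp_call_function(func, info, 0);
	}

	/* item 2 again: with irqs enabled this is never expected to fail. */
	return smp_call_function(func, info, wait);
}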

Thanx, Paul

