Date: Wed, 31 Jan 2007
Subject: Re: [PATCH 0 of 4] Generic AIO by scheduling stacks

Linus Torvalds wrote:
>
> On Wed, 31 Jan 2007, Benjamin Herrenschmidt wrote:
>
>
>>>- We would now have some measure of task_struct concurrency. Read that twice,
>>>it's scary. As two fibrils execute and block in turn they'll each be
>>>referencing current->. It means that we need to audit task_struct to make sure
>>>that paths can handle racing as it's scheduled away. The current implementation
>>>*does not* let preemption trigger a fibril switch. So one only has to worry
>>>about racing with voluntary scheduling of the fibril paths. This can mean
>>>moving some task_struct members under an accessor that hides them in a struct
>>>in task_struct so they're switched along with the fibril. I think this is a
>>>manageable burden.
>>
>>That's the one scaring me in fact ... Maybe it will end up being an easy
>>one but I don't feel too comfortable...
>
>
> We actually have almost zero "interesting" data in the task-struct.
>
> All the real meat of a task has long since been split up into structures
> that can be shared for threading anyway (ie signal/files/mm/etc).
>
> Which is why I'm personally very comfy with just re-using task_struct
> as-is.
>
> NOTE! This is with the understanding that we *never* do any preemption.
> The whole point of the microthreading as far as I'm concerned is exactly
> that it is cooperative. It's not preemptive, and it's emphatically *not*
> concurrent (ie you'd never have two fibrils running at the same time on
> separate CPU's).

So using stacks to hold state is (IMO) the logical choice for doing
async syscalls, especially once you have a look at some of the other
AIO approaches going around.
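
(Purely as an illustration of what "scheduling stacks" means in practice,
here is a userspace analogy using ucontext. Nothing below is taken from
Zach's patches and every name is made up. Each in-flight operation keeps
all of its state on its own private stack, and control only ever moves
between stacks at explicit, voluntary points:)

/* Userspace analogy of cooperative stack switching; illustrative only. */
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, op_ctx;
static char op_stack[64 * 1024];

static void slow_operation(void)
{
        printf("op:   started, yielding at a blocking point\n");
        swapcontext(&op_ctx, &main_ctx);  /* voluntary switch, never preempted */
        printf("op:   resumed, finishing\n");
}

int main(void)
{
        getcontext(&op_ctx);
        op_ctx.uc_stack.ss_sp = op_stack;
        op_ctx.uc_stack.ss_size = sizeof(op_stack);
        op_ctx.uc_link = &main_ctx;       /* where to go when the op returns */
        makecontext(&op_ctx, slow_operation, 0);

        swapcontext(&main_ctx, &op_ctx);  /* run the op until it yields */
        printf("main: op is blocked, submitter keeps going\n");
        swapcontext(&main_ctx, &op_ctx);  /* resume it to completion */
        printf("main: op completed\n");
        return 0;
}

All of the operation's partial state simply lives on op_stack while it is
switched away, which is the same property a blocked syscall gets for free
from its kernel stack.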

I always thought that the AIO people didn't do this because they wanted
to avoid context switch overhead.

So now, if we introduce the context switch overhead back, why do we need
yet another scheduling primitive? What's so bad about using threads? The
upside is that almost everything is already there and working, and they
don't have any of these preemption or concurrency restrictions.
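
To make that concrete, here is roughly the shape I mean, sketched in
userspace (the struct, the worker and the pread() example are all made up
for illustration): a plain worker thread issues the blocking syscall and
fills in the result, and no new primitives are needed anywhere.

/* Rough sketch of thread-based async I/O; illustrative only.
 * build: cc -pthread sketch.c
 */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

struct aio_req {
        int      fd;
        void    *buf;
        size_t   len;
        off_t    off;
        ssize_t  ret;           /* filled in on completion */
};

static void *aio_worker(void *arg)
{
        struct aio_req *req = arg;

        /* The worker simply blocks in the ordinary syscall. */
        req->ret = pread(req->fd, req->buf, req->len, req->off);
        return NULL;
}

int main(void)
{
        char buf[4096];
        struct aio_req req = {
                .fd  = open("/etc/hostname", O_RDONLY),
                .buf = buf,
                .len = sizeof(buf),
                .off = 0,
        };
        pthread_t tid;

        pthread_create(&tid, NULL, aio_worker, &req);
        /* ... the submitter is free to do other work here ... */
        pthread_join(tid, NULL);
        printf("read returned %zd\n", req.ret);
        return 0;
}

A real implementation would of course pool the workers and use a
completion queue instead of a join, but the point stands: the scheduler
already knows how to do everything this needs.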

The only thing I saw in Zach's post against the use of threads is that
some kernel API would change. But surely, if that is the showstopper,
then there must be some better argument than sys_getpid()?!
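
For reference, I take the sys_getpid() class of objection to be about
task identity: the thread that actually executes a call reports its own
identity rather than the submitter's. A tiny userspace demo of the
visible part of that (again, purely illustrative):

/* A helper thread has its own task identity; illustrative only.
 * build: cc -pthread demo.c
 */
#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

static void *helper(void *arg)
{
        printf("helper:    pid=%d tid=%ld\n", (int)getpid(), syscall(SYS_gettid));
        return NULL;
}

int main(void)
{
        pthread_t tid;

        printf("submitter: pid=%d tid=%ld\n", (int)getpid(), syscall(SYS_gettid));
        pthread_create(&tid, NULL, helper, NULL);
        pthread_join(tid, NULL);
        return 0;
}

getpid() stays the same across the two, but gettid() differs, and so
would anything else keyed to the executing task.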

Aside from that, I'm glad that someone is looking at this way of doing
AIO, because I really don't like some aspects of the other approach.

--
SUSE Labs, Novell Inc.

