Date: Thu, 23 Jan 2003 17:46:41 -0800
From: Dan Kegel <>
Subject: Re: debate on 700 threads vs asynchronous code
Lee Chin <leechin@mail.com> wrote:
> Larry McVoy wrote:
>> > Now, to cater to 700 clients, I can
>> > a) launch 700 threads that each block on I/O to disk and to the client
>> > (in reading and writing on the socket)
>> > OR
>> > b) write an asynchronous system with only 2 or 3 threads where I manage
>> > the connections and stacks (via setcontext, swapcontext, etc.), which is
>> > programmatically a little harder.
>> > Which way will yield me better performance, considering both approaches
>> > are implemented optimally?
>>
>> If this is a serious question, an async system will by definition do better.
>> You have either 700 stacks screwing up the data cache or 2-3 stacks nicely
>> fitting in the data cache. Ditto for instruction cache, etc.
>
> Thanks for the reply... my question was more so: with setcontext and
> swapcontext, I will still be messing with the data cache, right?
>
> In other words, as long as I have an async system without setcontext, I know
> I am good... but with it, haven't I degraded to a threaded environment?
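For reference, option (b) above means switching between per-connection stacks
in user space with the <ucontext.h> calls. A minimal sketch, assuming the
standard getcontext/makecontext/swapcontext interface; the names and structure
here are purely illustrative and not from the original thread:

#include <stdio.h>
#include <stdlib.h>
#include <ucontext.h>

#define STACK_SIZE (64 * 1024)

static ucontext_t main_ctx, conn_ctx;

/* Body of one "connection" coroutine: does a bit of work, then yields
 * back to the scheduler by swapping to the main context. */
static void connection_task(void)
{
    printf("connection: doing some work\n");
    swapcontext(&conn_ctx, &main_ctx);   /* yield to the scheduler */
    printf("connection: resumed, finishing\n");
    /* returning here transfers control to uc_link (main_ctx) */
}

int main(void)
{
    char *stack = malloc(STACK_SIZE);    /* each context needs its own stack */

    getcontext(&conn_ctx);
    conn_ctx.uc_stack.ss_sp = stack;
    conn_ctx.uc_stack.ss_size = STACK_SIZE;
    conn_ctx.uc_link = &main_ctx;        /* where control goes when the task returns */
    makecontext(&conn_ctx, connection_task, 0);

    printf("main: switching to connection\n");
    swapcontext(&main_ctx, &conn_ctx);   /* run the task until it yields */
    printf("main: connection yielded, switching back\n");
    swapcontext(&main_ctx, &conn_ctx);   /* resume it to completion */
    printf("main: done\n");

    free(stack);
    return 0;
}

Note that every such context carries its own stack, which is exactly the
data-cache footprint concern raised above.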
I suspect Linux's implementation of asynch I/O isn't able to handle sockets yet. Thus the choice is between nonblocking I/O and blocking I/O.
Nonblocking I/O is totally the way to go if you have full control over your source code and want maximal performance in userspace. The best way to get good performance with nonblocking I/O in Linux is to use the sys_epoll system call; it's part of the 2.5 kernel, but a backport to 2.4 is available.
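To make that concrete, here is a minimal sketch of a nonblocking server loop
built on the epoll_create/epoll_ctl/epoll_wait calls (the userspace interface
sys_epoll exposes). Error handling is trimmed and the echo behavior and port
are purely illustrative:

#include <sys/epoll.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <fcntl.h>
#include <unistd.h>
#include <string.h>

#define MAX_EVENTS 64

static void set_nonblocking(int fd)
{
    int flags = fcntl(fd, F_GETFL, 0);
    fcntl(fd, F_SETFL, flags | O_NONBLOCK);
}

int main(void)
{
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);
    bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr));
    listen(listen_fd, 128);
    set_nonblocking(listen_fd);

    int epfd = epoll_create(1024);      /* size is only a hint */
    struct epoll_event ev, events[MAX_EVENTS];
    ev.events = EPOLLIN;
    ev.data.fd = listen_fd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);

    for (;;) {
        int n = epoll_wait(epfd, events, MAX_EVENTS, -1);
        for (int i = 0; i < n; i++) {
            int fd = events[i].data.fd;
            if (fd == listen_fd) {
                /* accept every pending connection and register it */
                int client;
                while ((client = accept(listen_fd, NULL, NULL)) >= 0) {
                    set_nonblocking(client);
                    ev.events = EPOLLIN;
                    ev.data.fd = client;
                    epoll_ctl(epfd, EPOLL_CTL_ADD, client, &ev);
                }
            } else {
                /* echo whatever the client sent; close on EOF or error */
                char buf[4096];
                ssize_t r = read(fd, buf, sizeof(buf));
                if (r <= 0) {
                    epoll_ctl(epfd, EPOLL_CTL_DEL, fd, NULL);
                    close(fd);
                } else {
                    write(fd, buf, (size_t)r);
                }
            }
        }
    }
}

The point is that a single thread with one small stack services all 700
connections; the kernel only reports the descriptors that are actually ready.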
See http://www.kegel.com/c10k.html for an overview of the issue and some links. - Dan
--
Dan Kegel
http://www.kegel.com
http://counter.li.org/cgi-bin/runscript/display-person.cgi?user=78045