Date: Fri, 14 Nov 2008 08:14:22 -0500
From: "J.R. Mauro" <>
Subject: Re: Unix sockets via TCP on localhost: is TCP slower?
On Fri, Nov 14, 2008 at 4:06 AM, Olaf van der Spek <olafvdspek@gmail.com> wrote:
> On Fri, Nov 14, 2008 at 9:54 AM, Eric Dumazet <dada1@cosmosbay.com> wrote:
>>> I expected the kernel to copy data directly from user-space of the
>>> sending process to a kernel buffer of the receiving process, much like
>>> UNIX sockets.
>>>
>>
>> localhost uses a standard network device, and the whole network stack
>> is used, no 'special kludges'. You can add iptables rules, do traffic
>> shaping, sniff traffic (tcpdump), and limit the memory used by all
>> sockets (controlling memory pressure on the machine).
>>
>> Doing what you suggest would slow down the AF_INET stack.
>
> Why?
Because then the AF_INET stack would have to check *every* packet that went through it to see if it's bound for localhost. You'd be adding complexity to the stack just to speed up one case, while slowing down every single other case.
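To make that concrete, here's a purely hypothetical sketch (none of these names exist in the kernel; it only illustrates where the cost of such a shortcut would land):

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical sketch only -- not real kernel code. */
struct packet {
        uint32_t daddr;         /* destination IPv4 address, host order */
        /* ... */
};

static bool is_loopback(uint32_t daddr)
{
        return (daddr >> 24) == 127;            /* 127.0.0.0/8 */
}

static int deliver_locally(struct packet *pkt) { (void)pkt; return 0; }  /* stub */
static int netdev_transmit(struct packet *pkt) { (void)pkt; return 0; }  /* stub */

static int xmit_packet(struct packet *pkt)
{
        /* This test would run for EVERY outgoing packet, but only pays
         * off in the loopback case -- everyone else just got slower. */
        if (is_loopback(pkt->daddr))
                return deliver_locally(pkt);
        return netdev_transmit(pkt);
}

The branch itself is cheap, but it sits on the hot path of every send.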
>
>> You probably can expect AF_UNIX to be faster, since this one is really
>> special and uses shortcuts.
>>
>> Then, you probably can use shared memory instead of AF_UNIX, or
>> pipes (and splice()), or ...
>>
>> Then you probably can use threads and do zero-copy ;)
>
> Hmm, I'd like to avoid running my web server inside of my database
> server process. ;)
>
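For what it's worth, the pipes-plus-splice() route Eric mentions looks roughly like this in user space. A minimal sketch; relay() and its arguments are made-up names, and error handling is trimmed:

#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

/* Move len bytes from in_fd to out_fd through a pipe: the pages are
 * moved inside the kernel instead of being copied through a user-space
 * buffer with read()/write(). */
static int relay(int in_fd, int out_fd, size_t len)
{
        int p[2];

        if (pipe(p) < 0)
                return -1;

        while (len > 0) {
                /* source -> pipe */
                ssize_t n = splice(in_fd, NULL, p[1], NULL, len,
                                   SPLICE_F_MOVE | SPLICE_F_MORE);
                if (n <= 0)
                        break;
                /* drain pipe -> destination */
                while (n > 0) {
                        ssize_t m = splice(p[0], NULL, out_fd, NULL,
                                           (size_t)n, SPLICE_F_MOVE);
                        if (m <= 0)
                                goto out;
                        n -= m;
                        len -= (size_t)m;
                }
        }
out:
        close(p[0]);
        close(p[1]);
        return 0;
}

At least one side of each splice() call must be a pipe, which is why the intermediate pipe is there at all.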