From: <>
Date: Tue, 21 Mar 2000 12:04:16 -0800 (PST)
Subject: Re: Avoiding OOM on overcommit...?
On 19 Mar 2000, david parsons wrote:
> In article <linux.kernel.45hadsgku4f59qae3ouohgbk7k4p6lc5os@4ax.com>,
> James Sutherland <jas88@cam.ac.uk> wrote:
>
> >Now we'll take a WWW server, with 100 processes forked, all sharing
> >most of the image. You just blew 2Gb or so of my swap space, to
> >achieve - nothing.
>
>    Okay, I'm getting really curious here: what application do you
>    have that requires that you run 100 copies of a web server each
>    with 20mb of unique writable data?
>
>                  ____
>    david parsons \bi/ I simply avoid the Linux overcommit bug by dropping
>                    \/  half a gigabyte of ram into my workstations.
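(For anyone following along, a rough sketch of the scenario being argued
about; the 20 MB and 100-child figures come from the quote above, the rest
is just illustration. Each forked child shares the parent's writable region
copy-on-write, so almost no extra physical memory is used, but a strict
no-overcommit policy has to reserve backing store as if every child might
dirty the whole region, which is where the "2Gb or so of swap" comes from.)

/*
 * Illustrative only: parent sets up 20 MB of writable data, then forks
 * 100 children.  Under COW the children share those pages until they
 * write to them; strict accounting would still have to reserve ~2 GB
 * of swap up front, and fork() could start failing with ENOMEM.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define REGION   (20 * 1024 * 1024)   /* 20 MB of writable data      */
#define CHILDREN 100                  /* 100 forked server processes */

int main(void)
{
	char *data = malloc(REGION);
	int i;

	if (!data)
		return 1;
	memset(data, 0, REGION);      /* touch it so it is really instantiated */

	for (i = 0; i < CHILDREN; i++) {
		pid_t pid = fork();

		if (pid == 0) {
			/* A real server child mostly reads this data; COW
			 * means no new pages are needed unless it writes. */
			sleep(10);
			_exit(0);
		} else if (pid < 0) {
			perror("fork");   /* strict accounting may fail here */
			break;
		}
	}
	while (wait(NULL) > 0)
		;
	free(data);
	return 0;
}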
I used to admin a Linux alpha box that had half a gig of RAM and half a gig of swap, running a webserver as a front end to a computationally intensive application that would regularly chew up all available memory, OOM, and hose the machine. What was really annoying is that I couldn't get process limits to work on this box either; it would happily overcommit past its process limits. I think this was RH5.2/alpha.
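(Again purely illustrative, not from the original mail: the usual way to
impose a per-process memory cap is setrlimit() with RLIMIT_AS or
RLIMIT_DATA, roughly as below. Whether the kernel actually refuses the
allocation, kills the process on first touch, or quietly ignores the limit
for a given allocation path is exactly the behaviour being complained
about.)

/*
 * Sketch: cap the address space at 64 MB, then try to allocate 256 MB.
 * On a kernel that enforces RLIMIT_AS the malloc() should fail; if the
 * allocation succeeds anyway, touching the pages may later trigger the
 * OOM behaviour described above.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/resource.h>

int main(void)
{
	struct rlimit rl;
	char *p;

	rl.rlim_cur = 64 * 1024 * 1024;   /* 64 MB soft limit */
	rl.rlim_max = 64 * 1024 * 1024;   /* 64 MB hard limit */
	if (setrlimit(RLIMIT_AS, &rl) != 0) {
		perror("setrlimit");
		return 1;
	}

	p = malloc(256 * 1024 * 1024);    /* well past the limit */
	if (!p) {
		puts("malloc refused: limit enforced at allocation time");
		return 0;
	}
	puts("malloc succeeded past the limit; touching the pages...");
	memset(p, 1, 256 * 1024 * 1024);  /* may OOM if pages cannot
					     actually be backed */
	puts("survived touching 256 MB");
	free(p);
	return 0;
}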