From: "Terry Katz" <>
Subject: 2.3.x + memory question
Date: Tue, 30 Nov 1999 20:25:02 -0500
I've been away from the list for some time, so I'm sorry if this has already been brought up ...
We've been trying to run web stats on some especially large log files (some totalling 8 gig in size), and have noticed that the stats program seg faults at roughly 900 meg resident size. After some testing, I found I was able to allocate only 882 meg before a seg fault occurred (1 gig physical RAM and 4 gig swap; the swap was never touched during the test). Running the test across two processes, I was able to allocate 500 meg in each, with swap then being used (that test was just to see whether the system would go to swap, so I never pushed it higher). On another system with 2 gig of memory, I was only able to allocate roughly 540 meg before segfaulting (a more loaded system, but approximately 1.3 gig was free before running the test)...
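For reference, the allocation test was basically along these lines (a rough sketch, not the exact program -- it just mallocs in 1 meg chunks and touches each chunk so the pages are really committed):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define CHUNK (1024 * 1024)  /* allocate in 1 meg steps */

int main(void)
{
    size_t total = 0;

    for (;;) {
        char *p = malloc(CHUNK);
        if (p == NULL) {
            printf("malloc failed after %lu meg\n",
                   (unsigned long)(total / CHUNK));
            return 0;
        }
        memset(p, 1, CHUNK);  /* touch the memory so it is really committed */
        total += CHUNK;
        printf("%lu meg allocated\n", (unsigned long)(total / CHUNK));
    }
}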
I'm assuming this is a known issue .. or feature .. or setting .. or ... ?? Is there any way for a single process to allocate all available memory? (With the new 'high-memory' feature in 2.3 and support for >2 gig file sizes, we'd like to move our web stats generation from an IRIX box over to our Linux machines .. log files totalling 8 gig in size and stats using at least a gig of memory while generating.) Now that we've got the 8 gig file over .... we need to process it :)
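(For completeness, here's a quick sketch of how to check whether a per-process resource limit is the cap -- just getrlimit() on RLIMIT_DATA and RLIMIT_AS, assuming those are the relevant limits here:)

#include <stdio.h>
#include <sys/time.h>
#include <sys/resource.h>

static void show(const char *name, int resource)
{
    struct rlimit rl;

    if (getrlimit(resource, &rl) == 0)
        printf("%s: cur=%ld max=%ld (-1 means unlimited)\n",
               name, (long)rl.rlim_cur, (long)rl.rlim_max);
}

int main(void)
{
    show("RLIMIT_DATA", RLIMIT_DATA);  /* limit on the heap (brk/malloc) */
    show("RLIMIT_AS", RLIMIT_AS);      /* limit on total address space */
    return 0;
}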
Thanks, Terry