From: Jesse Pollard <>
Subject: Re: Overcommitable memory??
Date: Mon, 20 Mar 2000 05:39:48 -0600
On Sun, 19 Mar 2000, David Whysong wrote:
>On Sun, 19 Mar 2000, Jesse Pollard wrote:
>>On Sun, 19 Mar 2000, James Sutherland wrote:
>>>On Sat, 18 Mar 2000 08:31:58 -0600, you wrote:
>>>
>>>> Users (and processes) will get Out-Of-Resource
>>>> errors which the users can deal with.
>>>
>>>In practice, they usually just die.
>>
>>The user still gets the choice.
>
>No, this can't be allowed. What happens in the case of:
>   OOM -> signal processes Out of Resources
>   all processes decide to ignore the signal
Note: these are one specific user's processes, not the system - and when they ignore the signal, they are aborted.
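(For illustration, a minimal sketch of what "dealing with" such a notification could look like. Linux has no out-of-resource signal - AIX's SIGDANGER is the closest real precedent - so SIGUSR1 stands in for it here, and the handler only sets a flag since free() is not async-signal-safe:)

#include <signal.h>
#include <stdlib.h>
#include <unistd.h>

static void *cache;                     /* expendable memory to give back */
static volatile sig_atomic_t pressure;  /* set by the notification */

static void on_pressure(int sig)
{
    (void)sig;
    pressure = 1;                       /* just flag it; do the real work
                                         * outside the handler */
}

int main(void)
{
    struct sigaction sa;
    sa.sa_handler = on_pressure;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;
    sigaction(SIGUSR1, &sa, NULL);      /* stand-in for an OOR signal */

    cache = malloc(1 << 20);            /* pretend: a reclaimable cache */
    for (;;) {
        pause();                        /* `kill -USR1 <pid>` simulates
                                         * the out-of-resource warning */
        if (pressure && cache) {
            free(cache);                /* cooperate: shed the cache */
            cache = NULL;
            pressure = 0;
        }                               /* a process that never cooperates
                                         * would instead be aborted */
    }
}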
>Once we are OOM, you can't give user-space any choices.
YOU AREN'T OOM - a specific user is out of resources; that is not a catastrophic system failure.
>Repeat after me:
>   You can not solve the OOM problem in user space.
>   You can not solve the OOM problem without killing processes.
>   Resource limits are not a good solution to the OOM problem.
>
>I don't like resource limits. Using resource limits is similar to not
>having memory overcommit -- you waste a lot of system resources "just in
>case", the kernel needs to do a lot more accounting, and it's just
>horribly inefficient.
Resource limits CAN prevent the OOM condition, provided:
  1. the sum of all concurrent users' limits is <= total resources, and
  2. users are not allowed to exceed their quota.
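A minimal sketch of that accounting rule (the names are mine, not kernel code):

#include <stdbool.h>
#include <stddef.h>

struct user_acct {
    size_t quota;                /* fixed so that the sum over all users
                                  * is <= total_pages (condition 1) */
    size_t committed;            /* pages already promised to this user */
};

static size_t total_pages;       /* RAM + swap, in pages */
static size_t total_committed;   /* sum of everyone's commitments */

/* Grant a reservation only while the user stays under quota
 * (condition 2); given condition 1, the global total can never be
 * exceeded, so one user running wild only runs *itself* out. */
bool reserve(struct user_acct *u, size_t pages)
{
    if (u->committed + pages > u->quota)
        return false;            /* user out of resources - not the system */
    u->committed += pages;
    total_committed += pages;
    return true;
}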
>>>> If the proper resource allocation were given then users running wild
>>>> memory allocators would not be able to crash the system by causing
>>>> init to die.
>
>>>They shouldn't be able to anyway - Rik's patch should ensure that init
>>>will always survive, even if every other process on the box is hosed.
>>>(Of course, if init was the malfunctioning process to begin with...)
>
>If init malfunctions, you're hosed no matter what.
Init only aborts (the normal situation) if the system is allowed to go OOM.
>>>>All else is a matter of implementation.
>>>
>>>The core problem remains. You, the user, have a finite amount of
>>>memory available to you, however that limit is defined. Once you run
>>>out, your processes will start dying.
>
>But that's not the problem! That's the way things have to be. You can't
>use more resources than there are available.
No, you can't - but what overcommit does is "give" access to resources you don't have. When those resources are then accessed, you die - and the system with you.
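That failure mode is easy to demonstrate from user space. A sketch (the size is illustrative and assumes a 64-bit machine; on kernels of this era the policy is tunable via /proc/sys/vm/overcommit_memory):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    size_t len = (size_t)16 << 30;   /* ask for far more than exists */
    char *p = malloc(len);
    if (p == NULL) {                 /* strict accounting fails HERE... */
        perror("malloc");
        return 1;
    }
    puts("malloc succeeded - the memory was only promised");
    memset(p, 1, len);               /* ...overcommit fails here instead:
                                      * faulting the pages in is what
                                      * finally drives the system OOM */
    return 0;
}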
>>Yes - ME THE USER. I should not be able to cause the SYSTEM a problem.
>>I should not be able to cause OTHER users a problem.
>
>Ahh, no! You can only do this by setting up horribly restrictive quotas
>and effectively removing overcommit, which is terribly wasteful!
Not horribly - it only appears that way when you have never been forced to live within a budget.
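Living within a budget is already possible at process granularity: setrlimit(RLIMIT_AS) caps a process's address space, so a runaway allocator gets a clean failure instead of taking the machine down. A small sketch (per-user limits, as noted below, don't exist yet; the 64 MB figure is arbitrary):

#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl = { 64 << 20, 64 << 20 };  /* a 64 MB budget */
    void *p;

    if (setrlimit(RLIMIT_AS, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }
    p = malloc(128 << 20);           /* over budget: denied cleanly,
                                      * and nobody else is harmed */
    printf("malloc(128MB): %s\n", p ? "succeeded" : "denied");
    free(p);
    return 0;
}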
>>>We need per-user resource limits, ideally - until then, everything is
>>>done with process granularity instead. This is a shortcoming we all
>>>know about already, but not one that is likely to be fixed any time
>>>soon.
>>
>>That is possibly why the OOM complaints will not go away.
>
>Well, my OOM complaints stem from the fact that right now OOM situations
>are functionally equivalent to crashing the machine.
Can't be helped much, as long as resource control is missing.

-------------------------------------------------------------------------
Jesse I Pollard, II
Email: pollard@cats-chateau.net
Any opinions expressed are solely my own.