ASU - out of memory?
tilman at baumann.name
Thu Aug 21 18:40:52 CEST 2008
Carsten Haitzler (The Rasterman) wrote:
> On Thu, 21 Aug 2008 16:44:36 +0200 Tilman Baumann <tilman at baumann.name> babbled:
>> Carsten Haitzler (The Rasterman) wrote:
>>> On Thu, 21 Aug 2008 17:50:26 +0200 Esben Stien <b0ef at esben-stien.name> wrote:
>>>> Tilman Baumann <tilman at baumann.name> writes:
>>>>> all the Linux memory overcommit behaviour more or less depends on
>>>>> the fact that it can always save its ass by using swap (instead
>>>>> of helplessly crashing).
>>>> Yes, or killing the application. Not having swap is nonsense;). If you
>>>> are using swap something is wrong, right, but then you fix it. I find
>>>> it strange that the debian install didn't make a little swap
>>> and luckily those smart fellas in kernel developer land.. made kernel
>>> overcommit.. a tunable parameter! and... cunningly.. on the FR (and as well
>>> on my desktop) it's turned off! :) so... a moot point really. :)
>> pardon? Honestly? This is absurd!
>> Why? I don't get it.
>> I mean, how did you get the impression that overcommitting is a bad thing?
> when it makes:
> myptr = malloc(somesize);
> if (myptr == NULL) return -1; // return and unwind
> // continue as normal
> useless, so i may as well do:
> myptr = malloc(somesize);
> if (myptr == NULL) abort();
> a la glib. all that error-checking code programs have becomes stupid. i should
> just assume all mallocs succeed! as they likely will - and then, part way
> through accessing the memory malloc actually returned, i may segv because i
> don't have enough ram. that's just bad! what's the point of error codes and
> returns then? what's the point of even trying to handle an error? :( may as
> well throw in the towel a la glib.
Not being able to malloc memory and not having any physical memory left
are just two separate things. At least on modern Linux systems.
Memory overcommit saves (physical) memory. And not just a bit.
And with some swap as a safety net it is reasonably safe to do so.
And how would it be better if _any_ app gets knocked out by running into
the memory limit? Usually it can't help the situation - especially if
some other app is the one eating all the memory.
The effect is more or less the same: some random poor app has to be
killed. (Since suicide is often the only way to react to a failed malloc.)
Turning overcommit off is, in my eyes, only a poor 'bury one's head in
the sand' solution which effectively improves nothing.
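For reference, the knob being argued about is the vm.overcommit_memory sysctl, as documented in the kernel's overcommit-accounting notes (a sketch; needs root to change):

```shell
# Show the current policy: 0 = heuristic, 1 = always overcommit, 2 = never
cat /proc/sys/vm/overcommit_memory

# "Turning overcommit off" means strict accounting (mode 2); in that mode
# the committable total is swap + overcommit_ratio percent of RAM:
sysctl -w vm.overcommit_memory=2
sysctl vm.overcommit_ratio
```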
Overcommit plus swap is just the winner. Everyone does it that way, and
in my eyes rightly so. It is fast and efficient.
And even without swap, I would not want to turn overcommit off.
Swap and overcommit are just the dream team: fast and efficient memory.
And if something goes wrong, you have plenty of time (and memory) for
solving the problem.
Contrary to without swap: how could you fix any low-memory condition
when you cannot allocate any more memory? Driving something into swap to
be able to do something about it is just right.
And in case you thought so: overcommit is not just lazy don't-care-about-
errors behaviour but a really smart optimisation.
For example it saves memory by only copying a page when the page is
written to (copy on write).
If you fork, the address space is duplicated - you would end up with
twice the memory. But Linux does not copy the memory, it overcommits:
only when a page is actually changed does it get duplicated first.
This is elegant, fast and efficient. But yes, you lose the context of an
error condition if one happens. But I say: if a system runs out of
memory, letting a program know that there is no memory left helps
nothing. The poor program that gets the error is usually neither able
to save much nor to solve the system-wide problem.