ASU - out of memory?
Carsten Haitzler (The Rasterman)
raster at openmoko.org
Fri Aug 22 02:33:26 CEST 2008
On Thu, 21 Aug 2008 18:40:52 +0200 Tilman Baumann <tilman at baumann.name> babbled:
> Carsten Haitzler (The Rasterman) wrote:
> > On Thu, 21 Aug 2008 16:44:36 +0200 Tilman Baumann <tilman at baumann.name>
> > babbled:
> >> Carsten Haitzler (The Rasterman) wrote:
> >>> On Thu, 21 Aug 2008 17:50:26 +0200 Esben Stien <b0ef at esben-stien.name>
> >>> babbled:
> >>>> Tilman Baumann <tilman at baumann.name> writes:
> >>>>> all the linux memory overcommit behaviour more or less depends on
> >>>>> the fact that it can always save its ass by using swap. (Instead
> >>>>> of helplessly crashing)
> >>>> Yes, or killing the application. Not having swap is nonsense;). If you
> >>>> are using swap something is wrong, right, but then you fix it. I find
> >>>> it strange that the debian install didn't make a little swap
> >>>> partition.
> >>> and luckily those smart fellas in kernel developer land.. made kernel
> >>> overcommit.. a tunable parameter! and... cunningly.. on the FR (and as well
> >>> on my desktop) it's turned off! :) so... a moot point really. :)
> >> pardon? Honestly? This is absurd!
> >> Why? I don't get it.
> >> I mean, how did you get the impression that overcommitting is a bad thing?
> > when it makes:
> > ----
> > myptr = malloc(somesize);
> > if (myptr == NULL) return -1; // return and unwind
> > // continue as normal
> > ----
> > useless so i may as well do:
> > ----
> > myptr = malloc(somesize);
> > if (myptr == NULL) abort();
> > ----
> > a la glib. all that error checking code programs have becomes stupid. i
> > should just assume all mallocs succeed! as they likely will, and part way
> > through accessing the memory malloc actually returned - i may segv because
> > i don't have enough ram. that's just bad! what's the point of error codes
> > and returns then? what's the point of even trying to handle an error? :( may
> > as well throw in the towel a la glib.
> Not being able to malloc memory and not having any physical memory left
> are just two separate things. At least on modern linux systems.
no they are not. i write code.
myptr = malloc(mysize);
it returns failure (NULL) then i need to deal with it.
it returns a pointer - it succeeded. i asked for memory - it gave it to me. the
problem is, with overcommit, success returns are lies. they MAY have the memory -
they may not. part way through using the pages it returned and SAID i could
have, it can just segv - as it is overcommitted and out of pages. this means
that suddenly return values can't be trusted for memory allocations anymore.
any attempts to handle NULL returns may as well not exist as success cases can
be undetectable failures. it's just a stupid policy. sure - it's there to work
around stupid userspace code that goes off allocing massive blobs of ram that
it then never goes and uses, but the kernel shouldn't go punish all apps for
this - those apps being stupid should be punished and have their code fixed.
> Memory overcommit saves (physical) memory. And not just a bit.
> And with some swap safety net it is reasonably safe to do so.
> And how is it better if _any_ app gets knocked out by running into
> the memory limit? Usually it can't help the situation. Especially if
> it is some other app that is eating all the memory.
> The effect is more or less the same. Some random poor app has to be
> killed. (Since suicide is often the only way to react to malloc fails)
> Turning overcommit off is in my eyes only a poor 'bury one's head in the
> sand' solution which effectively does improve nothing.
i disagree. overcommit is a "bury head in sand" solution. it means you just go
and avoid the original problem of allocing much more memory than you really
need.
> Overcommit and swap is just the winner. Everyone does it so. And in my
> eyes rightly so. It is fast and efficient.
> And even without swap, i would not want to turn overcommit off.
> Swap and overcommit is just the dream team. Fast and efficient memory.
> And if something goes wrong, you have plenty of time (and mem) for
> solving the problem.
> Contrary to without, because how could you fix any low memory condition
> with not allocating any more memory? Driving something into swap to be
> able to do something about it is just right.
> And just if you thought so. Overcommit is not just lazy don't care for
> errors behaviour but a really smart optimisation.
wrong. it means i can no longer trust a successful malloc() calloc() realloc()
or even alloca() return. ever. the return of a valid pointer, which according to
the manuals for these calls means success, could be an error state too.
and the only way to handle it is to have your own sigsegv/bus handlers and, on a
segv which accesses a known allocated chunk of memory, realise that overcommit
just screwed you and try and save yourself (and frankly by this stage - you
may as well just segv, exit or try to re-exec yourself freshly).
it is not elegant. it is not good. it is a cute hack that makes badly-behaved
apps not impact things as much. it just sticks the head in the sand and doesn't
go and fix the apps. it means the kernel tries to pick up the badly bloated
pieces for you and tends to punish all apps as a result.
> For example it saves memory by only copying a page if the page was
> written. (copy on write)
> For example:
> If you fork, the memory space is duplicated. You end up with twice the
> memory. But linux does not copy the memory, it overcommits.
> Only if a page is really changed does it get duplicated.
> This is elegant, fast and efficient. But yes, you lose the context of an
> error condition if one happens. But i say, if a system runs out of
> memory, letting a program know that there is no memory left helps
> nothing. The poor program that gets the error is usually neither able
> to save much nor to solve the system wide problem.
Carsten Haitzler (The Rasterman) <raster at openmoko.org>