[omgps] collect feature requests
meng.qingyou at gmail.com
Tue Jun 30 22:40:29 CEST 2009
I've developed concurrent Java programs on servers and have run into the
per-process file descriptor limit. I have never seen it crash the file
system on server or desktop boxes.
By default the fd limit is 1024 (see it with ulimit -a). If that were the
cause, it would fail even with the battery plugged in.
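For reference, a quick way to check the per-process fd limit, and to raise it for the current shell; the value 4096 is just an example:

```shell
# Show all per-process limits; "open files" is the fd limit:
ulimit -a
# Show just the fd limit (1024 by default on many systems):
ulimit -n
# Raise the soft limit for this shell session (example value):
ulimit -n 4096
```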
I don't think find or wc open that many file descriptors at all. My guess:
to read the entries in a directory, find opens the directory, reads the
inodes in it, then closes the directory. Of course at least a pipe is used
for the `|`.
My test with omgps running shows no failure, with 26954 image files.
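A minimal sanity check of the find | wc pipeline itself (the path under /tmp is made up for the example; note the glob is quoted so the shell does not expand it before find sees it):

```shell
# Create a small test tree (hypothetical path):
mkdir -p /tmp/fdtest
touch /tmp/fdtest/a.png /tmp/fdtest/b.png /tmp/fdtest/c.txt
# Count .png files; quoting '*.png' stops the shell expanding it first:
find /tmp/fdtest -name '*.png' | wc -l
```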
You can add a swap file or swap partition and then test again, to see
whether limited memory causes this problem. I've watched with `vmstat 1`
and seen that free memory is limited.
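A rough sketch of adding a swap file for that test (the /swapfile path and 64 MB size are assumptions; this needs root, and the filesystem must support swap files):

```shell
# Create a 64 MB file of zeros (path and size are examples):
dd if=/dev/zero of=/swapfile bs=1M count=64
# Format it as swap and enable it:
mkswap /swapfile
swapon /swapfile
# Confirm the new swap, then watch memory once per second:
free
vmstat 1
```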
Here are my clues:
#1: The maximum current of a USB power supply is 500 mA; under heavy CPU
load, the uSD card may not get enough power and then fails to work.
#2: Many people have experienced the "lost partition" problem, including
myself. I remember somebody asking "why does GPS hurt the uSD card?".
Laszlo KREKACS wrote:
> No one has experienced a whole filesystem crash because of *that* many open
> file descriptors?
> It would be really strange. It's really simple to test:
> While using tangogps/omgps remove the battery.
> Almost 90 percent of the time the whole filesystem crashes
> (the tiles are no longer available)
> You can test it with: find /home/root/Maps -name '*.png' | wc -l
> I bet it will hang.
> So I request the following feature:
> Instead of having 75000 files for 118MB, compress the
> tiles into reasonable 1MB files. So 118 files in total instead of
> 75000 files.
> Anyone agree?
View this message in context: http://n2.nabble.com/-omgps--collect-feature-requests-tp3178254p3185152.html
Sent from the Openmoko Community mailing list archive at Nabble.com.