nWAITing for Glamo

Carsten Haitzler (The Rasterman) raster at openmoko.org
Mon Jul 14 15:37:36 CEST 2008


On Mon, 14 Jul 2008 14:26:59 +0100 Andy Green <andy at openmoko.com> babbled:

aye. this gets at the nitty-gritty of the performance issues the glamo causes.
basically this is the crux of the "bus bandwidth to the glamo" issue: while
reading from or writing to the glamo, the cpu is stalled waiting on it,
limiting throughput to about 7MB/sec. unfortunately for us the cpu hangs
waiting on the slow glamo when it could be off doing something more useful
with its time (even if we accepted the limited read/write rates, we'd be much
better off if the transfers could be async).

that was the crux behind the DMA experiment dodji did. the problem was that
the DMA controller is on-soc and memory-to-memory, and holds the cpu in wait
states anyway - so you don't win over using the cpu. it was actually worse,
since you also pay the dma setup overhead etc.: at best it pulled about half
the throughput of copying with the cpu, and gained zero benefits. :(

unfortunately your example (the sliding top-bar) isn't even the best example
of the cpu wait states the glamo causes - i have manufactured much worse
artificial ones :) you might want to try scrolling in exposure as another
test... :)
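for anyone who wants to reproduce the throughput number, here's a minimal
sketch (mine, not from this thread - the /dev/fb0 path and the 4MB test size
are assumptions; shrink the size if your framebuffer is smaller). it mmaps
the glamo-backed framebuffer and times sequential 16-bit stores, and since
every store can stall the cpu on nWAIT, the wall-clock rate is the effective
bus throughput:

/*
 * hedged sketch: user-space write-throughput probe for the glamo-backed
 * framebuffer. /dev/fb0 and the 4MB test size are assumptions, not from
 * the original mail - adjust both to match your device.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/time.h>
#include <unistd.h>

#define TEST_BYTES (4 * 1024 * 1024)   /* bytes written per pass */

static double now_sec(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1e6;
}

int main(void)
{
    int fd = open("/dev/fb0", O_RDWR);
    if (fd < 0) { perror("open /dev/fb0"); return 1; }

    volatile uint16_t *fb = mmap(NULL, TEST_BYTES, PROT_READ | PROT_WRITE,
                                 MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED) { perror("mmap"); return 1; }

    /* sequential 16-bit stores: each one can stall on nWAIT, so the
     * elapsed time directly reflects the bus bottleneck. */
    double t0 = now_sec();
    for (size_t i = 0; i < TEST_BYTES / 2; i++)
        fb[i] = (uint16_t)i;
    double t1 = now_sec();

    printf("write: %.2f MB/s\n", TEST_BYTES / (t1 - t0) / 1e6);

    munmap((void *)fb, TEST_BYTES);
    close(fd);
    return 0;
}

build it with your cross toolchain (e.g. arm-linux-gcc -O2) and run it on the
device; a figure near 7MB/sec is the stall in action.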

> 
> Hi folks -
> 
> Decided to examine Glamo performance as a break from GPS for a while.
> 
> I don't have anything in runlevel 3 land that really taxes the Glamo, so
> I downloaded a current image of the Qtopia stuff, updated kernel to one
> with the BANKCON patch, and monitored nWAIT back to CPU on my scope
> while I made it scroll down its list of open apps.
> 
> The low parts of the attached picture are where the Glamo has stalled the
> CPU off the external bus (shared with DRAM) during the scrolling
> action; the high parts are where it was willing to take another 16-bit
> transfer.  If it's representative, it looks like talking to the Glamo
> stalls the CPU ~75% of the time, leaving only the remaining ~25% for the
> CPU to actually do something or wait for its own work to be done.
> 
> Banging on that "open app" pulldown all the while - hard enough that it
> generates artefacts on screen, in fact - I found a considerable increase
> in animation smoothness if I forced the Glamo waitstates to a very low
> level (too low a level: don't try this at home if you have a read-write
> SD card mounted, and expect artefacts)
> 
> ~ echo 0x200 > /sys/devices/platform/neo1973-memconfig.0/BANKCON1
> 
> so there's no doubt our CPU-side waitstates impact performance too, but
> the main problem seems to be the Glamo stalling us, for its own internal
> arbitration / timing reasons, on access to its internal DRAM.  The
> default waitstates we use
> 
> ~ echo 0x1bc0 > /sys/devices/platform/neo1973-memconfig.0/BANKCON1
> 
> add up to 12 clocks if they are all taken sequentially, while the 0x200
> setting gives just 3 (reducing it further just gave garbage); the
> bitfields are decoded in the sketch after this message.
> 
> -Andy
> 
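for reference, here's a hedged sketch of decoding those BANKCON1 values. the
bitfield layout below (Tacs/Tcos/Tacc/Tcoh/Tcah) follows the s3c24xx
memory-controller documentation, and feeding it 0x1bc0 and 0x200 reproduces
the totals of 12 and 3 clocks andy quotes. illustrative only, not code from
this thread:

/*
 * hedged sketch: decode an s3c24xx BANKCONn value into its timing fields,
 * per the S3C2410/2442 memory-controller register layout. running it on
 * 0x1bc0 and 0x200 reproduces the "12" and "3" totals quoted above.
 */
#include <stdio.h>
#include <stdlib.h>

/* 2-bit fields (Tacs, Tcos, Tcoh, Tcah): 00->0, 01->1, 10->2, 11->4 clocks */
static const int clk2[4] = { 0, 1, 2, 4 };
/* 3-bit Tacc field: clocks the access cycle itself takes */
static const int tacc[8] = { 1, 2, 3, 4, 6, 8, 10, 14 };

int main(int argc, char **argv)
{
    unsigned v = argc > 1 ? strtoul(argv[1], NULL, 0) : 0x1bc0;

    int acs = clk2[(v >> 13) & 3];   /* address setup before nGCS    */
    int cos = clk2[(v >> 11) & 3];   /* chip-select setup before nOE */
    int acc = tacc[(v >> 8) & 7];    /* access cycle                 */
    int coh = clk2[(v >> 6) & 3];    /* chip-select hold after nOE   */
    int cah = clk2[(v >> 4) & 3];    /* address hold after nGCS      */

    printf("BANKCON=0x%x: Tacs=%d Tcos=%d Tacc=%d Tcoh=%d Tcah=%d "
           "-> %d clocks total\n",
           v, acs, cos, acc, coh, cah, acs + cos + acc + coh + cah);
    return 0;
}

run it as ./bankcon 0x1bc0 (12 clocks) or ./bankcon 0x200 (3 clocks).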


-- 
Carsten Haitzler (The Rasterman) <raster at openmoko.org>



