nWAITing for Glamo

Andy Green andy at openmoko.com
Mon Jul 14 15:26:59 CEST 2008


Hi folks -

Decided to examine Glamo performance as a break from GPS for a while.

I don't have anything in runlevel 3 land that really taxes the Glamo, so
I downloaded a current image of the Qtopia stuff, updated kernel to one
with the BANKCON patch, and monitored nWAIT back to CPU on my scope
while I made it scroll down its list of open apps.

The low parts of the attached picture are where the Glamo has stalled the
CPU off the external bus (shared with DRAM) during the scrolling action;
the high parts are where it was willing to accept another 16-bit
transfer.  If this trace is representative, it looks like talking to the
Glamo stalls the CPU ~75% of the time, with the actual work (or waiting
for it to complete) happening in the remaining time.

Banging on that "open app" pulldown all the while, hard enough that it
generates artefacts on screen in fact, I found we got a considerable
increase in animation smoothness if I forced the Glamo waitstates to a
very low level (too low a level -- don't try this at home if you have a
rw SD card mounted, and expect artefacts):

~ echo 0x200 > /sys/devices/platform/neo1973-memconfig.0/BANKCON1

so there's no doubt our CPU side waitstates impact performance too, but
the main problem seems to be Glamo stalling us for its own internal
arbitration / timing reasons about access to its internal DRAM.  The
default waitstates we use

~ echo 0x1bc0 > /sys/devices/platform/neo1973-memconfig.0/BANKCON

add up to 12 clocks if they are all applied sequentially, while the
0x200 setting gives just 3 (reducing it further just gave garbage).
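For reference, here is a rough decode of those two BANKCON values.  The
bit positions and clock encodings below are my reading of the S3C24xx
datasheet's BANKCONn layout (an assumption -- check your SoC manual
before trusting the exact numbers), but they do reproduce the 12-vs-3
totals above:

```shell
#!/bin/sh
# Rough decode of the S3C24xx BANKCONn values quoted above.  Bit layout
# and clock encodings are assumed from the SoC datasheet.

# Tacs/Tcos/Tcoh/Tcah fields: 2-bit encodings for 0, 1, 2 or 4 clocks
clk2() {
    case $1 in 0) echo 0 ;; 1) echo 1 ;; 2) echo 2 ;; 3) echo 4 ;; esac
}

# Tacc field: 3-bit encoding for 1, 2, 3, 4, 6, 8, 10 or 14 clocks
clk3() {
    case $1 in 0) echo 1 ;; 1) echo 2 ;; 2) echo 3 ;; 3) echo 4 ;;
               4) echo 6 ;; 5) echo 8 ;; 6) echo 10 ;; 7) echo 14 ;; esac
}

decode() {
    val=$(( $1 ))
    tacs=$(clk2 $(( (val >> 13) & 3 )))   # address set-up before nGCS
    tcos=$(clk2 $(( (val >> 11) & 3 )))   # chip-select set-up before nOE
    tacc=$(clk3 $(( (val >>  8) & 7 )))   # access cycle proper
    tcoh=$(clk2 $(( (val >>  6) & 3 )))   # chip-select hold after nOE
    tcah=$(clk2 $(( (val >>  4) & 3 )))   # address hold after nGCS
    echo "$1: Tacs=$tacs Tcos=$tcos Tacc=$tacc Tcoh=$tcoh Tcah=$tcah" \
         "total=$(( tacs + tcos + tacc + tcoh + tcah )) clocks"
}

decode 0x1bc0   # the default: 0+4+4+4+0 = 12 clocks
decode 0x200    # the experiment: 0+0+3+0+0 = 3 clocks
```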

- -Andy

-------------- next part --------------
A non-text attachment was scrubbed...
Name: f0020tek.png
Type: image/png
Size: 6185 bytes
Desc: not available
Url: http://lists.openmoko.org/pipermail/openmoko-kernel/attachments/20080714/a1c5b3d2/attachment.png