[qtmoko] New significant speedups coming to FreeRunner

Thomas White taw at bitwiz.org.uk
Tue Feb 16 16:49:30 CET 2010


On Tue, 16 Feb 2010 16:19:08 +0100
David Garabana Barro <david at garabana.com> wrote:

> >Now I just changed a few kernel config options and applied a few-line
> >patch (thanks to Thomas White), and the graphics speed is very nice.
> >In QVGA it can probably match the iPhone or any Android device.
> 
> No, it can't, at least not until we have an OpenGL driver. But it's true
> that VGA resolution is a handicap for such a slow graphics chip, and QVGA
> would be a better fit for this hardware.

A small point, but there are things we can do along the way to a full
GL driver which speed things up, and I don't think we've found them all
just yet.  For instance, adding proper fencing in the DRM driver
unclogs things by a fairly noticeable amount: fullscreen (VGA) blits at
100fps with 0% CPU usage, anyone?
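
To make "proper fencing" a little more concrete, here is a rough,
hypothetical sketch of the idea (none of these names come from the real
glamo-drm code): the driver drops a sequence number into the command
queue after each batch, and anyone who needs a buffer back sleeps until
the interrupt handler reports that the GPU has passed that number.

/* Hypothetical sketch only, not the actual glamo-drm code. */

#include <linux/atomic.h>
#include <linux/types.h>
#include <linux/wait.h>

struct fence_state {
    atomic_t last_completed;    /* highest seqno the GPU has finished */
    wait_queue_head_t queue;    /* CPU-side waiters sleep here */
};

static void fence_init(struct fence_state *fs)
{
    atomic_set(&fs->last_completed, 0);
    init_waitqueue_head(&fs->queue);
}

/* Interrupt handler path: record progress and wake any sleepers. */
static void fence_irq(struct fence_state *fs, u32 completed)
{
    atomic_set(&fs->last_completed, completed);
    wake_up_all(&fs->queue);
}

/* Called before touching a buffer that is fenced with 'seqno'. */
static int fence_wait(struct fence_state *fs, u32 seqno)
{
    return wait_event_interruptible(fs->queue,
        (u32)atomic_read(&fs->last_completed) >= seqno);
}

The point is simply that waiters sleep rather than spin, which is how a
fullscreen blit can end up costing essentially no CPU time.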

> The fact is that the Glamo is a graphics "decelerator". It's known that the
> Neo1973 was faster than the FreeRunner at graphics (even on VGA), despite
> its slower processor.

Yes, the bus speed is a fundamental limitation, and it does suck.
But there are other reasons (see below) why the current driver and
rendering model is a bad match for the hardware.  In fact, it's a bad
match for almost all hardware, it's just that normally the overall
speed is high enough to get away with it.  We haven't yet allowed
ourselves to make meaningful use of the acceleration features, and I'm
absolutely convinced that if we did so then the GTA02's UI could fly
along.  It's a fact that to get to this state we're going to have to
write a lot of hardware-specific code, and each developer who would
potentially work on this stuff has to make their own decision about
whether they want to do that for a GPU which won't be found elsewhere.

Extract from
http://lists.shr-project.org/pipermail/shr-devel/2009-December/001702.html
- see the thread for context.
--->
If you're only talking about the X protocol overhead, then that's true
- although I haven't yet seen any numbers...

However, it's not the driver's fault.  By the time (say) GTK's rendering
instructions get to our driver (i.e. xf86-video-glamo), they've been
turned into a series of tiny rectangle operations which are almost
impossible to accelerate in any useful way.  In this sense, the way X
requires programs to send their rendering commands, and the way
GTK/Cairo sends its commands, and the way the X server core communicates
with the driver, are hurting us.
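
As a purely hypothetical illustration of what that decomposition looks
like from the server's side (this is a toy client, not GTK or driver
code), imagine a redraw that arrives as hundreds of one-pixel-high
fills:

#include <X11/Xlib.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy)
        return EXIT_FAILURE;

    int scr = DefaultScreen(dpy);
    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr), 0, 0,
                                     240, 300, 0, 0,
                                     WhitePixel(dpy, scr));
    GC gc = XCreateGC(dpy, win, 0, NULL);
    XMapWindow(dpy, win);

    /* 300 tiny requests where one large one would have done; each one
     * pays its own protocol and driver setup cost. */
    for (int y = 0; y < 300; y++)
        XFillRectangle(dpy, win, gc, 0, y, 240, 1);
    XFlush(dpy);

    sleep(2);
    XCloseDisplay(dpy);
    return EXIT_SUCCESS;
}

By the time the requests look like that, the fixed per-operation
overhead has already swamped anything the 2D engine could save.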

Essentially, that's why E is so much faster: it prepares larger chunks
of data at a higher level where acceleration can be much more
meaningful, then sends them to the server in one big block.  The price
of this is that the acceleration done by the driver is hardly used in
most cases, so we still don't get the best out of our hardware.
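
Roughly speaking (and again as a toy sketch rather than anything
resembling Evas code), that model looks like this: compose the whole
frame client-side, then hand it to the server in a single large
request.

#include <X11/Xlib.h>
#include <X11/Xutil.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy)
        return EXIT_FAILURE;

    int scr = DefaultScreen(dpy);
    int w = 480, h = 640;                     /* GTA02-sized surface */
    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr), 0, 0,
                                     w, h, 0, 0, BlackPixel(dpy, scr));
    GC gc = XCreateGC(dpy, win, 0, NULL);
    XMapWindow(dpy, win);

    /* "Render" the scene client-side; a real canvas would composite
     * widgets, text and images into this buffer. */
    char *pixels = malloc((size_t)w * h * 4);
    memset(pixels, 0x80, (size_t)w * h * 4);
    XImage *img = XCreateImage(dpy, DefaultVisual(dpy, scr),
                               DefaultDepth(dpy, scr), ZPixmap, 0,
                               pixels, w, h, 32, 0);

    /* One big transfer (XShm would avoid the copy, but same idea). */
    XPutImage(dpy, win, gc, img, 0, 0, 0, 0, w, h);
    XFlush(dpy);

    sleep(2);
    XDestroyImage(img);                       /* also frees 'pixels' */
    XCloseDisplay(dpy);
    return EXIT_SUCCESS;
}

The driver sees one large operation it can handle sensibly, but, as
above, most of its acceleration hooks go unused.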

A more fundamental redesign could potentially allow such pitfalls
to be side-stepped, but this also comes at a price:  Hardware-dependent
code would end up existing at a higher level in the software [1],
reducing the reusability of code.

[1] In the extreme case, hardware-dependent code can be moved all the
way up to the individual client program, abstracted by a library.
This is what DRI does, in which case that abstraction library is
usually Mesa, providing an OpenGL API.
<---
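
To make footnote [1] a little more concrete: under DRI the client only
ever speaks a generic API (OpenGL via Mesa), and the hardware-specific
driver code is loaded into the client process behind it.  Here is a
minimal, entirely generic GLX sketch of what the client side looks like
(nothing here is Glamo-specific, and of course the GTA02 has no such
driver yet):

#include <GL/gl.h>
#include <GL/glx.h>
#include <X11/Xlib.h>
#include <X11/Xutil.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy)
        return EXIT_FAILURE;

    int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, None };
    XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);
    if (!vi)
        return EXIT_FAILURE;

    /* A window using the GLX-capable visual */
    Colormap cmap = XCreateColormap(dpy, DefaultRootWindow(dpy),
                                    vi->visual, AllocNone);
    XSetWindowAttributes swa;
    memset(&swa, 0, sizeof(swa));
    swa.colormap = cmap;
    Window win = XCreateWindow(dpy, DefaultRootWindow(dpy), 0, 0,
                               480, 640, 0, vi->depth, InputOutput,
                               vi->visual, CWColormap, &swa);
    XMapWindow(dpy, win);

    /* From here on the client talks plain GLX/GL; under DRI, the
     * hardware-specific work happens inside this process. */
    GLXContext ctx = glXCreateContext(dpy, vi, NULL, True);
    glXMakeCurrent(dpy, win, ctx);

    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glXSwapBuffers(dpy, win);

    sleep(2);
    glXDestroyContext(dpy, ctx);
    XCloseDisplay(dpy);
    return EXIT_SUCCESS;
}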

My decision about this was simple:  Since I enjoy the development work,
it doesn't make any difference to me that the hardware will go away in
time.  Nothing is forever, and this is a perfect opportunity to learn
about driver development on a relatively tame piece of hardware.  I
don't have any immediate plans for world domination [2]..

Tom

[2] ... or is that just what I want you to believe?  Mwahahahaha...

-- 
Thomas White <taw at bitwiz.org.uk>


