Qtopia coming for Neo1973

Shawn Rutledge shawn.t.rutledge at gmail.com
Thu Sep 20 02:48:42 CEST 2007

On 9/19/07, thomas.cooksey at bt.com <thomas.cooksey at bt.com> wrote (with
the wrong kind of word-wrap, unfortunately):

> X is a client-server architecture which uses sockets. The server draws things on behalf of the clients. Rather than clients having to understand the X protocol, Xlib was developed to provide a drawing API. Xlib is a very limited API for drawing lines, rectangles and arcs.

It covers a bit more - window allocation and manipulation, and
rendering text with bitmap fonts come to mind.

> Xlib also allows clients to send a pixmap to the server to render. As time went on, lines, rectangles and arcs became a bit limiting, so toolkits like GTK started rendering vector graphics into pixmaps and just used Xlib to send those pixmaps to the server. Copying pixmaps over sockets was slow, so shared memory was used instead for local clients.

That's about right.  I agree that it is very inelegant.

I think XRender does take advantage of whatever acceleration the X
driver implements.

> The GTA02 will have an SMedia graphics accelerator. As it's not been disclosed how the SMedia chip is going to be used, I have to guess, and my guess is that a KDrive server will be written which will accelerate block fills and block copies.

I sure hope they do more than that.

If there is offscreen memory, bitmap glyphs for the fonts that you are
using can be cached, and copied onto onscreen memory without much
intervention from the CPU, so even with just accelerated blitting, you
get accelerated text too.  (And the bitmaps are rendered initially by
FreeType, so by using bitmaps in the cache you don't lose smoothness.)
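The glyph-cache idea can be sketched in miniature: render the coverage bitmap once, then every later draw is just an alpha-blended copy, which is exactly the kind of blit a 2D engine can accelerate. This is a toy model, not code from any real X server; the names, sizes, and the solid-block "glyph" are all made up for illustration.

```cpp
#include <cstdint>
#include <map>
#include <vector>

// A glyph's coverage (alpha) bitmap, as a rasterizer would produce it.
struct Glyph {
    int w, h;
    std::vector<uint8_t> alpha;   // 8-bit coverage per pixel
};

struct GlyphCache {
    std::map<char, Glyph> glyphs; // in a real server this lives offscreen

    const Glyph& get(char c) {
        auto it = glyphs.find(c);
        if (it == glyphs.end()) {
            // Stand-in for a FreeType render: a solid 4x6 block.
            Glyph g{4, 6, std::vector<uint8_t>(4 * 6, 255)};
            it = glyphs.emplace(c, std::move(g)).first;
        }
        return it->second;
    }
};

// Alpha-blend one cached glyph into an 8-bit grayscale framebuffer.
// This inner loop is what a blitter could do without the CPU.
void blit_glyph(std::vector<uint8_t>& fb, int fb_w, int x, int y,
                const Glyph& g, uint8_t color) {
    for (int row = 0; row < g.h; ++row)
        for (int col = 0; col < g.w; ++col) {
            uint8_t a = g.alpha[row * g.w + col];
            uint8_t& dst = fb[(y + row) * fb_w + (x + col)];
            dst = uint8_t((color * a + dst * (255 - a)) / 255);
        }
}
```

Because the cache hands back the same pre-rendered bitmap every time, the antialiasing FreeType computed on the first render survives every subsequent blit for free.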

Next I would hope they would accelerate line-drawing, and fills of
some kinds of primitives (rectangles, triangles, and/or polygons).
Bonus points for antialiasing those (but I'm sure the chip can do it,
so it shouldn't be hard).

I'd like to see accelerated Bezier curves too, but I'm not sure whether that's possible.
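To see why an accelerated line op matters, here's the software path it would replace: a classic Bresenham loop touching one pixel per iteration. A hardware line engine collapses all of this into a couple of register writes per line. This is a generic sketch into a memory buffer, not driver code.

```cpp
#include <cstdint>
#include <cstdlib>
#include <vector>

// Bresenham line into an 8-bit buffer; 'fb_w' is the row stride in
// pixels. The per-pixel work here is exactly what the CPU burns when
// the driver has no accelerated line primitive.
void draw_line(std::vector<uint8_t>& fb, int fb_w,
               int x0, int y0, int x1, int y1, uint8_t color) {
    int dx = std::abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -std::abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy;
    for (;;) {
        fb[y0 * fb_w + x0] = color;
        if (x0 == x1 && y0 == y1) break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }
        if (e2 <= dx) { err += dx; y0 += sy; }
    }
}
```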

For OpenGL probably the most important things would be textures mapped
onto triangles, and Gouraud-shaded triangles.  If you use Mesa in
mostly-software mode and just accelerate those, it's already a big
help for many kinds of rendering.
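Gouraud shading is nothing more than linear interpolation of the vertex colors across the triangle; a one-scanline sketch shows the whole idea. A real rasterizer runs this per row (and per color channel), and hardware does the same interpolation per clock. The function and parameter names here are my own invention.

```cpp
#include <cstdint>
#include <vector>

// Shade one horizontal span from (x0, c0) to (x1, c1), linearly
// interpolating the color -- the core of Gouraud shading.
void shade_span(std::vector<uint8_t>& fb, int fb_w, int y,
                int x0, uint8_t c0, int x1, uint8_t c1) {
    for (int x = x0; x <= x1; ++x) {
        // t runs 0..1 across the span (guard the one-pixel case).
        float t = (x1 == x0) ? 0.0f : float(x - x0) / float(x1 - x0);
        fb[y * fb_w + x] = uint8_t(c0 + t * (c1 - c0));
    }
}
```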

> On the other hand, we have Qtopia. In Qtopia, an application defines Qt widgets, which are drawn using a QPaintEngine into an off-screen buffer then copied to the frame buffer using a QScreen. Writing an accelerated graphics driver is as simple as inheriting from QPaintEngine & QScreen and re-implementing the methods the hardware has acceleration for and leaving the other methods alone for software fallback. The process of writing an accelerated driver is also very well documented with some great examples to use.

Yep.  It's more direct, and I don't quite understand the argument that
X can be just as fast if you hack it in just the right ways.  There
are still more layers in an X-based architecture.  But X has other
advantages like being old and venerable, and network transparency
(which embedded devices often don't support anyway), choice of window
managers, and being able to run apps that don't use the toolkit you
happen to have chosen, or use Xlib directly.  Plain Xlib apps tend to
be amazingly tiny and wicked fast (just because their rendering is
simple by definition).

I think they are going to continue coexisting for a long time, and
it's fine with me if OpenMoko mainstream keeps using GTK and X
(because it works well enough), but personally I prefer working on
something new rather than depending on X and all its warts.  But
that's just me - I'm having some fun working directly with /dev/fb0
and making sure I understand what I'm writing from the bottom up.  I
hope it will be faster and take less memory.  I would also hope after
the docs come out, I can understand the chip well enough to accelerate
some operations.
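For anyone curious what working directly with /dev/fb0 looks like, here's a minimal sketch: mmap the framebuffer and poke pixels. I'm assuming a 16bpp RGB565 mode and no row padding, both of which are guesses (real code should check bits_per_pixel and use the fixed-info line_length); error handling is trimmed for brevity.

```cpp
#include <cstdint>
#include <fcntl.h>
#include <linux/fb.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <vector>

// Pack 8-bit RGB into a 16bpp RGB565 pixel (assumed display format).
static inline uint16_t rgb565(uint8_t r, uint8_t g, uint8_t b) {
    return uint16_t((r >> 3) << 11 | (g >> 2) << 5 | (b >> 3));
}

// Plot into any 16bpp buffer; 'stride' is in pixels, not bytes.
static inline void put_pixel(uint16_t* fb, int stride,
                             int x, int y, uint16_t c) {
    fb[y * stride + x] = c;
}

// Map /dev/fb0 and draw one red dot. Assumes xres * 2 bytes per row,
// i.e. no padding -- check fb_fix_screeninfo.line_length in real code.
int draw_dot_on_fb0() {
    int fd = open("/dev/fb0", O_RDWR);
    if (fd < 0) return -1;
    fb_var_screeninfo vi;
    if (ioctl(fd, FBIOGET_VSCREENINFO, &vi) < 0) { close(fd); return -1; }
    size_t len = size_t(vi.xres) * vi.yres * (vi.bits_per_pixel / 8);
    void* mem = mmap(nullptr, len, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    if (mem == MAP_FAILED) { close(fd); return -1; }
    uint16_t* fb = static_cast<uint16_t*>(mem);
    put_pixel(fb, int(vi.xres), 10, 10, rgb565(255, 0, 0));
    munmap(mem, len);
    close(fd);
    return 0;
}
```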

I think for the ultimate in eye candy, using OpenGL pretty directly
with as few layers as possible between your OpenGL app and the
hardware would be the way to go.  (I'm pretty sure that's what Apple
does.)  But OpenGL doesn't do windowing, so you either have to write
some new windowing code or depend on X to manage that part, or just
run every app full-screen.  And almost every idea along these lines
has some sort of implementation out there somewhere already... like
Fresco, for example.


Then again:  "Do not try this at home: This setup eats up the memory
of your Zaurus and it is so slow that you can hardly call it
interactive."  :-)

I haven't tried it myself.
