GSoC 2008

Niluge kiwi kiwiiii at gmail.com
Tue Mar 25 17:37:27 CET 2008


Somebody in the thread at some point said:
> The raw accelerometer data is predicated around byte data for X Y Z per
> sample per motion sensor.

> Ultimately the "gesture recognition" action is about eating 200 3-byte
> packets of data a second and issuing only one or two bytes per second
> about any "gesture" that was seen.

The spec document for the accelerometers says the refresh rate can be
chosen: 100 Hz or 400 Hz. The three values are stored as 2's
complement numbers, one byte for each axis.
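Just to make that concrete, here is a tiny sketch of how one raw
sample could be decoded, assuming the driver hands us the three bytes
in X, Y, Z order (the function names and the example bytes are made
up):

#include <stdint.h>
#include <stdio.h>

/* one decoded sample; each axis is an 8-bit two's complement count */
struct accel_sample {
    int8_t x, y, z;
};

/* assumption: raw[0..2] are the X, Y, Z bytes of one sample */
static struct accel_sample decode_sample(const uint8_t raw[3])
{
    struct accel_sample s;
    s.x = (int8_t)raw[0];   /* the cast reinterprets the byte as signed */
    s.y = (int8_t)raw[1];
    s.z = (int8_t)raw[2];
    return s;
}

int main(void)
{
    uint8_t raw[3] = { 0x12, 0xf0, 0x40 };   /* made-up example bytes */
    struct accel_sample s = decode_sample(raw);
    printf("x=%d y=%d z=%d\n", s.x, s.y, s.z);
    return 0;
}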


Somebody in the thread at some point said:
> Or a rotating shake with axis long side: it looks completely different when
> device is upright, 45°, or flat (in fact this gesture isn't detectable at all
> in upright position, in the first place).
> Only correct way is to calculate real heading and velocity vector of device,
> as accurate as possible. Then accumulate a "route", and this route you may
> compare to a set of templates for best match (after transforming for size and
> orientation). All this is very low accuracy, so 16bit integer and sine-tables
> will do easily, but i don't think it will become much cheaper than this.
> That is, if the gestures are more complex than just "one sharp shake" "2
> gentle shakes" etc.

With the two accelerometers, we can calculate the position (and the
velocity) of each accelerometer from a starting point (position and
velocity). But this is not enough to have the position of the whole
phone in space: we don't know the rotation around the axis defined by
the two accelerometers.
I didn't manage to open and view the FreeRunner hardware source files
(the software to read them seems to be closed source and not free), so
I don't know where the two accelerometers are placed on the phone, but
I hope they are well placed (so that we don't really need the unknown
angle).
(I also tried to spot the chips on the motherboard shots available on
the wiki, but didn't find them...)
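To show the kind of integration I mean, here is a very rough sketch,
assuming the samples are already converted to m/s^2, gravity is
already removed, and the rate is the 100 Hz one (in practice drift
would dominate very quickly, this only shows the principle):

#include <stdio.h>

#define RATE_HZ 100.0   /* assumed sample rate */

struct state {
    double vx, vy, vz;  /* velocity, m/s */
    double px, py, pz;  /* position, m   */
};

/* simple Euler integration: velocity first, then position */
static void integrate(struct state *s, double ax, double ay, double az)
{
    const double dt = 1.0 / RATE_HZ;
    s->vx += ax * dt;     s->vy += ay * dt;     s->vz += az * dt;
    s->px += s->vx * dt;  s->py += s->vy * dt;  s->pz += s->vz * dt;
}

int main(void)
{
    struct state s = { 0 };
    /* feed one second of a constant 1 m/s^2 push along X */
    for (int i = 0; i < 100; i++)
        integrate(&s, 1.0, 0.0, 0.0);
    printf("v = %.2f m/s, p = %.2f m\n", s.vx, s.px);
    return 0;
}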

Since we know that the relative position of the two accelerometers is
fixed, that constraint could help to detect calculation errors, and
maybe correct them (a little...).
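A trivial way to use that constraint, assuming we keep one integrated
position per accelerometer (the 5 cm baseline is a made-up number):

#include <math.h>
#include <stdio.h>

#define BASELINE_M 0.05   /* assumed fixed distance between the sensors */

struct vec3 { double x, y, z; };

/* how far the two integrated positions have drifted from the known,
 * fixed distance between the sensors on the board */
static double baseline_error(struct vec3 p1, struct vec3 p2)
{
    double dx = p1.x - p2.x, dy = p1.y - p2.y, dz = p1.z - p2.z;
    return fabs(sqrt(dx * dx + dy * dy + dz * dz) - BASELINE_M);
}

int main(void)
{
    struct vec3 a = { 0.00, 0.0, 0.0 };
    struct vec3 b = { 0.06, 0.0, 0.0 };   /* drifted 1 cm too far apart */
    printf("error = %.3f m\n", baseline_error(a, b));
    return 0;
}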



Regarding the work on an MPU: if I've understood what I've read in
the mailing list archives, it's still just an idea, and the FreeRunner
won't have one, am I right?

For the GSoC, I think working on a simple library which uses the CPU
would already be a good thing (but we can work with the idea in mind
that the code will eventually need to be ported to an MPU).


The library could provide two things:
* the recognition of typical gestures
* the position and velocity evolutions in time

I don't know yet whether it is necessary to calculate the latter in
order to obtain the former: it probably depends on the complexity of
the gestures to recognize, so we could divide the gestures into two
groups: the simple ones, like a fast acceleration in any direction
(see the sketch below), and the more complex ones, like a Pi/2
rotation (landscape mode).
We should also use the "click" and "double click" recognition (on each
axis) already provided by the chip itself, because it needs no CPU at
all.
The hardware also provides free-fall detection.
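Here is a rough sketch of what I have in mind for the simple group,
assuming the samples are converted to units of g; the 1.5 g threshold
is just a guess and would need tuning:

#include <math.h>
#include <stdio.h>

#define SHAKE_THRESHOLD_G 1.5   /* made-up threshold */

/* a "fast acceleration in any direction": the magnitude of the
 * measured acceleration, minus the 1 g of gravity, exceeds a threshold */
static int is_fast_shake(double x_g, double y_g, double z_g)
{
    double magnitude = sqrt(x_g * x_g + y_g * y_g + z_g * z_g);
    return fabs(magnitude - 1.0) > SHAKE_THRESHOLD_G;
}

int main(void)
{
    /* two made-up samples: resting flat, then a sharp jolt along X */
    printf("%d\n", is_fast_shake(0.0, 0.0, 1.0));  /* 0: at rest */
    printf("%d\n", is_fast_shake(2.8, 0.1, 1.0));  /* 1: jolt    */
    return 0;
}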


If we could build such a library, it would allow us to create many
things (the gestures for an easier interface, and the position and
velocity for games, but not only).



