GSoC Accelerometer-based Gestures Update

Paul-Valentin Borza paulvalentin at borza.ro
Fri Jun 6 10:59:41 CEST 2008


Thanks for the ideas. I'll compile them and write them up on
http://wiki.openmoko.org/wiki/Accelerometer-based_Gestures

So, let me answer some of your questions:

To Alexey:
You're right about the battery life and CPU. Continuous recognition
shouldn't be thought of as something that is always processing in the
background. When I say continuous recognition I mean that you tell me
when to start processing and I'll tell you when I'm finished (i.e.
you don't have to tell me when the gesture ends, as you would in
isolated recognition mode).
The recognition will start upon an event (e.g. an incoming call), the
end-point detector will decide when the gesture is finished, and the
frames will be passed to the hidden Markov model-based recognizer.
If I were to process the accelerometer data continuously, I'd
probably empty the battery within hours of use, and we clearly don't
want that.
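
Just to make the flow concrete, here is a rough sketch (in Python,
purely for illustration) of the event-triggered loop I have in mind.
The function names, the thresholds and the simple delta-magnitude test
are placeholders, not the real end-point detector:

    import math

    IDLE_THRESHOLD = 0.5   # delta-magnitude below this counts as "no motion" (made-up value)
    IDLE_FRAMES = 15       # this many quiet frames in a row end the gesture (made-up value)

    def magnitude(frame):
        x, y, z = frame
        return math.sqrt(x * x + y * y + z * z)

    def capture_gesture(read_accel_frame):
        # Called when an event (e.g. an incoming call) starts recognition.
        # Collects frames until the end-point detector decides the user has
        # stopped moving, then returns them for the HMM-based recognizer.
        frames = [read_accel_frame()]
        quiet = 0
        while quiet < IDLE_FRAMES:
            frames.append(read_accel_frame())
            delta = abs(magnitude(frames[-1]) - magnitude(frames[-2]))
            quiet = quiet + 1 if delta < IDLE_THRESHOLD else 0
        return frames

    # usage, assuming a real frame source and recognizer exist:
    #   frames = capture_gesture(read_accel_frame)
    #   gesture = recognize(frames)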

Normally, you won't have to train the gestures. I will train them
once I have enough data. It might be a good idea to gather data from
the community with a custom-made app and train the gestures on that.
However, if you feel that the recognition rate isn't good enough, you
can always retrain them yourself with a UI app that I'll write later
(second part of GSoC). You'll be able to train them from the command
line this month. I have to say that there are two modes of training:
one where you train a single model by increasing its likelihood on its
own data, and another where you train all models at the same time,
increasing the discriminant power of the correct model and decreasing
that of the others for a given observation sequence (the sequence of
XYZ data).
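
To illustrate the difference between the two training modes, here is a
toy sketch (Python, for illustration only; the real models are HMMs,
and the scores below just stand in for log P(O|lambda)):

    import math

    def logsumexp(values):
        m = max(values)
        return m + math.log(sum(math.exp(v - m) for v in values))

    def ml_objective(scores, correct):
        # Mode 1: only the correct model's likelihood matters; training
        # pushes this value up using that model's own data.
        return scores[correct]

    def discriminative_objective(scores, correct):
        # Mode 2: push the correct model's likelihood up *relative to*
        # all models, which also pushes the competing models down for
        # this observation sequence.
        return scores[correct] - logsumexp(scores)

    # Example: made-up log-likelihoods of three gesture models for one
    # XYZ observation sequence, where model 1 is the correct one.
    scores = [-120.0, -118.5, -119.2]
    print(ml_objective(scores, correct=1))
    print(discriminative_objective(scores, correct=1))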

The classifier processes the so-called feature vector, which is
composed of the magnitude and the delta-magnitudes between frames t
and t-1 and between frames t and t-2. The HMM recognizer processes the
raw XYZ data, so a gesture made with the phone facing upwards will be
different from the same gesture made with the phone facing downwards.
You'll have to define different gestures for the same action, because
the axes measure different values (in fact, exactly opposite values).
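
For reference, this is roughly how such a feature vector could be
computed (again a Python sketch just to show the computation; units
and scaling depend on the accelerometer driver):

    import math

    def magnitude(frame):
        x, y, z = frame
        return math.sqrt(x * x + y * y + z * z)

    def feature_vector(frames, t):
        # frames: list of (x, y, z) tuples; requires t >= 2
        m_t = magnitude(frames[t])
        return (
            m_t,                              # magnitude of frame t
            m_t - magnitude(frames[t - 1]),   # delta vs. frame t-1
            m_t - magnitude(frames[t - 2]),   # delta vs. frame t-2
        )

    # example with three fake frames (values made up):
    frames = [(0.0, 0.0, 9.8), (0.1, 0.0, 9.7), (2.5, 0.3, 9.1)]
    print(feature_vector(frames, 2))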

To Tilman:
Once I receive the FreeRunner, I'll run some tests on battery life
and decide on the best option. Like you, I think we shouldn't make
excessive use of gestures. Putting gestures in every single app would
be a disaster - gestures should be intuitive, and not every command in
an app should be linked to a gesture. And yes, you're right that every
recognizer displays the recognized gesture on the screen - I'll show a
notification on the screen once a gesture has been recognized,
successfully or not. Grammars are always based on context; there's no
point in processing gestures that aren't mapped in applications.

To Bobby:
It is indeed an exciting project, but I still have lots of work to
do. I hope to finish the classifier today. I decided to go with a
.conf file for each gesture context, and now I have to write the code
that reads it. Tomorrow I'll train the classifier with some data and
upload it to svn. Next week I'll start working on (actually porting)
my HMM recognizer.
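
I haven't settled on the exact format yet, but just to give an idea,
a per-context gesture .conf could look something like the fragment
below. The section and key names are placeholders (not the final
format), and the Python snippet is only there to show that such a
file is trivial to parse:

    import configparser
    import textwrap

    # Hypothetical per-context gesture configuration, made up for
    # illustration; the real .conf format may differ.
    EXAMPLE_CONF = textwrap.dedent("""\
        [context]
        name = incoming-call

        [gestures]
        ; gesture name = action to trigger
        shake  = reject-call
        flip   = silence-ringer
        circle = answer-call""")

    config = configparser.ConfigParser()
    config.read_string(EXAMPLE_CONF)

    context = config["context"]["name"]
    gesture_map = dict(config["gestures"])   # e.g. {'shake': 'reject-call', ...}
    print(context, gesture_map)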

Sorry for the late response; my graduation party was on Wednesday
(an 11-hour party), so I was wasted on Thursday and couldn't do much.

I'll upload your ideas to
http://wiki.openmoko.org/wiki/Accelerometer-based_Gestures (keep in
touch).

Thanks,
Paul


