GSOC and Accelerometer Gestures Idea

Paul-Valentin Borza paulvalentin at borza.ro
Wed Mar 19 20:49:59 CET 2008


Hi,

My name is Paul-Valentin Borza (http://www.borza.ro) and I'm working on my Bachelor of Computer Science thesis – Motion Gestures: An Approach with Continuous Density Hidden Markov Models. I've designed and implemented continuous density hidden Markov models in C++ for the data measured by a 3-axis ±3g accelerometer (the Nintendo Wii Remote over Bluetooth).

There are several alternative approaches to recognizing motion (accelerometer) gestures:

Dynamic Time Warping and

Hidden Markov Models

I've tried them both; hidden Markov models win the race and provide much better results. I don't expect everyone to be familiar with probability and statistics, but one can simply see a hidden Markov model as a collection of states connected by transitions. Each state is characterized by two sets of probabilities: transition probabilities, and either a discrete output probability distribution (used in HMMs) or a continuous output probability density function (used in CDHMMs) which, given the state, defines the conditional probability of emitting each output symbol from a finite alphabet or a continuous random vector.
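
To make that concrete, here is a minimal sketch of the data such a model carries, assuming one Gaussian emission density with diagonal covariance per state and 3-D acceleration observations; all names are illustrative, not taken from my actual thesis code:

```cpp
#include <array>
#include <cmath>
#include <vector>

// Sketch of a continuous density HMM, assuming one Gaussian emission
// density with diagonal covariance per state and 3-D acceleration
// observations (x, y, z). All names are illustrative.
struct GaussianState {
    std::array<double, 3> mean;      // mean acceleration per axis
    std::array<double, 3> variance;  // diagonal covariance per axis

    // Density of emitting observation o while in this state.
    double density(const std::array<double, 3>& o) const {
        const double kPi = 3.14159265358979323846;
        double p = 1.0;
        for (int d = 0; d < 3; ++d) {
            const double diff = o[d] - mean[d];
            p *= std::exp(-diff * diff / (2.0 * variance[d]))
                 / std::sqrt(2.0 * kPi * variance[d]);
        }
        return p;
    }
};

struct CDHMM {
    std::vector<double> initial;                  // initial state probabilities
    std::vector<std::vector<double>> transition;  // transition[i][j] = P(j | i)
    std::vector<GaussianState> states;            // one emission density per state
};
```

A discrete HMM would replace the Gaussian density with a lookup into an emission probability table over a finite alphabet.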

There are several key algorithms used with these models:

Forward-Backward (computes the probability of an observation sequence – the XYZ values)

Viterbi (finds the most likely state sequence that explains the observation sequence)

Viterbi Beam Search (improved Viterbi – runs faster)

Baum-Welch (trains a continuous density hidden Markov model)
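
As a concrete illustration of the kind of recursion these algorithms share, here is a sketch of Viterbi for the discrete-output case, in the log domain to avoid underflow; the continuous case replaces the emission table lookup with a density evaluation, and the matrix layouts here are illustrative assumptions, not my thesis code:

```cpp
#include <cmath>
#include <limits>
#include <vector>

// Sketch of the Viterbi algorithm for a discrete-output HMM, in the log
// domain. Returns the most likely state sequence for the observations.
// Illustrative layouts:
//   initial[i]       = P(state i at t = 0)
//   transition[i][j] = P(state j | state i)
//   emission[i][k]   = P(symbol k | state i)
std::vector<int> viterbi(const std::vector<double>& initial,
                         const std::vector<std::vector<double>>& transition,
                         const std::vector<std::vector<double>>& emission,
                         const std::vector<int>& observations) {
    const int N = initial.size();
    const int T = observations.size();
    const double NEG_INF = -std::numeric_limits<double>::infinity();

    std::vector<std::vector<double>> delta(T, std::vector<double>(N, NEG_INF));
    std::vector<std::vector<int>> psi(T, std::vector<int>(N, 0));

    // Initialization: best log score of being in state i at time 0.
    for (int i = 0; i < N; ++i)
        delta[0][i] = std::log(initial[i]) + std::log(emission[i][observations[0]]);

    // Recursion: extend the best partial path into each state j.
    for (int t = 1; t < T; ++t)
        for (int j = 0; j < N; ++j)
            for (int i = 0; i < N; ++i) {
                double score = delta[t - 1][i] + std::log(transition[i][j])
                             + std::log(emission[j][observations[t]]);
                if (score > delta[t][j]) { delta[t][j] = score; psi[t][j] = i; }
            }

    // Backtrack from the best final state.
    std::vector<int> path(T);
    int best = 0;
    for (int i = 1; i < N; ++i)
        if (delta[T - 1][i] > delta[T - 1][best]) best = i;
    path[T - 1] = best;
    for (int t = T - 1; t > 0; --t) path[t - 1] = psi[t][path[t]];
    return path;
}
```

Viterbi Beam Search adds one step to the recursion: at each time t it prunes states whose delta falls more than a beam width below the current best, so only promising paths are extended.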

Once the models are created and trained, there are three types of recognitions:

Isolated recognition (the user presses a button, makes a gesture, releases the button, and the gesture is recognized) – Viterbi Beam Search

Connected recognition (the user presses a button, makes several gestures, releases the button, and those gestures are recognized) – the 2-Level algorithm

Online recognition (the user just makes a gesture; the accelerometer is monitored constantly and gestures are recognized on the fly) – this is the one that should be used on mobile devices
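
A rough sketch of how the online case could be wired up. The scoring function here stands in for the Forward (or Viterbi beam) score of each CDHMM, and the window size, threshold handling, and all names are illustrative assumptions, not a description of my implementation:

```cpp
#include <array>
#include <deque>
#include <functional>
#include <string>
#include <vector>

// Sketch of online recognition: the accelerometer stream is scored
// against each gesture model over a sliding window, and a gesture is
// reported when its score beats a per-model threshold.
struct GestureModel {
    std::string name;
    double threshold;  // illustrative; would be tuned on held-out recordings
    // Placeholder for the CDHMM Forward / Viterbi beam score of the window.
    std::function<double(const std::deque<std::array<double, 3>>&)> logLikelihood;
};

class OnlineRecognizer {
  public:
    OnlineRecognizer(std::vector<GestureModel> models, size_t windowSize)
        : models_(std::move(models)), windowSize_(windowSize) {}

    // Called for every new accelerometer sample; returns the name of the
    // recognized gesture, or an empty string if none fired.
    std::string onSample(const std::array<double, 3>& sample) {
        window_.push_back(sample);
        if (window_.size() > windowSize_) window_.pop_front();
        if (window_.size() < windowSize_) return "";

        const GestureModel* best = nullptr;
        double bestScore = 0.0;
        for (const auto& m : models_) {
            double score = m.logLikelihood(window_);
            if (score > m.threshold && (!best || score > bestScore)) {
                best = &m;
                bestScore = score;
            }
        }
        if (best) window_.clear();  // avoid re-reporting the same gesture
        return best ? best->name : "";
    }

  private:
    std::vector<GestureModel> models_;
    size_t windowSize_;
    std::deque<std::array<double, 3>> window_;
};
```

The isolated and connected cases are simpler: the button press and release delimit the observation sequence, so no sliding window is needed.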

I’ll stop here with the theory. If someone needs further clarification, please ask.

I believe I have the required skills to build the Accelerometer Gestures Idea for the Google Summer of Code.

Gestures will be an innovation in mobile phones. Just imagine the scenario where your phone is on the table and it's ringing... You pick up the phone, see who's calling, and bring it to your ear to talk (the phone answers the call automatically). The answer gesture is exactly this: bringing the phone to your ear. Remember that this is an action you already perform without thinking. Pressing the green answer button will no longer be needed.

It's almost like the phone is reading your mind!

Plus, users can create their own custom gestures.

I already have a working solution for the Nintendo Wii Remote, as described earlier, and it should be easy to port to OpenMoko.

What do you think?

Thanks



More information about the openmoko-devel mailing list