GSoC Accelerometer-based Gestures Update
Alexey Feldgendler
alexey at feldgendler.ru
Wed Jun 4 11:13:15 CEST 2008
On Wed, 04 Jun 2008 08:51:54 +0200, Paul-Valentin Borza
<paulvalentin at borza.ro> wrote:
> The classifier is based on multivariate gaussian mixtures. I've
> written the discriminant functions and just finished writing the
> estimation/training functions (yes, this classifier also needs to
> be trained). The classifier is great because it can adapt/learn
> new values on the fly while in use. So, when this classifier
> classifies enough frames as gestures/motion, it passes these
> frames to the recognizer that has a unigram grammar of hidden
> Markov models.
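For reference, here is a minimal sketch of the kind of mixture discriminant described above — diagonal covariances assumed for simplicity, and all names and toy parameters are purely illustrative, not Paul's actual code:

```python
import math

def log_gauss_diag(x, mean, var):
    """Log-density of a diagonal-covariance multivariate Gaussian."""
    return sum(
        -0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
        for xi, m, v in zip(x, mean, var)
    )

def log_mixture(x, weights, means, variances):
    """Log-likelihood of x under a Gaussian mixture (log-sum-exp for stability)."""
    logs = [math.log(w) + log_gauss_diag(x, m, v)
            for w, m, v in zip(weights, means, variances)]
    mx = max(logs)
    return mx + math.log(sum(math.exp(lg - mx) for lg in logs))

def classify(x, classes):
    """Discriminant: pick the class whose mixture assigns the highest log-likelihood."""
    return max(classes, key=lambda c: log_mixture(x, *classes[c]))

# Two toy classes over 3-axis accelerometer frames (in g): "still" tightly
# centred on the gravity vector, "motion" with a much larger spread.
classes = {
    "still":  ([1.0], [(0.0, 0.0, 1.0)], [(0.01, 0.01, 0.01)]),
    "motion": ([1.0], [(0.0, 0.0, 1.0)], [(1.0, 1.0, 1.0)]),
}
print(classify((0.02, -0.01, 0.98), classes))  # near rest -> "still"
```

The on-line adaptation Paul mentions would then amount to re-estimating the weights, means and variances from newly classified frames.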
One thing that worries me is whether continuous recognition is feasible
within the Freerunner's battery-life and CPU-time constraints. Even
if the gesture recognizer manages to put the device to sleep when there is
no signal and wake up on motion, there will still be a lot of idle
processing to do while the user is walking.
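One cheap way to limit that idle processing — purely a sketch of the general idea, not a claim about Paul's design — is to gate the expensive classifier behind a variance threshold on a short window of samples, so walking-like background motion can be rejected with a few arithmetic operations:

```python
def is_active(window, threshold=0.02):
    """Return True if an accelerometer window (list of (x, y, z) in g)
    shows enough motion energy to be worth passing to the full classifier."""
    mags = [(x * x + y * y + z * z) ** 0.5 for x, y, z in window]
    mean = sum(mags) / len(mags)
    var = sum((m - mean) ** 2 for m in mags) / len(mags)
    return var > threshold

still = [(0.0, 0.01, 1.0)] * 32              # device lying on the table
shaken = [(0.0, 0.0, 1.0), (0.8, 0.2, 1.4)] * 16  # alternating jolts
print(is_active(still), is_active(shaken))   # False True
```

The threshold value here is a made-up placeholder; in practice it would have to be tuned against recordings of walking versus deliberate gestures.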
> I will have to test and train the classifier in the next two days and
> then I'll check in the code to SVN.
An interesting question: will gestures like “picking up the phone”, as
naturally done by other people, be similar enough to yours, or will the
classifier have to be trained by each specific owner? You could test that
by checking whether it recognizes the gestures as made by your family
members.
> By the way, it would be great if you guys would have ideas for
> gestures. Although creating a new gesture will be trivial, it would be
> good for me to know how many I have to create. The demo app will
> enable auto-answer when picked up from the table, auto-hang-up and
> snooze the alarm when hit. Any more ideas?
Is the recognizer able to factor out rotation of the coordinate system? In
other words, will it recognize the same gesture if the phone is turned
compared to the orientation used in training, e.g. if the snooze gesture is
done from another side?
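For what it's worth, one common trick (again only a sketch, not necessarily what the recognizer does) is to feed the models features that are invariant to rotation by construction, such as the magnitude of the acceleration vector and its change over time:

```python
def invariant_features(frames):
    """Map raw (x, y, z) accelerometer samples to rotation-invariant
    features: the vector magnitude and its first difference."""
    mags = [(x * x + y * y + z * z) ** 0.5 for x, y, z in frames]
    deltas = [b - a for a, b in zip(mags, mags[1:])]
    return mags, deltas

# The same physical gesture recorded with the phone rotated 90 degrees
# around the z-axis (x and y swapped) yields identical features.
gesture = [(0.1, 0.0, 1.0), (0.5, 0.0, 1.2), (0.0, 0.0, 1.0)]
rotated = [(0.0, 0.1, 1.0), (0.0, 0.5, 1.2), (0.0, 0.0, 1.0)]
assert invariant_features(gesture) == invariant_features(rotated)
```

The price of such features is that genuinely direction-dependent gestures (e.g. tilting left versus right) become indistinguishable, so they would only suit a subset of the gesture set.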
Some gesture ideas:
* Turning the phone face to the user (not the same as taking it to the
ear) to turn on the backlight
* Automatic portrait/landscape switching for the UI
* Turning the phone screen down to mute sound (and probably turn off the
backlight) or hold call
* Swinging in an O-shape in the air to redial
* Moving the phone in a firm gesture from one ear to the other to switch
between active and held calls
* Scrolling with firm tilts (suggested several times, should see if it's
usable)
* Dropping (suggested several times, though it's unclear how to react to
it)
* Shaking to get audio feedback (could e.g. imitate balls rolling inside
to the number of unread messages, or liquid splashing to indicate the
battery level)
* Starting driving in a car (if that's detectable -- probably has different
patterns from walking etc.) to switch to some “car mode”
* Stopping e.g. at a traffic light to choose a better time to notify about
new messages than while driving
* Taking off in a plane (should be detectable, but hard to train) to shut
down all RF systems
* Similarly, landing to re-enable RF systems
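The portrait/landscape and screen-down ideas above may not even need the full recognizer: whenever the device is roughly still, the gravity vector alone tells which side points down. A sketch, assuming one plausible axis convention (+y toward the top of the screen, +x toward the right edge, +z out of the screen — the Freerunner's actual axes may differ):

```python
def orientation(x, y, z):
    """Classify a near-still gravity reading (in g) into a screen
    orientation, under the assumed axis convention above."""
    ax, ay, az = abs(x), abs(y), abs(z)
    if az >= max(ax, ay):
        return "face-up" if z > 0 else "face-down"   # face-down could mute
    if ay >= ax:
        return "portrait" if y < 0 else "portrait-upside-down"
    return "landscape-right" if x < 0 else "landscape-left"

print(orientation(0.0, -0.98, 0.1))   # held upright -> "portrait"
print(orientation(-0.97, 0.05, 0.1))  # turned on its side -> "landscape-right"
```

Some hysteresis and a stillness check (e.g. the variance gate from earlier in this thread) would be needed so the UI doesn't flip while the phone is being waved around.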
--
Alexey Feldgendler <alexey at feldgendler.ru>
[ICQ: 115226275] http://feldgendler.livejournal.com