Stroke recognizer

Shawn Rutledge shawn.t.rutledge at gmail.com
Fri Aug 31 22:21:10 CEST 2007


On 8/31/07, Carl Worth <cworth at cworth.org> wrote:
> But of course, doing that will require finding a way to resolve
> full-screen recognition with novel UI things like the inertial
> scrolling which rely on drag events. Fun stuff to explore anyway.

I was thinking that too - there are several uses for gestures now, and
they tend to conflict.  What do you think about it?  How would you
prefer to distinguish panning from handwriting?

Personally I'm interested in shape recognition too; I want to build a
drawing/diagramming tool someday that can recognize shapes within some
kind of context (the palette of shapes in use).
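
To make that concrete, here is a rough Python sketch of what I mean by
palette-constrained recognition: resample the stroke, normalize it, and
pick whichever shape in the active palette has the closest template.
The function names and the crude distance metric are made up for
illustration, not taken from any existing toolkit.

import math

def resample(points, n=32):
    # Resample a stroke to n roughly evenly spaced points along its path.
    if len(points) < 2:
        return [points[0]] * n
    pts = list(points)
    total = sum(math.dist(pts[i - 1], pts[i]) for i in range(1, len(pts)))
    if total == 0:
        return [pts[0]] * n
    step = total / (n - 1)
    out, acc, i = [pts[0]], 0.0, 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if d > 0 and acc + d >= step:
            t = (step - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)   # q becomes the start of the next segment
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:        # rounding can leave us one point short
        out.append(pts[-1])
    return out

def normalize(points):
    # Translate to the centroid and scale to a unit bounding box, so the
    # comparison ignores where and how large the shape was drawn.
    xs, ys = zip(*points)
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    size = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    return [((x - cx) / size, (y - cy) / size) for x, y in points]

def recognize(stroke, palette):
    # 'palette' maps a shape name to a template point list; return the
    # name whose template is closest to the stroke.
    candidate = normalize(resample(stroke))
    best_name, best_score = None, float("inf")
    for name, template in palette.items():
        tmpl = normalize(resample(template))
        score = sum(math.dist(a, b) for a, b in zip(candidate, tmpl)) / len(tmpl)
        if score < best_score:
            best_name, best_score = name, score
    return best_name

So recognize(stroke, {"rectangle": rect_template, "ellipse":
ellipse_template}) can only ever answer with shapes the user actually
has on the palette, which is the kind of context I mean.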

If the phone had more buttons, we could have the user hold a button
down to get one behavior or the other (either recognition mode or
navigation mode).  But in the post-iPhone era (which we are entering)
everyone is probably going to expect panning to be the primary use of
gestures.

Maybe when you drag in one direction it pans, but if you make quick
changes in direction, the pan reverts to the view you had when the
stroke began and the stroke goes to the recognizer instead.  Or you
have to drag some distance before panning can begin.  But either of
these methods might still feel kind of awkward, not responsive enough.
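
Just to sketch that heuristic (Python again, with invented names and
thresholds): a stroke starts out undecided, becomes a pan once it has
travelled far enough in a reasonably steady direction, and switches to
recognition mode, reverting any pan, as soon as the heading flips
sharply.

import math

PAN_DISTANCE = 20.0                  # pixels of travel before panning begins
TURN_THRESHOLD = math.radians(100)   # heading change that counts as a sharp turn

class StrokeClassifier:
    def __init__(self):
        self.points = []
        self.mode = "undecided"      # "undecided" | "pan" | "recognize"

    def add_point(self, x, y):
        self.points.append((x, y))
        if len(self.points) < 3 or self.mode == "recognize":
            return self.mode
        # Heading change between the last two segments of the stroke.
        (x0, y0), (x1, y1), (x2, y2) = self.points[-3:]
        h1 = math.atan2(y1 - y0, x1 - x0)
        h2 = math.atan2(y2 - y1, x2 - x1)
        turn = abs(math.atan2(math.sin(h2 - h1), math.cos(h2 - h1)))
        if turn > TURN_THRESHOLD:
            # Quick reversal: revert any pan and hand the stroke to the
            # recognizer from here on.
            self.mode = "recognize"
        elif self.mode == "undecided":
            if math.dist(self.points[0], self.points[-1]) > PAN_DISTANCE:
                self.mode = "pan"
        return self.mode

The view would only commit the pan once the mode settles on "pan", so
an early reversal can still snap back to the viewport you had when the
stroke began.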

Or handwriting/keyboarding has to be done in a dedicated area.  But
for shape recognition I think that is not acceptable; I want to do it
right on the drawing canvas.


