Stroke recognizer

Shawn Rutledge shawn.t.rutledge at gmail.com
Sat Sep 1 03:36:44 CEST 2007


On 8/31/07, Carl Worth <cworth at cworth.org> wrote:
>     Often there's no active widget ready to accept text, so gestures
>     for panning/etc. are often available without ambiguity, (note how
>     the on-screen keyboard is only rarely made visible for example).

That's a good point.  When you are filling out a form, it's true: you
can drag any area that is not part of a text field to scroll.  In any
kind of editor that takes up the entire window, though, there is still
ambiguity.  I guess you could require every editor to be modal like vi
- you are in navigation mode by default, and have to do something
specific (like a click without dragging, to put the text cursor in a
specific place) to get into insertion mode, at which point the text or
shape recognition works.
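
Here is a minimal sketch of what that modal scheme might look like,
assuming a toolkit that delivers press/move/release events with
coordinates; all of the names (ModalCanvas, pan_by, recognize_stroke,
and so on) are hypothetical rather than any real API:

    TAP_SLOP = 8  # max movement in pixels for press/release to count as a tap

    class ModalCanvas:
        def __init__(self):
            self.mode = "navigate"   # default: drags pan the view
            self.press_pos = None
            self.stroke = []

        def on_press(self, x, y):
            self.press_pos = (x, y)
            self.stroke = [(x, y)]

        def on_move(self, x, y):
            self.stroke.append((x, y))
            if self.mode == "navigate":
                # In navigation mode every drag pans; no recognition at all.
                px, py = self.stroke[-2]
                self.pan_by(x - px, y - py)

        def on_release(self, x, y):
            dx = abs(x - self.press_pos[0])
            dy = abs(y - self.press_pos[1])
            if dx <= TAP_SLOP and dy <= TAP_SLOP:
                # A click without dragging places the cursor and enters
                # insertion mode - the "something specific" described above.
                self.mode = "insert"
                self.place_cursor(x, y)
            elif self.mode == "insert":
                # In insertion mode a completed drag is a stroke to recognize.
                self.recognize_stroke(self.stroke)

        def pan_by(self, dx, dy): ...         # hypothetical: scroll the view
        def place_cursor(self, x, y): ...     # hypothetical: set the text cursor
        def recognize_stroke(self, pts): ...  # hypothetical: text/shape recognizer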

>     One thing the screen edges provide is infinitely large targets
>     that Fitts' Law says are really fast to hit. So if text input and
>     gesture-based scrolling do need to happen within the same context,
>     you could do things like require the scroll gesture to hit a
>     screen edge, for example.

That sounds limiting, but it's a good idea to use the edges and
corners effectively for menus or frequently-needed buttons.  I just
wish there were a way to disambiguate recognition and panning without
requiring the user to switch modes manually.
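
A stroke classifier along the lines you suggest only needs to test
where the stroke begins; the screen size and edge margin below are
made-up values, just to show the shape of the check:

    SCREEN_W, SCREEN_H = 480, 640
    EDGE_MARGIN = 16  # pixels; the bezel makes this an easy Fitts' Law target

    def classify_stroke_start(x, y):
        """Return 'pan' if the stroke starts at a screen edge, else 'gesture'."""
        at_edge = (x < EDGE_MARGIN or x > SCREEN_W - EDGE_MARGIN or
                   y < EDGE_MARGIN or y > SCREEN_H - EDGE_MARGIN)
        return "pan" if at_edge else "gesture"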

>     There's already talk about using tap-and-hold for
>     context-sensitive menus, but within a single diagramming program,
>     for example, you might be able to use tap-and-hold as a means to
>     request panning instead of drawing/gesturing.

I think I'd rather reserve tap-and-hold for context menus.
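
For reference, tap-and-hold is usually detected with nothing more than
a timer and a slop radius; this sketch polls from the event loop, and
the timings are guesses rather than anything standardized:

    import time

    HOLD_TIME = 0.8   # seconds the pen must stay down to count as a hold
    HOLD_SLOP = 8     # pixels of jitter tolerated while holding

    class HoldDetector:
        """Calls on_hold(x, y) once if the pen stays put long enough."""

        def __init__(self, on_hold):
            self.on_hold = on_hold
            self.t0 = None
            self.origin = None

        def press(self, x, y):
            self.t0 = time.monotonic()
            self.origin = (x, y)

        def move(self, x, y):
            # Too much movement means a drag, so cancel the pending hold.
            if self.t0 and (abs(x - self.origin[0]) > HOLD_SLOP or
                            abs(y - self.origin[1]) > HOLD_SLOP):
                self.t0 = None

        def release(self):
            self.t0 = None  # released before the timeout: just a tap

        def poll(self):
            # Call periodically from the event loop.
            if self.t0 and time.monotonic() - self.t0 >= HOLD_TIME:
                self.t0 = None  # fire only once
                self.on_hold(*self.origin)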

> But I notice that tapping in the extreme upper-left isn't activating
> the dropdown menu there right now, (I don't know if that's a hardware
> limitation of the touchscreen or just the way the software is working
> currently).

That needs to be fixed.

> > Maybe when you drag in one direction, it pans, but if you make quick
> > changes in direction, the pan state reverts to the original view that
> > you had when the stroke began, and it's in recognition mode.
>
> That might make your users quite seasick. :-)

Probably.

> Yes, I agree. I've been told that the Newton did a fairly good job of
> this years ago, (would recognize simple shapes like circles and
> squares, and used gestures like "full-screen left-to-right" to mark
> the end of one document and the beginning of another). But I've never
> had the opportunity to play with one.

I got to play with a Newton at a public demo at Comdex, but I don't
remember them pointing out that feature.  I remember that if you
"scratch out" something (e.g. a word in a sentence), it disappears in
an animated puff of smoke.  Someday I may get one on eBay just for the
ideas it inspires.

I have a Dauphin DTR-1 with Windows for Pen Computing 1.0 (which I
haven't turned on in years - it's an impractical machine in that the
stylus requires batteries), and it came with an application called
Microsoft Notebook, which can do exactly that.  It can recognize
shapes and handwriting on the same canvas, without any need to switch
"modes".  If you draw a box, you get a vector rectangle that can be
dragged around and resized; likewise for an ellipse or a straight
line.  If you write text, it turns into a text object that can be
dragged around and edited.  But that's about all that application
does.  I think there may be similar software for modern tablet PCs.
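
As a rough illustration of how that kind of mode-free recognition can
work, a completed stroke can be classified with a few geometric tests
and handed to a handwriting recognizer only when no shape fits.
Nothing below is based on the actual Microsoft Notebook code, and the
thresholds are arbitrary:

    import math

    def classify(points):
        """points: list of (x, y) samples from one completed pen stroke."""
        xs, ys = zip(*points)
        w, h = max(xs) - min(xs), max(ys) - min(ys)
        start, end = points[0], points[-1]
        closed = math.dist(start, end) < 0.2 * max(w, h, 1)

        # Total ink length; for a straight line it nearly equals the
        # distance between the endpoints.
        path = sum(math.dist(points[i], points[i + 1])
                   for i in range(len(points) - 1))
        if not closed and path < 1.1 * math.dist(start, end):
            return "line"

        if closed:
            # A box's ink length matches its bounding-box perimeter; an
            # ellipse's is roughly pi * (w + h) / 2.
            box_perimeter = 2 * (w + h)
            if abs(path - box_perimeter) < 0.15 * box_perimeter:
                return "rectangle"
            ellipse_perimeter = math.pi * (w + h) / 2
            if abs(path - ellipse_perimeter) < 0.15 * ellipse_perimeter:
                return "ellipse"

        return "handwriting"  # hand the stroke to the text recognizer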


