cworth at cworth.org
Sat Sep 1 01:28:49 CEST 2007
On Fri, 31 Aug 2007 13:21:10 -0700, "Shawn Rutledge" wrote:
> I was thinking that too - there are several uses for gestures now, and
> they tend to conflict. What do you think about it? How would you
> prefer to distinguish panning from handwriting?
I touched on several possibilities in the paper I wrote, (and
implemented some of them within xstroke). But briefly, off the top of
my head here are some ways to let both work together:
* Modes

Often there's no active widget ready to accept text, so gestures
for panning, etc. are often available without ambiguity, (note how
rarely the on-screen keyboard needs to be visible, for example).
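As a rough sketch of that idea, (all names here are mine, not from
xstroke or any real toolkit), the routing could simply hinge on whether
the focused widget accepts text:

```python
# Hypothetical routing sketch: when no focused widget accepts text, a
# stroke can safely be treated as a pan gesture; only when a text entry
# has focus does it need to go to the recognizer at all.

def route_stroke(focused_widget_accepts_text, stroke_points):
    """Decide which subsystem should consume a stroke."""
    if focused_widget_accepts_text:
        return "recognize"  # hand the points to the character recognizer
    return "pan"            # no text target, so the stroke pans the view
```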
* Screen edges
There have been several complaints about the problems of the
recessed screen in the current neo devices, but it actually has
some good uses as well. [I'm talking about stylus-based use, not
finger-based use, since otherwise I don't think character
recognition is interesting anyway.]
One thing the screen edges provide is infinitely large targets
that Fitts' Law says are really fast to hit. So if text input and
gesture-based scrolling do need to happen within the same context,
you could do things like require the scroll gesture to hit a
screen edge, for example.
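A minimal sketch of that edge test, (the band width and all the names
are my own assumptions, not from any existing toolkit):

```python
# Sketch: treat a stroke as a scroll gesture only if its first point
# lands within a narrow band along a screen edge; Fitts' Law makes such
# edge targets very fast to hit with a stylus.

EDGE_PX = 8  # width of the "hot" band along each edge (assumed value)

def starts_at_edge(x, y, screen_w, screen_h, band=EDGE_PX):
    return (x < band or y < band or
            x >= screen_w - band or y >= screen_h - band)

def classify_stroke(points, screen_w, screen_h):
    """'scroll' if the stroke begins at a screen edge, else 'ink'."""
    x0, y0 = points[0]
    return "scroll" if starts_at_edge(x0, y0, screen_w, screen_h) else "ink"
```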
* Tap-and-hold

There's already talk about using tap-and-hold for
context-sensitive menus, but within a single diagramming program,
for example, you might be able to use tap-and-hold as a means to
request panning instead of drawing/gesturing.
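One way to sketch that, (the thresholds here are guesses of mine, not
taken from any spec): start a timer on stylus-down, and if the point
hasn't drifted beyond a small slop when the timer fires, switch the
tool into panning:

```python
# Tap-and-hold sketch; HOLD_MS and SLOP_PX are assumed values. A press
# that stays put past the hold threshold switches into panning mode;
# anything else remains ink/gesture input.

HOLD_MS = 500   # how long the stylus must stay down before panning starts
SLOP_PX = 6     # how far it may drift while still counting as "held"

def is_hold(press_ms, now_ms, x0, y0, x, y):
    """True once a press has lasted HOLD_MS without leaving the slop box."""
    still = abs(x - x0) <= SLOP_PX and abs(y - y0) <= SLOP_PX
    return still and (now_ms - press_ms) >= HOLD_MS
```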
> If the phone had more buttons we could have the user hold a button
> down, to get one behavior or the other (either recognition mode or
> navigation mode).
Fitts' Law also says that the corners would be ideal places to put
some frequently used buttons/toggles, (again I'm focusing on stylus
input, not finger input). If there were no careful aiming required,
this could be pretty darn fast.
But I notice that tapping in the extreme upper-left isn't activating
the dropdown menu there right now, (I don't know if that's a hardware
limitation of the touchscreen or just the way the software is working).
> Maybe when you drag in one direction, it pans, but if you make quick
> changes in direction, the pan state reverts to the original view that
> you had when the stroke began, and it's in recognition mode.
That might make your users quite seasick. :-)
> Or handwriting/keyboarding has to be done in a dedicated area. But
> for shape recognition I think that is not acceptable; I want to do it
> right on the drawing canvas.
Yes, I agree. I've been told that the Newton did a fairly good job of
this years ago, (would recognize simple shapes like circles and
squares, and used gestures like "full-screen left-to-right" to mark
the end of one document and the beginning of another). But I've never
had the opportunity to play with one.