Once you've decided that a gestural interface is appropriate for your users and your environment, you need to match the appropriate gestures to the tasks and goals users want to accomplish. This requires weighing three things: the available sensors and related input devices, the steps in the task, and the physiology of the human body. Sensors determine what the system can detect and how. The steps in the task show what actions have to be performed and what decisions have to be made. The human body constrains which gestures can physically be done (see Chapter 2).
For most touchscreens, this can be a very straightforward equation. There is one sensor/input device (the touchscreen), and the tasks (check in, buy an item, find a location, get information) are usually simple. The touchscreen needs to be accessible to and usable by a wide variety of people of all ages. Thus, simple gestures such as pushing buttons are appropriate.
Note
The complexity of the gesture should match the complexity of the task at hand.
That is, simple, basic tasks should have equally simple, basic gestures, such as taps, swipes, and waves, to trigger or complete them. More complicated tasks may have more complicated gestures.
Take, for example, turning on a light. If you just want to turn on a light in a room, a wave or swipe on a wall should be sufficient for this simple behavior. Dimming the light (a slightly more complex action), however, may require a bit more nuance, such as holding your hand up and slowly lowering it. Dimming all the lights in the house at once (a sophisticated action) may require a combination gesture or a series of gestures, such as clapping your hands three times, then lowering your arm. Because it is likely seldom done and is conceptually complicated, it can have an equally complex associated movement.
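To make this principle concrete, here is a minimal Python sketch of how a system might route recognized gestures to lighting commands of matching complexity. Everything in it is hypothetical: the GestureEvent and Lights classes and the gesture names stand in for whatever recognizer and home-automation API a real system would provide.

from dataclasses import dataclass

@dataclass
class GestureEvent:
    # All gesture names and fields here are illustrative, not from any real API.
    name: str                 # e.g., "wave", "hand_lower", "triple_clap_then_lower"
    room: str = ""
    hand_height: float = 1.0  # 0.0 = hand fully lowered, 1.0 = fully raised

class Lights:
    def toggle_room(self, room: str) -> None:
        print(f"Toggling lights in {room}")

    def dim_room(self, room: str, level: float) -> None:
        print(f"Dimming {room} to {level:.0%}")

    def dim_all(self, level: float) -> None:
        print(f"Dimming the whole house to {level:.0%}")

def handle_gesture(event: GestureEvent, lights: Lights) -> None:
    # Simple gestures map to simple tasks; the compound gesture is reserved
    # for the rare, sophisticated task of dimming every light at once.
    if event.name == "wave":
        lights.toggle_room(event.room)
    elif event.name == "hand_lower":
        lights.dim_room(event.room, level=event.hand_height)
    elif event.name == "triple_clap_then_lower":
        lights.dim_all(level=event.hand_height)

handle_gesture(GestureEvent("wave", room="kitchen"), Lights())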
This is not to say that all complex behaviors need to or should have accompanying complex gestures (quite the opposite, in fact), only that simple actions should not require complex gestures to initiate them. The best interactive gestures are those that take the complex and make it simple and elegant.
One way to do this, especially with touchscreen devices, is to make all the features accessible with simple gestures such as taps (via a menu system, say), and then to provide alternative gestures that are more sophisticated (but faster) for more advanced users. In this way, an interactive gesture can act as a shortcut to features in much the same way as a key command works on desktop systems. Of course, communicating this advanced gesture then becomes an issue to address (see Chapter 7).
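To illustrate the pattern, here is a small Python sketch of a command registry in which every feature is reachable by taps through a menu and may also carry an advanced gesture shortcut that invokes the same command directly. The class, method, and gesture names are assumptions made for this example, not any particular toolkit's API.

from typing import Callable, Dict, Optional

class CommandRegistry:
    def __init__(self) -> None:
        self._by_menu_path: Dict[str, Callable[[], None]] = {}
        self._by_gesture: Dict[str, Callable[[], None]] = {}

    def register(self, menu_path: str, action: Callable[[], None],
                 shortcut_gesture: Optional[str] = None) -> None:
        # The menu path is the baseline route: every feature stays reachable by taps.
        self._by_menu_path[menu_path] = action
        # The gesture shortcut is optional, like a key command on the desktop.
        if shortcut_gesture:
            self._by_gesture[shortcut_gesture] = action

    def tap(self, menu_path: str) -> None:
        self._by_menu_path[menu_path]()

    def gesture(self, name: str) -> None:
        self._by_gesture[name]()

registry = CommandRegistry()
registry.register("Edit > Undo", lambda: print("undo"),
                  shortcut_gesture="three_finger_swipe_left")

registry.tap("Edit > Undo")                  # novice route: taps through the menu
registry.gesture("three_finger_swipe_left")  # expert route: same command, faster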
Rather than having the designer determine the gestures of the system, another method is to employ the knowledge and intuition of those who will use it. You can ask users to match each feature to the gesture they would like to perform to invoke it. Asking several users should begin to reveal patterns matching gestures to features. The reverse of this is to demonstrate a gesture and see what feature users expect it to trigger.
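If you record each participant's proposal, a few lines of Python are enough to tally where users converge. The data and feature names below are invented for illustration; formal elicitation studies also compute agreement scores, which this sketch does not attempt.

from collections import Counter
from typing import Dict

# (feature, gesture the participant proposed for it) -- invented sample data
proposals = [
    ("next photo", "swipe left"),
    ("next photo", "swipe left"),
    ("next photo", "tap right edge"),
    ("zoom in", "pinch out"),
    ("zoom in", "pinch out"),
    ("zoom in", "double tap"),
]

by_feature: Dict[str, Counter] = {}
for feature, gesture in proposals:
    by_feature.setdefault(feature, Counter())[gesture] += 1

# Report the most commonly proposed gesture for each feature.
for feature, counts in by_feature.items():
    gesture, votes = counts.most_common(1)[0]
    total = sum(counts.values())
    print(f"{feature}: '{gesture}' proposed by {votes} of {total} participants")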
Japanese product designer Naoto Fukasawa has observed that the best designs are those that "dissolve in behavior,"[20] meaning that the products themselves disappear into whatever the user is doing. Using the product to accomplish what you want becomes seemingly effortless, a nearly subconscious act (although certainly not effortless for those creating such a frictionless system; intuitive, natural designs require significant effort). This is the promise of interactive gestures in general: that we'll be able to empower the gestures we already make and give them further influence and meaning.
Adam Greenfield, author of Everyware, talked about this type of natural interaction in an interview:[21]
"We see this, for example, in Hong Kong where women leave their RFID-based Octopus cards in their handbags and simply swing their bags across the readers as they move through the turnstiles. There's a very sophisticated transaction between card and reader there, but it takes 0.2 seconds, and it's been subsumed entirely into this very casual, natural, even jaunty gesture.
"But that wasn't designed. It just emerged; people figured out how to do that by themselves, without some designer having to instruct them in the nuances...The more we can accommodate and not impose, the more successful our designs will be."
The best, most natural designs, then, are those that match the behavior of the system to a gesture humans might already make to enable that behavior. Simple examples include pushing a button to turn something on or off, turning to the left to make your on-screen avatar turn to the left, putting your hands under a sink to turn the water on, and passing through a dark hallway to illuminate it.
The design dissolves into the behavior.
In the next chapter, we'll look at an important piece of the equation when designing gestural interfaces: the human body.
[20] See, for instance, Dwell magazine's interview with Fukasawa, "Without a Trace," by Jane Szita, September 2006, which you can find at http://www.dwell.com/peopleplaces/profiles/3920931.html.
[21] Designing for Interaction by Dan Saffer, p. 217.