Chapter 3. Taking Control of Gesture Interaction

GERSHOM KUTLIROFF AND YARON YANAI

Reinventing the User Experience

For those of us old enough to remember a world before iPods, the computer we used at age 15 looked very much like the computer we used at 35. There was a (generally boxy) console, a monitor for display, and a keyboard and mouse for input. Now, it seems we have a new class of devices every other year: smartphones, tablets, Google Glass, and now smartwatches (not to mention “phablets,” “two-in-ones,” and the various other hybrids). Many factors are driving this rapid introduction of new products, among them cheap and plentiful processing, new display technologies, and more efficient batteries.

One commonality shared by all of these devices is the critical role user interaction plays in their design. Indeed, the size of a portable device today is largely limited by input/output considerations (the screen and keyboard) and no longer by the requirements of its internal components. As devices are further integrated into our daily activities (think “wearables”), the importance of reinventing the way we communicate with them only increases.

Gesture control is an intriguing solution to this problem because it promises to enable our devices to understand us the way other people understand us. When we want to indicate an object (virtual or real), we point at it; when we want to move something, we pick it up. We don’t want ...
