Touch has a memory.
Put a young child in front of a computer and see her reach for the screen to grab moving pixels. Touching an element to impact it is the most natural form of input.
Over the past few years, there has been an explosion of innovation built on the latest research in touch and gesture technology, including everything from touch-based computers such as the iPhone and Microsoft Surface, to tangible user interfaces such as the Reactable, to gesture-based motion tracking as in Microsoft’s Kinect.
The keyboard and mouse we have grown accustomed to using to communicate with our digital tools may now be becoming outdated. The human hand, and even the entire human body, may be the interaction method of the future.
This is what we will cover in this chapter.
The first multitouch system designed for human input was developed in 1982 by Nimish Mehta at the University of Toronto. Bell Labs, followed by other research labs, soon picked up on Mehta’s idea. Apple’s 2007 launch of the iPhone popularized a new form of user interaction, and it remains the point of reference for multitouch experiences and gestures today.
More recently, Microsoft launched Windows 7, Adobe added multitouch and gesture capability to Flash Player and AIR, and a range of smartphones, tablets, and laptops that include multitouch sensing capability have become available or are just entering the market. Common devices such as ATMs, DVD rental kiosks, ...