Prior to iPhone OS 3.2, developers had to write their own code to detect touch gestures in their applications. For instance, the built-in Photos application lets the user zoom into and out of a photo by "pinching" it with two fingers. The SDK offered no concrete, reusable class for detecting such gestures, so developers had to build their own gesture recognizers. Starting with iOS 3.2, the most common gesture-detection code is encapsulated in reusable classes built into the SDK. These classes can detect swipe, pinch, pan (drag), tap, long-press, and rotation gestures.
Gesture recognizers must be added to instances of the UIView class, and a single view can have more than one gesture recognizer. Once a view catches a gesture, that view is responsible for passing the same gesture down to other views in the hierarchy if needed.
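As a concrete illustration, the following Objective-C sketch attaches two recognizers to one view. It assumes a view controller whose view is already loaded; the method name `handleTap:` is our own choice for the action selector, not something the SDK mandates, and the manual `release` calls reflect the pre-ARC memory management typical of the iOS 3.2 era.

```objc
- (void)viewDidLoad {
    [super viewDidLoad];

    // First recognizer: detect a double tap on the view.
    UITapGestureRecognizer *tap =
        [[UITapGestureRecognizer alloc] initWithTarget:self
                                                action:@selector(handleTap:)];
    tap.numberOfTapsRequired = 2;
    [self.view addGestureRecognizer:tap];
    [tap release];

    // Second recognizer on the same view: a single view can hold several.
    UIPinchGestureRecognizer *pinch =
        [[UIPinchGestureRecognizer alloc] initWithTarget:self
                                                  action:@selector(handlePinch:)];
    [self.view addGestureRecognizer:pinch];
    [pinch release];
}

- (void)handleTap:(UITapGestureRecognizer *)recognizer {
    // Invoked when the recognizer detects a double tap.
    NSLog(@"Double tap at %@",
          NSStringFromCGPoint([recognizer locationInView:self.view]));
}

- (void)handlePinch:(UIPinchGestureRecognizer *)recognizer {
    // The recognizer reports a running scale factor during the pinch.
    NSLog(@"Pinch scale: %f", recognizer.scale);
}
```

Note that the target/action pair supplied at initialization is how the recognizer reports back: once the gesture is recognized, the SDK invokes the action method on the target, passing the recognizer itself so the handler can query its state.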
Some touch events required by an application can be complicated to process, and the same event may need to be detectable in several views of the same application. This creates the need for reusable gesture recognizers. There are six gesture recognizers in iOS SDK 3.2 and later: UITapGestureRecognizer, UIPinchGestureRecognizer, UIRotationGestureRecognizer, UISwipeGestureRecognizer, UIPanGestureRecognizer, and UILongPressGestureRecognizer.
The basic framework for handling a gesture through a built-in gesture recognizer is as follows:
Instantiate the gesture-recognizer class that matches the gesture you want to detect.
Add this object as a gesture ...