If your app is going to use a more sophisticated way of producing sound, such as an audio player (discussed in the next section), it must specify a policy regarding that sound. This policy will answer such questions as: Should sound stop when the screen is locked? Should sound interrupt existing sound (being played, for example, by the iPod/Music app) or should it be layered on top of it?
Your policy is declared in an audio session, which is a singleton AVAudioSession instance created automatically as your app launches. You can configure this AVAudioSession instance once at launch time (or, at any rate, before producing any sound), or you can change its configuration dynamically while your app runs. You can talk to the AVAudioSession instance in Objective-C (see the AVAudioSession class reference) or in C (see the Audio Session Services reference), or both.
To use the Objective-C API, you’ll need to link to AVFoundation.framework and import <AVFoundation/AVFoundation.h>. You’ll refer to your app’s AVAudioSession by way of the class method sharedInstance.
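As a minimal sketch of the Objective-C approach (error handling abbreviated, and assuming you want the media-playback policy), you might configure the session once, early in your app’s lifetime:

```
#import <AVFoundation/AVFoundation.h>

// Obtain the app's singleton audio session.
AVAudioSession *session = [AVAudioSession sharedInstance];
NSError *err = nil;
// Declare a playback policy; in a real app, check err after each call.
[session setCategory:AVAudioSessionCategoryPlayback error:&err];
// Put the policy into effect.
[session setActive:YES error:&err];
```

A natural place for this configuration is application:didFinishLaunchingWithOptions: in your app delegate, since the policy should be declared before any sound is produced.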
To use the C API, you’ll need to link to AudioToolbox.framework and import <AudioToolbox/AudioToolbox.h>. The AudioSession... functions don’t require a reference to an audio session. You must explicitly initialize your audio session by calling AudioSessionInitialize before talking to it with the C API, unless you have already talked to it with the Objective-C API.
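The equivalent configuration through the C API might look like this sketch (return codes ignored for brevity; a real app should check each OSStatus result). Note that the category is set as a property rather than through a method call:

```
#include <AudioToolbox/AudioToolbox.h>

// Initialize the session before making any other AudioSession... calls
// (unnecessary if you've already talked to the session via Objective-C).
AudioSessionInitialize(NULL, NULL, NULL, NULL);
// Declare the media-playback policy by setting the category property.
UInt32 category = kAudioSessionCategory_MediaPlayback;
AudioSessionSetProperty(kAudioSessionProperty_AudioCategory,
                        sizeof(category), &category);
// Put the policy into effect.
AudioSessionSetActive(true);
```

The two APIs affect the same underlying session, so you can mix them; the C functions are simply a procedural view of the same singleton.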
The basic policies for audio playback are: