Just looked at the iPhone 2.2 diffs, and the most significant addition is a new class,
AVAudioPlayer. Not many people seem to be messing with it yet; a Google search turns up only 46 hits.
It looks like a huge win for a middle range of audio apps. Setting aside OpenAL, which is really for playing spatialized sound (typically in games), here’s what your options were in iPhone 2.0-2.1:
- System Sound Services – C – short (5 sec or less) clips loaded into memory, played at system alert volume, fire-and-forget (no ability to stop, meter, etc.)
- Audio Queue Services – C – determine the format you’re reading, create a stream description, set up an audio queue whose buffers repeatedly call you back for audio data, and on each callback fetch an appropriate number of packets and enqueue them.
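For the record, the fire-and-forget case really is just a couple of calls into Audio Toolbox’s System Sound Services. A minimal sketch (the “beep.caf” resource name is a made-up example):

```c
#include <AudioToolbox/AudioToolbox.h>
#include <CoreFoundation/CoreFoundation.h>

// Load a short clip into memory and play it, fire-and-forget.
// "beep.caf" is a hypothetical resource in the app bundle.
static void playBeep(void) {
    CFURLRef soundURL = CFBundleCopyResourceURL(CFBundleGetMainBundle(),
                                                CFSTR("beep"), CFSTR("caf"),
                                                NULL);
    SystemSoundID soundID;
    AudioServicesCreateSystemSoundID(soundURL, &soundID);
    AudioServicesPlaySystemSound(soundID);
    CFRelease(soundURL);
    // No stop, no metering, no volume control -- that's all you get.
}
```

Easy, but that’s the entirety of the API’s flexibility.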
It might occur to you that the first option is really easy and the second quite hard (if you’re not convinced, read the AudioQueueTest sample code, or the “Playing Audio” section of the Audio Queue Services Programming Guide), and, more importantly, that there’s a huge gulf between the two. On the Mac, this is alleviated by the presence of QuickTime and QTKit: if you really just want to play and control a sound file, open it as a QuickTime movie. Only in more advanced cases, like setting up a graph of Core Audio effects, do you need to go low-level.
But of course, there’s no QuickTime on the iPhone, so
AVAudioPlayer fills this gap. The design is highly typical of Cocoa classes: allocate one, init it with a URL or a block of data, call
play and
stop, inspect properties like
currentTime, etc. You can provide a delegate to get notified of interruptions, errors, or the end of the audio, and it even provides some simple methods for level metering.
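Judging from the headers, basic use looks something like this. An untested sketch, assuming a controller class that adopts AVAudioPlayerDelegate; the “song.m4a” resource name and the method names startPlayback and pollMeters are made up for illustration:

```objc
#import <AVFoundation/AVFoundation.h>

// Kick off playback from a bundled file.
- (void)startPlayback {
    NSURL *fileURL = [NSURL fileURLWithPath:
        [[NSBundle mainBundle] pathForResource:@"song" ofType:@"m4a"]];
    NSError *error = nil;
    player = [[AVAudioPlayer alloc] initWithContentsOfURL:fileURL
                                                    error:&error];
    player.delegate = self;        // interruptions, decode errors, end-of-audio
    player.meteringEnabled = YES;  // turn on the simple level metering
    [player play];
}

// Called from a periodic timer to read the levels.
- (void)pollMeters {
    [player updateMeters];
    float level = [player averagePowerForChannel:0];
    NSLog(@"channel 0: %f dB", level);
}

// Delegate callback when the audio finishes.
- (void)audioPlayerDidFinishPlaying:(AVAudioPlayer *)p
                       successfully:(BOOL)flag {
    NSLog(@"playback finished");
}
```

Compare that to the hundreds of lines an equivalent audio queue setup takes, and the appeal is obvious.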
This is probably a pretty good 80-20 kind of API. A lot of people who would have had to drop down to the audio queue level are now off the hook, while the hardcore can still go low-level if they want to apply effects, parse the stream as it’s read in, etc.
Shoutcast-style web radio clients will presumably still have to use Audio Queue Services; I suspect that
initWithContentsOfURL:error: is so named as a hint that the URL must be readable in full to parse its format and contents, and web radio streams by definition have no end. The same assumption tripped up a lot of people on the old Java Media Framework, whose design expected a file type to be parsable from a URL; that’s not the case with something like
http://126.96.36.199:8000/. Instead, you have to start reading and parsing the stream: with Audio File Stream Services, you throw blocks of data at Core Audio and get property callbacks as it discovers the format of the stream, culminating in a “ready to produce packets” property once it has figured out enough. I was flailing on this stuff about nine months ago and set it aside to write the book… I would like to finally get it working when I get some time. Besides, working at the low level has its upsides: if you’re reading the raw bytes off the network, you can handle the Shoutcast metadata tags too.
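The Audio File Stream Services dance looks roughly like this. A sketch only: the packets callback body and the audio queue wiring are elided, and the MP3 type hint is just an example:

```c
#include <AudioToolbox/AudioToolbox.h>

// Property-listener callback: Core Audio calls this as it discovers
// things about the stream it is parsing.
static void MyPropertyProc(void *clientData, AudioFileStreamID stream,
                           AudioFileStreamPropertyID propertyID,
                           UInt32 *ioFlags) {
    if (propertyID == kAudioFileStreamProperty_ReadyToProducePackets) {
        // Format is now known: safe to create the audio queue and
        // start enqueuing the packets the packets callback delivers.
    }
}

// Called as parsed packets become available (body elided).
static void MyPacketsProc(void *clientData, UInt32 numBytes,
                          UInt32 numPackets, const void *inputData,
                          AudioStreamPacketDescription *packetDescriptions) {
}

// Open the parser, then feed it raw bytes as they come off the socket:
//   AudioFileStreamID stream;
//   AudioFileStreamOpen(NULL, MyPropertyProc, MyPacketsProc,
//                       kAudioFileMP3Type, &stream);
//   ...for each network read:
//   AudioFileStreamParseBytes(stream, bytesRead, buffer, 0);
```

The point being: nothing here ever needs to know where the stream ends, which is exactly why it works for web radio and a whole-file parser doesn’t.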
Anyways, this is a lot of blather for not having even tried out
AVAudioPlayer yet. I’ll try to bang on it sometime before writing the audio chapter, which it will obviously be a part of.