I’ve done something like this before, when I completed parts 1 and 2 of the audio series. I just sent off the first draft of part 3, and I’ve got OpenAL on the brain.
Basics: create a device with alcOpenDevice(NULL); (the iPhone has only one AL device, so you don’t bother providing a device name), then create a context with alcCreateContext(alDevice, 0);, and make it current with alcMakeContextCurrent(alContext);.
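Put together, the setup looks something like this — a minimal sketch, with error checking omitted and the variable names (alDevice, alContext) being my own:

```c
#include <OpenAL/al.h>
#include <OpenAL/alc.h>

ALCdevice  *alDevice  = NULL;
ALCcontext *alContext = NULL;

void setUpAL(void) {
    alDevice  = alcOpenDevice(NULL);           // NULL: the one-and-only iPhone device
    alContext = alcCreateContext(alDevice, 0); // 0: no attribute list
    alcMakeContextCurrent(alContext);          // implicitly creates the listener
}
```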
Creating a context implicitly creates a “listener”. You create “sources” and “buffers” yourself. Sources are the things your listener hears, buffers provide data to 0 or more sources.
Nearly all AL calls set an error flag, which you collect (and clear) with alGetError(). Do so. I just used a convenience method to collect the error, compare it to AL_NO_ERROR, and throw an NSException if they’re not equal.
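A plain-C variant of that convenience check might look like this (my original threw an NSException from Objective-C; this sketch just logs and aborts instead):

```c
#include <stdio.h>
#include <stdlib.h>
#include <OpenAL/al.h>

// Call after every AL call. alGetError() returns (and clears) the error flag;
// anything other than AL_NO_ERROR means the preceding call failed.
static void checkALError(const char *operation) {
    ALenum err = alGetError();
    if (err != AL_NO_ERROR) {
        fprintf(stderr, "OpenAL error 0x%04x after %s\n", err, operation);
        abort();  // the Obj-C version throws an NSException here instead
    }
}
```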
That sample AL code you found to load a file and play it with AL? Does it use alutLoadWAVFile()? Too bad; the function is deprecated, and ALUT doesn’t even exist on the iPhone. If you’re loading data from a file, use Audio File Services to load the data into memory (a CFMutableDataRef might be a good way to do it). You’ll also want to get the kAudioFilePropertyDataFormat property from the audio file, to help you provide the audio format to OpenAL.
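A sketch of that loading path, assuming a whole file small enough to read into memory in one go (loadAudioData is a hypothetical helper name; the caller owns the returned malloc’d pointer):

```c
#include <stdlib.h>
#include <AudioToolbox/AudioToolbox.h>

// Load an entire audio file into memory with Audio File Services.
void *loadAudioData(CFURLRef fileURL,
                    AudioStreamBasicDescription *outFormat,
                    UInt32 *outDataSize) {
    AudioFileID audioFile;
    AudioFileOpenURL(fileURL, kAudioFileReadPermission, 0, &audioFile);

    // The data format tells you what to hand OpenAL later
    // (sample rate, channel count, bits per channel).
    UInt32 propSize = sizeof(*outFormat);
    AudioFileGetProperty(audioFile, kAudioFilePropertyDataFormat,
                         &propSize, outFormat);

    UInt64 byteCount = 0;
    propSize = sizeof(byteCount);
    AudioFileGetProperty(audioFile, kAudioFilePropertyAudioDataByteCount,
                         &propSize, &byteCount);

    *outDataSize = (UInt32)byteCount;
    void *data = malloc(*outDataSize);
    AudioFileReadBytes(audioFile, false, 0, outDataSize, data);
    AudioFileClose(audioFile);
    return data;
}
```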
Generate buffers and sources with alGenBuffers() and alGenSources(), which are generally happier if you send them an array to populate with the ids of the created buffers/sources.
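For example (the counts here are arbitrary):

```c
#include <OpenAL/al.h>

// AL fills the arrays with the IDs of the newly created objects.
enum { kNumBuffers = 3, kNumSources = 1 };
ALuint buffers[kNumBuffers];
ALuint sources[kNumSources];

void generateALObjects(void) {
    alGenBuffers(kNumBuffers, buffers);
    alGenSources(kNumSources, sources);
}
```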
Most of the interesting stuff you do with sources, buffers, and the listener is done by setting properties. The programmer’s guide has cursory lists of valid properties for each. The getter/setter functions have a consistent naming scheme, assembled from a few parts:

- Get for getters, nothing for setters. Yes, comically, this is the opposite of Cocoa’s getter/setter naming convention.
- Source, Buffer, or Listener: the kind of AL object you’re working with.
- 3 for setters that set 3 values (typically an X/Y/Z position, velocity, etc.), nothing for single-value or vector calls.
- i or f for the type of the value(s): int or float.
- v (“vector”) if getting/setting multiple values by passing a pointer, nothing if getting/setting only one value. You never have both 3 and v.

So: alSourcei() to set a single int property, alSource3i() to set three ints, alGetFloatv() to get an array of floats (as an ALfloat*).
Most simple examples attach a single buffer to a source by setting the AL_BUFFER property on the source, with the buffer id as the value. This is fine for the simple stuff, but you might outgrow it.
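In code, the single-buffer case is one property set plus a play call (source and buffer are assumed to be IDs from alGenSources()/alGenBuffers(), with the buffer already filled via alBufferData()):

```c
#include <OpenAL/al.h>

// Attach one buffer to one source, then play it.
void playOneShot(ALuint source, ALuint buffer) {
    alSourcei(source, AL_BUFFER, buffer);  // int setter: buffer IDs are ints
    alSourcePlay(source);
}
```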
3D sounds must be mono. Place them within the context by setting the AL_POSITION property. Units are arbitrary – they could be millimeters, miles, or anything in between. What matters is the source property AL_REFERENCE_DISTANCE, which defines the distance a sound travels before its volume diminishes by half. Obviously, for games, you’ll also care about sources’ AL_DIRECTION, and possibly some of the more esoteric properties, like the sound “cone”.
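Positioning is a couple of property sets — a sketch, with the reference distance of 5.0 units being an arbitrary choice:

```c
#include <OpenAL/al.h>

// Place a (mono) source in the context's arbitrary units, and set
// the distance at which its volume starts to fall off.
void positionSource(ALuint source, float x, float y, float z) {
    alSource3f(source, AL_POSITION, x, y, z);
    alSourcef(source, AL_REFERENCE_DISTANCE, 5.0f);
}
```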
Typical AL code puts samples into a buffer with alBufferData(). This copies the data over to AL, so you can free() your data pointer once you’re done. That’s no big deal for simple examples that only ever load one buffer of data, but if you stream (like I did), it’s a lot of unnecessary and expensive memcpy()ing. Eliminate it with Apple’s standard extension alBufferDataStatic(), which skips the copy and makes AL read data directly from your pointer. Apple talks up this approach a lot, but it’s not obvious how to compile it into your code: they gave me the answer on the coreaudio-api list.
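For what it’s worth, the extension isn’t declared in the regular headers; the approach I understand Apple’s sample code to use is to declare the function-pointer type yourself and look it up at runtime:

```c
#include <OpenAL/al.h>
#include <OpenAL/alc.h>
#include <stddef.h>

// alBufferDataStatic's signature, declared as a function-pointer type.
typedef ALvoid (*alBufferDataStaticProcPtr)(const ALint bid, ALenum format,
                                            ALvoid *data, ALsizei size,
                                            ALsizei freq);

static alBufferDataStaticProcPtr alBufferDataStaticProc = NULL;

void loadStaticBufferProc(void) {
    alBufferDataStaticProc = (alBufferDataStaticProcPtr)
        alcGetProcAddress(NULL, "alBufferDataStatic");
    // After this, call alBufferDataStaticProc(buffer, format, data, size, freq).
    // AL reads from 'data' directly, so don't free it while the buffer is in use.
}
```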
To make an AL source play arbitrary data forever (e.g., a radio in a virtual world that plays a net radio station), you use a streaming approach. You queue up multiple buffers on a source with alSourceQueueBuffers(); then, once the source is playing, you repeatedly check its AL_BUFFERS_PROCESSED property to see if any buffers have been played all the way through. If so, retrieve them with alSourceUnqueueBuffers(), which receives a pointer to the IDs of one or more used buffers. Refill those buffers with new data (doing this repeatedly is where alBufferDataStatic() is going to be your big win) and queue them on the source again with alSourceQueueBuffers().
On the other hand, all you get back when you dequeue is the ID of the used buffer: you may need to provide yourself with some maps, structures, ivars, or other data to tell you how to refill it (what source you were using it on, what static buffer you were using for that AL buffer, etc.)
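One pass of that poll-and-refill loop might look like this (refillBuffer is a hypothetical helper standing in for whatever bookkeeping gets the next chunk of audio into the right AL buffer):

```c
#include <OpenAL/al.h>

extern void refillBuffer(ALuint buffer);  // your code: writes new data into it

// Reclaim played-through buffers, refill them, queue them back on the source.
void streamPass(ALuint source) {
    ALint processed = 0;
    alGetSourcei(source, AL_BUFFERS_PROCESSED, &processed);
    while (processed-- > 0) {
        ALuint buffer;
        alSourceUnqueueBuffers(source, 1, &buffer);  // get one used buffer's ID
        refillBuffer(buffer);
        alSourceQueueBuffers(source, 1, &buffer);    // back in line to play
    }
}
```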
This isn’t a pull model like Audio Queues or Audio Units: you have to poll for processed buffers. I used an NSTimer. You can use something more difficult if you like.
Start a source playing with alSourcePlay(), pause it with alSourcePause(), and stop it with alSourceStop(). To make multiple sources play/pause/stop in guaranteed sync, use the v versions of these functions, which take an array of source IDs.
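For example, to start a batch of sources on the same instant:

```c
#include <OpenAL/al.h>

// The vector version starts every source in the array in guaranteed sync;
// alSourceStopv() and alSourcePausev() work the same way.
void playAllAtOnce(const ALuint *sources, ALsizei count) {
    alSourcePlayv(count, sources);
}
```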
You’re still an iPhone audio app, so you still have to use the Audio Session API to set a category and register an interruption handler. If you get interrupted, set the current context to NULL, then make a new call to alcMakeContextCurrent() if the interruption ends (e.g., the user declines an incoming call). This only works on iPhone OS 3.0; in 2.x, it’s a bag of hurt: you have to tear down and rebuild everything for interruptions.
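A sketch of the 3.0-era interruption handling, assuming alContext is the context you created at startup and that the listener was registered via AudioSessionInitialize():

```c
#include <AudioToolbox/AudioToolbox.h>
#include <OpenAL/alc.h>

extern ALCcontext *alContext;  // assumed: the context created at startup

// Drop the AL context when the interruption begins; restore it if it ends.
static void interruptionListener(void *inClientData, UInt32 inInterruptionState) {
    if (inInterruptionState == kAudioSessionBeginInterruption) {
        alcMakeContextCurrent(NULL);
    } else if (inInterruptionState == kAudioSessionEndInterruption) {
        alcMakeContextCurrent(alContext);
    }
}

// Registered once at startup with:
// AudioSessionInitialize(NULL, NULL, interruptionListener, NULL);
```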
That’s about all I’ve got for now. Hope you enjoy the article when it comes out. I’ve had fun pushing past the audio basics and into the hard parts.