I’ve done something like this before, when I completed parts 1 and 2 of the audio series. I just sent off the first draft of part 3, and I’ve got OpenAL on the brain.
- Docs are on the OpenAL site. Go get the programmer's guide and the spec (both links are PDF).
- Basics: create a device with alcOpenDevice(NULL) (iPhone has only one AL device, so you don't bother providing a device name), then create a context with alcCreateContext(alDevice, 0), and make it current with alcMakeContextCurrent(alContext).
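  Put together, that setup is just this (a minimal sketch with error handling elided; alDevice and alContext are simply the names used above):

    #include <OpenAL/al.h>
    #include <OpenAL/alc.h>

    ALCdevice  *alDevice  = NULL;
    ALCcontext *alContext = NULL;

    // NULL asks for the default device -- the only one on the iPhone.
    alDevice = alcOpenDevice(NULL);

    // No attribute list needed; 0 is fine for the second argument.
    alContext = alcCreateContext(alDevice, 0);
    alcMakeContextCurrent(alContext);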
- Creating a context implicitly creates a “listener”. You create “sources” and “buffers” yourself. Sources are the things your listener hears; buffers provide data to zero or more sources.
- Nearly all AL calls set an error flag, which you collect (and clear) with alGetError(). Do so. I just used a convenience method to collect the error, compare it to AL_NO_ERROR, and throw an NSException if not equal.
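  In plain C, that kind of convenience check might look like this (a sketch; the NSException version is just the Objective-C equivalent):

    #include <OpenAL/al.h>
    #include <stdio.h>
    #include <stdlib.h>

    // Collect (and clear) the error flag after an AL call; die loudly if set.
    static void checkALError(const char *operation) {
        ALenum err = alGetError();
        if (err != AL_NO_ERROR) {
            fprintf(stderr, "OpenAL error 0x%04x after %s\n", err, operation);
            abort();
        }
    }

  Call it after anything interesting: alSourcePlay(source); checkALError("alSourcePlay");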
- That sample AL code you found to load a file and play it with AL? Does it use loadWAVFile or alutLoadWAVFile()? Too bad; the function is deprecated, and ALUT doesn't even exist on the iPhone. If you're loading data from a file, use Audio File Services to load the data into memory (an NSMutableData/CFMutableDataRef might be a good way to do it). You'll also want to get the kAudioFilePropertyDataFormat property from the audio file, to help you provide the audio format to OpenAL.
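  A sketch of that load, assuming a file small enough to read whole into memory (the Audio File Services calls are real; the wrapper function and its error-free optimism are mine):

    #include <AudioToolbox/AudioToolbox.h>
    #include <stdlib.h>

    // Load an entire audio file into malloc'd memory and report its format.
    // Caller frees *outData. Error handling elided for brevity.
    static void loadAudioFile(CFURLRef fileURL,
                              void **outData, UInt32 *outSize,
                              AudioStreamBasicDescription *outFormat) {
        AudioFileID audioFile;
        AudioFileOpenURL(fileURL, kAudioFileReadPermission, 0, &audioFile);

        // The format tells you which AL_FORMAT_* constant to hand OpenAL.
        UInt32 propSize = sizeof(*outFormat);
        AudioFileGetProperty(audioFile, kAudioFilePropertyDataFormat,
                             &propSize, outFormat);

        // How big is the audio payload? Then read all of it.
        UInt64 byteCount = 0;
        propSize = sizeof(byteCount);
        AudioFileGetProperty(audioFile, kAudioFilePropertyAudioDataByteCount,
                             &propSize, &byteCount);

        *outData = malloc((size_t)byteCount);
        *outSize = (UInt32)byteCount;
        AudioFileReadBytes(audioFile, false, 0, outSize, *outData);
        AudioFileClose(audioFile);
    }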
- Generate buffers and sources with alGenBuffers() and alGenSources(), which are generally happier if you send them an array to populate with the IDs of created buffers/sources.
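  For example, somewhere in your setup code:

    ALuint buffers[3];
    ALuint source;

    alGenBuffers(3, buffers);   // fills the array with three new buffer IDs
    alGenSources(1, &source);   // a single source still goes through an "array"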
- Most of the interesting stuff you do with sources, buffers, and the listener is done by setting properties. The programmer's guide has cursory lists of valid properties for each. The getter/setter functions have a consistent naming scheme:
  - al, then Get for getters, nothing for setters. Yes, comically, this is the opposite of Cocoa's getter/setter naming convention.
  - Buffer, Source, or Listener: the kind of AL object you're working with.
  - 3 for setters that set three values (typically an X/Y/Z position, velocity, etc.), nothing for single-value or vector calls.
  - i for int (technically ALint) properties, f for ALfloat.
  - v (“vector”) if getting/setting multiple values by passing a pointer, nothing if getting/setting only one value. Never have both 3 and v.
  Examples: alSourcei() to set a single int property, alSource3i() to set three ints, alGetFloatv() to get an array of floats (as an ALfloat*).
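  To see the scheme in action (the property constants are standard OpenAL; source is the ID generated earlier):

    // al + Source + 3 + f: set three floats on a source.
    alSource3f(source, AL_POSITION, 2.0f, 0.0f, -1.0f);

    // al + Source + i: set a single int on a source.
    alSourcei(source, AL_LOOPING, AL_TRUE);

    // al + Get + Source + f: get a single float back.
    ALfloat gain;
    alGetSourcef(source, AL_GAIN, &gain);

    // al + Listener + fv: set multiple floats through a pointer.
    ALfloat orientation[6] = { 0.0f, 0.0f, -1.0f,   // "at" vector
                               0.0f, 1.0f,  0.0f }; // "up" vector
    alListenerfv(AL_ORIENTATION, orientation);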
- Most simple examples attach a single buffer to a source by setting the AL_BUFFER property on the source, with the buffer ID as the value. This is fine for the simple stuff. But you might outgrow it.
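  That attachment, reusing names from the earlier sketches (the format constant and sample rate would come from the kAudioFilePropertyDataFormat query above):

    // Hand the samples to the buffer, then attach the buffer to the source.
    alBufferData(buffers[0], AL_FORMAT_MONO16, audioData,
                 (ALsizei)dataSize, (ALsizei)dataFormat.mSampleRate);
    alSourcei(source, AL_BUFFER, buffers[0]);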
- 3D sounds must be mono. Place them within the context by setting the AL_POSITION property. Units are arbitrary – they could be millimeters, miles, or something in between. What matters is the source property AL_REFERENCE_DISTANCE, which defines the distance a sound travels before its volume diminishes by one half. Obviously, for games, you'll also care about sources' AL_VELOCITY, AL_DIRECTION, and possibly some of the more esoteric properties, like the sound “cone”.
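  A positioning sketch, arbitrarily deciding that 1.0 means one meter (remember, only AL_REFERENCE_DISTANCE gives the units meaning):

    // Put the source 2 "meters" right of the origin and 5 ahead.
    alSource3f(source, AL_POSITION, 2.0f, 0.0f, -5.0f);

    // Volume halves once the listener is more than 1 meter away.
    alSourcef(source, AL_REFERENCE_DISTANCE, 1.0f);

    // Moving sources get Doppler-shifted based on velocity.
    alSource3f(source, AL_VELOCITY, 0.0f, 0.0f, 1.0f);

    // The listener has a position too (it defaults to the origin).
    alListener3f(AL_POSITION, 0.0f, 0.0f, 0.0f);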
- Typical AL code puts samples into a buffer with alBufferData(). This copies the data over to AL, so you can free() your data pointer once you're done. This is no big deal for simple examples that only ever load one buffer of data. If you stream (like I did), it's a lot of unnecessary and expensive memcpy()ing. Avoid that with Apple's alBufferDataStatic extension, which eliminates the copy and makes AL read data straight from your pointer. Apple talks up this approach a lot, but it's not obvious how to compile it into your code: they gave me the answer on the coreaudio-api list.
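  The answer, as I understand it: the function isn't declared in the SDK headers, so you declare its type yourself and fetch the function pointer at runtime with alcGetProcAddress(). Something like this, reusing names from the sketches above (treat the details as a sketch, not gospel):

    #include <OpenAL/al.h>
    #include <OpenAL/alc.h>

    // Apple's extension isn't in the headers; look it up at runtime.
    typedef ALvoid (*alBufferDataStaticProcPtr)(ALint bid, ALenum format,
                                                ALvoid *data, ALsizei size,
                                                ALsizei freq);

    alBufferDataStaticProcPtr alBufferDataStaticProc =
        (alBufferDataStaticProcPtr) alcGetProcAddress(NULL, "alBufferDataStatic");

    // Like alBufferData(), but AL keeps reading from *your* memory -- no copy.
    // You must keep the pointer valid (and not free it) while AL uses it.
    alBufferDataStaticProc(buffers[0], AL_FORMAT_MONO16,
                           audioData, (ALsizei)dataSize, 44100);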
- To make an AL source play arbitrary data forever (e.g., a radio in a virtual world that plays a net radio station), you use a streaming approach. You queue up multiple buffers on a source with alSourceQueueBuffers(), then, after the source is started, repeatedly check the source's AL_BUFFERS_PROCESSED property to see if any buffers have been completely played through. If so, retrieve them with alSourceUnqueueBuffers(), which receives a pointer to the IDs of one or more used buffers. Refill them with new data (doing this repeatedly is where alBufferDataStatic is going to be your big win) and queue them again on the source with alSourceQueueBuffers().
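  A sketch of that cycle, meant to run periodically (see the polling note below); streamSource is your streaming source's ID and refillBuffer() is a hypothetical function that loads the next chunk of samples:

    // Reclaim any buffers the source has finished with, refill, requeue.
    ALint processed = 0;
    alGetSourcei(streamSource, AL_BUFFERS_PROCESSED, &processed);

    while (processed-- > 0) {
        ALuint doneBuffer;
        alSourceUnqueueBuffers(streamSource, 1, &doneBuffer);

        refillBuffer(doneBuffer);   // hypothetical: fetch next chunk of samples

        alSourceQueueBuffers(streamSource, 1, &doneBuffer);
    }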
- On the other hand, all you get back when you dequeue is the ID of the used buffer: you might need to provide yourself with some maps, structures, ivars, or other data to tell you how to refill it (what source you were using it on, what static buffer you were using for that AL buffer, etc.)
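  One hedged way to do that bookkeeping, assuming a small fixed pool of stream buffers backed by memory you own (everything here is hypothetical scaffolding, not an AL API):

    #define kNumStreamBuffers 3

    // Map an AL buffer ID back to the memory behind it, so a dequeued
    // ID tells you what to refill.
    typedef struct {
        ALuint  bufferID;   // ID from alGenBuffers()
        void   *samples;    // backing memory we own (fed via alBufferDataStatic)
        ALsizei capacity;   // size of that memory, in bytes
    } BufferRecord;

    static BufferRecord records[kNumStreamBuffers];

    // Linear search is plenty for a handful of stream buffers.
    static BufferRecord *recordForBuffer(ALuint bid) {
        for (int i = 0; i < kNumStreamBuffers; i++) {
            if (records[i].bufferID == bid) return &records[i];
        }
        return NULL;
    }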
- This isn't a pull model like Audio Queues or Audio Units: you have to poll for processed buffers. I used an NSTimer. You can use something more difficult if you like.
- Play/pause/stop with alSourcePlay(), alSourcePause(), alSourceStop(). To make multiple sources play/pause/stop in guaranteed sync, use the v versions of these functions, which take an array of source IDs.
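  For example, starting two sources in lockstep (the source IDs here are hypothetical):

    // alSourcePlayv() starts every source in the array in guaranteed sync.
    ALuint pair[2] = { musicSource, effectsSource };
    alSourcePlayv(2, pair);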
- You're still an iPhone audio app, so you still have to use the Audio Session API to set a category and register an interruption handler. If you get interrupted, set the current context to NULL, then make a new call to alcMakeContextCurrent() if the interruption ends (e.g., the user declines an incoming call). This only works in iPhone OS 3.0; in 2.x, it's a bag of hurt: you have to tear down and rebuild everything to handle interruptions.
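  A sketch of that handler, using the C Audio Session API (you'd pass this callback, with alContext as the client data, to AudioSessionInitialize() at startup):

    #include <AudioToolbox/AudioToolbox.h>
    #include <OpenAL/alc.h>

    // Registered via AudioSessionInitialize() as the interruption callback.
    static void interruptionListener(void *inClientData, UInt32 inState) {
        ALCcontext *alContext = (ALCcontext *)inClientData;
        if (inState == kAudioSessionBeginInterruption) {
            // Give up the context while the phone call (etc.) has the hardware.
            alcMakeContextCurrent(NULL);
        } else if (inState == kAudioSessionEndInterruption) {
            // Reclaim the session and the context when the interruption ends.
            AudioSessionSetActive(true);
            alcMakeContextCurrent(alContext);
        }
    }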
That’s about all I’ve got for now. Hope you enjoy the article when it comes out. I’ve had fun pushing past the audio basics and into the hard parts.