What You Missed at 360iDev

At the beginning of the month, I spoke at 360iDev in San Jose. I’d wanted to do a go-for-broke Core Audio talk for a long time, sent in an ambitious proposal, and got accepted, so this was destined to be it. Now I just had to come up with something that didn’t suck.

Luckily, there was another Core Audio talk, from Robert Strojan, that did a high-level tour of audio on the iPhone, with two deep dives into OpenAL and Audio Units. So that got everyone ready.

I called my talk Core Audio: Don’t Be Afraid To Play It LOUD (from a caption in the liner notes to Matthew Sweet’s Girlfriend), and went with the no-fear angle by focusing almost entirely on audio units. The idea being: you’ve seen these low-latency audio apps that do crazy stuff, you know it must be possible, so how? The answer is: you involve yourself in the processing of samples down at the audio unit level.

Click through the 170+ slides — it’s not that bad, I just expanded all the Keynote “builds” into their own slides — and you’ll get the basic grounding in the stuff you always have to do with audio units, and the Remote I/O unit in particular. Stuff like finding the component, getting an instance, setting properties on the input and output scopes to enable I/O and set an AudioStreamBasicDescription, things you will get as used to as implementing init and dealloc in an Obj-C class.
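To make that boilerplate concrete, here’s a sketch of the setup sequence described above. This isn’t the session’s exact code: the function name, the omitted error handling, and the particular ASBD values are my own choices.

```c
#include <AudioToolbox/AudioToolbox.h>

AudioUnit CreateRemoteIOUnit(void)
{
    // Describe the Remote I/O unit, find its component, and instantiate it
    AudioComponentDescription desc = {0};
    desc.componentType         = kAudioUnitType_Output;
    desc.componentSubType      = kAudioUnitSubType_RemoteIO;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;
    AudioComponent comp = AudioComponentFindNext(NULL, &desc);

    AudioUnit remoteIOUnit;
    AudioComponentInstanceNew(comp, &remoteIOUnit);

    // Enable input on bus 1 (capture); output on bus 0 is on by default
    UInt32 one = 1;
    AudioUnitSetProperty(remoteIOUnit, kAudioOutputUnitProperty_EnableIO,
                         kAudioUnitScope_Input, 1, &one, sizeof(one));

    // An ASBD for 16-bit interleaved stereo LPCM at 44.1 kHz
    AudioStreamBasicDescription myASBD = {0};
    myASBD.mSampleRate       = 44100.0;
    myASBD.mFormatID         = kAudioFormatLinearPCM;
    myASBD.mFormatFlags      = kAudioFormatFlagIsSignedInteger |
                               kAudioFormatFlagIsPacked;
    myASBD.mChannelsPerFrame = 2;
    myASBD.mBitsPerChannel   = 16;
    myASBD.mFramesPerPacket  = 1;
    myASBD.mBytesPerFrame    = 4;   // 2 channels * 2 bytes
    myASBD.mBytesPerPacket   = 4;

    // Set the format where our code touches the unit: the output scope of
    // bus 1 (what capture hands us) and the input scope of bus 0 (what we
    // hand the speaker)
    AudioUnitSetProperty(remoteIOUnit, kAudioUnitProperty_StreamFormat,
                         kAudioUnitScope_Output, 1, &myASBD, sizeof(myASBD));
    AudioUnitSetProperty(remoteIOUnit, kAudioUnitProperty_StreamFormat,
                         kAudioUnitScope_Input, 0, &myASBD, sizeof(myASBD));

    AudioUnitInitialize(remoteIOUnit);
    return remoteIOUnit;
}
```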

The big win is the examples, which came together in a hideous mess of two spaghetti code files that I’m embarrassed to say are now in the hands of all the attendees. One reason for the blog entry you’re reading is to present cleaned up versions of the five sample applications from the session.

In the past, I’ve done an audio unit example that produces a sine wave by cranking out samples in the callbacks. It’s sort of like the hello world of audio in that it gets you down to the nitty-gritty of samples without too much pain, but it doesn’t play to a crowd all that well. FWIW, the second chapter of the Core Audio book writes a sine wave to a file, again to get you thinking about samples as soon as possible.

But for this talk, I decided to do a set of examples that work with audio input. That way, we got to play with the mic and the speaker — bus 1 and bus 0 for you folks who already know this stuff — and get some halfway interesting audio.

The first example goes through the drudgery of creating and initializing the Remote I/O unit, and connects bus 1 output to bus 0 input to do a pass-through: anything that comes in on the mic goes out the speakers. I used to do this with an AUGraph and AUGraphConnectNodeInput(), not realizing it’s easily done without a graph, by just setting a unit’s kAudioUnitProperty_MakeConnection property. With that, I could speak into the simulator and get audio out over the PA (or into my device’s mic, but I used the simulator because it shows better).
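The connection itself is one property set. A sketch, assuming a remoteIOUnit created and initialized as in the previous example, with error checking omitted:

```c
// One property set replaces the whole AUGraph dance: tell the unit that
// its bus 0 input (headed to the speaker) is fed by its own bus 1 output
// (coming off the mic)
AudioUnitConnection connection;
connection.sourceAudioUnit    = remoteIOUnit;
connection.sourceOutputNumber = 1;   // source: capture side
connection.destInputNumber    = 0;   // destination: playback side
AudioUnitSetProperty(remoteIOUnit,
                     kAudioUnitProperty_MakeConnection,
                     kAudioUnitScope_Input,   // set on the destination's input scope
                     0,                       // destination bus
                     &connection, sizeof(connection));
```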

Well, yay, I’ve turned the iPhone simulator into a mediocre PA system. What’s next? The key to the good stuff is being able to involve yourself in the processing of samples, so the next example replaces the direct connection with a render callback. This means we write a function to supply samples to a caller (the Remote I/O unit, which needs something to play). In the basic version, we call AudioUnitRender() on the I/O unit’s bus 1, which represents input into the device, to provide samples off the mic.
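In outline, the callback and its registration look something like this; again a sketch, not the session’s exact code, with names as stand-ins:

```c
// The render callback: the Remote I/O unit calls this when bus 0 needs
// samples to play; we answer by rendering bus 1, i.e., pulling off the mic
OSStatus PassThroughRenderCallback(void *inRefCon,
                                   AudioUnitRenderActionFlags *ioActionFlags,
                                   const AudioTimeStamp *inTimeStamp,
                                   UInt32 inBusNumber,
                                   UInt32 inNumberFrames,
                                   AudioBufferList *ioData)
{
    AudioUnit remoteIOUnit = (AudioUnit) inRefCon;
    return AudioUnitRender(remoteIOUnit, ioActionFlags, inTimeStamp,
                           1,   // bus 1: input from the device's mic
                           inNumberFrames, ioData);
}

// Registration, which replaces the MakeConnection from the last example
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc       = PassThroughRenderCallback;
callbackStruct.inputProcRefCon = remoteIOUnit;
AudioUnitSetProperty(remoteIOUnit, kAudioUnitProperty_SetRenderCallback,
                     kAudioUnitScope_Input, 0,
                     &callbackStruct, sizeof(callbackStruct));
```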

Still boring, but we’re getting there. Instead of just copying samples, example 3 performs a trivial effect by adding a gain slider, and applying that gain to every sample as it passes through.

Core Audio pass through with gain slider
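The per-sample work in example 3 boils down to a multiply and a clip. Here’s a portable sketch of that inner loop, assuming 16-bit signed samples; in the real callback it runs over the buffers in ioData.

```c
#include <stdint.h>
#include <stddef.h>

// Example 3's inner loop, portably: scale each 16-bit sample by the
// slider's gain value, clipping to the legal range instead of wrapping
static void ApplyGain(int16_t *samples, size_t count, float gain)
{
    for (size_t i = 0; i < count; i++) {
        float scaled = samples[i] * gain;
        if (scaled >  32767.0f) scaled =  32767.0f;
        if (scaled < -32768.0f) scaled = -32768.0f;
        samples[i] = (int16_t) scaled;
    }
}
```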

Now we can apply any DSP that suits us as samples go through the render callback function. In example 4, we apply a ring modulator to the samples, which combines a 23 Hz sine wave with the input signal from the mic to create a reasonably plausible Dalek voice.

Dalek voice demo:

I was pretty much over time at this point, but the last example is too much fun to miss. To show off other units, I brought a multichannel mixer unit into play. On bus 0, it got a render callback to our existing Dalek code. For bus 1, I read an entire LPCM song into RAM (which is totally bad and would blow the RAM of earlier iPhones, but I couldn’t get the damned CARingBuffer working), and provided a render callback to supply its samples in order. The result, infamously, is “Dalek Sing-A-Long”:

Dalek sing-a-long

Dalek sing-a-long demo:
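For the curious, the mixer wiring in example 5 goes roughly like this. A sketch, not the session’s code: DalekRenderCallback, SongRenderCallback, and songBuffer are hypothetical stand-ins, and error handling is omitted.

```c
// Create a multichannel mixer with two input busses
AudioComponentDescription mixerDesc = {0};
mixerDesc.componentType         = kAudioUnitType_Mixer;
mixerDesc.componentSubType      = kAudioUnitSubType_MultiChannelMixer;
mixerDesc.componentManufacturer = kAudioUnitManufacturer_Apple;
AudioUnit mixerUnit;
AudioComponentInstanceNew(AudioComponentFindNext(NULL, &mixerDesc),
                          &mixerUnit);

UInt32 busCount = 2;
AudioUnitSetProperty(mixerUnit, kAudioUnitProperty_ElementCount,
                     kAudioUnitScope_Input, 0, &busCount, sizeof(busCount));

// Bus 0: the Dalek-ized mic; bus 1: samples from the song in memory
AURenderCallbackStruct dalekCallback = { DalekRenderCallback, remoteIOUnit };
AudioUnitSetProperty(mixerUnit, kAudioUnitProperty_SetRenderCallback,
                     kAudioUnitScope_Input, 0,
                     &dalekCallback, sizeof(dalekCallback));

AURenderCallbackStruct songCallback = { SongRenderCallback, &songBuffer };
AudioUnitSetProperty(mixerUnit, kAudioUnitProperty_SetRenderCallback,
                     kAudioUnitScope_Input, 1,
                     &songCallback, sizeof(songCallback));

// Feed the mixer's output into the Remote I/O unit's bus 0 input
AudioUnitConnection conn = { mixerUnit, 0, 0 };
AudioUnitSetProperty(remoteIOUnit, kAudioUnitProperty_MakeConnection,
                     kAudioUnitScope_Input, 0, &conn, sizeof(conn));
```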

Anyways, great conference, great speakers, great attendees. Thanks for reading, and here’s the code:

Comments (13)

  1. […] down working on my iPad app.  Now it looks like Chris Adamson has beaten me to it. Go check out his 360 iDev presentation. I hope to be back soon with […]

  2. Hi Chris,

    Thanks for the sample code. I’m running the AUPassThroughWithGainEffect on a 2G iPod Touch and the audio will only come out on a single channel/ear piece. I haven’t yet determined whether it’s being captured mono or just output that way. Quite odd. Works fine in the simulator.

    Any insights appreciated.


  3. Pretty sure mic input is mono on the device, but I specify a stereo ASBD when I set format on the units. Didn’t take the time to find an easy-to-understand place to address this… was kind of rushed. 🙂

  4. No worries. Switching ASBD to mono solves the issue. Which makes sense really, mono coming in but stereo going out, so the SDK may be interleaving for you.

    One other thing to note: memcpy on every sample produces noticeable lag (buffer underflow?) on an iPod Touch (and I assume the 3G, which is slower). Adjusting gain on individual short samples does the trick.

    Rest of the code works like a charm. Thanks.

  5. Thanks for the post. It’s really helpful. Looking forward to your Core Audio book.

    I downloaded and ran the sample code. It works!
    But there is no sound when I connect my Jawbone Bluetooth headset.

    It still does not work after inserting the following code before/after “AudioSessionSetActive(true);”:

    UInt32 allowBluetoothInput = 1;
    status = AudioSessionSetProperty(kAudioSessionProperty_OverrideCategoryEnableBluetoothInput, sizeof(allowBluetoothInput), &allowBluetoothInput);
    NSAssert(status == noErr, @"AudioSessionSetProperty fails - Couldn't enable Bluetooth");

    Any pointer is appreciated.
    Thanks in advance for your help.

  6. jakub

    How would you make these stereo?

  7. jakub: The stream format is already set to be stereo:

    	myASBD.mChannelsPerFrame = 2;

    But if the input device is mono (like the headset is), you’ll get sound in only one channel (see comments 2, 3, and 4 above). I didn’t want to deal with this and thereby make the examples even more complex, but your options include using the Audio Session to inspect how many channels the current H/W input has and just setting mChannelsPerFrame to that, or in the versions that use render callbacks you could copy each left sample over to the right (i.e., for each frame at n, and for 2-byte samples, copy 2 bytes from &n to &n+2).
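    That left-to-right copy is a few lines in the interleaved case. A portable sketch (my own illustration, not code from the examples):

```c
#include <stdint.h>
#include <stddef.h>

// In an interleaved 16-bit stereo buffer, copy each frame's left sample
// over its right slot, so a mono input is heard in both channels
static void CopyLeftToRight(int16_t *interleaved, size_t frameCount)
{
    for (size_t frame = 0; frame < frameCount; frame++) {
        interleaved[frame * 2 + 1] = interleaved[frame * 2];
    }
}
```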

  8. Hi Chris, thanks a lot for the slides and source 🙂

    I cannot for the life of me work out the render callback struct in relation to an audio graph. If you have a callback struct in place, can you add others as needed?

    The reason I ask is that I want to handle audio loops of different lengths on the same mixer (different busses) in an extension of the iPhone MixerEQGraph dev example, but don’t want the longest loop to fill the other busses with silence, or truncate to the size of the shortest…

    I have been trying everything I can think of (apologies if this forum is the wrong place to ask) such as different frameNum and maxNumFrame variables for different loops, different data buffers(bad idea!) and different callbacks, but nothing seems to work 🙁

    If you can give me any pointers or info I would be very grateful….


  9. ccullen: I think that when you set the render callback, you have to indicate which bus you’re setting it on. So if I follow you correctly, maybe you could have the mixer use a completely different callback (different function, different user info struct, etc.) for each bus.

    Have you tried asking on coreaudio-api to see if anyone has done something like this?

  10. That’s exactly what I hoped to do; I must be messing up my code somewhere 😉

    Thanks for the advice, and I will indeed try coreaudio-api instead of hassling you!

    Looking forward to the book……

  11. […] this particular example, because while I’d done some elaborate capture stuff on iOS (see What You Missed at 360iDev), play-through is a lot harder on Mac OS X because you literally have two different audio devices, […]

  12. jake

    Thanks for the article.
    But I have the same issue as On Lee.

    I would like to pass it through to Bluetooth earphones. I tried enabling Bluetooth input and nothing… both times it’s just quiet.

  13. Ncgaming

    How lucky am I to find your blog.
