Less than 19,000 words about Audio Units

This morning, I sent off first drafts of chapters 7 and 8 of the Core Audio book to our esteemed editor, Chuck Toporek. It’s the first new material he’s received in almost two months, but it’s not like we’ve been slacking off. You see, this was supposed to be just one chapter…

If you look in the table of contents, you’ll see that chapter 7 is about Audio Units. Chapter 8 is about OpenAL. Well, it was. Until chapter 7 grew and grew and grew, until it was longer than chapters 4, 5, and 6 combined. At that point, it became obvious that it was way too big to be one chapter, so we split it in two, and pushed everything after it out by one chapter.

So those are the administrative details, but… why? Why the hell did I write a 19,000-word chapter? Suffice it to say, Audio Units is big. Arguably, it’s the heart and soul of Core Audio. It’s the “engine” API (in my terminology, to contrast it with utility APIs that do stuff like file I/O or format conversion) that the other engines (Audio Queues and OpenAL) are built on top of. It’s also the secret sauce that allows Core Audio to offer very low-latency audio processing, a rich library of effects, and a third-party market in units providing effects, synthetic instruments, and more.

It’s also the hardest part of an already crazy-hard framework. And to my mind, that justifies really digging into it: the whole point of buying a book is to get some help with the hard parts.

Now, a bit of background as to how we got here. When I came on to the book, Kevin and Mike had fragments of three chapters, along with a few example programs for the audio queue chapters. I reused as much of their existing material as I could in the first part of the book, moving it around and working to make their voice and mine mesh. I also worked examples into the first three chapters, because I thought it was important to get readers looking at code and playing with samples and properties early. While I was writing, Kevin created three new example projects for the units chapter: a file player, a speech synthesizer, and a sine wave generator (which doesn’t sound as cool, but it illustrates the concept of having Core Audio do “render callbacks” to get samples from your code, so it’s actually more useful).
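
To give a sense of what that pull model looks like, here’s a minimal sketch of my own (not the book’s actual code) of a render callback that fills the output unit’s buffers with a sine wave. The SineWaveState struct and the function names are invented for illustration; what’s real is the AURenderCallback signature, which Core Audio calls on its render thread whenever the output unit needs more samples.

    #include <AudioUnit/AudioUnit.h>
    #include <math.h>

    // Hypothetical state struct for this sketch; the book's actual
    // example may organize its state differently.
    typedef struct {
        double phase;       // current phase, in radians
        double frequency;   // tone frequency in Hz, e.g. 440.0
        double sampleRate;  // hardware sample rate, e.g. 44100.0
    } SineWaveState;

    // Core Audio calls this whenever the output unit needs
    // inNumberFrames more samples from our code.
    static OSStatus SineWaveRenderProc(void *inRefCon,
                                       AudioUnitRenderActionFlags *ioActionFlags,
                                       const AudioTimeStamp *inTimeStamp,
                                       UInt32 inBusNumber,
                                       UInt32 inNumberFrames,
                                       AudioBufferList *ioData)
    {
        SineWaveState *state = (SineWaveState *)inRefCon;
        double phaseIncrement = 2.0 * M_PI * state->frequency / state->sampleRate;

        for (UInt32 frame = 0; frame < inNumberFrames; frame++) {
            Float32 sample = (Float32)sin(state->phase);
            // Assumes non-interleaved Float32 buffers, one per channel.
            for (UInt32 buf = 0; buf < ioData->mNumberBuffers; buf++) {
                ((Float32 *)ioData->mBuffers[buf].mData)[frame] = sample;
            }
            state->phase += phaseIncrement;
            if (state->phase >= 2.0 * M_PI) state->phase -= 2.0 * M_PI;
        }
        return noErr;
    }

You attach a function like this to the unit’s input scope with AudioUnitSetProperty and kAudioUnitProperty_SetRenderCallback, passing an AURenderCallbackStruct whose inputProcRefCon points at your state. From then on, the unit pulls samples from your code; you never push.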

With those, it was already going to be a long chapter, but I thought we were missing out by not addressing capture at the Audio Unit level, so I set about writing an example project for that. As it turns out, I was naive about this particular example. While I’d done some elaborate capture stuff on iOS (see What You Missed at 360iDev), play-through is a lot harder on Mac OS X, because you literally have two different audio devices, with different I/O cycles and different threads servicing them. That means you can’t have the output unit just pull samples on demand from the input unit whenever it needs them, like you can on iOS. Instead, there are a bunch of extra steps: discovering the available audio devices, connecting one to an AUHAL (an Audio Unit that speaks to the Hardware Abstraction Layer), and sharing the captured data asynchronously with the rest of the audio-processing graph via a ring buffer.
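
To give a sense of just the first of those steps, here’s a rough sketch of creating an AUHAL and wiring the default input device to it. This is my own compressed illustration, not the book’s code: error checking is omitted, the function name CreateInputAUHAL is made up, and a real program would enumerate the available devices rather than just grabbing the default.

    #include <AudioUnit/AudioUnit.h>
    #include <CoreAudio/CoreAudio.h>

    // Create an AUHAL, enable input on it, and attach the default
    // input device. Sketch only: no error checking, no device list.
    static AudioUnit CreateInputAUHAL(void)
    {
        // Find and instantiate the AUHAL.
        AudioComponentDescription desc = {
            .componentType = kAudioUnitType_Output,
            .componentSubType = kAudioUnitSubType_HALOutput,
            .componentManufacturer = kAudioUnitManufacturer_Apple
        };
        AudioComponent comp = AudioComponentFindNext(NULL, &desc);
        AudioUnit auhal;
        AudioComponentInstanceNew(comp, &auhal);

        // Enable I/O on the input bus (1) and disable it on the
        // output bus (0), so this unit only captures.
        UInt32 enable = 1, disable = 0;
        AudioUnitSetProperty(auhal, kAudioOutputUnitProperty_EnableIO,
                             kAudioUnitScope_Input, 1, &enable, sizeof(enable));
        AudioUnitSetProperty(auhal, kAudioOutputUnitProperty_EnableIO,
                             kAudioUnitScope_Output, 0, &disable, sizeof(disable));

        // Look up the default input device and point the AUHAL at it.
        AudioDeviceID inputDevice;
        UInt32 size = sizeof(inputDevice);
        AudioObjectPropertyAddress addr = {
            kAudioHardwarePropertyDefaultInputDevice,
            kAudioObjectPropertyScopeGlobal,
            kAudioObjectPropertyElementMaster
        };
        AudioObjectGetPropertyData(kAudioObjectSystemObject, &addr,
                                   0, NULL, &size, &inputDevice);
        AudioUnitSetProperty(auhal, kAudioOutputUnitProperty_CurrentDevice,
                             kAudioUnitScope_Global, 0,
                             &inputDevice, sizeof(inputDevice));

        AudioUnitInitialize(auhal);
        return auhal;
    }

From there, the usual pattern is an input callback on the AUHAL that calls AudioUnitRender to grab the captured samples and writes them into the ring buffer, plus a render callback on the output side of the graph that reads them back out on its own I/O thread.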

These chapters kind of can’t help but be long, involved exercises in “write this because we have to deal with this, write that because of that other thing.” I actually think it’s something of an improvement over Apple’s documentation, as the Apple way is to provide programming guides that aren’t complete examples (just the crucial sections), and ambitious sample code (particularly the WWDC apps) that runs thousands of lines and buries the dozen or so that really matter.

Anyways, with these four examples, the first draft weighed in at 19,000 words, compared to the 4,000–6,000 we’ve been doing in our other chapters. I feared that readers would try to take it all in at once and be completely overwhelmed, and Chuck agreed that splitting it in two was justified. We have some reworking to do elsewhere as a side effect: the table of contents and introductory road map have to change, we’ll probably change some chapter titles to match how we’re presenting the audio units stuff, etc.

But at the end of the day, you’re getting four kick-ass audio unit walkthroughs. Plus a fifth when we get to the iOS chapter. And we still have creating your own units coming in the last chapter.

Nice to have this part done… I think it’s going to be the hardest stuff in the book, which means it should be a downhill ride for me from here.

Now if you’ll excuse me, I need to take a week or two away from Core Audio and switch to AV Foundation to get my talk ready for Voices That Matter: iPhone Developer Conference in Philadelphia in two weeks.
