
What You Missed At Voices That Matter iOS, Fall 2011

I’m surprised how fast iOS conference slides go stale. When I re-used some AV Foundation material that was less than a year old at August’s CocoaConf, some of it already seemed crusty, not least a performSelectorOnMainThread: call on a slide where any decent 2011 coder would use dispatch_async() and a block. So it’s probably just as well that I did two completely new talks for last weekend’s Voices That Matter: iOS Developers Conference in Boston.
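For anyone who hasn’t made that jump yet, the substitution looks like this; the selector and label names are hypothetical, but the shape is the point:

    // Old style: bounce back to the main thread via a selector
    [self performSelectorOnMainThread:@selector(updateStatus:)
                           withObject:@"Recording"
                        waitUntilDone:NO];

    // GCD style: dispatch a block to the main queue
    dispatch_async(dispatch_get_main_queue(), ^{
        self.statusLabel.text = @"Recording";
    });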

OK, long-time readers — ooh, do I have long-time readers? — know how these posts work: I do links to the slides on Slideshare and code examples on my Dropbox. Those are at the bottom, so skip there if that’s what you need. Also, I’ve put the URLs of the sample code under each of the “Demo” slides.

The AV Foundation talk is limited entirely to capture, since my last few talks have gone so deep into the woods on editing (and I’m still unsatisfied with the mess of sample code I put together on that topic for VTM:iPhone Seattle in the spring… maybe someday I’ll have time for a do-over). I re-used an earlier “capture to file and playback” example, and the ZXing barcode reader as an example of setting up an AVCaptureVideoDataOutput, so the new thing in this talk was a straightforward face-finder using the new-to-iOS Core Image CIDetector. Apple’s WWDC has a more ambitious example of this API, so go check that out if you want to go deeper.
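Here’s a minimal sketch of that CIDetector path, living in whatever class serves as the AVCaptureVideoDataOutput’s sample buffer delegate. The faceDetector property (backed by a _faceDetector ivar) is my naming, not the talk’s sample code, and I’ve created the detector once up front, for reasons I’ll get to below:

    // Lazily create one CIDetector and reuse it for every frame
    - (CIDetector *)faceDetector {
        if (!_faceDetector) {
            NSDictionary *options = [NSDictionary dictionaryWithObject:CIDetectorAccuracyLow
                                                                 forKey:CIDetectorAccuracy];
            _faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace
                                               context:nil
                                               options:options];
        }
        return _faceDetector;
    }

    // AVCaptureVideoDataOutputSampleBufferDelegate callback
    - (void)captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection {
        // Wrap the camera frame in a CIImage and ask Core Image for faces
        CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        CIImage *frame = [CIImage imageWithCVPixelBuffer:pixelBuffer];
        NSArray *faces = [[self faceDetector] featuresInImage:frame];
        for (CIFaceFeature *face in faces) {
            NSLog(@"found a face at %@", NSStringFromCGRect(face.bounds));
        }
    }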

The Core Audio talk was the one I was most jazzed about, given that Audio Units are far more interesting in iOS 5 with the addition of music, generator, and a dozen effects units. That demo builds up an AUGraph that takes mic input, a file-player looping a drum track, and an AUSampler allowing for MIDI input from a physical keyboard (the Rock Band 3 keyboard, in fact) to play a two-second synth sample that I cropped from one of the Soundtrack loops, all mixed by an AUMultichannelMixer and then fed through two effects (distortion and low-pass filter) before going out to hardware through AURemoteIO. Oh, and there’s a simple detail view that lets you adjust the input levels into the mixer and bypass the effects.
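The graph-building part of that is straightforward Core Audio C once you’ve seen the pattern. Here’s a stripped-down sketch with just the mixer feeding the RemoteIO unit; the file player, AUSampler, and effects nodes get added the same way, error checking is omitted, and the names are mine rather than the demo’s:

    #import <AudioToolbox/AudioToolbox.h>

    static void BuildMinimalGraph(AUGraph *outGraph) {
        AUGraph graph;
        NewAUGraph(&graph);

        // Describe the multichannel mixer and remote I/O units
        AudioComponentDescription mixerDesc = {0};
        mixerDesc.componentType         = kAudioUnitType_Mixer;
        mixerDesc.componentSubType      = kAudioUnitSubType_MultiChannelMixer;
        mixerDesc.componentManufacturer = kAudioUnitManufacturer_Apple;

        AudioComponentDescription ioDesc = {0};
        ioDesc.componentType         = kAudioUnitType_Output;
        ioDesc.componentSubType      = kAudioUnitSubType_RemoteIO;
        ioDesc.componentManufacturer = kAudioUnitManufacturer_Apple;

        AUNode mixerNode, ioNode;
        AUGraphAddNode(graph, &mixerDesc, &mixerNode);
        AUGraphAddNode(graph, &ioDesc, &ioNode);
        AUGraphOpen(graph);

        // Mixer output bus 0 feeds the I/O unit's element 0 (the output side)
        AUGraphConnectNodeInput(graph, mixerNode, 0, ioNode, 0);

        AUGraphInitialize(graph);
        AUGraphStart(graph);

        *outGraph = graph;
    }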

The process of setting up a .aupreset and getting it into an AUSampler at runtime is quite convoluted. There are lots of screenshots from AU Lab in the slides, but I might just shoot a screencast and post it to YouTube. For now, combine WWDC 2011 session #411 with Technical Note TN2283 and you’ll have as much of a fighting chance as I did.
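Once the preset exists, the runtime half boils down to reading the .aupreset (it’s just a property list) and handing it to the unit as its class info, which is the approach TN2283 describes. This is a sketch from memory: it assumes samplerUnit has already been fetched from its node with AUGraphNodeInfo(), and the preset file name is made up:

    NSURL *presetURL = [[NSBundle mainBundle] URLForResource:@"SynthStab"
                                               withExtension:@"aupreset"];
    NSData *presetData = [NSData dataWithContentsOfURL:presetURL];
    CFPropertyListRef presetPlist =
        CFPropertyListCreateWithData(kCFAllocatorDefault,
                                     (__bridge CFDataRef) presetData,  // __bridge assumes ARC
                                     kCFPropertyListImmutable,
                                     NULL,
                                     NULL);
    if (presetPlist) {
        // The AUSampler restores its whole state (including sample references) from the plist
        AudioUnitSetProperty(samplerUnit,
                             kAudioUnitProperty_ClassInfo,
                             kAudioUnitScope_Global,
                             0,
                             &presetPlist,
                             sizeof(presetPlist));
        CFRelease(presetPlist);
    }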

I’ll be doing these talks again at CocoaConf in Raleigh, NC on Dec. 1-2, with a few fix-ups. The face-finder has a stupid bug where it creates a new CIDetector on each callback from the camera, which is grievously wasteful. For the Core Audio AUGraph, I noticed in the AU property docs that the mixer has pre-/post- peak/average meters, so it looks like it would be easy to add level meters to the UI. So those versions of the talks will be a little more polished. Hey, it was a tough crunch getting enough time away from client work to get the sample code done at all.
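If you want to play along before Raleigh, the metering hook as I read the docs looks roughly like the following. I haven’t actually wired this up yet, so treat the constants and scopes as unverified:

    // Turn metering on for one of the mixer's input buses...
    UInt32 meteringOn = 1;
    AudioUnitSetProperty(mixerUnit,
                         kAudioUnitProperty_MeteringMode,
                         kAudioUnitScope_Input,
                         0,                      // bus 0, the mic input in my graph
                         &meteringOn,
                         sizeof(meteringOn));

    // ...then poll the read-only level parameters from a display timer
    AudioUnitParameterValue averageDb = 0;
    AudioUnitGetParameter(mixerUnit,
                          kMultiChannelMixerParam_PostAveragePower,
                          kAudioUnitScope_Input,
                          0,
                          &averageDb);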

Speaking of preparation, the other notable thing about these talks is that I was able to do the slides for both entirely on the iPad, while on the road, using Keynote, OmniGraffle, and Textastic. Consumption-only device, my ass.

Slides and code
