
What You Missed At Voices That Matter iOS, Fall 2011

I’m surprised how fast iOS conference slides go stale. I re-used some AV Foundation material that was less than a year old at August’s CocoaConf, and some of it already seemed crusty, not the least of which was a performSelectorOnMainThread: call on a slide where any decent 2011 coder would use dispatch_async() and a block. So it’s probably just as well that I did two completely new talks for last weekend’s Voices That Matter: iOS Developers Conference in Boston.

OK, long-time readers — ooh, do I have long-time readers? — know how these posts work: I do links to the slides on Slideshare and code examples on my Dropbox. Those are at the bottom, so skip there if that’s what you need. Also, I’ve put URLs of the sample code under each of the “Demo” slides.

The AV Foundation talk is completely limited to capture, since my last few talks have gone so deep into the woods on editing (and I’m still unsatisfied with the mess of sample code I put together on that topic for VTM:iPhone Seattle in the spring… maybe someday I’ll have time for a do-over). I re-used an earlier “capture to file and playback” example, and the ZXing barcode reader as an example of setting up an AVCaptureVideoDataOutput, so the new thing in this talk was a straightforward face-finder using the new-to-iOS Core Image CIDetector. Apple’s WWDC has a more ambitious example of this API, so go check that out if you want to go deeper.
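
For reference, the basic CIDetector flow is short. This is just a minimal Obj-C sketch of the idea, not the talk’s actual code, and it assumes an AVCaptureVideoDataOutput delegate is already handing you sample buffers (the variable names are mine):

// assumes AVFoundation, CoreImage, and CoreMedia are linked and imported

// create the detector once and reuse it — creating one per frame is
// wasteful (that's the bug mentioned further down)
CIDetector *faceDetector =
    [CIDetector detectorOfType:CIDetectorTypeFace
                       context:nil
                       options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyLow
                                                            forKey:CIDetectorAccuracy]];

// then, inside captureOutput:didOutputSampleBuffer:fromConnection:
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CIImage *frame = [CIImage imageWithCVPixelBuffer:pixelBuffer];
NSArray *faces = [faceDetector featuresInImage:frame];
for (CIFaceFeature *face in faces) {
    NSLog(@"found a face at %@", NSStringFromCGRect(face.bounds));
}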

The Core Audio talk was the one I was most jazzed about, given that Audio Units are far more interesting in iOS 5 with the addition of music, generator, and a dozen effects units. That demo builds up an AUGraph that takes mic input, a file-player looping a drum track, and an AUSampler allowing for MIDI input from a physical keyboard (the Rock Band 3 keyboard, in fact) to play a two-second synth sample that I cropped from one of the Soundtrack loops, all mixed by an AUMultichannelMixer and then fed through two effects (distortion and low pass filter) before going out to hardware through AURemoteIO. Oh, and with a simple detail view that lets you adjust the input levels into the mixer and bypass the effects.
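
To make the shape of that graph concrete, here’s a stripped-down sketch of just the node wiring. This is my reconstruction, not the talk’s sample code: error checking, stream formats, the mic hookup through the RemoteIO unit’s input scope, the file-player scheduling, and the .aupreset load are all omitted, and the mixer bus numbers are assumptions.

#include <AudioToolbox/AudioToolbox.h>

static void BuildDemoGraph(void) {
    AUGraph graph;
    NewAUGraph(&graph);

    AudioComponentDescription desc = {0};
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    AUNode filePlayerNode, samplerNode, mixerNode, distortionNode, lowPassNode, ioNode;

    desc.componentType    = kAudioUnitType_Generator;
    desc.componentSubType = kAudioUnitSubType_AudioFilePlayer;
    AUGraphAddNode(graph, &desc, &filePlayerNode);

    desc.componentType    = kAudioUnitType_MusicDevice;
    desc.componentSubType = kAudioUnitSubType_Sampler;
    AUGraphAddNode(graph, &desc, &samplerNode);

    desc.componentType    = kAudioUnitType_Mixer;
    desc.componentSubType = kAudioUnitSubType_MultiChannelMixer;
    AUGraphAddNode(graph, &desc, &mixerNode);

    desc.componentType    = kAudioUnitType_Effect;
    desc.componentSubType = kAudioUnitSubType_Distortion;
    AUGraphAddNode(graph, &desc, &distortionNode);
    desc.componentSubType = kAudioUnitSubType_LowPassFilter;
    AUGraphAddNode(graph, &desc, &lowPassNode);

    desc.componentType    = kAudioUnitType_Output;
    desc.componentSubType = kAudioUnitSubType_RemoteIO;
    AUGraphAddNode(graph, &desc, &ioNode);

    AUGraphOpen(graph);

    // file player -> mixer bus 0, sampler -> mixer bus 1
    AUGraphConnectNodeInput(graph, filePlayerNode, 0, mixerNode, 0);
    AUGraphConnectNodeInput(graph, samplerNode,    0, mixerNode, 1);
    // mixer -> distortion -> low-pass filter -> hardware out
    AUGraphConnectNodeInput(graph, mixerNode,      0, distortionNode, 0);
    AUGraphConnectNodeInput(graph, distortionNode, 0, lowPassNode,    0);
    AUGraphConnectNodeInput(graph, lowPassNode,    0, ioNode,         0);

    AUGraphInitialize(graph);
    AUGraphStart(graph);
}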

The process of setting up a .aupreset and getting that into an AUSampler at runtime is quite convoluted. There are lots of screenshots from AULab in the slides, but I might just shoot a screencast and post it to YouTube. For now, combine WWDC 2011 session #411 with Technical Note TN2283 and you have as much of a fighting chance as I did.
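
Once the .aupreset exists, the runtime-load step described by TN2283 is fairly small: read the preset file as a property list and hand it to the AUSampler as its class info. Here’s a hedged sketch of that idea, assuming samplerUnit was already fetched from the graph with AUGraphNodeInfo() and that the preset is in the app bundle (the “MySynthPatch” name is a placeholder; error checks omitted):

#include <CoreFoundation/CoreFoundation.h>
#include <AudioToolbox/AudioToolbox.h>

static void LoadSamplerPreset(AudioUnit samplerUnit) {
    // locate the .aupreset in the bundle
    CFURLRef presetURL = CFBundleCopyResourceURL(CFBundleGetMainBundle(),
                                                 CFSTR("MySynthPatch"),
                                                 CFSTR("aupreset"),
                                                 NULL);
    // read it in and parse it as a property list
    CFDataRef presetData = NULL;
    SInt32 errorCode = 0;
    CFURLCreateDataAndPropertiesFromResource(kCFAllocatorDefault, presetURL,
                                             &presetData, NULL, NULL, &errorCode);
    CFPropertyListRef presetPlist =
        CFPropertyListCreateWithData(kCFAllocatorDefault, presetData,
                                     kCFPropertyListImmutable, NULL, NULL);
    // hand the whole plist to the AUSampler as its "class info"
    AudioUnitSetProperty(samplerUnit,
                         kAudioUnitProperty_ClassInfo,
                         kAudioUnitScope_Global,
                         0,
                         &presetPlist,
                         sizeof(presetPlist));
    CFRelease(presetPlist);
    CFRelease(presetData);
    CFRelease(presetURL);
}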

I’ll be doing these talks again at CocoaConf in Raleigh, NC on Dec. 1-2, with a few fix-ups and some polishing. The face-finder has a stupid bug where it creates a new CIDetector on each callback from the camera, which is grievously wasteful. For the Core Audio AUGraph, I realized from the AU property docs that the mixer has pre-/post- peak/average meters, so it looks like it would be easy to add level meters to the UI. So those versions of the talks will be a little more polished. Hey, it was a tough crunch getting enough time away from client work to get the sample code done at all.

Speaking of preparation, the other thing notable about these talks is that I was able to do the slides for both talks entirely on the iPad, while on the road, using Keynote, OmniGraffle, and Textastic. Consumption-only device, my ass.

Slides and code

Messin’ with MIDI

I hopped in on the MIDI chapter of the nearly-finished Core Audio book because what we’ve got now is a little obscure, and really needs to address the most obvious questions, like “how do I hook up my MIDI hardware and work with it in code?” I haven’t taken MIDI really seriously in the past, so this was a good chance to catch up.

To keep our focus on iOS for this blog, let’s talk about MIDI support in iOS. iOS 4.2 added CoreMIDI, which is responsible for connecting to MIDI devices via physical cables (through the dock connector) or wifi (on OSX… don’t know if it works on iOS).

Actually getting the connection to work can be touchy. Start with the Camera Connection Kit’s USB connector. While Apple reps are typically quick to tell you that this is not a general-purpose USB adapter, it’s well-known to support USB-to-MIDI adapters, something officially blessed (with slides!) in Session 411 (“Music in iOS and Lion”) at WWDC 2011.

The catch is that the iPad supplies a tiny amount of power out the dock connector, not necessarily enough to power a given adapter. iOS MIDI keeps an updated list of known-good and known-bad adapters. Price is not a good guide here: a $60 cable from Best Buy didn’t work for me, but the $5 HDE cable works like a charm. The key really is power draw: powered USB devices shouldn’t need to draw from the iPad and will tend to work, while stand-alone cables will work if and only if they eschew pretty lights and other fancy power-draws. The other factor to consider is drivers: iOS doesn’t have them, so compatible devices need to be “USB MIDI Class”, meaning they need to follow the USB spec for driver-less MIDI devices. Again, the iOS MIDI Devices List linked above is going to help you out.

For keys, I used the Rock Band 3 keyboard, half off at Best Buy as they clear out their music game inventory (man, I need to get Wii drums cheap before they become collector’s items). This is only an input device, not an actual synthesizer, so it has only one MIDI port.

Once you’ve got device, cable, and camera connection kit, try playing your keys in GarageBand to make sure everything works.

If things are cool, let’s turn our attention to the Core MIDI API. There’s not a ton of sample code for it, but if you’ve installed Xcode enough times, you likely have Examples/CoreAudio/MIDI/SampleTools/Echo.cpp, which has a simple example of discovering connected MIDI devices. That’s where I started for my example (zip at the bottom of this blog).

You set up a MIDI session with MIDIClientCreate(), and create an input port for receiving MIDI data with MIDIInputPortCreate(). Both of these take callback functions that you set up with a function pointer and a user-info / context pointer that is passed back to your function in the callbacks. You can, of course, provide an Obj-C object for this, though those of you in NDA-land working with iOS 5 and ARC will have extra work to do (the term __bridge void* should not be unfamiliar to you at this point). The first callback lets you know when devices connect, disconnect, or change, while the second delivers the MIDI packets themselves.
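
In code, that setup is only a couple of calls. Here’s a minimal sketch with error handling omitted; MyMIDINotifyProc is a placeholder for your notification callback, MyMIDIReadProc is the read callback shown a little further down, and I’m just passing NULL for the refCon rather than a bridged Obj-C object:

#include <CoreMIDI/CoreMIDI.h>

static void MyMIDINotifyProc(const MIDINotification *message, void *refCon);
static void MyMIDIReadProc(const MIDIPacketList *pktlist, void *refCon, void *connRefCon);

static MIDIClientRef midiClient;
static MIDIPortRef   inputPort;

static void SetUpMIDI(void) {
    // create the client; MyMIDINotifyProc gets called on device add/remove/change
    MIDIClientCreate(CFSTR("MyMIDIApp"), MyMIDINotifyProc, NULL, &midiClient);
    // create an input port; MyMIDIReadProc receives the incoming packet lists
    MIDIInputPortCreate(midiClient, CFSTR("Input port"), MyMIDIReadProc, NULL, &inputPort);
}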

You can then discover the number of MIDI sources with MIDIGetNumberOfSources(), get them as MIDIEndpointRefs with MIDIGetSource(), and connect to them with MIDIPortConnectSource(). This connects your input port (from the previous graf) to the MIDI endpoint, meaning the callback function specified for the input port will get called with packets from the device.
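
A sketch of that discovery loop, continuing the assumptions from the previous snippet (inputPort came from MIDIInputPortCreate(), and I’m not bothering with a per-source connRefCon):

static void ConnectAllSources(void) {
    ItemCount sourceCount = MIDIGetNumberOfSources();
    for (ItemCount i = 0; i < sourceCount; i++) {
        MIDIEndpointRef source = MIDIGetSource(i);
        // the last argument becomes the per-source connRefCon in the read proc
        MIDIPortConnectSource(inputPort, source, NULL);
    }
}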

MIDIPackets are tiny things. The struct only includes a time-stamp, length, and byte array of data. The semantics fall outside of CoreMIDI’s responsibilities; they’re summarized in the MIDI Messages spec. For basic channel voice messages, data is 2 or 3 bytes long. The first byte, “status”, has a high nybble with the command, and a low nybble indicating which MIDI channel (0-15) sent the event. The remaining bytes depend on the status and the length. For my example, I’m interested in the NOTE-ON message (status 0x9n, where n is the channel). For this message, the next two bytes are called “data 1” and “data 2” and represent the rest of the message. The bottom 7 bits of data 1 identify the note as a number (the high bit is always 0), while the bottom 7 bits of data 2 represent velocity, i.e., how hard the key was hit.

So, a suitable callback that only cares about NOTE-ON might look like this:


static void MyMIDIReadProc (const MIDIPacketList *pktlist,
                           void *refCon,
                           void *connRefCon)
{
   // for brevity, only look at the first packet in the list
   MIDIPacket *packet = (MIDIPacket *)pktlist->packet;
   Byte midiCommand = packet->data[0] >> 4;
   // is it a note-on?
   if (midiCommand == 0x09) {
      Byte note = packet->data[1] & 0x7F;
      Byte velocity = packet->data[2] & 0x7F;
      // do stuff now...
   }
}

So what do we do with the data we parse from MIDI packets? There’s nothing in Core MIDI that actually generates sounds. On OSX, we can use instrument units (kAudioUnitType_MusicDevice), which are audio units that generate synthesized sounds in response to MIDI commands. You put the units in an AUGraph and customize them as you see fit (maybe pairing them with effect units downstream), then send commands to the instrument units via the Music Device API, which provides functions like MusicDeviceMIDIEvent(); that call takes the unit, plus the status, data 1, and data 2 bytes from the MIDI packet, along with a timing parameter. Music Device isn’t actually in Xcode’s documentation, but there are adequate documentation comments in MusicDevice.h. On OSX, the PlaySoftMIDI example shows how to play notes in code, so it’d be straightforward to combine this with CoreMIDI and play through from MIDI device to MIDI instrument: get the NOTE-ON events and send them to the instrument unit of your choice.
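
As a hedged sketch of that pass-through (this is not the PlaySoftMIDI code itself), the forwarding step from the read proc might look like this, assuming instrumentUnit is an instrument AudioUnit you pulled out of your graph with AUGraphNodeInfo(), and using the note and velocity values parsed above:

// forward a parsed note-on to an instrument unit (OSX-only for now)
MusicDeviceMIDIEvent(instrumentUnit,
                     0x90,       // status: note-on, channel 0 (assumed)
                     note,       // data 1: note number
                     velocity,   // data 2: velocity
                     0);         // sample-frame offset: play immediately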

On iOS, we don’t currently have instrument units, so we need to do something else with the incoming MIDI events. What I decided to do for my example was to just play System Sounds with various iLife sound effects (which should be at the same location on everyone’s Macs, so the paths in the project are absolute). The example uses four of these, starting at middle C (MIDI note 60) and going up by half-steps.
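
The mapping itself is trivial. A minimal sketch, assuming a soundIDs[] array was filled in at launch with AudioServicesCreateSystemSoundID() for each of the four effect files (the array name and layout are mine, not necessarily what the sample project does):

#include <AudioToolbox/AudioToolbox.h>

static SystemSoundID soundIDs[4];   // one per sound effect, loaded at startup

static void PlaySoundForNote(Byte note) {
    // middle C (MIDI note 60) plays soundIDs[0]; each half-step up
    // moves to the next sound
    int index = note - 60;
    if (index >= 0 && index < 4) {
        AudioServicesPlaySystemSound(soundIDs[index]);
    }
}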

To run the example, you’ll actually have to run it twice: first to put the app on your iPad, then stop, plug in your keyboard, and run again. It might just be easier to watch this demo:

http://www.youtube.com/watch?v=gB8vfayRQP8

Anyways, that’s a brief intro to CoreMIDI on iOS. The book will probably skew a little more toward OSX, simply because there’s more stuff to play with, but we’ll make sure both are covered. I’m also going to be getting into this stuff at CocoaHeads Ann Arbor on Thursday night.