Archives for: June 2009

An iPhone OpenAL brain dump

I’ve done something like this before, when I completed parts 1 and 2 of the audio series. I just sent off the first draft of part 3, and I’ve got OpenAL on the brain.

  • Docs on the OpenAL site. Go get the programmer’s guide and spec (both links are PDF).

  • Basics: create a device with alcOpenDevice(NULL); (the iPhone has only one AL device, so you don’t bother providing a device name), then create a context with alcCreateContext(alDevice, 0);, and make it current with alcMakeContextCurrent(alContext);.
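
A minimal sketch of that startup sequence, pulled together. The ALC types and functions below are stubs so the snippet is self-contained and compiles anywhere; in a real iPhone project you’d #include <OpenAL/al.h> and <OpenAL/alc.h> and link OpenAL.framework instead.

```c
#include <stddef.h>

/* Stub ALC layer standing in for <OpenAL/alc.h>, just so this sketch is
   self-contained; the call order below is the part that matters. */
typedef struct ALCdevice  ALCdevice;
typedef struct ALCcontext ALCcontext;
struct ALCdevice  { int unused; };
struct ALCcontext { int unused; };

static ALCdevice  stubDevice;
static ALCcontext stubContext;

static ALCdevice  *alcOpenDevice(const char *name)
    { (void)name; return &stubDevice; }
static ALCcontext *alcCreateContext(ALCdevice *dev, const int *attrs)
    { (void)dev; (void)attrs; return &stubContext; }
static int alcMakeContextCurrent(ALCcontext *ctx)
    { return ctx != NULL; }

/* Device -> context -> make current, per the bullet above. */
int setUpOpenAL(void) {
    ALCdevice *alDevice = alcOpenDevice(NULL);  /* NULL: the one iPhone device */
    if (alDevice == NULL) return -1;
    ALCcontext *alContext = alcCreateContext(alDevice, 0);
    if (alContext == NULL) return -1;
    if (!alcMakeContextCurrent(alContext)) return -1;
    return 0;
}
```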

  • Creating a context implicitly creates a “listener”. You create “sources” and “buffers” yourself. Sources are the things your listener hears; buffers provide data to zero or more sources.

  • Nearly all AL calls set an error flag, which you collect (and clear) with alGetError(). Do so. I just used a convenience method to collect the error, compare it to AL_NO_ERROR and throw an NSException if not equal.
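
That convenience check might look like this in plain C. alGetError() and AL_NO_ERROR are stubbed here so the snippet runs anywhere (AL_NO_ERROR really is 0, and the real alGetError() clears the flag the same way); my actual method threw an NSException rather than exiting.

```c
#include <stdio.h>
#include <stdlib.h>

/* Stubs standing in for <OpenAL/al.h>. */
#define AL_NO_ERROR 0
static int pendingALError = AL_NO_ERROR;  /* stand-in for AL's error flag */
static int alGetError(void) {
    int err = pendingALError;
    pendingALError = AL_NO_ERROR;         /* reading the flag clears it */
    return err;
}

/* Collect the error after an AL call and bail loudly if there was one. */
static void checkALError(const char *operation) {
    int err = alGetError();
    if (err != AL_NO_ERROR) {
        fprintf(stderr, "OpenAL error 0x%x after %s\n", err, operation);
        exit(EXIT_FAILURE);
    }
}
```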

  • That sample AL code you found to load a file and play it with AL? Does it use loadWAVFile() or alutLoadWAVFile()? Too bad; those functions are deprecated, and ALUT doesn’t even exist on the iPhone. If you’re loading data from a file, use Audio File Services to load the data into memory (an NSMutableData / CFMutableDataRef might be a good way to do it). You’ll also want to get the kAudioFilePropertyDataFormat property from the audio file, to help you provide the audio format to OpenAL.

  • Generate buffers and sources with alGenBuffers() and alGenSources(), which are generally happier if you send them an array to populate with the IDs of the created buffers/sources.

  • Most of the interesting stuff you do with sources, buffers, and the listener is done by setting properties. The programmer’s guide has cursory lists of valid properties for each. The getter/setter methods have a consistent naming scheme:

    1. al
    2. Get for getters, nothing for setters. Yes, comically, this is the opposite of Cocoa’s getter/setter naming convention.
    3. Buffer, Source, or Listener: the kind of AL object you’re working with
    4. 3 for setters that set 3 values (typically an X/Y/Z position, velocity, etc.), nothing for single-value or vector calls
    5. i for int (technically ALint) properties, f for float (ALfloat) properties
    6. v (“vector”) if getting/setting multiple values by passing a pointer, nothing if getting/setting only one value. Never have both 3 and v.

    Examples: alSourcei() to set a single int property, alSource3i() to set three ints, alGetListenerfv() to get an array of floats (as an ALfloat *).

  • Most simple examples attach a single buffer to a source, by setting the AL_BUFFER property on a source, with the buffer id as the value. This is fine for the simple stuff. But you might outgrow it.

  • 3D sounds must be mono. Place them within the context by setting the AL_POSITION property. Units are arbitrary – they could be millimeters, miles, or something in between. What matters is the source property AL_REFERENCE_DISTANCE, which defines the distance that a sound travels before its volume diminishes by one half. Obviously, for games, you’ll also care about sources’ AL_VELOCITY, AL_DIRECTION, and possibly some of the more esoteric properties, like the sound “cone”.
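
Following the naming scheme above, placement comes out as an alSource3f() call and the reference distance as an alSourcef() call. The setters are stubbed below (recording what they were passed) so the snippet runs anywhere; the enum values match the real <OpenAL/al.h>.

```c
/* Stubs that record the last values set, standing in for the real setters. */
#define AL_POSITION           0x1004   /* matches the real AL enum */
#define AL_REFERENCE_DISTANCE 0x1020   /* matches the real AL enum */

static float lastPos[3];
static float lastRefDistance;

static void alSource3f(unsigned src, int param, float x, float y, float z) {
    (void)src;
    if (param == AL_POSITION) { lastPos[0] = x; lastPos[1] = y; lastPos[2] = z; }
}
static void alSourcef(unsigned src, int param, float value) {
    (void)src;
    if (param == AL_REFERENCE_DISTANCE) lastRefDistance = value;
}

/* Put a mono source two units to the listener's right, and give the
   attenuation model a reference distance of five units to work from. */
static void placeSource(unsigned source) {
    alSource3f(source, AL_POSITION, 2.0f, 0.0f, 0.0f);
    alSourcef(source, AL_REFERENCE_DISTANCE, 5.0f);
}
```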

  • Typical AL code puts samples into a buffer with alBufferData(). This copies the data over to AL, so you can free your data pointer once the call returns. That’s no big deal for simple examples that only ever load one buffer of data, but if you stream (like I did), it’s a lot of unnecessary and expensive memcpy()ing. Eliminate it with Apple’s alBufferDataStatic extension, which skips the copy and makes AL read directly from your pointer. Apple talks up this approach a lot, but it’s not obvious how to compile it into your code: they gave me the answer on the coreaudio-api list.

  • To make an AL source play arbitrary data forever (e.g., a radio in a virtual world that plays a net radio station), you use a streaming approach. You queue up multiple buffers on a source with alSourceQueueBuffers(); then, after the source is started, you repeatedly check the source’s AL_BUFFERS_PROCESSED property to see whether any buffers have been completely played through. If so, retrieve them with alSourceUnqueueBuffers(), which receives a pointer to fill with the IDs of one or more used buffers. Refill them with new data (doing this repeatedly is where alBufferDataStatic is going to be your big win) and queue them on the source again with alSourceQueueBuffers().

  • On the other hand, all you get back when you dequeue is an ID of the used buffer: you might need to provide yourself with some maps, structures, ivars, or other data to tell you how to refill that (what source you were using it on, what static buffer you were using for that AL buffer, etc.)

  • This isn’t a pull model like Audio Queues or Audio Units. You have to poll for processed buffers. I used an NSTimer. You can use something more difficult if you like.
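
One tick of that poll, pulled together from the last few bullets. The AL calls are mocked here (a simple queue plus a processed count) so the control flow runs anywhere; on the device they’d be the real alGetSourcei(), alSourceUnqueueBuffers(), and alSourceQueueBuffers(), and refillBuffer() is where your alBufferData/alBufferDataStatic call and per-buffer bookkeeping would live.

```c
#include <string.h>

#define AL_BUFFERS_PROCESSED 0x1016    /* matches the real AL enum */
#define MAX_QUEUE 8

/* --- mock AL streaming layer: a queue and a processed count --- */
static unsigned queue[MAX_QUEUE];
static int queueLen = 0, processedLen = 0;   /* test hooks, not part of AL */

static void alSourceQueueBuffers(unsigned src, int n, const unsigned *ids) {
    (void)src;
    for (int i = 0; i < n; i++) queue[queueLen++] = ids[i];
}
static void alGetSourcei(unsigned src, int param, int *value) {
    (void)src;
    if (param == AL_BUFFERS_PROCESSED) *value = processedLen;
}
static void alSourceUnqueueBuffers(unsigned src, int n, unsigned *ids) {
    (void)src;
    for (int i = 0; i < n; i++) ids[i] = queue[i];     /* oldest first */
    memmove(queue, queue + n, (size_t)(queueLen - n) * sizeof(unsigned));
    queueLen -= n;
    processedLen -= n;
}

/* --- your side of the protocol --- */
static int refillCount = 0;
static void refillBuffer(unsigned bufferID) {
    /* look up this buffer's backing storage, fetch fresh samples, and
       hand them to AL with alBufferData or alBufferDataStatic */
    (void)bufferID;
    refillCount++;
}

/* One poll (e.g., one NSTimer firing): dequeue finished buffers,
   refill them, and queue them on the source again. */
static void pollSource(unsigned source) {
    int processed = 0;
    alGetSourcei(source, AL_BUFFERS_PROCESSED, &processed);
    while (processed-- > 0) {
        unsigned bufferID;
        alSourceUnqueueBuffers(source, 1, &bufferID);
        refillBuffer(bufferID);
        alSourceQueueBuffers(source, 1, &bufferID);
    }
}
```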

  • Play/pause/stop with alSourcePlay(), alSourcePause(), and alSourceStop(). To make multiple sources play, pause, or stop in guaranteed sync, use the v versions of these functions (alSourcePlayv(), etc.), which take an array of source IDs.

  • You’re still an iPhone audio app, so you still have to use the Audio Session API to set a category and register an interruption handler. If you get interrupted, set the current context to NULL with alcMakeContextCurrent(NULL); then, if the interruption ends (e.g., the user declines an incoming call), call alcMakeContextCurrent() again with your context. This only works on iPhone OS 3.0; in 2.x, it’s a bag of hurt: you have to tear down and rebuild everything on an interruption.

That’s about all I’ve got for now. Hope you enjoy the article when it comes out. I’ve had fun pushing past the audio basics and into the hard parts.

iPhone 3GS vs. the World

First iPhone 3GS nit: refuses to charge when connected to the USB 2.0 port of the Bella USA Final Cut Keyboard:

Can’t wait to see if it balks at connecting to the “Built for iPod / Works with iPhone” car radio I bought four months ago.

On the other side of the crunch

The WWDC keynote announcement that iPhone OS 3.0 would be released in a little over a week caught us a bit by surprise: the next edition of our iPhone SDK Programming book was nearly ready to go, but we’d waited until WWDC to resolve some blockers. Now we had a week to get the new version ready for the public release of 3.0 and the end of the NDA for that version.

The biggest blocker for me had to do with the Bluetooth peer-to-peer features in the new Game Kit framework. The problem is with device support: first-gen iPod touches don’t have Bluetooth, and the first-gen iPhone (which I have) has an older Bluetooth chipset that Game Kit doesn’t support. So back in April, I bought a second-gen iPod touch, largely for writing this chapter.

Unfortunately, it turns out that the iPhone Simulator doesn’t support Game Kit’s Bluetooth networking, even on Macs with Bluetooth. So, to develop and test a P2P game, you need two recent iPhone OS devices.

I could have waited until Friday, when I’ll be buying an iPhone 3GS (which surely will have Game Kit-capable Bluetooth), but to get the chapter out for the new version of the book, I wrote blind code on Tuesday and Wednesday, and spent Thursday morning in the iPhone Lab with Apple’s test devices, and the Game Kit engineers handy to answer my questions.

After a couple hours, I had P2PTapWar running on the two devices. This is an asinine little game that lets two players find out who can tap their screen the fastest.


I’m glad we got this chapter into the book, the latest beta of which is available today. It went well enough, in fact, that a section of the Game Kit chapter is one of the new free excerpts available on the book’s page.

Now to finish up our remaining issues with this book and get it to the printer.

Mac OS X 10.6 Intel-Only: Told Ya

I’ve brought this up before, but now that it’s official, I just want this noted for the record:

In October 2005, I said that it might make sense to wait for the then-announced Intel transition, rather than buy PowerPC hardware. Check out the comments — the Mac zealots were seriously pissed at me for suggesting that buying into PowerPC was a dead-end. It was called “ridiculous advice”, “dumb advice”, and “flat wrong”.

In part, my blog was based on my estimate of when Apple would ditch PPC support:

So, I suspect that Leopard is the end of the line for PowerPC, and that 10.6 will be Intel-only. That means you are buying into a four-year dead-end on PowerPC.

This was based on a rough calculation that 10.6 would come out in 2010. Yet we know from yesterday’s keynote that it will actually arrive in September, 2009. And we also know from yesterday’s keynote, confirmed on the Snow Leopard tech specs page, that it is Intel-only.

Yeah, you’re welcome.

WWDC 2009 prep

I’m leaving for WWDC on Sunday afternoon and getting in Sunday night, which means I pick up my pass Monday morning, can’t be in line early enough to get into the main room, and will instead be in the overflow room. I don’t actually mind; my time is worth something, and waiting more than six hours in line at Moscone West is just not worth it. For me, the appeal of WWDC is the knowledge, not the spectacle.

I’m not in the mood for predictions (I didn’t do well last year, predicting the deprecation of Carbon), though there are little things that are obvious: we’ll get major new beta builds of both iPhone OS 3.0 and Snow Leopard, hopefully at least one of which will be provided on DVD so we don’t paralyze the wifi with a thousand simultaneous downloads. Last year, working through an iPhone OS build that had botched app-signing, I spent four hours Tuesday morning with a group of developers trying to download a fixed build; when I was the first to get it, I burned a DVD and shared it via Bonjour so the rest of the group could get it.

I don’t think there will be an iPhone OS-based tablet, for two reasons. First: tablets, thus far, have sucked, and the case for them hasn’t been made. Second: a tablet would have different dimensions than an iPhone, and for third-party apps to run on it, I would expect the iPhone SDK to be pushing us toward resolution independence. Instead, the opposite seems to be happening: we were told two years ago to start getting Leopard apps ready for resolution independence, and that at some point Apple would “flip the switch” and make Leopard resolution-independent in the field. I don’t think that ever happened, and the iPhone SDK continues to dictate explicit pixel sizes for things like icons and badges, just the opposite of the guidance I’d expect if we were being set up for running under varying screen dimensions.

Similarly, the SDK makes an argument against the rumor of video recording with editing. I originally thought the idea of QTKit as an API was a migration strategy on the Mac to get developers off the legacy QuickTime code (a 20-year-old code base!) and onto something modern: while QTKit largely calls into the old QuickTime code, if the underlying implementation changed to a new code base, developers would be none the wiser, and you could retire the old code. (This might be what QuickTime X is doing, but Apple continues to call it “the streamlined path for efficient playback of modern standards-based media”, and playback represents only a tiny fraction of QuickTime/QTKit functionality.)

Now, what if QTKit weren’t just hiding a migration of underlying code, but a migration of platforms? If you were going to support video editing on the iPhone, it would make sense to at least try to reuse the work that’s already been done on the desktop (as they’ve done with Core Audio, for example). But arguing against this hypothesis is the existence of the iPhone’s AV Foundation framework, a high-level framework for the simplest of audio tasks, like playing a file. It doesn’t exist on the desktop, because QTKit makes it unnecessary: to play an MP3, just open it as a QTMovie and call play. So the fact that there is still an iPhone-only AV Foundation framework makes me think that QTKit is not coming to the iPhone anytime soon, which in turn argues against video editing.

Of course, I’ve been wrong before. Like, last year.

As for WWDC wishes? I’d actually like to see AppleTV get some love. With the explosion of Flash-based video streaming sites, AppleTV is starting to seem irrelevant when it can’t play video from Hulu, Crunchyroll, or any of the network sites. For AppleTV to be relevant again, it would presumably have to either get Safari and Flash into the box (and thereby cave on the consistent user experience), or Apple would have to make deals with the major content providers to make their stuff available to AppleTV (similar to the way the iPhone offers a non-Flash YouTube). But how many providers would they have to work with? And would this cut too deeply into iTunes if the same box offers you a streaming show with commercials that you’d otherwise have to pay for? (Counter-argument: it doesn’t seem to hurt iTunes on the desktop.) At some point, I’d like to move to something like an AppleTV, but the streaming anime I want to watch is on a bunch of websites it doesn’t support; it’s more likely that I’d use the Mac Mini as a super-AppleTV, even though its output options don’t agree with the analog-only inputs on my HDTV.

A small thing I’m hoping for is a genuine D-pad accessory for the iPhone. Touch games are great, but we can see from the sales of titles like Ms. Pac-Man and Galaga that old-school joystick games are still desirable. The touch screen is awful for these, because you don’t have a tactile sense of whether your thumb is on the correct button. I’m hoping for a control that wraps around the iPhone, putting a D-pad on one side and a set of buttons on the other, making it a de facto PSP (hopefully, software would support flipping for lefties, like the old Atari Lynx did). The key would be for the hardware manufacturer to set a standard so developers can use the External Accessory API to determine whether the D-pad is present and poll the button status. If we had 10 different D-pad accessories that all worked differently, there’d be no software support. We really need one strong hardware maker (Pelican, Belkin, whoever) to set a standard and open it up to the developer community. I’m not counting on this happening, but there’s no way we’re getting Street Fighter or Soul Calibur on the iPhone until it does.

One more wish: I hope putting out the session videos doesn’t take four months again. That’s too long to wait for needed information from sessions you missed due to schedule conflicts. Hell, would it be so wrong to just get the slides out right away?

U, G, L, Y, you ain’t got no alibi

This may be the worst print ad I’ve seen in years:

WTF? All fear Denny Crane, flaming head of the apocalypse?

I showed it to my wife, who assumed, since it was in Play, that it was an ad for a video game. Actually, it’s for a comic book; if you can’t even tell what an ad is for on close examination, that’s a pretty obvious fail.

Happy ‘Cause I’m Staying Home

Any of the last five years, I’d be out in San Francisco at this point for JavaOne. It came with the job of editing and ONJava, but truth be told, I loathed the conference.

When I left Java, I figured I would likely use this week to rail against JavaOne’s indulgences and excesses: the idea that there are Sun developers who work for months on demos for the conference (anything that takes that long should be a shipping product), the over-emphasis on one big-splash conference that gets a little tech press instead of outreach efforts like Sun Tech Days that get out to developers in the field, and of course, the vaporous announcements. The whiny little bitch contingent of developer-dom won’t let go of Steve Jobs’ vow to make the Mac the best platform for Java (this was before Java off the server fell into utter irrelevance), but overlooks all the other JavaOne keynote highlights that either never came out (Java for the PS2, Java for the Infinium Phantom console, Visual Basic as a JVM language) or quietly faded into the Where Are They Now folder (the Looking Glass desktop, the “Wonderland” knock-off of Second Life, etc.). The other night, I wondered whatever happened to the Neil Young Blu-Ray project that was hyped at last year’s JavaOne; it turns out it arrives tomorrow, coincident with the JavaOne keynote (though it might look like it was held for the keynote, it appears to have been delayed by a rushed CD about the financial crisis released earlier this year).

If asked last year what I thought would most help Java, I would have said that ending JavaOne (and throwing the resources into Sun Tech Days instead) would be a great start.

Thing is, with Oracle’s purchase of Sun, that kind of argument has become moot. A lot of people are openly asking if this is the last JavaOne, not because of any revelations about the uselessness and self-indulgence of the conference, but because some expect Oracle to ruthlessly dismantle and assimilate Sun, as it has with previous acquisitions (is there any trace of BEA left in the world?).

Still, there’s much to learn from a conference too big and too unfocused. I once debated JavaOne with the Java Posse’s Dick Wall, saying that the keynotes were a train wreck, and the tech sessions either corporate or just tedious. He countered that the best part of the conference was not the formal content, but the assembly of people and the hallway conversations. I thought that was an extraordinary concession. Today, I’ll go a step farther and say that if hallway conversations are the best part of your conference, then your conference sucks. In fact, I’ll make that the third of my laws.

Does this argue against the all-talk format of unconferences and bar camps? No, because there the conversations are the conference, though I find that format is better suited to sharing opinions than technical knowledge (which may be why unconferences are so well-suited to Java and all its politics and drama). I tweeted this latter opinion to Kathy Sierra, who acked back with a suggestion that “I think it might be time now for less *camp & more *jam… (people get together to create/do.” Daniel Steinberg made the same point, pining for a return to the get-together-and-code format of the late, great MacHack.

I agree with Kathy and Daniel, but I’ll note that it seems to work only if the coding is the point of the conference. Year after year, attempts were made to add on-site coding contests to JavaOne (robotics in our booth, slot cars over in real-time Java), and nobody took the bait. It works as a one-day precursor to a conference, or as the conference itself, but shoehorning code jams into traditional conferences doesn’t seem to work.

So, I’m not going to miss JavaOne if this is indeed the end, but I hope that something better replaces it. Just trying to fold it into a larger omnibus Oracle event isn’t going to do anything for anybody… unless the community goes the DIY route and launches its own conferences, with different formats, smaller focuses, and locations out where the developers are, rather than summoning everyone to the gloomy catacombs of Moscone North and South. Guess we can hope.