
Archives for: idevblogaday

Apple TV prediction party

WWDC is just two weeks away, and this is my last iDevBlogADay spot before then (actually, yesterday was my slot, but I got thrown off by the holiday). Everyone else is going to be chiming in with predictions until then — put me down for something specific, like “Core Audio changes the canonical data types to floating-point, since ARMv7 is perfectly capable of doing float” — and I don’t have much to say that I haven’t said before.

A lot of the talk is about the possibility of an Apple TV set, or an Apple TV SDK. I talked a bunch about this in my Anime Central-inspired Mac/iOS media post, but to summarize…

AirPlay is Apple’s secret weapon, to a degree that has not fully been appreciated by many. I mean that literally; Time Warner Cable’s CEO admitted to Engadget that he doesn’t know what it is. But by turning every iPhone/iPad/iPod-touch into a de facto cable box, powered by hundreds of video apps, there’s a huge potential for disrupting the existing industry. At this point, Crunchyroll is surely my favorite iOS app of all, given that it has effectively become my very own personal anime TV channel (four words, folks: Puella Magi Madoka Magica). Multiply this by a hundred niches and content providers (including 3 of the 4 big team sports in the US) and you’ve got a tsunami.

The trick is that Apple didn’t scare the incumbents with a frontal attack — they’ve let content providers slowly build up the streaming content collection. All that’s needed now is to remove the AirPlay link and run directly on the box via an Apple TV SDK. And if you’ve ever plugged an Apple TV into Xcode via its micro-USB port (to test betas, as I did last year while working on AirPlay support for a client’s app), you know that Xcode recognizes the Apple TV as an iOS device and even offers a (non-functional) “enable for development” button. This is something that they could enable at a time and place of their choosing, and maybe that’ll be in two weeks.

That said, developers might not have an accurate view of what Apple TV development would be. Someone at A2-CocoaHeads said he wanted an Apple TV SDK so that he could write a game where an iPhone or iPod touch served as the game controller for a TV-based game. Of course, this is possible now: the Apple TV shows up in [UIScreen screens], so you run the game logic on the handheld device and just draw graphics to the second screen.
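To make the point concrete, here’s a rough Objective-C sketch of that second-screen pattern, assuming the user has AirPlay mirroring turned on (or a cable adapter attached); the GameRenderViewController name is made up for illustration:

```objc
// Sketch: render the game on an external screen while the handheld
// stays the controller. With AirPlay mirroring active, [UIScreen
// screens] has more than one element.
- (void)setUpExternalScreenIfPresent {
    if ([[UIScreen screens] count] < 2) return;
    UIScreen *external = [[UIScreen screens] objectAtIndex:1];
    external.currentMode = external.preferredMode;
    self.tvWindow = [[UIWindow alloc] initWithFrame:external.bounds];
    self.tvWindow.screen = external;
    // GameRenderViewController is hypothetical: your TV-facing render view
    self.tvWindow.rootViewController =
        [[GameRenderViewController alloc] init];
    self.tvWindow.hidden = NO;
}
```

You’d also want to observe UIScreenDidConnectNotification and UIScreenDidDisconnectNotification, since the external screen can come and go while the app is running.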

And who knows what kinds of apps will be welcome or permitted? It would be uncharacteristic of Apple to require or even tolerate substantial keyboard-based entry on Apple TV apps — Google TV shipped a keyboard-based remote control, and how did that work out? Someone’s going to say we need streaming video apps with integrated chat, but if you really have to do that, again, you could do that today by running the app and hosting the chat interface on the iPad and streaming the video to the Apple TV.

If there is an Apple TV SDK, it should neither surprise nor disappoint anyone if the only apps that Apple accepts for it are streaming media. It’s called focus, people, something that distinguishes Apple.

Directions and Disintermediation

Apologies for a double-post to iDevBlogADay, but I didn’t want to hit the wider audience with two straight anime/manga-related iOS blogs, and I was way behind on entries for the first few months of the year anyways.

With WWDC and presumably iOS 6 approaching, John Gruber looks for obvious gaps in iOS to fill, and in Low-Hanging Fruit, he doesn’t report many. Following on reports that Apple will switch from Google to an in-house map provider, he writes that while controlling such an essential functionality is crucial, the switch has to be flawless:

This is a high-pressure switch for Apple. Regressions will not be acceptable. The purported whiz-bang 3D view stuff might be great, but users are going to have pitchforks and torches in hand if practical stuff like driving and walking directions are less accurate than they were with Google’s data. Keep in mind too, that Android phones ship with turn-by-turn navigation.

This was an interesting graf to me, because Gruber usually gets the details right, and this time he’s pretty far off.

Start with “users are going to have pitchforks and torches in hand if practical stuff like driving and walking directions are less accurate than they were with Google’s data.” That’s a fundamental misunderstanding of what the iOS SDK provides. Neither Map Kit nor any other part of the SDK provides directions. Map Kit only provides map tile images, and a “reverse geocoder” that may be able to correlate a given location to a street address or at least a political region (city, state/province, country). There’s nothing in Map Kit that knows that a yellow line is a road, a blue polygon is a lake, a dashed line is a state or country border, etc. More details are in my write-up of writing Road Tip and my Bringing Your Own Maps talk from 2010, but the long and short of it is that any app that provides directions has to be getting its data from some source other than Map Kit, probably a web service hosted by Google, MapQuest, Bing, etc.
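To be clear about how little that is, here’s roughly what the SDK’s reverse geocoding looks like with CLGeocoder (the iOS 5 replacement for the deprecated MKReverseGeocoder): an address for a coordinate, and nothing about routes. The coordinates here are approximate, for illustration:

```objc
// Sketch: reverse geocoding is the extent of the SDK's "knowledge"
// of map data — a placemark for a location, not directions.
CLGeocoder *geocoder = [[CLGeocoder alloc] init];
CLLocation *location = [[CLLocation alloc] initWithLatitude:42.2459
                                                  longitude:-84.4013];
[geocoder reverseGeocodeLocation:location
               completionHandler:^(NSArray *placemarks, NSError *error) {
    if ([placemarks count] > 0) {
        CLPlacemark *mark = [placemarks objectAtIndex:0];
        NSLog(@"%@, %@ %@", mark.locality,
              mark.administrativeArea, mark.postalCode);
    }
}];
```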

I also did a little research to see what the Android API for this stuff looks like, assuming it was easier, and was quite surprised to see that it’s not. A DrivingDirections class was apparently removed after Android 1.0, presumably due to the fact that implementing it required third-party data (TeleAtlas, NAVTEQ) that Google wasn’t in a position to redistribute for free. The suggested workarounds I’ve seen are all to either license a third-party directions API (embedded or webservice), or to make a specific request to maps.google.com and scrape the result as JSON… in other words, to use Google as the directions webservice and not worry about the terms of service. I only spent about 15 minutes looking, but it didn’t appear to me that getting directions is any easier for Android developers than it is on iOS. So while Gruber notes that “Android phones ship with turn-by-turn navigation”, that seems to be a user feature, not a developer feature.

So, on the one hand, third-party apps probably won’t change right away, because they haven’t counted on iOS for their directions anyways. But maybe that’s an opportunity: if Apple controls its own data, maybe it could offer an easy-to-use directions API, consisting of easy Objective-C calls rather than JSON or XML parsing like webservice clients have to do now.
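Purely as a thought experiment, and with entirely made-up names (nothing like this exists in the SDK today), such an API might look like:

```objc
// Hypothetical — every identifier here is invented for illustration.
// The point is a native Objective-C call instead of hand-parsing
// JSON or XML from a third-party webservice.
@interface XXDirectionsRequest : NSObject
@property (nonatomic, assign) CLLocationCoordinate2D source;
@property (nonatomic, assign) CLLocationCoordinate2D destination;
// Hand back an ordered array of turn-by-turn step strings
- (void)calculateDirectionsWithCompletionHandler:
    (void (^)(NSArray *steps, NSError *error))handler;
@end
```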

There might be a licensing snag too: using Map Kit today means that developers are implicitly accepting the Google Maps terms of service. If iOS 6 switches providers, the terms presumably change as well, and I wonder how that would be handled. Would creating an MKMapView magically default to the new Apple maps in iOS 6 (and thereby inherit new Apple terms of service), or would developers maybe have to resubmit or add a credential to their apps to get the new maps?

And taking maps in-house has other interesting side-effects. The other day when I was out with my kids, I searched for “Arby’s” in Jackson, MI and the result appeared on the map as a “Sponsored Link” (I can’t re-create it here at home… maybe it’s only an ad when you’re physically nearby?). The money for that ad presumably goes to Google, a revenue stream that will start to dry up if Apple provides its own business location data when searching maps. We’ve also heard that Siri could hurt Google search if people start performing speech-based searches rather than keying terms into google.com via the mobile browser.

And isn’t this exactly what Google was afraid of, and why they felt the need to create Android? With iOS, Apple is in a position to disintermediate Google. And with Apple’s wrath towards Android, the company may be more interested and willing to do so than it might otherwise have been. Smooth move, Andy.

Apple Should Get Out of the Manga Piracy Business

Sorry for another anime/manga-related post, but a thread on Twitter reminded me of some Apple misdeeds that need rectifying. It started with a pair of tweets, first from Zac Bertschy of Anime News Network:

I’m sure this has been asked a million times, but why are there so many goddamn bootleg manga apps on the iOS store?

And then a follow-up from social-media expert and publisher Erica Friedman of Yuricon:

@ANNZac I’ve tried to write Apple/Google about the links to bootleg sites. Neither has a reasonable way for reasonable people to complain

So let me back up a second… what we’re talking about are dedicated apps that read “scanlations”, which are comics (usually Japanese manga) that have been scanned, translated by fans into English, and posted for free to various websites or made available through channels like BitTorrent.

Zac rightly calls this “bootlegging” because there is no question that copyright violation is involved. Entire works are being digitally redistributed with zero compensation to the original authors or publishers. What can make this a gray area is the question of whether any actual harm is done: if the work is not available in English, nor ever likely to be, then how can a scanlation eliminate a sale that could never be made? This is a fairly bogus defense because (as we’ll see) the untranslated works are just a minor part of the story. Moreover, we could apply the established tests of “Fair Use” under US copyright law, such as:

  • Is the new work “transformative”? In other words, are we using the original to create a fundamentally different thing?
  • How much of the original is being used?
  • Does the copying impede future sale of the original work? Does it harm the creator?
  • etc.

Guidelines like this permit use like, say, presenting a few pages of a comic in the context of a critical review or an academic paper: fundamentally new work, small amount of copying, doesn’t replace the original (and might actually drive new interest and sales). And obviously, a scanlation fails every one of these tests: it’s a full-on copy that changes only the language, and fully replaces a translation the original publisher might provide. It’s also been pointed out that scanlations are harming the development of a legal digital manga industry in the US. Scanlations would have zero chance of surviving a legal challenge.

So why the hell is Apple in the business of distributing them on iOS?

Search for manga on the App Store and you’ll get dozens of hits. Most of them are apps for downloading and reading scanlations on your iPad or iPhone. For the purpose of this blog, I tried the free versions of:

Note: these are not affiliate links. I wouldn’t want a cut of their sales, since I consider them illegal and illicit.

Most of these apps get their contents from three scanlation websites: MangaFox, Mangareader.net, and MangaEden. Some of these sites play at supporting the source of their titles by slapping in pseudo-legal disclaimers and vague admonishments to somehow support artists as seen on this page of The Rose of Versailles:

Manga Storm page from Rose of Versailles, with disclaimer caption

This image is hosted at mangafox.com. We take no credit for the creation or editing of this image. All rights belong to the original publisher and mangaka. While we hosted this for free at mangafox.com, please don’t forget to support the mangaka in any way that you can once his/her work becomes available for retail sale in your region!

Some of these sites also adhere to an ethic that they don’t host scanlations of titles that have been licensed in the US. In this screenshot, Manga BDR (which awkwardly makes you browse MangaFox rather than scraping its index) shows a notice that Fullmetal Alchemist is unavailable from MangaFox because it has been licensed in the US:

Manga BDR showing MangaFox notice that Fullmetal Alchemist is unavailable

Does this mean there’s honor among thieves? Hardly. The sites are still violating the original Japanese copyright of the titles they do offer. And they’re not living up to the implicit promise to make obscure titles available to a wider audience — the Rose of Versailles manga cited above has not been completely translated, despite being more than 30 years old. And wherever Manga Rock gets its data from, it has no compunction about offering up titles that have US publishers. Here’s Manga Rock 2 offering Fullmetal Alchemist in its entirety:

Fullmetal Alchemist manga on Manga Rock 2

Not only is this stuff illicit bootlegging; these apps are popular precisely because they offer access to pirated manga. Every single one of them advertises itself on the App Store with screenshots of browsing popular titles that have US publishers: Manga Storm shows Fairy Tail; Manga Rock shows Fairy Tail, Air Gear, and Negima!; and Komik Connect shows Bleach and Naruto. And users choose these apps because of their illegal nature: the one-star reviews on Manga Storm complain not that it rips off artists, but that it lacks US-licensed titles (due to its dependence on MangaFox and friends), and that it’s a paid app.

And speaking of the paid versions…

Apple gets a 30% cut of every sale of the full versions of these apps. That makes Apple a direct beneficiary of copyright piracy.

Everyone who stood up to say Apple does more to support creators than Google and its cavalier attitude towards IP rights, you can sit down now. So long as these apps are available on the App Store, Apple is complicit in piracy.

It’s fair game to criticize Apple for these, when the company has such a stringent review process. When it’s so careful to consider what it will and won’t sell, approval of an app has to be considered an explicit endorsement, particularly considering Apple gets a cut of the sales.

And here is what may be the most galling part of all. Erica Friedman again:

I went on a rant about why is it okay with those of you who like shiny things that Apple just told DMP to take their BL off the iPad app? WHY?!? If the TV hardware manufacturers told you what TV stations you could receive, you’d be enraged. When your work blocks sites, you find ways around it. So why the hell is it okay with all you Apple fans that Apple censors content? I cannot understand why you are not screaming at all, much less loudly? APPLE CENSORS CONTENT. Especially LGBTQ content. Why are you still giving money to a company like that? People boycott BP and Chik-Fil-A and Target…but are absolute sheep about Apple’s censorship of content. ARGGGGGHHHH.

It’s as if Apple is saying “we won’t let anyone sell you gay manga for your iPad, but we will sell you tools to help you steal the stuff.”

This has to stop.

If nothing else, these apps are in obvious violation of section 22.4 of the iOS App Store Review Guidelines:

22.4 Apps that enable illegal file sharing will be rejected

Apple apparently won’t listen to third-party criticism (people have been calling attention to these bootlegging apps since at least 2010: 1, 2, 3), but there are channels that aggrieved parties can use. Viz and Yen Press have legitimate iOS apps for their manga titles. Since Manga Rock 2 makes bootlegs of those titles available (I saw Viz’s Fullmetal Alchemist and Yen’s High School of the Dead), these companies could use Apple’s dispute policies to at least have Manga Rock 2 taken down.

Beyond this, it’s hard to see what will work. Via Twitter, Erica noted yesterday that most US manga publishers are too small and operating on margins too thin to follow up with DMCA takedowns, and Apple may be technically in the clear on DMCA because they’re not themselves hosting the offending content.

However, since Apple’s making money off the sale of the apps used to pirate this content — in clear and obvious violation of their own policy — another option is that the Japanese publishers might want to sue Apple directly. They would presumably have more legal resources to stick with a lawsuit, and with Apple deaf to criticism, maybe it would take a few subpoenas to call their attention to the fact that making money off piracy is an awfully dirty business for one of the world’s largest and most prestigious companies to be involved in.

For the sake of Apple and the creative community, these apps need to disappear forever.

What Anime Central Taught Me About Mac/iOS Media

I spent the weekend in Chicago at Anime Central 2012, the first anime convention I’ve been to since Anime Weekend Atlanta 2007. Many of us geeks have our escapes: mine is the mythical realm of “Japan”, where children are taught the value of empathy, and people default to a position of kindness and respect in their dealings with others. ぼく の ゆめ きれい です。 (“My dream is beautiful.”)

invalidname's ACen Badge

Along with loading up on collectibles (Angel Beats! Anohana!) and finding new shows to watch (Toradora!), I had a few encounters that, surprisingly, led to some insights about where digital media is and is going on the platforms I work with.


That iPad UStream Guy

In the Sentai Filmworks panel, I noticed a guy in the front row shooting video with his iPad. Not a bad idea — the battery life is enormous, so you might only be limited by using up the flash storage. But what I discovered later was that he was the one behind the animecon-industry channel on UStream, and was streaming all these panels live on the internet. From the iPad.

Multiply this by a bunch more iPads and a bunch more interesting pursuits and I start to wonder why I’m not already combing UStream for cool stuff streaming now.

BTW, I asked three questions in the Sentai Panel. Watch the recorded version and you can hear me asking about whether iTunes download-to-own still makes sense for them, why they went back and licensed the older ef: A Tale of _____ shows, and what led to the reissue of Clannad After Story with an English dub.


Making Music with Miku

Hatsune Miku Vocaloid 2 software

Another panel I went to was a how-to showing how Yamaha’s “Vocaloid” software can be used to create synthetic singers, such as the very popular (and much-cosplayed) Hatsune Miku. What I didn’t expect to find out is that the Yamaha guy who created the Vocaloid software is a big Mac fan, and despite the fact that the Vocaloid products are Windows-only, was shown in a video from last year’s Anime Expo toting a MacBook Air to a panel (it may be this one, but if not, it still shows Ito-san clearly using an Air).

Duly inspired by this, the panelists at ACen showed a MacBook-based workflow that used Vocaloid 3, despite the fact that the program is Windows-only and only partially localized for English. They played a synth into Logic Pro to lay down a base music track, then played a second track as the vocal line. They exported that track as a .mid (MIDI sequence) to a directory shared with VMware, where it could then be imported into Vocaloid and sung by Miku. After tweaking the Japanese phonetic lyrics, they exported a .wav back to Logic to complete the song.

Of course, wouldn’t it be nicer to cut Windows out entirely? Knowing that there are Mac fans at Yamaha and Crypton, maybe Apple should make some calls. Having Hatsune Miku as an instrument in GarageBand or Logic would be a hell of a lot of fun, and would surely lead to getting her as the vocalist of even more songs on iTunes (she already had more than 1,500 last time I checked).

I’ve also mentioned on several occasions that the AUSampler audio unit in Lion and iOS 5 is doing a pitch shift more or less equivalent to what Vocaloid does, with the key difference that AUSampler can be played live, while Vocaloid uses a render step and could look ahead to upcoming notes to produce more realistic output. I’ve meant to try hacking up a “Hatsune Mac-ku” with AUSampler one of these days… it’s on the list of experimental projects that could turn into an article/blog/session if I find the time to get it working.
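For anyone curious, a minimal sketch of putting AUSampler in front of an output unit looks something like the following; error handling and the preset/sample loading are omitted, so this only shows the graph plumbing:

```c
// Sketch: AUGraph with an AUSampler feeding the output unit, so
// notes can be triggered live via MIDI events.
AUGraph graph;
AUNode samplerNode, outputNode;
AudioUnit samplerUnit;
NewAUGraph(&graph);

AudioComponentDescription samplerDesc = {
    .componentType = kAudioUnitType_MusicDevice,
    .componentSubType = kAudioUnitSubType_Sampler,
    .componentManufacturer = kAudioUnitManufacturer_Apple
};
AUGraphAddNode(graph, &samplerDesc, &samplerNode);

AudioComponentDescription outputDesc = {
    .componentType = kAudioUnitType_Output,
    .componentSubType = kAudioUnitSubType_RemoteIO, // DefaultOutput on OS X
    .componentManufacturer = kAudioUnitManufacturer_Apple
};
AUGraphAddNode(graph, &outputDesc, &outputNode);

AUGraphOpen(graph);
AUGraphNodeInfo(graph, samplerNode, NULL, &samplerUnit);
AUGraphConnectNodeInput(graph, samplerNode, 0, outputNode, 0);
AUGraphInitialize(graph);
AUGraphStart(graph);

// Note-on: middle C, full velocity — pitched live by the sampler
MusicDeviceMIDIEvent(samplerUnit, 0x90, 60, 127, 0);
```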


There’s An App For That Anime

Kids on the Slope page on Crunchyroll iPad app

One of the first things I did on the show floor was to finally sign up as a paying subscriber for Crunchyroll, the streaming service that offers many anime shows within hours of their Japanese airdate. Considering I’m currently using it to keep up with Kids on the Slope, Bodacious Space Pirates, and Puella Magi Madoka Magica, I’ve come to feel like quite the freeloader.

Part of the reason I’m watching so much Crunchyroll is that it’s easy to do so on the iPad while waiting in the hallway for my kids to fall asleep. Like a lot of Flash-based websites, Crunchyroll has had the sense to build an iPad app. And one of the upsides of having a real app is that the video can be sent over to an Apple TV for proper wide-screen viewing.

Watching Bodacious Space Pirates on Apple TV with AirPlay
When I look at the “Anime” folder on my iPad — consisting of Crunchyroll, Anime Network, Funimation Free, and Adult Swim — and then look at the icons on the Apple TV screen, I can’t help but wish that these streaming apps could be on the Apple TV itself. With the latest Apple TV update presenting us with a grid of app-like icons for the various services and providers, it sure feels like this is what we’re heading towards. And when you can just subscribe to your favorite sports league as an app and not deal with cable/satellite/local-broadcast hassles and blackouts, it starts to show the promise the cord-cutters have been talking about all this time.

The New York Times had an interesting article on this the other day. After presenting the channel-as-app metaphor, they point out that this could bring about the “a la carte” model that so many viewers have wanted for so long: the ability to just buy the content that you want, and not have to buy a bunch of programming you don’t want. But they also identify the catch that I’ve worried about for years: many of the content providers are vertically integrated with distributors. All the NBC/Universal networks are owned by Kabletown Comcast, which means that they may not want to sell content directly to end-users when that cuts into the parent’s core business of selling cable subscriptions.

And would a la carte make sense for consumers? We might get sticker shock. Think about my anime apps: I’m in for $7/month with Crunchyroll. But they don’t have everything. If I want Lupin the 3rd commercial-free on Funimation streaming, that’s $8/mo, and ef on Anime Network is going to set me back another $7/mo. So just for anime, I’m in for 22 bucks a month! Do I have anything left for sports or news? Suddenly, the DirecTV bundle isn’t looking so bad anymore. [Indeed, Anime Network is also on DirecTV VOD, which is why I’d be highly unlikely to pay for it again as a streaming subscription. Last year, instead of cord-cutting, we doubled-down on DirecTV and upgraded to their “whole home” service. We’ll probably eventually want to get the nomad too. YMMV.]

Anime folder on my iPad
In fact, when I saw the much-linked Oatmeal cartoon about I Tried to Watch “Game of Thrones” and This Is What Happened, I had a specific thought when I got to the frame where the protagonist is flashing his credit card in front of the computer, saying he was ready to buy if only they would sell it to him. Here’s my thinking: HBO knows full well that there are people who subscribe to the channel entirely for the sake of one show. That is their business model: you buy it for Game of Thrones or The Sopranos, and as a bonus, you get to see Splash and Independence Day 400 times a month. Which is total crap of course, since we only care about Game of Thrones. So if you’re paying just for that show, what’s the cost? Say it’s $30/month times however long a season runs, plus time to unsubscribe, leaving a little room for customers who don’t bother unsubscribing religiously. Shall we say four months? Then that means a season of Game of Thrones is arguably worth $120 per subscriber.

Now imagine if HBO put out a Blu-Ray set, day and date with the series, at that price point. Everyone would scream bloody murder. But, Mr. Oatmeal, you were flashing your credit card! Did you think that a new production should cost the same as a back-catalog show from 20 years ago that has already paid its bills several times over?

But I’m kind of digressing into old arguments. The point to make about networks-as-apps is this: Apple’s treatment of Apple TV as a “hobby”, the lack of an SDK for it, and the company’s slow movement toward backing it up with content beyond the not-terribly-popular iTunes download-to-own model and partners like Netflix all suggest that they’re not going to take on the entire cable/satellite industry until they’re confident there’s a real opportunity, if not an absolute certainty, that they’ll win. What I see is a long game, where they roll out technologies like HTTP Live Streaming and AirPlay, see if they take, and let the pieces quietly get into place. Not a fiendish master plan… just preparing relevant technologies and partnerships so they can enter this war at a time and place of their choosing.


Dreaming of Streaming

Speaking of HTTP Live Streaming, one last point is to acknowledge what a tremendous success story it has been. Non-existent five years ago, it is now the technology that delivers all streaming video on the hundreds of millions of iOS devices, in a way that satisfies the security demands of the major media companies, sports leagues, etc. As I mentioned before, my iPad is full of video streaming apps, not just the various networks and content providers, but stuff like UStream that captures and streams live video.
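Part of why HLS spread so easily is that there’s not much to it on the wire: plain HTTP requests for short media segments, described by ordinary .m3u8 playlists. A variant playlist for adaptive bitrate switching is just a few lines (the bandwidths and paths below are made up for illustration):

```
#EXTM3U
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=200000
low/prog_index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=800000
mid/prog_index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1500000
high/prog_index.m3u8
```

The client just picks whichever variant its measured throughput can sustain, and switches as conditions change.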

The thing to start watching now is what happens with MPEG DASH, which resembles HTTP Live Streaming and similar technologies from Adobe (Adaptive Streaming) and Microsoft (Smooth Streaming), all of which use HTTP (rather than custom socket connections on frequently-blocked ports) to deliver small segments of video and adjust to changing network conditions. Streaming Media reports increasing support for the proposed DASH standard from many companies, but notably not Apple. Which makes sense in a cynical view — why let the competition catch up when content providers already have to go HLS to reach the massive iOS user base? A more practical concern is that DASH seems to want to make everybody happy by just wrapping some existing standards for codecs and manifest delivery, which may end up meaning that it just becomes the “15th standard”.

And speaking of HTTP Live Streaming, I’m preparing an all-new session on HLS for CocoaConf outside DC in late June. The early-bird deadline has been extended until this Friday (May 4), so if you want to see how cool this stuff is, you’ve still got time to save a few bucks.

Now if you’ll excuse me, I need to put this Crunchyroll subscription to work and catch up on the subscriber-only new episodes, now that I just finished Angel Beats! from iTunes last night.

App Rejections Are a Lousy Way to Communicate Policy Changes

Following Twitter reports from earlier in the week, MacRumors and TechCrunch now report that Apple is rejecting apps that use the unique device identifier (UDID), by means of the -[UIDevice uniqueIdentifier] call.

This in itself is not surprising: iOS 5 deprecates the method, adding “Special Considerations” that advise developers to create a one-off CFUUID to identify a single install of the app on a single device.

So far, so good. I even had a section of my talk at last week’s CocoaConf Chicago that talked about CFUUIDs and the need to migrate any UDID dependencies to them now.
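For a simple app, that migration really can be tiny. A sketch, assuming ARC (the defaults key name here is made up):

```objc
// Sketch: mint one CFUUID per install and persist it, instead of
// calling the deprecated -[UIDevice uniqueIdentifier].
+ (NSString *)installIdentifier {
    NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];
    NSString *identifier = [defaults stringForKey:@"AppInstallUUID"];
    if (identifier == nil) {
        CFUUIDRef uuid = CFUUIDCreate(kCFAllocatorDefault);
        identifier = (__bridge_transfer NSString *)
            CFUUIDCreateString(kCFAllocatorDefault, uuid);
        CFRelease(uuid);
        [defaults setObject:identifier forKey:@"AppInstallUUID"];
        [defaults synchronize];
    }
    return identifier;
}
```

Unlike the UDID, this identifies one install of one app; delete and reinstall, and you get a new value, which is exactly the privacy point.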

The problem is the timing. Apple’s established pattern has been to deprecate a function or method in one major version and, at the speediest, remove the call in the next major version. Many developers, myself included, expected that we had until iOS 6 to get off of the UDID.

But instead, without warning, the app review process is being used as an immediate death penalty for the uniqueIdentifier call.

This is a problem because we’ve all had about six months to get off of UDID, and while that’s surely enough to get a simple app migrated — indeed, I have cases where switching it out is a 5-line fix — it is not necessarily the case that everyone can be expected to have already done this.

The real problem isn’t with developers; it’s with whom we develop apps for. Our clients don’t know or care what a UDID is, nor are they aware of a single line in Apple’s documentation saying “stop using this”. Sure, it’s our job to be on top of it. But let’s imagine apps with long development cycles — big apps, or academic apps that rev for the new school year in the Fall and are largely dormant the rest of the year. It’s entirely plausible and reasonable that developers of these apps have “get off UDID” as a high-priority item in their bug trackers, but are waiting for budget and approval to start working. And what if it’s not a simple process? What if an app has some deep dependency on access to the UDID, both in the app and on a server somewhere, meaning that two different teams are going to need to deal with losing access to uniqueIdentifier, and will need to come up with a plan to migrate user records over to a new id scheme?

Well, they just lost their chance.

Captain Hindsight is in full effect in the MacRumors forums, loudly asserting that developers should have known this was coming, have had plenty of time, etc. I get that it’s the natural defensiveness about Apple, but it gets worse… because this isn’t the only case of Apple using app rejections to carry out policy changes.

Thanks perhaps to my many posts about in-app purchase, I recently heard from a group of developers who’d gotten a galling rejection. They have an app with a subscription model, and used the new “auto-renewing subscription” product. This product is far superior to the original subscriptions, which I have repeatedly described as broken, as they cannot restore between a user’s devices, and do not carry state to indicate what was subscribed to and when. Auto-renewing subscriptions fix these problems, and the In-App Purchase and iTunes Connect guides had (seemingly until a couple of weeks ago) clearly disparaged use of the old subscriptions in favor of the new auto-renewing subscriptions.

So imagine the surprise of my colleagues when their app was rejected for using auto-renewing subscriptions. The reason given was that they were using it for a different kind of business model, something like a data plan, and auto-renewing subscriptions are, according to the reviewer, reserved only for content subscriptions like Newsstand magazines. I have never seen anything to this effect in any Apple I-AP documentation. Nevertheless, the developers had to switch to the shitty, broken, old subscriptions.

In both of these cases, we see Apple breaking with their own documentation or with long-established practice with no warning, and instead using app rejections as a tool to communicate and carry out new policies. This is wretched for developers, who get caught scrambling to fix problems they didn’t know they had (or didn’t expect just yet).

It’s also terrible for Apple, because the aggrieved developers initially control the message as they flock to blogs and Twitter, leaving it to loyalist commenters and bloggers like Gruber and the Macalope to mount a rear-guard gainsaying defense. To see Apple — of all companies! — not controlling the message is astounding.

All it takes is clarity. If they’re going to make such a major change, they’ve already got our attention via e-mails, the developer portal, and many other channels. They could and should clearly state the what, why, when, and how of policy changes. “We’re getting rid of UDIDs, because they constitute a privacy risk. We’ll reject any app that calls -[UIDevice uniqueIdentifier] as of March 23, 2012.” Not that hard. They’ve done it before: a few years back, Apple required streaming media apps to use HTTP Live Streaming if they streamed more than 10MB of data — this was communicated via announcements on the developer portal a month or so before its implementation, and nobody got caught by surprise, nobody complained.

Apple has developed a reputation for capriciousness in its app review process, and a cavalier attitude towards its developer partners. It’s not undeserved. As Erica Sadun once cleverly put it, “Apple is our abusive boyfriend.”

The Audio Tools for Xcode 4.3 switcharoo

It’s great that Xcode 4.3 packs all the SDKs and essential helper apps into its own app bundle, which makes Mac App Store distribution more practical and elegant, and brings Apple into compliance with its own rules for MAS distribution. Adapting Xcode into this form is also a prerequisite for eventually delivering an iPad version, which I continue to believe is one of those things that may be impossible in the near-term and inevitable in the long-term.

However, this radical realignment means Xcode 4.3 has surely broken something in every Mac and iOS programming book. Lucky for me that mine are still in the works, though very close to being done. I updated the “Getting Started” material in iOS SDK Development this week to handle the changed install process, which along with the new chapter on testing and debugging should be going out to beta readers Real Soon Now.

And then there’s Learning Core Audio, where the situation is a little more involved. For one thing, the Core Audio book is content-complete and copy-edited, and we are now in the layout stage, with dead-tree edition just weeks away. In fact, I have to finish going over the layouts for the last half of the book by Friday, something I’m doing with iAnnotate PDF on the iPad:


So this is the wrong time to be making changes, since we can only fix typos or (maybe) add footnotes. And that’s a problem because while Xcode 4.3 includes Core Audio itself in the OS X and iOS SDKs, it doesn’t include the extras anymore. And that breaks some of our examples.

This doesn’t hit us until chapter 8, where we create an audio pass-through app for OS X. That app requires CARingBuffer, a C++ class that was already a problem, because many versions of Xcode shipped a broken copy that needed to be replaced with an optional download. CARingBuffer lives as part of Core Audio’s “Public Utility” classes in /Developer/Extras/CoreAudio/PublicUtility.

At least it did, until Xcode 4.3. Now that there’s no installer, there’s not necessarily a /Developer directory. Well, there is, but it lives inside the app bundle at Xcode.app/Contents/Developer, and includes only some of its previous contents. For the rest, we have to use the “More Developer Tools…” menu item, which takes us to Apple’s site and some optional downloads:

Xcode 4.3 downloads page: Audio Tools

This lets us download the optional Core Audio bits as a .dmg file, the contents of which look like this:

Contents of Audio Tools .dmg

So there’s our CoreAudio/PublicUtility folder, along with the AULab and HALLab applications that used to live at /Developer/Applications/Audio. We use AULab in chapter 12 to create an AUSampler preset that we can play via MIDI, but apps can live anywhere, so this isn’t a big problem.

PublicUtility does pose an interesting problem, though: where does it go? The code example needs the source files to build, and up to this point, we’ve used an SDK-relative path. That means the project can’t build anymore: it won’t find PublicUtility along that path, because the SDK itself now lives inside Xcode’s app bundle. So where can we put PublicUtility where projects can find it? A couple of options spring to mind:

  1. Leave the path as-is and move PublicUtility into the Xcode app bundle. Kind of nifty, but I have zero confidence that the MAS’ new incremental upgrade process won’t clobber it. And an old-fashioned download of the full Xcode.app surely would.
  2. Re-create the old /Developer directory and put PublicUtility at its previous path. This requires changing the project to use an absolute path instead of an SDK-relative one, but /Developer is a reasonable choice for a path on other people’s machines. It suited Apple all these years after all.
  3. Forget installing PublicUtility at a consistent location and just add the needed classes to the project itself. Probably the least fragile choice in terms of builds, but means that readers might miss out on future updates to the Audio Tools download.

In the end, I chose option #2, in part because it also works for those happy souls who have not had to upgrade to Lion and thus are still on Xcode 4.2, still with a /Developer directory. There is, however, a danger that future Xcode updates will see this as a legacy directory to be deleted, so maybe this isn’t the right answer. A thread on coreaudio-api about where to put PublicUtility hasn’t garnered any follow-ups, so we could be a long way from consensus or best practices here.

For now, that’s how I’m going to handle the problem in the book’s downloadable code. This afternoon, I ran every project on Xcode 4.3, changed the paths to CARingBuffer in the one example where it’s used, and bundled it all up in a new sample code zip, learning-core-audio-xcode4-projects-feb-26-2012.zip. That will probably be the version we put up on the book’s official page once Pearson is ready for us.

Now I just have to figure out how to address this mess in a footnote or other change that doesn’t alter the layout too badly.

Achievement Unlocked: Finish “Learning Core Audio”

There’s nothing like losing your editor — to Apple, no less — to ratchet up the heat to finish a book that has been too long on the burner. But with Chuck doing exactly that at the end of December, Kevin and I had the motivation to push aside our clients and other commitments long enough to finally finish the Learning Core Audio book (yes, the title is new), and send it off to the production process.

In our final push, we went through the tech review comments and reported errata — Lion broke a lot of our example code — and ended up rewriting every example in the book as Xcode 4 projects, moving the base SDK to Snow Leopard, which allowed us to ditch all the old Component Manager dependencies. For the iOS chapter, we revved up to iOS 4 as a baseline and tested against iOS 5.

One of the advantages of Xcode 4 is that the source directory is cleaner for source control (no more ever-changing build folder to avoid committing), and it offers a pretty simple way to get to the “derived data” folder with the build results. That was important for us, because a number of our command-line examples create audio files relative to the executable, and Xcode 4 makes those files easy to find.

In our final push, we also managed to get an exercise with the AUSampler, the MIDI instrument that pitch-shifts the audio file of your choice, into the wrap-up chapter. So that should keep things nice and fresh. Thanks to Apple for finally providing public guidance on how to get the .aupreset file to load into the audio unit, and on how the unit deals with absolute paths to the sample audio when it ships in the app bundle.

Updated code is available from my Dropbox: learning-core-audio-xcode4-projects-jan-03-2012.zip

From here, Pearson will take a few months to lay out the book, run it by us for proofing and fixes, and send it off to the printer. So… paper copies probably sometime in Spring. The whole book — minus this last round of corrections — is already on Safari Books Online, and will be updated as it goes through Pearson’s production process.

It seems like there’s been a big uptick in Core Audio interest in the last year or two. More importantly, we’ve moved on from people struggling through the basics of Audio Queues for simple file playback (back in iPhone OS 2.0, when Core Audio was the only game in town for media); now we see a lot of questions about pushing into interesting uses of mixing, effects, digital signal processing, and so on. Enough people have gotten sufficiently unblocked that there’s neat stuff going on in this area, and we’re fortunate to be part of it.

Five Years, That’s All We’ve Got

As mentioned before, I’m not a big fan of panels at developer conferences, and whenever I’m in one, I deliberately look for points of contention or disagreement, so that we don’t end up as a bunch of nodding heads who all toe a party line. Still, at last week’s CocoaConf in Raleigh, I may have outdone myself.

When an iOS developer in the audience asked if he should get into Mac development, most of the panel said “go for it”, but I said “don’t bother: the Mac is going to be gone in 5-10 years. And what’s going to kill it is iOS.”

This is wrong, but not for the reason I’ll initially be accused of.

First, am I overstating it? Not in the least. Looking at where things stand today, and at the general trends in computing, I can’t see the Mac disappearing in less than five years, but I also can’t imagine it prevailing for more than another ten.

This isn’t the first time I’ve put an unpopular prediction out there: in 2005, back when I was with O’Reilly and right after the Intel transition announcement, I predicted that Mac OS X 10.6 would come out in 2010 and be Intel-only. This was called “questionable”, “dumb”, and “ridiculous advice”. Readers said I had fooled myself, claimed I was recommending never upgrading because something better is always coming (i.e., a straw man argument), argued Intel wouldn’t be that big a deal, and predicted that PPC machines would still be at least as common as Intel when 10.6 came out. For the record, 10.6 came out in August 2009 and was indeed Intel-only. Also, Intel Macs had already displaced PowerPC in the installed base by 2008.

So, before you tell me I’m an idiot, keep in mind that I’ve been told that lots of times before.

Still, punditry has been in the “Mac is doomed” prediction business for 20 years now… so why am I getting on board now?

Well, the fact that I’m writing this post on my iPad is a start. And the fact that I never take my MacBook when I travel anymore, just the iPad. And the fact that Lion sucks and is ruining so much of what’s valuable about the Mac. But that’s all subjective. Let’s look at facts and trends.


iOS Devices are Massively Outselling Mac OS X

One easy place to go for numbers is Apple’s latest financial reports. For the quarter that ended Sept. 30, 2011 (4Q by Apple’s financial calendar), the company sold 4.89 million Macs, 11.12 million iPads, and 17.07 million iPhones. That means the iPad is outselling all models of Mac by a factor of more than 2 to 1.

The numbers also mean that iOS devices are outselling Mac OS X devices by a ratio of at least 5.7 to 1… and it must be much more, since Apple also sold 6.62 million iPods (since these aren’t broken down by model, we don’t know which are iOS devices and which aren’t).

While the Mac is growing, iOS is growing much faster, so this gap is only going to continue to grow.


Goin’ Mobile

Let’s also think about just which Macs are selling. The answer is obvious: laptops. The Mac Unit Sales chart in this April 2011 Macworld article shows Apple laptops outselling desktops by a factor of nearly 3-to-1 for the last few quarters. Things are so cold for desktops that there is open speculation about whether the Mac Pro will ever be updated, or if it is destined to go the way of the Xserve.

Now consider: Apple has a whole OS built around mobility, and it’s not Mac OS X. If the market is stating a clear preference for mobile devices, iOS suits that preference better than the Mac does, and the number of buyers who prefer a traditional desktop (and thus the traditional desktop OS) is dwindling.


Replace That Laptop!

Despite the fact that it’s only about 28% of Apple laptop sales, I would argue the definitive Mac laptop at this point is the MacBook Air. It’s the most affordable MacBook, and has gotten more promotion this year than any other Mac (when was the last time you saw an iMac ad?). It also epitomizes Apple’s focus on thinness, lightness, and long battery life.

But on any of these points, does it come out ahead of an iPad 2? It does not. And it costs twice as much. The primary technical advantages of the Air over the iPad 2 are CPU power, RAM, and storage.

Is there $500 worth of win in having bigger SSD options or a faster CPU?


Few People Need Trucks…

“But”, you’re saying, “I can do real work on a Mac”. Definitely true… but how much of it can you really not do on an iPad? For last month’s Voices That Matter conference, I wrote slides for two talks while on the road, using Keynote, OmniGraffle, and Textastic… the same tools I would have used on my Mac Pro, except that there I would have worked with the code in Xcode rather than Textastic. I also delivered the talks via Keynote off the iPad with the VGA cable, and ran the demos via the default mirroring over the cable.

I’ve gone three straight trips without the laptop and really haven’t missed it. And I’m not the only one. Technologizer’s Harry McCracken posted a piece the other week on How the iPad 2 Became My Favorite Computer.

Personally, the only thing I think I’m really missing on my iPad is Xcode, so that I could develop on the road. I suspect we’ll actually see an Xcode for iPad someday… the Xcode 4 UI changes and its single-window model made the app much more compatible with iPad UI conventions (the hide-and-show panes could become popovers, for example). And lest you argue the iPad isn’t up to the challenge, keep in mind that when Xcode 1.0 was first released in Fall 2003, a top-of-the-line Power Mac G5 had two 2.0GHz single-core CPUs and 512 MB RAM, whereas the current iPad 2 has a dual-core A5 running at 1.0GHz and 512 MB RAM, meaning the iPad is already in the ballpark.


…and Nobody Needs a Bad Truck

“But Mac OS X is a real OS,” someone argues. Sure, but two things:

  • How much does that matter – a “real OS” means what? Access to a file system instead of having everything adjudicated by apps? I’d say that’s a defining difference, maybe the most important one. But it matters a lot less than we might think. Most files are only used in the context of one application, and many applications make their persistence mechanism opaque. If your mail was stored in flat files or a database, how would you know? How often do you really need or want the Finder? With a few exceptions, the idea of apps as the focus of our attention has worked quite well.

    What exceptions? Well, think of tasks where you need to combine information from multiple sources. Building a web page involves managing HTML, CSS, JavaScript, and graphic files: seemingly a good job for a desktop OS and bad job for iOS. But maybe it argues for a task-specific app: there’s one thing you want to do (build a web page), and the app can manage all the needed files.

  • For how long will Mac OS X be a “real OS”? – Anyone who follows me on Twitter knows I don’t like Lion. And much of what I don’t like about it is the ham-handed way iOS concepts have been shoehorned into the Mac, making for a good marketing message but a lousy user experience. Some of it is just misguided, like the impractical and forgettable Launchpad.

    But there’s a lot of concern about the Mac App Store, and the limitations being put on apps sold through the MAS. This starts with sandboxing, which makes it difficult or impossible for applications to damage one another or the system as a whole. As Andy Ihnatko pointed out, this utterly emasculates AppleScript and Automator. Daniel Steinberg, in his “Mac for iOS Programmers” talk at CocoaConf, also wondered aloud whether inter-application communication via NSDistributedNotificationCenter would be the next thing to go. And plenty of developers fear that, in time, Apple will prohibit third-party software installs outside of the MAS.

Steve Jobs once made a useful analogy that many people need cars and only a few need trucks, implying that the traditional PC as we’ve known it is a “truck”, useful to those with advanced or specific needs. And that would be fine… if they were willing to let the Mac continue to be the best truck. But instead, the creep of inappropriate iPad-isms and the iOS-like limitations being put on Mac apps are encroaching on the advanced abilities that make the truck worth owning in the first place.


The Weight of the Evidence

Summarizing these arguments:

  • iOS devices are far outselling Mac OS X, and the gulf is growing
  • The iPad and the MacBook (the only Mac that matters) are converging on the same place on the product diagram: an ultra-light portable computing device with long battery life
  • iOS is already capable of doing nearly anything that Mac OS X can do, and keeps getting better.
  • Mac OS X shows signs of becoming less capable, through deliberate crippling of applications by the OS.

Taken together, the trends seem to me like they argue against the future of Mac OS X. Of the things that matter to most people, iOS generally does them better, and the few things the Mac does better seem like they’re being actively subverted.

Some people say the two platforms will merge. That’s certainly an interesting possibility. Imagine, say, an iOS laptop that’s just an iPad in a clamshell with a hardware keyboard, likely still cheaper than the Air, and the case for the MacBook gets weaker still.

Given all this, I think OS X becomes less necessary to Apple — and Apple users — with each passing year. When does it reach a point where OS X doesn’t make sense to Apple anymore? That’s what I’m mentally pencilling in: 5-10 years, maybe after one or two more releases of OS X.


Wrong

But in the beginning, I said I was wrong to say that an iOS developer shouldn’t get into Mac programming because the Mac is doomed. And here’s why that’s wrong: you shouldn’t let popularity determine what you choose to work with. Chasing a platform or a language just because it’s popular is always a sucker’s bet. For one thing, times change, and the new hotness becomes old news quickly. Imagine jumping into Go, which in 2009 grew fast enough to become the “language of the year” on the TIOBE programming language index. It’s currently in 34th, just behind Prolog, ahead of Visual Basic .NET, and well behind FORTRAN (seriously!).

Moreover, making yourself like something just because it’s popular doesn’t work. As Desktop Java faded into irrelevance, I studied C++ and Flash as possible areas of focus, and found that I really didn’t like either of them. Only when the iPhone OS came along did I find a platform with ideas that appealed to me.

Similarly, I greatly value the time I spent years ago studying Jini, Sun’s mis-marketed and overblown self-networking technology. Even though few applications were ever shipped with it — Jini made the fatal mistake of assuming a Java ubiquity that did not actually exist outside of Sun’s labs — the ideas it represented were profound. Up to that point, I had never seen an API that presented such a realistic view of networking, one that kept in mind the eight fallacies of distributed computing and made them manageable, largely by treating failure as a normal state, rather than an exceptional one. In terms of thinking about hard problems and how stuff works (or doesn’t work) in the real world, learning about Jini was hugely valuable to me at the time I encountered it.

And had I known Jini was doomed, would I have been better off studying something more popular, like Java Message Service (which my colleagues preferred, since it “guaranteed” message delivery rather than making developers think about failure)? I don’t think so. And so, even if I don’t think the Mac has a particularly bright future, that’s no reason for me to throw cold water on someone’s interest in learning about the platform. There are more than 20 years of really good ideas in the Mac, and if some of them enlighten and empower you, why not go for it?

What You Missed At Voices That Matter iOS, Fall 2011

I’m surprised how fast iOS conference slides go stale. When I re-used some AV Foundation material that was less than a year old at August’s CocoaConf, some of it already seemed crusty, not least a performSelectorOnMainThread: call on a slide where any decent 2011 coder would use dispatch_async() and a block. So it’s probably just as well that I did two completely new talks for last weekend’s Voices That Matter: iOS Developers Conference in Boston.

OK, long-time readers — ooh, do I have long-time readers? — know how these posts work: I do links to the slides on Slideshare and code examples on my Dropbox. Those are at the bottom, so skip there if that’s what you need. Also, I’ve put URLs of the sample code under each of the “Demo” slides.

The AV Foundation talk is completely limited to capture, since my last few talks have gone so deep into the woods on editing (and I’m still unsatisfied with my mess of sample code on that topic that I put together for VTM:iPhone Seattle in the Spring… maybe someday I’ll have time for a do-over). I re-used an earlier “capture to file and playback” example, and the ZXing barcode reader as an example of setting up an AVCaptureVideoDataOutput, so the new thing in this talk was a straightforward face-finder using the new-to-iOS Core Image CIDetector. Apple’s WWDC has a more ambitious example of this API, so go check that out if you want to go deeper.

The Core Audio talk was the one I was most jazzed about, given that Audio Units are far more interesting in iOS 5 with the addition of music, generator, and a dozen effects units. The demo builds up an AUGraph that takes mic input, a file player looping a drum track, and an AUSampler allowing MIDI input from a physical keyboard (the Rock Band 3 keyboard, in fact) to play a two-second synth sample that I cropped from one of the Soundtrack loops, all mixed by an AUMultiChannelMixer and then fed through two effects (distortion and low-pass filter) before going out to hardware through AURemoteIO. Oh, and with a simple detail view that lets you adjust the input levels into the mixer and bypass the effects.

The process of setting up a .aupreset and getting it into an AUSampler at runtime is quite convoluted. There are lots of screenshots from AULab in the slides, but I might just shoot a screencast and post it to YouTube. For now, combine WWDC 2011 session #411 with Technical Note TN2283 and you have as much of a fighting chance as I did.

I’ll be doing these talks again at CocoaConf in Raleigh, NC on Dec. 1-2, with a few fix-ups and polishing. The face-finder has a stupid bug where it creates a new CIDetector on each callback from the camera, which is grievously wasteful. For the Core Audio AUGraph, I realized in the AU property docs that the mixer has pre-/post- peak/average meters, so it looks like it would be easy to add level meters to the UI. So those versions of the talks will be a little more polished. Hey, it was a tough crunch getting enough time away from client work to get the sample code done at all.

Speaking of preparation, the other thing notable about these talks is that I was able to do the slides for both talks entirely on the iPad, while on the road, using Keynote, OmniGraffle, and Textastic. Consumption-only device, my ass.

Slides and code

Taking C Seriously

Dennis Ritchie, a co-creator of Unix and C, passed away a few weeks ago, and was honored with many online tributes this weekend for a Dennis Ritchie Day advocated by Tim O’Reilly.

It should hardly be necessary to state the importance of Ritchie’s work. C is the #2 language in use today according to the TIOBE rankings (which, while criticized in some quarters, are at least the best system we currently have for gauging such things). In fact, TIOBE’s preface to the October 2011 rankings predicted that a slow but consistent decline in Java will likely make C the #1 language when this month’s rankings come out.

Keep in mind that C was developed between 1969 and 1973, making it nearly 40 years old. I make this point often, but I can’t help saying it again: when Paul Graham considered the possible traits of The Hundred-Year Language, the one we might be using 100 years from now, he overlooked the fact that C had already made an exceptionally good start on a century-long reign.

And yet, despite being so widely used and so important, C is widely disparaged. It is easy, and popular, and eminently tolerated, to bitch and complain about C’s primitiveness.

I’ve already had my say about this, in the PragPub article Punk Rock Languages, in which I praised C’s lack of artifice and abstraction, its directness, and its ruthlessness. I shouldn’t repeat the major points of that article — buried as they are under a somewhat affected style — so instead, let me get personal.

As an 80’s kid, my first languages were various flavors of BASIC for whatever computers the school had: first Commodore PETs, later Apple IIs. Then came Pascal for the AP CS class, as well as a variety of languages that were part of the ACSL contests (including LISP, which reminds me I should offer due respect to the recent passing of its renowned creator, John McCarthy). I had a TI-99 computer at home (hey, it’s what was on sale at K-Mart) and its BASIC was godawful slow, so I ended up learning assembly for that platform, just so I could write programs that I could stand to run.

C was the language of second-year Computer Science at Stanford, and I would come back to it throughout college for various classes (along with LISP and a ruinous misadventure in Prolog), and for some summer jobs. The funny thing is that at the time, C was considered a high-level language: abstracting away the CPU was sufficient to count as “high-level”. Granted, at the time we also drew a distinction between “assembly language” and “machine language”, presumably because there was still someone somewhere without an assembler who was thus forced to provide the actual opcodes. Today, C is considered a low-level language. In my CodeMash 2010 talk on C, I postulated that a high-level language is now expected to abstract away not only the CPU, but memory as well. In Beyond Java, Bruce Tate predicted we’d never see another mainstream language that doesn’t run in a VM and offer the usual benefits of that environment, like memory protection and garbage collection, and I suspect he’s right.

But does malloc() make C “primitive”? I sure didn’t think so in 1986. In fact, it did a lot more than the languages of the time. Dynamic memory allocation was not actually common then; the flavors of BASIC of that era had stack variables only, no heap. To have, say, a variable number of enemies in your BASIC game, you probably needed to do something like declaring arrays at some maximum size and using only as many slots as you needed. And of course, relative to assembly language, where you’re directly exposed to the CPU and RAM, C’s abstractions are profound. If you haven’t had that experience, you don’t appreciate that a = b + c involves loading b and c into CPU registers, invoking an “add” opcode, and then copying the result from a register out to memory. One line of C, many lines of assembly.

There is a great blog post from a few years ago assessing the Speed, Size, and Dependability of Programming Languages. It represents the relationship between code size and performance as a 2-D plot, where an ideal language has high performance with little code, and an obsolete language demands lots of work and is still slow. These two factors are a classic trade-off, and the other two quadrants are named after the traditional categorization: slow but expressive languages are “script”, fast but wordy are “system”. Go look up gcc on that plot – it’s clearly the fastest, but its wordiness is really not that bad.

Perhaps the reason C has stuck around so long is that its bang for the buck really is historically remarkable, and unlikely to be duplicated. For all its advantages over assembly, it maintains furious performance, and the abstractions since built atop C (with the arguable exception of Java, whose primary sin is being a memory pig) sacrifice performance for expressiveness. We’ve always known this, of course, but it takes a certain level of intellectual honesty to really acknowledge how many extra CPU cycles we burn by writing code in something like Ruby or Scala. If I’m going to run that slow, I think I’d at least want to get out of curly-brace / function-call hell and adopt a different style of thinking, like LISP.

I was away from C for many years… after college, I went on a different path and wrote for a living, not coming back to programming until the late ’90s. At that point, I learned Java, building on my knowledge of C and other programming languages. But it wasn’t until the mid-2000s that I revisited C, when I tired of the dead end that was Java media and tried writing some JNI calls to QuickTime and QTKit (the lloyd and keaton projects). I never got very far with these, as my C was dreadfully rusty, and furthermore I didn’t understand the conventions of Apple’s C-based frameworks, such as QuickTime and Core Foundation.

It’s only in immersing myself in iOS and Mac since 2008 that I’ve really gotten good with calling C in anger again, because on these platforms, C is a first-class language. At the lower levels — including any framework with “Core” in its name — C is the only language.

And at the Core level, I’m sometimes glad to only have C. For doing something like signal processing in a Core Audio callback, handing me a void* is just fine. In the higher level media frameworks, we have to pass around samples and frame buffers and such as full-blown objects, and sometimes it feels heavier than it needs to. If you’re a Java/Swing programmer, have you ever had to deal with a big heavy BufferedImage and had to go look through the Raster object or whatever and do some conversions or lookups, when what you really want is to just get at the damn pixels already? Seems to happen a lot with media APIs written in high-level languages. I’m still not convinced that Apple’s AV Foundation is going to work out, and I gag at having to look through the docs for three different classes with 50-character names when I know I could do everything I want with QuickTime’s old GetMediaNextInterestingTime() if only it were still available to me.

C is underappreciated as an application programming language. Granted, there’s definitely a knack to writing C effectively, but it’s not just the language. Actually, it’s more the idioms of the various C libraries out there. OpenGL code is quite unlike Core Graphics / Quartz, just like OpenAL is unlike Core Audio. And that’s to say nothing of the classic BSD and other open-source libraries, some of which I still can’t crack. Much as I loathe NSXMLParser, my attempt to switch to libxml for the sake of a no-fuss DOM tree ended after about an hour. So maybe it’s always going to be a learning process.

But honestly, I don’t mind learning. In fact, it’s why I like this field. And the fact that a 40-year old language can still be so clever, so austere and elegant, and so damn fast, is something to be celebrated and appreciated.

So, thanks Dennis Ritchie. All these years later, I’m still enjoying the hell out of C.