
Archives for: November 2008

Speaking at CodeMash 2009

OK, now that it’s public knowledge, I’ll be speaking at CodeMash 2009, January 7-9 in Sandusky, Ohio. I’ll be doing a half-day deep dive tutorial on iPhone SDK programming as part of the January 7 Precompiler. Actually, I’m doing the half-day tutorial twice; given the interest level in all things iPhone, that gives me a chance to work with more people and to let them loose to hit other precompiler events the rest of the day.

I’m also doing a quickie one-hour intro as a regular session on Thursday or Friday. The descriptions on the session page are borked (at least in Safari, Firefox, and IE… maybe it works in Opera?), so here’s the abstract that I sent:

The iPhone may be the most disruptive consumer electronic product released in decades, and with the release of a public SDK for third-party programmers, is one of the most important new platforms. For many developers, the iPhone SDK is completely alien, forsaking widely used languages and libraries in favor of Objective-C and Cocoa. However, these technologies were honed for years on the Mac, and for those willing to invest the time to learn them, they offer a surprisingly powerful programming environment. This talk will offer an introduction to the contents of the iPhone SDK: developing with Xcode and building GUIs with Interface Builder. It will introduce the Objective-C language and the core ideas and design patterns of the Cocoa Touch framework. Along with an overview of the major APIs of the iPhone platform, the talk will also discuss the processes by which you can get your application onto end-user devices, either via ad hoc distribution within the enterprise or via the Apple App Store to the general public.

I had hoped to go to this conference last year, but Sun Tech Days Atlanta intervened. Now that I’m just a four-hour drive from Sandusky, it makes a lot more sense.

Come for the APIs, stay for the indoor waterslides.

WebKit fixed fast

I’ve been using the WebKit nightly build as my preferred browser for about a year (ever since they put in the HTML5 <video> tag support, which other browsers are now starting to pick up as well).

On Sunday, I found a crashing bug, filed it, tracked the bug it turned out to be a duplicate of, and as of last night’s build, the bug is fixed.

Damn, that’s fast. Thanks guys, you rock.

My emerging mental media taxonomy

Back when we did the iPhone discussion on Late Night Cocoa, I made a point of distinguishing the iPhone’s media frameworks, specifically Core Audio and friends (Audio Queue Services, Audio Session, etc.), from “document-based” media frameworks like QuickTime.

This reflects some thinking I’ve been doing over the last few months, and I don’t think I’m done, but it already marks a significant change in how I see things and invalidates some of what I’ve written in the past.

Let me explain the breakdown. In the past, I saw a dichotomy between simple media playback frameworks and those that could do more: mix, record, edit, etc. While there are lots of media frameworks that could enlighten me (I’m admittedly pretty ignorant of both Flash and the Windows media frameworks), I’m now organizing things into three general classes of media framework:

  • Playback-only – this is what a lot of people expect when they first envision a media framework: they’ve got some kind of audio or audio/video source and they just care about rendering it to screen and speakers. As typically implemented, the source is opaque, so you don’t have to care about the contents of the “thing” you’re playing (AVI vs. MOV? MP3 vs. AAC? DKDC!), but you also generally can’t do anything with the source other than play it. Your control may be limited to play (perhaps at a variable rate), stop, jump to a time, etc.

  • Stream-based – In this kind of API, you see the media as a stream of data, meaning that you act on the media as it’s being processed or played. You generally get the ability to mix multiple streams and add your own custom processing, with the caveat that you’re usually acting in realtime, so anything you do has to finish quickly for fear you’ll drop frames. It makes a lot of sense to think of audio this way, and this model fits two APIs I’ve done significant work with: Java Sound and Core Audio (see the sketch after this list). Conceptually, video can be handled the same way: you can have a stream of A/V data that can be composited, effected, etc. Java Media Framework wanted to be this kind of API, but it didn’t really stick. I suspect there are other examples of this that work; the Slashdot story NVIDIA Releases New Video API For Linux describes a stream-based video API in much the same terms: ‘The Video Decode and Presentation API for Unix (VDPAU) provides a complete solution for decoding, post-processing, compositing, and displaying compressed or uncompressed video streams. These video streams may be combined (composited) with bitmap content, to implement OSDs and other application user interfaces.’

  • Document-based – No surprise, in this case I’m thinking of QuickTime, though I strongly suspect that a Flash presentation uses the same model. In this model, you use a static representation of media streams and their relationships to one another: rather than mixing live at playback time, you put information about the mix into the media document (this audio stream is this loud and panned this far to the left, that video stream is transformed with this matrix and here’s its layer number in the Z-axis), and then a playback engine applies that mix at playback time. The fact that so few people have worked with such a thing recalls my example of people who try to do video overlays by hacking QuickTime’s render pipeline rather than just authoring a multi-layer movie like an end-user would.
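
To make the stream-based idea concrete, here’s a minimal sketch using Java Sound, one of the stream-based APIs mentioned above. The file name and the 0.5 gain are made up for illustration, and it assumes a 16-bit little-endian PCM source; the point is simply that your code touches every buffer of audio on its way to the output line, and has to keep up.

```java
// Minimal sketch of the stream-based model, using Java Sound.
// Assumptions (not from the post): a 16-bit signed little-endian PCM file
// named "loop.wav", and an arbitrary 0.5 gain as the "custom processing" step.
import java.io.File;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.SourceDataLine;

public class StreamGainPlayer {
    public static void main(String[] args) throws Exception {
        AudioInputStream in = AudioSystem.getAudioInputStream(new File("loop.wav"));
        AudioFormat fmt = in.getFormat();
        SourceDataLine line = AudioSystem.getSourceDataLine(fmt);
        line.open(fmt);
        line.start();

        byte[] buf = new byte[fmt.getFrameSize() * 1024];
        double gain = 0.5; // stand-in for whatever per-buffer processing you want
        int read;
        while ((read = in.read(buf, 0, buf.length)) != -1) {
            // Touch the stream as it flows by: scale each 16-bit sample.
            for (int i = 0; i + 1 < read; i += 2) {
                int sample = (short) ((buf[i + 1] << 8) | (buf[i] & 0xff));
                sample = (int) (sample * gain);
                buf[i] = (byte) (sample & 0xff);
                buf[i + 1] = (byte) ((sample >> 8) & 0xff);
            }
            line.write(buf, 0, read); // blocks, pacing us to the hardware
        }
        line.drain();
        line.close();
        in.close();
    }
}
```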

I used to insist that Java needed a media API that supported the concept of “media in a stopped state”… clearly that spoke to my bias towards document-based frameworks, specifically QuickTime. Having reached this mental three-way split, I can see that a sufficiently capable stream-based media API would be powerful enough to be interesting. If you had to have a document-based API, you could write one that would then use the stream API as its playback/recording engine. Indeed, this is how things are on the iPhone for audio: the APIs offer deep opportunities for mixing audio streams and for recording, but doing something like audio editing would be a highly DIY option (you’d basically need to store edits, mix data, etc., and then perform that by calling the audio APIs to play the selected file segments, mixed as described, etc.).
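
For what it’s worth, here’s a rough sketch of what that DIY approach might look like: the “document” is nothing but static data about segments and their mix, and performing it just means handing each edit to some stream-based engine. EditList, Segment, and StreamEngine are invented names for illustration, not part of any real API; a real implementation would back the engine with Core Audio, Java Sound, or whatever stream framework is available.

```java
// Hypothetical document-over-stream arrangement, as described above.
// None of these names come from a real API.
import java.util.ArrayList;
import java.util.List;

public class EditList {

    /** One edit: which file, where to start, how long, and how it sits in the mix. */
    public static class Segment {
        final String sourceFile;
        final double startSec, durationSec;
        final float gain, pan; // the mix lives in the document, not in the engine

        Segment(String sourceFile, double startSec, double durationSec, float gain, float pan) {
            this.sourceFile = sourceFile;
            this.startSec = startSec;
            this.durationSec = durationSec;
            this.gain = gain;
            this.pan = pan;
        }
    }

    /** Stand-in for the stream-based playback engine that does the actual work. */
    public interface StreamEngine {
        void playSegment(String file, double startSec, double durationSec, float gain, float pan);
    }

    private final List<Segment> segments = new ArrayList<Segment>();

    public void add(Segment s) {
        segments.add(s);
    }

    /** "Performing" the document is just walking the edits and feeding them to the stream API. */
    public void perform(StreamEngine engine) {
        for (Segment s : segments) {
            engine.playSegment(s.sourceFile, s.startSec, s.durationSec, s.gain, s.pan);
        }
    }
}
```

The nice part of this split is that nothing in the document layer is time-critical; all the realtime constraints stay down in the stream engine.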

But I don’t think it’s enough anymore to have a playback-only API, at least on the desktop, for the simple reason that HTML5 and the <video> tag commoditize video playback. On JavaPosse #217, the guys were impressed by a blog claiming that a JavaFX media player had been written in just 15 lines. I submit that it should take zero lines to write a JavaFX media player: since JavaFX uses WebKit, and WebKit supports the HTML5 <video> tag (at least on Mac and Windows), you should be able to score video playback by just putting a web view and an appropriate bit of HTML5 in your app.

One other thing that grates on me is the maxim that playback is what matters the most because that’s all that the majority of media apps are going to use. You sort of see this thinking in QTKit, the new Cocoa API for QuickTime, which currently offers very limited access to QuickTime movies as documents: you can cut/copy/paste, but you can’t access samples directly, insert modifier tracks like effects and tweens, etc.

Sure, 95% of media apps are only going to use playback, but most of them are trivial anyways. If we have 99 hobbyist imitations of WinAmp and YouTube for every one Final Cut, does that justify ignoring the editing APIs? Does it really help the platform to optimize the API for the trivialities? They can just embed WebKit, after all, so make sure that playback story is solid for WebKit views, and then please Apple, give us grownups segment-level editing already!

So, anyways, that’s the mental model I’m currently working with: playback, stream, document. I’ll try to make this distinction clear in future discussions of real and hypothetical media APIs. Thanks for indulging the big think, if anyone’s actually reading.

Maple Fail

I went ahead and upgraded Parallels, kind of in hopes that Maple Story would work without a) having to run in Boot Camp, b) having to run without DirectX support (and therefore at a crawl), or c) having the Windows .exe terminate with a dialog accusing me of an “Illegal hacking attempt”.

The result: my character (“Quaoar”, near the center of the screen) seems to have picked up the texture from a “Wizet Wizard” label elsewhere on the screen:

Fail. Amusingly, turning off 3D acceleration now completely fails (instead of just making the game hopelessly slow), with Maple Story complaining of an unsupported graphics mode.

If Windows is only good for games, and this game still doesn’t work in Parallels… I don’t suspect I’ll be using Parallels that much after all.

Open source pathologies observed, #1

Here’s an exchange I think I see almost every day. We’ll call the typical developer and the OSS developer “John” and “Mary” respectively, setting aside the fact that all the female OSS developers out there could probably fit comfortably in one booth at Chili’s.

JOHN: I need an X.

MARY: We don’t have an X. Why don’t you write one yourself and contribute it back to us?

JOHN: Because if I had the skill or time to write it myself, I wouldn’t be looking for one. Plus, writing it for general consumption will take at least twice as long, so… don’t hold your breath.

MARY: Screw you, corporate tool.

JOHN: Bite my ass, hippie.

And we’re done here.

ComposiCoaxiaComponent cables at Target

Maybe all the laid-off Circuit City staffers can get jobs helping customers at Target, which apparently could use the help:

Click the image and look closely: this sign (from a Grand Rapids store, but presumably the same everywhere) identifying analog A/V connections gets composite and coaxial cables backwards. Composite is the yellow RCA cable, which carries luminance and chrominance on the same wire (as opposed to, say, S-Video) and is completely independent of an audio signal, while coaxial is the twist-on cable with the little core wire sticking out from the white insulator.

I know, you’d rather not have to use either, though my Wii is actually on composite right now because I’ve used up my component inputs with PS2 and the DirecTV HD DVR. I should probably move the Wii to S-Video at least, but someday when I have a little money, the old tube HDTV will move out of the basement and we’ll get a flatscreen with HDMI and switch all the inputs to that. Or whatever the next crazy cable standard is.

Link: Time to standardize on H.264?

The Editor’s Note in the latest issue of Streaming Media asks whether it’s time to standardize on H.264 for online video:

Why have competing video formats at all? That question has long seemed Pollyannaish to those on the Streaming Media lists who are invested heavily in one proprietary technology or another, but now that Microsoft Silverlight has finally joined Adobe in supporting H.264 playback—QuickTime and RealPlayer were ahead of the game on this one—our industry needs to evaluate whether or not it’s time to agree upon H.264 as the standard for all online video.

Actually, it’s news to me that Silverlight now supports H.264, but I don’t track the Microsoft technologies all that closely (I know, but there are only so many hours in the day, better things to do, etc…). Still, it probably helps a large group of their users who are already interested in H.264 for other reasons, and who might not adopt Silverlight as a delivery platform if it insisted on using MS video (as far as I know, the MS stuff is fine, just different). And it further reinforces H.264’s virtuous circle: more potential clients means more companies working on encoders (the by-design competitive half of the MPEG standards), which means higher quality at lower bitrates, which makes the codec even more appealing, so more people adopt it, and so on.

Come to think of it, it would be interesting to know if the Silverlight-based Netflix on Demand for Mac is using H.264 or VC-1 or some other MS codec. BTW, the Netflix deal probably gives real legitimacy to Silverlight as a cross-platform technology in the eyes of a lot of Mac users. At the end of the day, they don’t want to be denied content because of their platform choice and if Microsoft (of all people!) can help that, then so be it.

Left out in the cold, unsurprisingly, is JavaFX, with Sun’s typically bizarre choice of the no-name On2 codec. I’ve bashed this before, and I assume that the end-of-the-day reason is that Sun just doesn’t have the money to license H.264, but now with both Flash and Silverlight supporting H.264, JavaFX is even more of an odd man out. Apparently, the premise here is that there really is an audience out there for a Flash-workalike that uses its own weird language, its own weird plug-in, its own weird codec, etc…