
Archives for: flash

More things in Heaven and Earth

There are more things in Heaven and Earth, Horatio, than are dreamt of in your philosophy.
Hamlet, Act 1, Scene V

There’s suddenly a lot of conventional wisdom that says the rise and eventual dominance of Android are manifest and inevitable. Some of these claims make dubious analogies to Windows’ defeat of the Mac in the 90’s, ramming square pegs through round holes to make the analogy stick (to wit: who are the hardware manufacturers this time, the handset makers or the carriers?). It may indeed come to pass, but the reasoning behind these claims is pretty shallow thus far.

Case in point: an Appcelerator survey covered in The Apple Blog story Devs Say Android is Future-Proof. iOS? Not So Much. The reasoning for Android’s perceived advantage? This article doesn’t mention Android’s license terms and widespread hardware adoption (maybe that’s taken for granted at this point?), and instead mentions only the appeal of writing apps for GoogleTV, a product that is not even out yet (meaning Adamson’s First Law applies), to say nothing of how many purported “interactive television revolutions” we’ve suffered through over the decades (Qube, videotex, WebTV, Tru2Way, etc.). Maybe it’ll be the next big thing, but history argues otherwise.

In the 90’s, the rise of Java seemed an obvious bet. Applets would make web pages far more compelling than static pages and lengthy form submits, and application developers would surely be better off with garbage collection and strong typing than with C and C++. Java was so sure to be big that Microsoft threw the full force of its dirty tricks machine at it, while Apple exposed most of the Mac’s unique libraries to Java bindings (including, at various times, QuickTime, Cocoa, Core Audio, speech, and more). But it didn’t work out that way: Java in the browser was displaced by JavaScript/Ajax, and the early attempts to write major desktop applications in Java were unmitigated disasters, with the Netscape Navigator port abandoned and Corel’s Java version of WordPerfect Office buried almost immediately after its release. 1996’s sure bet was a has-been (or a never-was) by 2001.

If you think about it, the same thing happened a few years ago with AIR. With the YouTube-powered rise of Flash, AIR seemed a perfect vehicle to bring hordes of Flash developers to the desktop. Everyone knew it would be big. Except it wasn’t. AIR applications are rare today, perhaps rarer even than Java. Admittedly, I was only reminded of AIR’s existence because I needed to download the AIR-powered Balsamiq application for a client this week… the exception that proves the rule, I guess?

My point in all this is that the conventional wisdom about platform success has a tendency to be selective in considering what factors will make or break a platform. Licensing, corporate support, community, and of course the underlying technology all play a part. Android is greatly enhanced by the fact that Google puts talented people behind it and then gives it away, but if carriers then use it to promote their own applications and crapware over third-party apps (or cripple them, as they did with Java ME), then Android’s advantage is nil. On the other hand, Apple’s iOS may have remarkable technology, but if their model requires using their corporate strength to force carriers to be dumb pipes, then they may only be able to get the iPhone on weaker carriers, which will turn off consumers and retard the growth of the platform.

Ultimately, it’s hard to say how this will all play out, but assuming an Android victory based on the presumed success of currently non-existent tablets and set top boxes is surely an act of faith… which probably accounts for all the evangelism.

So why am I on iOS now? Is it because I have some reason to think that it will “win”? Not at all. Mostly it’s because I like the technology. In the mid-2000’s, when user-facing Java was in terminal decline, I tried to learn Flash and Flex to give myself more options, but I just couldn’t bring myself to like it. It just didn’t click for me. But as I got into Cocoa and then the iPhone SDK, I found I liked the design patterns, and the thoughtfulness of all of it. The elegance and power appealed to me. Being a media guy, I also appreciate the platform’s extraordinary support for audio and video: iOS 4 has three major media APIs (AV Foundation, Core Audio, and Media Player), along with other points of interest throughout the stack (video out in UIKit, the low-level abstractions of Core Media, spatialized sound in OpenAL, high-performance DSP functions in the Accelerate framework, etc.). The android.media package is quite limited by comparison, offering some canned functionality for media playback and a few other curious features (face recognition and dial tone generation, for example), but no way to go deeper. When so many media apps for Android are actually server-dependent, like speech-to-text apps that upload audio files for conversion, it says to me there’s not much of a there there, at least for the things I find interesting.

Even when I switched from journalism and failed screenwriting to programming and book-writing in the late 90’s, at the peak of the Microsoft era, I never considered for a second the option of learning Windows programming and adopting that platform. I just didn’t like their stuff, and still don’t. The point being that I, and you, don’t have to chase the market leader all the time. Go with what you like, where you’ll be the most productive and do the most interesting work.

There’s a bit in William Goldman’s Adventures in the Screen Trade (just looked in my copy, but couldn’t find the exact quote), where the famous screenwriter excuses himself from a story meeting, quitting the project by saying “Look, I am too old, and too rich, to have to put up with this shit.” I like the spirit of that. Personally, I may not be rich, but I’m certainly past the point where I’m willing to put up with someone else’s trite wisdom, or the voice of the developer mob, telling me where I should focus my skills and talents.

Jimmy Gosling Said (I’m Irrelevant When You Compile)

Much renewed hope and delight last week, after Apple pulled back from its most audacious and appalling land-grabs in its iOS developer agreement, notably the revised section 3.3.1 that prohibited any languages but C, Objective-C, and C++ for iOS development (Daring Fireball quotes the important changes in full). Whether this is the result of magnanimity or regulatory pressures in the U.S. and Europe is unknowable and therefore unhelpful. What’s interesting is thinking about what we can do with our slightly-loosened shackles.

For example, it would be interesting to see if someone, perhaps a young man or lady with a mind for mischief or irony, could bring Google’s Go programming language to iOS development. Since the Go SDK runs on Mac and can compile for ARM, it might well be possible to have a build script call the Go compiler as needed. And of all the hot young languages, Go might be the most immediately applicable, as it is compiled, rather than interpreted.

And that brings up the other major change, the use of interpreters. Nobody seems to be noting that the change in section 3.3.2 is not just a loosening of this spring’s anti-Flash campaign, but is in fact far more lenient than this policy has ever been. Since the public SDK came out in 2008, all forms of interpreted code have been forbidden. This is what dashed early plans to bring Java to the iPhone as an application runner, to say nothing of its absence as an applet runner in Safari. As Matt Drance has pointed out, the new policy reflects the reality on the ground that interpreters (especially Lua) have been tolerated for some time in games. The new phrasing forbids downloading of executable content, but allows for cases where the interpreter and all scripts are included in the app bundle. This has never been allowed before, and is a big deal.
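
To make the newly permitted arrangement concrete, here’s a minimal sketch of the Lua-in-a-game case: the interpreter compiled into the app, running only a script that ships inside the bundle. The script name is hypothetical, and you’d have to add the Lua sources to the Xcode project yourself for this to build.

    // Lua statically linked into the app, running a script from the app bundle --
    // nothing is downloaded, so this stays within the revised section 3.3.2.
    #import <Foundation/Foundation.h>
    #include "lua.h"
    #include "lauxlib.h"
    #include "lualib.h"

    static void runBundledScript(void) {
        // "game.lua" is a made-up name; the point is that it lives in the bundle.
        NSString *scriptPath = [[NSBundle mainBundle] pathForResource:@"game"
                                                               ofType:@"lua"];
        lua_State *L = luaL_newstate();   // create an interpreter instance
        luaL_openlibs(L);                 // load the standard Lua libraries
        if (luaL_dofile(L, [scriptPath fileSystemRepresentation]) != 0) {
            NSLog(@"Lua error: %s", lua_tostring(L, -1));
        }
        lua_close(L);
    }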

Now let me stretch the definition of “interpreter” a bit, to the point where it includes virtual machines. After all, the line between the two is hard to define: a “virtual machine” is a design philosophy, not a technical trait. A VM uses an interpreter (often a bytecode interpreter rather than a source interpreter, but not necessarily), and presumably has more state and exposes more library APIs. But languages and their interpreters are getting bigger – Ruby’s I/O is part of the language rather than in a library (as it is in C or Java), but that doesn’t make Ruby a VM, does it?

You might have surmised where I’m going with this: I don’t think the revised section 3.3.2 bans a hypothetical port of the Flash or Java VMs to iOS anymore, if they’re in a bundle with the .swf or .jar files that they will execute.

I could be wrong, particularly given Steve Jobs’ stated contempt for these sorts of intermediary platforms. But if a .swf-and-Flash-VM bundle were rejected today, it would be by fiat, and not by the letter of section 3.3.2.

Whether any of this matters depends on whether anyone has stand-alone Flash applications (such as AIR apps) or Java applications that have value outside of a browser context, and are worth bringing to a mobile platform.

SFX: CRICKETS CHIRPING

I can’t say why AIR never seemed to live up to its billing, but the failings of Desktop Java can in part be blamed on massive neglect by Sun, exacerbated by internecine developer skirmishes. Swing, the over-arching Java UI toolkit, was plagued by problems of complexity and performance when it was introduced in the late 90’s, problems that were never addressed. It’s nigh impossible to identify any meaningful changes to the API following its inclusion in Java 1.2 in 1998. Meanwhile, the IBM-funded Eclipse Foundation tied its SWT more tightly to native widgets, but it was no more successful than Swing, at least in terms of producing meaningful apps. Each toolkit powers one IDE, one music-stealing client, and precious little else.

So, aside from the debatability of section 3.3.2, and wounded egos in the Flash and Java camps, the biggest impediment to using a “code plus VM” porting approach may be the fact that there just isn’t much worth porting in the first place.

Speaking of Desktop Java, the Java Posse’s Joe Nuxoll comes incredibly close to saying something that everybody in that camp needs to hear. In the latest episode, at 18:10, he says “…gaining some control over the future of mobile Java which, it’s over, it’s Android, it’s done.” He later repeats this assertion that Android is already the only form of mobile Java that matters, and gets agreement from the rest of the group (though Tor Norbye, an Oracle employee, can’t comment on this discussion of the Oracle/Google lawsuit, and may disagree). And this does seem obvious: Android, coupled with the rise of the smartphone, has rendered Java ME irrelevant (to say nothing of JavaFX Mobile, which seems stillborn at this point).

But then at 20:40, Joe makes the big claim that gets missed: “Think of it [Android] as Desktop Java for the new desktop.” Implicit in this is the idea that tablets are going to eat the lunch of traditional desktops and laptops, and that those tablets that aren’t iPads will likely be Android-based. That makes Android the desktop Java API of the future, because not only have Swing, SWT, and JavaFX failed, but the entire desktop model is likely threatened by mobile devices. Two years in, Android already has more important, high-quality, well-known apps than the desktop Java APIs produced in over a decade. Joe implies, but does not say, what should be obvious: all the Java mobile and desktop APIs are dead, and any non-server Java work of any relevance in the future will be done in Android.

No wonder Oracle is suing for a piece of it.

A Big Bet on HTTP Live Streaming

So, Apple announced yesterday that they’ll stream today’s special event live, and everyone, myself included, immediately assumed the load would crash the stream, if not the whole internet. But then I got thinking: they wouldn’t even try it if they weren’t pretty damn sure it would work. So what makes them think this will work?

HTTP Live Streaming, that’s why. I banged out a series of tweets (1, 2, 3, 4, 5, 6, 7, 8, 9) spelling out why the nature of HTTP Live Streaming (which I worked with briefly on a fix-up job last year) makes it highly plausible for such a use.

To summarize the spec: a client retrieves a playlist (an .m3u8, which is basically a UTF-8’ed version of the old WinAmp playlist format) that lists segments of the stream as flat files (often .m4a’s for audio, and .ts for video, which is an MPEG-2 transport stream, though Apple’s payload is presumably H.264/AAC). The client downloads these flat files and sends them to its local media player, and refreshes the playlist periodically to see if there are new files to fetch. The sizing and timing are configurable, but I think the defaults are something like a 60-second refresh cycle on the playlist, and segments of about 10 seconds each.
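
Concretely, a live playlist is just a rolling window over the most recent segments. A minimal sketch (the segment names and sequence number are made up; the tags are from the spec):

    #EXTM3U
    #EXT-X-TARGETDURATION:10
    #EXT-X-MEDIA-SEQUENCE:2680
    #EXTINF:10,
    fileSequence2680.ts
    #EXTINF:10,
    fileSequence2681.ts
    #EXTINF:10,
    fileSequence2682.ts

On the next refresh, the media-sequence number has advanced, new segments appear at the bottom, and the oldest ones have rolled off the top.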

This can scale for a live broadcast by using edge servers, which Apple has long depended on Akamai (and others?) for. Apple vends you a playlist URL at a local edge server, and its contents are all on the edge server, so the millions of viewers don’t pound Apple with requests — the load is pushed out to the edge of the internet, and largely stays off the backbone. Also, all the local clients will be asking for the same handful of segment files at the same time, so these could be in in-memory caches on the edge servers (since they’re only 10 seconds of video each). All these are good things.

I do wonder if local 3G cells will be a point of failure, if the bandwidth on a cell gets saturated by iPhone clients receiving the files. But for wired internet and wifi LANs, I suspect this is highly viable.

One interesting point brought up by TUAW is the dearth of clients that can handle HTTP Live Streaming. So far, it’s iOS devices, and Macs with QuickTime X (i.e., running Snow Leopard). The Windows version of QuickTime doesn’t support HTTP Live Streaming (being based on the “old” 32-bit QuickTime on Mac, it may effectively be in maintenance mode). Open standard or not, there are no handy HTTP Live Streaming clients for other OSes, though MacRumors’ VLC-based workaround (which requires you to manually download the .m3u8 playlist and do the refresh yourself) suggests it would be pretty easy to get it running elsewhere, since you already have the ability to play a playlist of segments and just need to automate the playlist refresh.

Dan Leehr tweeted back that Apple has talked a good game on HTTP Live Streaming, but hasn’t really shown much. Maybe this event is meant to change that. Moreover, you can’t complain about the adoption — last December, the App Store terms added a new fiat that any streaming video app must use HTTP Live Streaming (although a February post seems to ratchet this back to apps that stream for more than 10 minutes over the cellular network), so any app you see with a video streaming feature almost certainly uses HLS. At WWDC, Apple boasted about the MLB app using HLS, and it’s a safe bet that most/all other iOS video streaming apps (Netflix, Crunchyroll, etc.) use it too.

And one more thing to think about… MLB and Netflix aren’t going to stream without DRM, right? That’s the other piece that nobody ever talks about with HTTP Live Streaming: the protocol allows for encryption of the media files. See section 5 of the spec. As much as Apple and its fanboys talk up HTML5 as a rival to and replacement for Flash, this is the thing that should really worry Adobe: commoditizing DRM’ed video streaming.
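
Per the spec, turning that on is just one more playlist tag that points the client at a decryption key (the key URI below is a placeholder), with the segments themselves encrypted with AES-128:

    #EXT-X-KEY:METHOD=AES-128,URI="https://example.com/keys/key42"
    #EXTINF:10,
    fileSequence2683.ts

Gate access to that key URL however you like, and you have the makings of a DRM scheme with no plug-in involved.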

Secret Video Attack Vector

When I think about the iPad and the appealing idea of watching video on this device, the usual objections come up about being limited to the Apple iTunes ecosystem, and being shut out from great stuff, just like the AppleTV is.

But wait a second, there’s a second way to get video on your iPad, just like on an iPhone/iPod touch: apps can stream video. In fact, there are a bunch of these already. In many cases, the apps scratch the itch of certain niches. Apple continues to trot out the MLB app to show off crowd-pleasing baseball video, but I notice that I have downloaded at least four apps for streaming anime: Crunchyroll, Babelgum, Joost, and The Anime Network. Looking on the App Store, I see a few others, including one app that exists just as a container for the first volume of the charming His and Her Circumstances.

Let’s think here. Some of these services (Crunchyroll and Anime Network) offer Flash-based viewers on the web, but they’re not in the Flash business, they’re in the content business, so they use a different technology to get their content to iPhone OS viewers.

And what is that technology? By Apple fiat, it is HTTP Live Streaming, which Apple recently declared as the only permitted technology for streaming video to iPhone OS devices. Technically, it seems like you could also use HTML5 <video> tags in a UIWebView, but there are lots of nice reasons to use HTTP Live Streaming (content providers are probably happy to see DRM in the spec, even if most of us consumers aren’t).
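
For what it’s worth, the app-side work is nearly trivial. Here’s a minimal sketch of playing an HLS stream with the Media Player framework (the playlist URL is made up, and this assumes the iOS 3.2-era MPMoviePlayerViewController):

    #import <MediaPlayer/MediaPlayer.h>

    // Inside some view controller: Media Player treats an .m3u8 URL like any
    // other movie URL and handles the playlist refreshes and segment fetching.
    NSURL *streamURL =
        [NSURL URLWithString:@"http://example.com/anime/episode1/prog_index.m3u8"];
    MPMoviePlayerViewController *playerVC =
        [[[MPMoviePlayerViewController alloc] initWithContentURL:streamURL] autorelease];
    [self presentMoviePlayerViewControllerAnimated:playerVC];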

So here’s what I’m wondering. If content providers are going to have to use HTTP Live Streaming to support all the iPhone OS devices, is this eventually going to put pressure on Flash? All that has to happen is for HTTP Live Streaming to become more viable on desktops, and then you’ll have a video distribution solution that skips the Adobe tax and the extra step of Flash encoding. HLS is supported by QuickTime X on Snow Leopard, but I don’t believe that the Windows version of QuickTime handles it yet. Pity.

Still, if I were Adobe, I’d be pretty concerned about HTTP Live Streaming right now.

Flash? Go on…

Much tweetage today over the announcement of Flash for iPhone. Not as a browser plug-in, which Apple still refuses (and which would be desirable for the many Flash-dependent pages out there), but as a cross-compiler solution: take a Flash project, hit export, get an iPhone app.

Fine in theory. In fact, kind of neat. But what’s the point? I had a torrid session of tweetage with friends, my main point being that the most typical use of Flash is in enhancing web pages, and there’s not much value bringing a fragment of a web page over as a stand-alone iPhone app. The number of cases where you spend a significant amount of time directly interacting with Flash is pretty low and, Mad Men Yourself notwithstanding, most of them are games (which the iPhone already has a glut of). I hoped to leave things with this:

In summary: Flash offers 2 things browsers don’t have: graphics-rich runtime and media. Cocoa Touch lacks neither.

I’m not trying to make the case that Cocoa Touch and Objective-C are better than Flash and ActionScript; I just don’t see how this solves anyone’s real problems. [Cynical aside: except for Adobe’s need to sell more Flash CS Professional licenses.] It’s impressive from a technological point of view, but is there really that much Flash code that’s going to be viable and valuable as stand-alone iPhone applications?

Actually, there was one more tweet I held back from sending, because I didn’t want the debate to get nasty. Since nobody reads this blog, I’ll post it here:

Meanest thing I could say about Flash-for-iPhone: “That sounds like something Sun would do.”

Ogg: The “Intelligent Design” of digital media

Well, another go ’round with this: HTML5 won’t mandate Ogg as universally-supported codecs, and the freetards are on a tear. I was going to follow up on a JavaPosse thread about this, but I hurled enough abuse onto their list last week.

It’s abundantly clear in this blog that I don’t think Ogg is the solution that its supporters want it to be: I have a whole tag for all the posts where I dismiss Vorbis, Theora, and friends. Among the reasons:

  • I don’t think it’s technically competitive.

  • It certainly isn’t competitive in terms of expertise and mindshare, which is vitally important in media codecs: there’s a much deeper pool of shared knowledge about the MPEG codecs, which leads to chip-level support, competition among encoders, compressionists who understand the formats and how to get the most out of them, etc.

  • Its IP status remains unclear. With even MPEG-4, which went through a lengthy and formal patent-pooling process, attacked by AT&T’s claim of a submarine patent, I have no reason to think that Ogg wouldn’t face similar claims, legitimate or not, if there was any money behind it, which there isn’t.

  • If I go to my former colleagues at CNN or in Hollywood and say “you guys should use Ogg because…”, there are no words in the English language that plausibly complete the sentence and appeal to the rational self-interest of the other party.

On this last point, I’ve got an ugly analogy: just as proponents of “Intelligent Design” are people who don’t really care about biology beyond the point at which it intrudes on their religious belief, so too do I think Ogg advocates generally don’t know much about media, but have become interested because the success of patent-encumbered formats and codecs is an affront to their open-source religion.

Ogg’s value is in its compatibility with the open source religion. It has little to offer beyond that, so it’s no surprise that it has zero traction outside of the Linux zealot community. Even ESR realized that continually shouting “everything should be in Ogg” was a losing strategy, and he said that three years ago.

I think the open source community would like to use HTML5 to force Ogg on the web community, but it’s not going to work. As others have pointed out, there’s little reason to think that IE will ever support HTML5. Even if they do, the <video> tag is not going to replace Flash or Silverlight plug-ins for video. Despite my initial enthusiasm for the <video> tag commoditizing video, I see nothing in the spec that would support DRM, and it’s hard to imagine Big Content putting their stuff on web pages without DRM anytime soon. And while you can put multiple media files in a <video> tag easily enough, having to encode/transcode to multiple formats is one reason that Big Content moved away from the Real/WMP/QuickTime switch to the relative simplicity of works-for-everyone Flash.
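
For reference, the multiple-files case looks like the sketch below, with the browser taking the first source it can actually play (the file names are placeholders). It’s easy on the page author, but somebody still has to encode, store, and serve every clip at least twice, which is exactly the cost Big Content got tired of paying.

    <video controls>
      <source src="clip.mp4" type="video/mp4">
      <source src="clip.ogv" type="video/ogg">
      Sorry, your browser doesn't support the video element.
    </video>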

I’m tired of being lectured by computer people about media; it’s as ludicrous as being lectured about computers by my old boss at Headlines. Just because you use YouTube doesn’t make you an expert, any more than my knowing how to use a username and password means I understand security (seriously, I don’t, and doubt I ever will). Kirill Grouchnikov pretty much nailed what computer people think good video is with this tweet. I’ll add this: there are probably a thousand people at Sun who understand 3D transformations in OpenGL, and maybe five who know what an Edit Decision List is. So they go with what they know.

A couple years back, I gave a JavaOne BoF in which I made a renewed call for a Java Media library which would support sample-level access and some level of editing, arguing that enabling users to take control of their own media was a manifest requirement of future media frameworks. By a show of hands, most of the developers in the audience thought it would be “enough” to just support playback of some modern codecs. JavaFX now provides exactly that. Happy now?

People who actually work in media don’t mind paying for stuff, and don’t mind not owning/sharing the IP. Video production professionals are so accustomed to standardizing on commercial products that many of the product names have become generic nouns in industry jargon: “chyron” for character generators, “grass valley” for switchers, “teleprompters”, “betacam” tape, etc. Non-free is not a problem here. And if your argument for open source is “you’re free to fix it if it doesn’t do what you want it to,” the person who has 48 shows a day to produce is going to rightly ask “why would I use something that doesn’t work right on day one?”

The open source community doesn’t get media. Moreover, it doesn’t get that it doesn’t get media. The Ogg codecs placate the true believers, and that’s the extent of their value.

Hail Xeon!

The freeze-out on using PowerPC for iPhone development — somewhat inexplicable since people have reported getting the various iPhone SDK betas mostly working with PPCs (seriously, Apple, would supporting PPC really be that burdensome?) — has forced me to replace my G5 with an Intel box a year sooner than planned.

Mac Pro (Yuna) and Power Mac (Aeris)

So, here they are, side by side after Firewiring them together for the Migration Assistant during setup. Hard to tell which is which, right? The one on the left is Yuna, the new Mac Pro, and the one on the right is Aeris, the old G5.

Anyone who knows my naming convention might have figured that “Yuna” was the next name in the series, since I name each machine’s partitions after characters in the Final Fantasy games, and I’m up to Final Fantasy X, meaning this box’s partitions are Yuna, Lulu, and Rikku. One thing I do with this convention is to use screenshots or fan art (thank goodness for DeviantArt and all the FF fanboys and fangirls out there) of the character whose name is used for the current partition, giving me an instant reminder of which partition or machine I’m on.

Yuna desktop

I’ve only been up a few hours, and haven’t really pushed the eight cores very hard:

8 cores not doing much

I feel like I should re-export my AMV from Final Cut or do an MPEG transcode or something that’ll spin up the CPUs.

Or maybe I’ll hit the system-grinding Flash slowness of the My Coke Rewards site, which used to bring the old G4 laptop to a near halt (who knew you needed multi-core gigahertz speed to animate a dialog popping up… seriously, Coke and/or Flash, what is your problem?)

Commoditizing embedded video: the HTML5 video tag

Surfin’ Safari notes initial support for the HTML5 <video> and <audio> tags in their latest nightly builds.

Indeed, if you have a browser that supports the video tag, then you (hopefully) can see an autoplaying video here:
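
The embed behind that is nothing more than a single tag; a minimal sketch, with a placeholder movie URL:

    <video src="sample.m4v" width="320" height="240" autoplay controls>
      Sorry, your browser doesn't support the video tag.
    </video>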

There’s been a little bit of controversy over the fact that calls for inclusion of the Ogg formats have been removed in more recent versions of the spec. Section 3.14.7.1, “Video and audio codecs for video elements”, currently reads:

It would be helpful for interoperability if all browsers could support the same codecs. However, there are no known codecs that satisfy all the current players: we need a codec that is known to not require per-unit or per-distributor licensing, that is compatible with the open source development model, that is of sufficient quality as to be usable, and that is not an additional submarine patent risk for large companies. This is an ongoing issue and this section will be updated once more information is available.

That more or less matches my take on Ogg, which is that it poses an unknown patent liability risk: the /. mob insists it’s patent-free, but how the hell do they know? They don’t; they just want it to be so, because it suits their worldview. And Ogg may indeed be patent-free, but I don’t think anybody knows for sure, and even so, proving it would be expensive. To top it all off, Ogg just isn’t that popular or useful outside the warm bubble of Linux zealotry.

Still, there’s a huge need for at least one video and audio codec to be available more or less everywhere, or at least for one class of devices: i.e., one codec you can expect all desktops to have, one for all phones, etc. To just dump out to “whatever QuickTime supports on the Mac, whatever Windows Media supports on Windows, etc.” ends up moving the problem, either to the web author (who has to sniff the OS from the user-agent and write the tag on the fly… to say nothing of hosting multiple encodings of every clip) or to the end-user.

It’s funny, because while the HTML5 <video> tag should displace Flash as the only practical option for web video — something that’s become screamingly obvious in the two years or so since YouTube launched — it might not, if it gets tangled up in codec hassles. The remarkable thing about Flash Video isn’t that it’s good (it’s not), but that it’s consistent and available on all Flash-enabled desktops.

That’s turned out to be a much bigger deal than the quality of competitors like H.264 and WMV, or the fact that other approaches could support many more codecs. Flash doesn’t try to support every codec under the sun, or even offer extension points for third parties to do so, but it doesn’t matter — with a known-viable video codec, content providers can just push their content with the package-deal of FLV and the Flash plug-in. Sure, the QuickTime plugin or a QuickTime for Java applet (or even a JMF applet, fercryinoutloud) could support more formats and codecs than Flash, but the typical use is not a general-purpose “play arbitrary content” application; the web-embedded player is usually meant to play the content from a single content provider, who’s perfectly happy to use a single, sub-optimal format if the alternative is having to encode everything a dozen ways from Sunday to support the various OSes and devices.

Which makes me think that Flash’s ubiquity as a web-embedded video player won’t be threatened by HTML5, so long as there is neither a de jure nor de facto ubiquitous video codec for HTML5. Ironically, while H.264 might be the best candidate for that, Flash is already supporting it too.

Which leads me to an idea: if you were writing a browser on a platform without H.264 support from the native multimedia library, but you had Flash available, could you just pull the Flash player into service on the fly and have it play the H.264?

About Apple’s new iPhone ads

The most interesting thing in Apple’s new iPhone ads: is it the combination of natural lighting and chiaroscuro (achieved by using a black drop on a beautiful sunny day, as revealed by the final shot), or the apparent need to hide the usual QuickTime scrubber in favor of a non-standard JavaScript scrubber that looks and behaves more (dare I say it?) Flash-like?

Does H.264 in Flash matter? (hint: um, yeah…)

I was listening to the This Week in Media podcast a few weeks ago, and they were reacting to the news that Flash will be adopting H.264 in a future version. They were of course delighted, and said that this basically means that “H.264 wins”. Flash’s distribution is powerful, so in a codec race this may well be the case, but it’s interesting that this development could really trump QuickTime in the browser, even though QuickTime adopted, evangelized, and popularized H.264 earlier. What does Flash have that QuickTime doesn’t? Must-have apps like YouTube that will convince people to update their Flash installs, along with a better cross-platform story.

There’s also an irony, because I remember a conversation (that I can’t find now) on the QuickTime-Users list from sometime earlier in the year, when a poster basically asserted that “hey, when people get sick of crappy Flash video, they’ll come running to QuickTime,” apparently overlooking the very obvious and easy step of Adobe just licensing H.264 for Flash.

Having said that, I wondered just how big a difference it really makes. I think the key to the crappiness of YouTube is not just the basic Flash Video codec (a variant of the fairly old H.263), but also the very limited bitrate. From a little Googling on the topic, it looks like YouTube video is encoded somewhere between 250 and 300 kbps, 320×240.

But just to make sure I wasn’t talking out my butt, I decided to do an experiment. I took some old camcorder footage of Keagan when he was three, and encoded it as H.263 and H.264, limiting the video bitrate to 300 kbps in both cases. The source is 720×480 DV, so I’m really crunching it down.

So, is there a quality difference? Assuming you’re on Mac or Windows (because I’ve encoded these as QuickTime movies and used the QuickTime OBJECT and EMBED tags), you tell me:

320×240 H.263 movie


320×240 H.264 movie

Play both and you should see a difference. If not, try advancing each to the exact same frame and compare side by side. For example, compare time 13712 in the H.263 to the same time in the H.264. (JavaScript and QuickTime plug-in required; tested on Mac Safari and Firefox)
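
For reference, each of the players above comes down to the classic paired OBJECT/EMBED markup, roughly like the sketch below (the file name and dimensions are placeholders; the extra 16 pixels of height leave room for the controller):

    <object classid="clsid:02BF25D5-8C17-4B23-BC80-D3488ABDDC6B"
            codebase="http://www.apple.com/qtactivex/qtplugin.cab"
            width="320" height="256">
      <param name="src" value="keagan-h264-300kbps.mov">
      <param name="controller" value="true">
      <embed src="keagan-h264-300kbps.mov" width="320" height="256"
             controller="true" type="video/quicktime"
             pluginspage="http://www.apple.com/quicktime/download/">
    </object>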

And remember, that’s setting aside one factor that makes the quality difference even more striking: YouTube blows up the videos by about a third (to 480×360), so each of those blurry pixels gets stretched. Full-screen is even harder on the eyes.

So, does the video codec matter? Of course. I’ve been in such a mindset that bandwidth can compensate for codec differences that it’s really a bit of a wake-up call to see just how much of a difference it makes in this little demo.