
Archives for : java

Did Open Source Kill Sun?

Some in the Java community are linking to Sun Chairman and Co-Founder Scott McNealy’s comments in the Oracle OpenWorld keynote, in which he wistfully looks back at the soon-to-be-gone Sun and boasts that they:

Kicked butt, had fun, didn’t cheat, loved our customers, changed computing forever

Sorry to bust the warm fuzzies, but we should append that history with a few more words:

and failed anyways

Sun lost money for most of this decade, as its stock fell more than 95%, reaching the point late last year where its market valuation equaled its cash and investments, meaning the market considered the company’s business as having no value whatsoever.

As engineers, we can romanticize Sun’s “good guy” behavior over fine craft beers all night, but at the end of the day, the company ceased to be viable, destroying a great deal of wealth in the process. Sometimes, it seemed like Sun wanted to be the non-profit FSF instead of a publicly-traded company. At least they got the “non-profit” part right.

And clearly understanding Sun’s failure matters because the kinds of things that Sun did are now going to be considered liabilities. Sun tried like crazy to win over the open source community. The community demanded that Sun support Linux, even though Sun would presumably favor its own flavor of Unix, Solaris. But they went along with it… giving companies a reason not to buy Sun hardware and instead lash together cheap Linux boxes, or buy heavy iron from Linux-loving Sun rival IBM. The community demanded that Java be open sourced and, after a series of fits and starts, it finally was, with the ultra-hippie GPL license no less. Ultimately, the community came to believe it had a blank check written against Sun’s engineering resources, as typified in the infamous “changing of the guard” episode of the JavaPosse and the somewhat testy reply.

But what did all these giveaways accomplish? The next time a community goads a company into open sourcing its crown jewels, the critical response may well be “yeah, that worked great for Sun.” In fact, that was pretty much Fake Steve’s take on Sun over the course of Sun’s decline, mocking the company’s giveaways, as it frittered into irrelevance. At the end of the day, how is FSJ not right on this one?

It’s ironic that Sun’s love of the open source community was largely unrequited. As late as 2007, Slashdot founder Rob “CmdrTaco” Malda was still expressing his eternal hatred of Java, and even in GPL form, Java has been slow to win acceptance from the F/OSS types. In an even more ironic twist, Slashdot’s tone has softened lately. For example, a recent article on Android game development quoted its source as saying “While iPhone apps are written in Objective C, the Android SDK uses relatively more programmer-friendly Java.” Why the sudden love for Java? Because it powers Android, the most plausible rival to the iPhone, now telephona non grata to the Slashdot community. In other words, the enemy of my enemy is my friend.

Too late for Sun though, and it’s not clear that a greater acceptance from a community that, by definition, doesn’t like to pay for stuff would even matter anyways. Perhaps the takeaway is that we all need a more realistic attitude about what individuals and companies need to do to continue their existence. Charity is swell, but it’s not necessarily a viable business model.

What’s New, Blue Q?

One-time self-described “World’s Greatest Compressionist” Ben Waggoner posts a pointed question to the quicktime-api list:

http://www.apple.com/macosx/what-is-macosx/quicktime.html

What I’d like to know is if QuickTime X is going to be available for Windows and older versions of Mac OS X.

It’s an important issue, because despite iTunes’ insistence on installing QuickTime on Windows, the future of that product seems completely unknown. For years, every question I’ve seen about the future of QuickTime on Windows has been met with absolute silence from Apple. Yeah, I know, “Apple does not comment on unannounced products,” and all… Still, Apple has left this technology in limbo for a remarkably long time. I recall asking ADC reps about QuickTime for Windows back at Leopard Tech Day Atlanta in 2006, as I was considering calling it from Java with JNI, and (as previously noted), I got no reply at all. And every other public question I’ve seen about the future of QuickTime on Windows has gone similarly unanswered, for years.

Smell that? That’s the scent of Abandoned Code Rot. We got that from QuickTime for Java for a few years before they managed to finally deprecate it (though they apparently haven’t gotten the message out).

It wouldn’t be too surprising to see QT for Windows fall by the wayside… Apple probably cares more about the popularity of its favorite formats and codecs (AAC and H.264) than of the QuickTime APIs and QuickTime’s interactive features like Wired Sprites that have been clearly and unequivocally beaten by Flash.

But if that’s true of Windows, is it also true on the Mac? QuickTime developers are right to be a little worried. The old C-based QuickTime API remains a 32-bit only option, intended to be replaced by the Objective-C QTKit. But in the four years since its introduction in Tiger, QTKit has only taken on part of the capabilities of the old QuickTime API. With Leopard, you could finally do capture and some significant editing (e.g., inserting segments at the movie or track levels), but raw sample level data was unavailable for any track type other than video, and some of the more interesting track types (like effects and especially tweens, useful for fading an audio track’s volume between specific times) are effectively useless in QTKit.

With Snow Leopard, the big news isn’t a more capable QTKit API, it’s QuickTime X. And as Apple’s QuickTime X page points out, QTX is all about a highly-optimized playback path (using decode hardware if available) and polished presentation. Great news if you’re playing 1080p movies on your computer or living room PC, not so much if you want to edit them: for editing, you’re back in the old 32-bit QuickTime (and the code is probably still written in C against the old APIs, given QTKit’s numerous limitations). You don’t see a 64-bit Final Cut Pro, now do you? (BTW, here’s a nice blog on that topic.)

When you all install Snow Leopard tomorrow and run the QTX-based QuickTime Player, you’ll immediately understand why the $30 QuickTime Pro (which bought you editing and exporting from the Player app and the plug-in) is gone. Follow up in the comments tomorrow (after the NDA drops) and we’ll discuss further.

If I were starting a major new multimedia project that wasn’t solely playback-based — imagine, say, a podcast studio that would combine the editing, exporting, and publishing tasks that you might currently perform with Garage Band, iTunes, and FTP — I would be very confused as to which technology to adopt. QuickTime’s cross-platform story seems to be finished (QTJ deprecated, QTW rotting away), and everything we hear on the Mac side is about playback. Would it be safer to assume that QuickTime doesn’t have a future as a media creation framework, and drop down to the engine level (Core Audio and Core Video)? And if not QuickTime… then what?

Oh, and as for the first question from the quicktime-api thread:

… How about Apple throwing us a bone as to what QuickTime X will offer those of us that use QT and QTSS?

From what I can tell, Apple has all but ditched QTSS in favor of HTTP Live Streaming, supported by QuickTime X and iPhone 3.0.

Ogg: The “Intelligent Design” of digital media

Well, another go ’round with this: HTML5 won’t mandate the Ogg codecs as universally supported, and the freetards are on a tear. I was going to follow up on a JavaPosse thread about this, but I hurled enough abuse onto their list last week.

It’s abundantly clear in this blog that I don’t think Ogg is the solution that its supporters want it to be: I have a whole tag for all the posts where I dismiss Vorbis, Theora, and friends. Among these reasons:

  • I don’t think it’s technically competitive.

  • It certainly isn’t competitive in terms of expertise and mindshare, which is vitally important in media codecs: there’s a much deeper pool of shared knowledge about the MPEG codecs, which leads to chip-level support, competition among encoders, compressionists who understand the formats and how to get the most out of them, etc.

  • Its IP status remains unclear. If even MPEG-4, which went through a lengthy and formal patent-pooling process, could be attacked by AT&T’s claim of a submarine patent, I have no reason to think that Ogg wouldn’t face similar claims, legitimate or not, if there were any money behind it, which there isn’t.

  • If I go to my former colleagues at CNN or in Hollywood and say “you guys should use Ogg because…”, there are no words in the English language that plausibly complete the sentence and appeal to the rational self-interest of the other party.

On this last point, I’ve got an ugly analogy: just as proponents of “Intelligent Design” are people who don’t really care about biology beyond the point at which it intrudes on their religious belief, so too do I think Ogg advocates generally don’t know much about media, but have become interested because the success of patent-encumbered formats and codecs is an affront to their open-source religion.

Ogg’s value is in its compatibility with the open source religion. It has little to offer beyond that, so it’s no surprise that it has zero traction outside of the Linux zealot community. Even ESR realized that continually shouting “everything should be in Ogg” was a losing strategy, and he said that three years ago.

I think the open source community would like to use HTML5 to force Ogg on the web community, but it’s not going to work. As others have pointed out, there’s little reason to think that IE will ever support HTML5. Even if they do, the <video> tag is not going to replace Flash or Silverlight plug-ins for video. Despite my initial enthusiasm for the <video> tag commoditizing video, I see nothing in the spec that would support DRM, and it’s hard to imagine Big Content putting their stuff on web pages without DRM anytime soon. And while you can put multiple media files in a <video> tag easily enough, having to encode/transcode to multiple formats is one reason that Big Content moved away from the Real/WMP/QuickTime switch to the relative simplicity of works-for-everyone Flash.

I’m tired of being lectured by computer people about media; it’s as ludicrous as being lectured about computers by my old boss at Headlines. Just because you use YouTube doesn’t make you an expert, any more than my knowing how to use a username and password means I understand security (seriously, I don’t, and doubt I ever will). Kirill Grouchnikov pretty much nailed what computer people think good video is with this tweet. I’ll add this: there are probably a thousand people at Sun who understand 3D transformations in OpenGL, and maybe five who know what an Edit Decision List is. So they go with what they know.

A couple years back, I gave a JavaOne BoF in which I made a renewed call for a Java Media library which would support sample-level access and some level of editing, arguing that enabling users to take control of their own media was a manifest requirement of future media frameworks. By a show of hands, most of the developers in the audience thought it would be “enough” to just support playback of some modern codecs. JavaFX now provides exactly that. Happy now?

People who actually work in media don’t mind paying for stuff, and don’t mind not owning/sharing the IP. Video production professionals are so accustomed to standardizing on commercial products, many of them become generic nouns in industry jargon: “chyron” for character generators, “grass valley” for switchers, “teleprompters”, “betacam” tape, etc. Non-free is not a problem here. And if your argument for open-source is “you’re free to fix it if it doesn’t do what you want it to,” the person who has 48 shows a day to produce is going to rightly ask “why would I use something that doesn’t work right on day one?”

The open source community doesn’t get media. Moreover, it doesn’t get that it doesn’t get media. The Ogg codecs placate the true believers, and that’s the extent of their value.

Happy ‘Cause I’m Staying Home

Any of the last five years, I’d be out in San Francisco at this point for JavaOne. It came with the job of editing java.net and ONJava, but truth be told, I loathed the conference.

When I left Java, I figured I would likely use this week to rail against JavaOne’s indulgences and excesses: the idea that there are Sun developers who work months on demos for the conference (anything that takes that long should be a shipping product), the over-emphasis on the one-big-splash conference that gets a little tech press instead of outreach efforts like Sun Tech Days that get out to developers in the field, and of course, the vaporous announcements. The whiny little bitch contingent of developer-dom won’t let go of Steve Jobs’ vow to make the Mac the best platform for Java (this was before Java off the server fell into utter irrelevance), but overlook all the other JavaOne keynote highlights that either never came out (Java for the PS2, Java for the Infinium Phantom console, Visual Basic as a JVM language) or quietly faded into the Where Are They Now folder (the Looking Glass desktop, the “Wonderland” knock-off of Second Life, etc.). The other night, I wondered whatever happened to the Neil Young Blu-Ray project that was hyped at last year’s JavaOne, and it turns out it comes out tomorrow (coincident with the JavaOne keynote… though it might seem like it was held for the keynote, it seems to have been delayed by a rushed CD about the financial crisis released earlier this year).

If asked last year what I thought would most help Java, I would have said that ending JavaOne (and throwing the resources into Sun Tech Days instead) would be a great start.

Thing is, with Oracle’s purchase of Sun, that kind of argument has become moot. A lot of people are openly asking if this is the last JavaOne, not because of any revelations about the uselessness and self-indulgence of the conference, but because some expect Oracle to ruthlessly dismantle and assimilate Sun, as it has with previous acquisitions (is there any trace of BEA left in the world?).

Still, there’s much to learn from a conference too big and too unfocused. I once debated JavaOne with the Java Posse’s Dick Wall, saying that the keynotes were a train wreck, and the tech sessions either corporate or just tedious. He countered that the best part of the conference was not the formal content, but the assembly of people, and the hallway conversations. I thought that was an extraordinary concession. In fact, today, I’ll go a step farther and say that if hallway conversations are the best part of your conference, then your conference sucks. In fact, I’ll make that the third of my laws.

Does this argue against the all-talk format of unconferences and bar camps? No, because there the conversations are the conference, though I find that format is better suited to sharing opinions than technical knowledge (which may be why unconferences are so well-suited to Java and all its politics and drama). I tweeted this latter opinion to Kathy Sierra, who acked back with a suggestion that “I think it might be time now for less *camp & more *jam… (people get together to create/do.” Daniel Steinberg made the same point, pining for a return to the get-together-and-code format of the late, great MacHack.

I agree with Kathy and Daniel, but I’ll note that it seems to work only if the coding is the point of the conference. Year after year, attempts were made to add on-site coding contests to JavaOne (robotics in our java.net booth, slot cars over in real-time Java), and nobody took the bait. It works as a one-day precursor to a conference, or as the conference itself, but shoehorning code jams into traditional conferences doesn’t seem to work.

So, I’m not going to miss JavaOne if this is indeed the end, but I hope that something better replaces it. Just trying to fold it into a larger omnibus Oracle event isn’t going to do anything for anybody… unless the community goes the DIY route and launches its own conferences, with different formats, smaller focuses, and locations out where the developers are, rather than summoning everyone to the gloomy catacombs of Moscone North and South. Guess we can hope.

Summer Plans

Now that I get to skip JavaOne for the first time in five years (more on that in a couple weeks), I have a short conference schedule for this summer.

  • Apple WWDC – June 8-12 – Expensive, but so worth it. The nature of the Mac and iPhone development community is, honestly, that of a Cargo Cult: it’s primarily driven by Apple’s decisions and announcements (and I don’t think that’s a bad thing; inclusion and community sounds great in theory, but sometimes the result is a four-year pissing match over closures in Java, or a competition of multiple awful Linux desktop environments, each awful in its own special way). So attendees get an advantage by having direct access to the essential APIs and frameworks, both in the form of sessions and labs with the engineers. There’s a lot in QuickTime and Core Audio that seems to come out of the blue, but you get the thinking behind it when the Apple guys present it in a session.

    There’s also a lot of information here that seems to never get out to the public. For example, last year’s Media and Graphics State of the Union announced the deprecation of QuickTime for Java, but no public announcement was ever made, and the QTJ home page, while dated, still goads developers into adopting it.

  • iPhone Camp Atlanta 2009 – July 18 – It’s a heck of a drive, but we moved out of Atlanta just last Fall, and the sale of our house there is finally closing three days before, so this half-day unconference affords a chance to pick up any paperwork or forgotten personal effects, to say nothing of meeting up with other iPhone devs. I proposed via Twitter a session on low-level Core Audio, something I’ve had my head a lot in this Spring.

    Right now, there are over 100 registered attendees, though I’d be surprised if this many show up (people will always register for something free, then half will flake the day of… I would have had a nominal [$25-50] registration fee just to weed out the flakes).

I’m also thinking about using one of the whiteboards at WWDC to propose the idea of a Core Audio unconference somewhere. A lot of people are digging into CA on the iPhone (probably out of necessity… in 2.0, the Audio Queue was the only way to play a flat audio file, and as of 2.2, recording still requires AQ [or audio units]), at different levels of experience and ambition. Maybe it makes sense to get together somewhere for a few days, share notes, and bang on code. We’ll see if anyone bites.

Audio latency < frame rate

So here’s what I’m working on at the moment, from an article for a mystery client whom I hope to reveal soon:

[Image: really-low-latency]

Yep, if latency is buffer duration + inherent hardware latency, then this app needs just 29 ms to get the last sample in its buffer out the headphones or speaker. And really not that hard once you reacquaint your mind with old fashioned C.

A world of difference from the awful time I had on Swing Hacks trying to use, explain, and justify javax.sound.sampled, which has a push metaphor, requiring you to shove samples into an opaque buffer whose size is unknown and unknowable. To say nothing of Java Sound’s no-op’ed DataLine.getLevel(). Analogy: Core Audio is The Who. Java Sound is The Archies.

Next up, working through the errata on iPhone SDK Development, now that we have a first full draft in tech review.

And you will know us by the trail of crash logs…

The last few weeks have been largely spent in Core Audio, which is surely coloring my perception of the iPhone SDK. It’s interesting talking to Daniel — author of the Prags’ Cocoa book as well as my editor on their iPhone title — as he’s working with Cocoa’s high-level abstractions, like the wonderful KVC/KVO, while I’m working at an extremely low level, down in Core Audio.

There’s no question why: I’ve spent pretty much a month doing mostly C, between the streaming media chapter for the book and a Core Audio article for someone else. Over at O’Reilly, I blogged about the surprising primacy of C for serious iPhone development, and the challenges that presents for a generation that knows only C’s cleaned-up successors, like Java and C# (to say nothing of the various scripting languages). At least one of the responses exhorted readers to grow a pair and read K&R, but the more I’ve thought about it, the more I think that may be a bad suggestion. K&R was written for the Unix systems programmer of the late 70’s and early 80’s. It doesn’t cover C99, and many of the conventions of that time are surely out of date (for example, why learn 8-bit ASCII null-terminated strings when the iPhone and Mac programmer should be using Unicode-friendly NSStrings or CFStringRefs?). This is an interesting problem, one which I’ll have more to say about later…

The streaming media chapter clocks in around 35 pages. Daniel wondered if it might be too inclusive, but I think the length just comes from the nature of Core Audio: involved and verbose. The chapter really only addresses three tasks: recording with an Audio Queue, playing with an Audio Queue (which is less important now that we have AVAudioPlayer, but which is still needed for playing anything other than local files), and converting between formats. On the latter, there’s been precious little written in the public eye: Googling for ExtAudioFileCreateWithURL produces a whopping 16 unique hits. Still, there’s a risk that this chapter is too involved and of use to too few people… it’ll be in the next beta and tech review, but it might not suit the overall goals of the book. If we cut it, I’ll probably look to repurpose it somehow (maybe I can pitch the “Big Ass Mac and iPhone Media book” dream project, the one that covers Core Audio, Core Video, and QTKit).

The article goes lower than Audio Queue and Extended Audio Files, down to the RemoteIO audio unit, in order to get really low-latency audio. Michael Tyson has a great blog on recording with RemoteIO, but for this example, I’m playing low-latency audio, by generating samples on the fly (I actually reused some sine wave code from the QuickTime for Java book, though that example wrote samples to a file whereas this one fills a 1 KB buffer for immediate playback).

Amusingly, after switching from easy-to-compute square waves to nicer sounding sine waves, I couldn’t figure out why I wasn’t getting sound… until I took out a logging statement and it started working. Presumably, the expense of the logging caused me to miss the audio unit’s deadlines.

Working at this level has me rethinking whether a media API of this richness and power could ever have worked in Java. It’s not just Sun’s material disinterest and lack of credibility in media, it’s also the fact that latency is death at the low levels that I’m working in right now, and there’s no user who would understand why their audio had pauses and dropouts because the VM needed to take a break for garbage collection. If Java ever did get serious about low-latency media, would we have to assume use of real-time Java?

I’m amazed I haven’t had more memory related crashes than I have. I felt dirty using pointer math to fill a buffer with samples, but it works, and that’s the right approach for the job and the idioms of C and Core Audio. After a month of mostly C, I think I’m getting comfortable with it again. After I struggled with this stuff a year ago, it’s getting a lot easier. When I have time, maybe I’ll start over on the web radio client and actually get it working.

Next up: finishing the low-latency article, fixing an unthinkable number of errata on the book (I haven’t looked in a while, and I dread how much I’ll need to fix), then onto AU Graph Services and mixing.

My emerging mental media taxonomy

Back when we did the iPhone discussion on Late Night Cocoa, I made a point of distinguishing the iPhone’s media frameworks, specifically Core Audio and friends (Audio Queue Services, Audio Session, etc.), from “document-based” media frameworks like QuickTime.

This reflects some thinking I’ve been doing over the last few months, and I don’t think I’m done, but it does reflect a significant change in how I see things and invalidates some of what I’ve written in the past.

Let me explain the breakdown. In the past, I saw a dichotomy between simple media playback frameworks, and those that could do more: mix, record, edit, etc. While there are lots of media frameworks that could enlighten me (I’m admittedly pretty ignorant of both Flash and the Windows media frameworks), I’m now organizing things into three general classes of media framework:

  • Playback-only – this is what a lot of people expect when they first envision a media framework: they’ve got some kind of audio or audio/video source and they just care about rendering to screen and speakers. As typically implemented, the source is opaque, so you don’t have to care about the contents of the “thing” you’re playing (AVI vs. MOV? MP3 vs. AAC? DKDC!), but you also generally can’t do anything with the source other than play it. Your control may be limited to play (perhaps at a variable rate), stop, jump to a time, etc.

  • Stream-based – In this kind of API, you see the media as a stream of data, meaning that you act on the media as it’s being processed or played. You generally get the ability to mix multiple streams, and add your own custom processing, with the caveat that you’re usually acting in realtime, so anything you do has to finish quickly for fear you’ll drop frames. It makes a lot of sense to think of audio this way, and this model fits two APIs I’ve done significant work with: Java Sound and Core Audio. Conceptually, video can be handled the same way: you can have a stream of A/V data that can be composited, effected, etc. Java Media Framework wanted to be this kind of API, but it didn’t really stick. I suspect there are other examples of this that work; the Slashdot story NVIDIA Releases New Video API For Linux describes a stream-based video API in much the same terms: ‘The Video Decode and Presentation API for Unix (VDPAU) provides a complete solution for decoding, post-processing, compositing, and displaying compressed or uncompressed video streams. These video streams may be combined (composited) with bitmap content, to implement OSDs and other application user interfaces.’.

  • Document-based – No surprise, in this case I’m thinking of QuickTime, though I strongly suspect that a Flash presentation uses the same model. In this model, you use a static representation of media streams and their relationships to one another: rather than mixing live at playback time, you put information about the mix into the media document (this audio stream is this loud and panned this far to the left, that video stream is transformed with this matrix and here’s its layer number in the Z-axis), and then a playback engine applies that mix at playback time. The fact that so few people have worked with such a thing recalls my example of people who try to do video overlays by trying to hack QuickTime’s render pipeline rather than just authoring a multi-layer movie like an end-user would.

I used to insist that Java needed a media API that supported the concept of “media in a stopped state”… clearly that spoke to my bias towards document-based frameworks, specifically QuickTime. Having reached this mental three-way split, I can see that a sufficiently capable stream-based media API would be powerful enough to be interesting. If you had to have a document-based API, you could write one that would then use the stream API as its playback/recording engine. Indeed, this is how things are on the iPhone for audio: the APIs offer deep opportunities for mixing audio streams and for recording, but doing something like audio editing would be a highly DIY option (you’d basically need to store edits, mix data, etc., and then perform that by calling the audio APIs to play the selected file segments, mixed as described, etc.).

But I don’t think it’s enough anymore to have a playback-only API, at least on the desktop, for the simple reason that HTML5 and the <video> tag commoditizes video playback. On JavaPosse #217, the guys were impressed by a blog claiming that a JavaFX media player had been written in just 15 lines. I submit that it should take zero lines to write a JavaFX media player: since JavaFX uses WebKit, and WebKit supports the HTML5 <video> tag (at least on Mac and Windows), then you should be able to score video playback by just putting a web view and an appropriate bit of HTML5 in your app.

One other thing that grates on me is the maxim that playback is what matters the most because that’s all that the majority of media apps are going to use. You sort of see this thinking in QTKit, the new Cocoa API for QuickTime, which currently offers very limited access to QuickTime movies as documents: you can cut/copy/paste, but you can’t access samples directly, insert modifier tracks like effects and tweens, etc.

Sure, 95% of media apps are only going to use playback, but most of them are trivial anyways. If we have 99 hobbyist imitations of WinAmp and YouTube for every one Final Cut, does that justify ignoring the editing APIs? Does it really help the platform to optimize the API for the trivialities? They can just embed WebKit, after all, so make sure that playback story is solid for WebKit views, and then please Apple, give us grownups segment-level editing already!

So, anyways, that’s the mental model I’m currently working with: playback, stream, document. I’ll try to make this distinction clear in future discussions of real and hypothetical media APIs. Thanks for indulging the big think, if anyone’s actually reading.

Link: Time to standardize on H.264?

Editor’s Note from the latest issue of Streaming Media asks if it’s time to standardize on H.264 for online video:

Why have competing video formats at all? That question has long seemed Pollyannaish to those on the Streaming Media lists who are invested heavily in one proprietary technology or another, but now that Microsoft Silverlight has finally joined Adobe in supporting H.264 playback—QuickTime and RealPlayer were ahead of the game on this one—our industry needs to evaluate whether or not it’s time to agree upon H.264 as the standard for all online video.

Actually, this is news to me that Silverlight now supports H.264, but I don’t track the Microsoft technologies all that closely (I know, but hours in the day, better things to do, etc…). But it probably helps a large group of their users who are already interested in H.264 for other reasons, and who might not adopt Silverlight as a delivery platform if it insisted on using MS video (as far as I know, the MS stuff is fine, just different). And it just helps to further reinforce H.264’s virtuous circles: more potential clients means more companies working on encoders (the by-design competitive half of the MPEG standards), which means higher quality at lower bitrates, which makes the codec even more appealing, so more people adopt it, and so on.

Come to think of it, it would be interesting to know if the Silverlight-based Netflix on Demand for Mac is using H.264 or VC-1 or some other MS codec. BTW, the Netflix deal probably gives real legitimacy to Silverlight as a cross-platform technology in the eyes of a lot of Mac users. At the end of the day, they don’t want to be denied content because of their platform choice and if Microsoft (of all people!) can help that, then so be it.

Left out in the cold, unsurprisingly, is JavaFX and Sun’s typically bizarre choice of the On2 no-name codec. I’ve bashed this before, and I assume that the end-of-the-day reason is that Sun just doesn’t have the money to license H.264, but now with both Flash and Silverlight supporting H.264, JavaFX is even more of an odd man out. Apparently, the premise here is that there really is an audience out there for a Flash-workalike that uses its own weird language, its own weird plug-in, its own weird codec, etc…

The other conspicuously missing Java platform

Sony announces they’re removing the PlayStation 2 content approval process, thereby making PS2 effectively an open platform.

Do you suppose this will hasten Java for PlayStation 2, promised in the JavaOne 2001 keynote?

No, of course not, but it’s fun to recall this among other J1 vaporware — anyone remember the 2004 announcement of Java for the hated and vaporous Infinium Phantom console? — as a counter to whiny little bitches who can’t get over Steve Jobs’ arguably unkept vow to make the Mac the best Java platform. Seriously, kids, half of what gets announced in keynotes never ships… get over yourselves already.

And not needing any comment from me (because the forum’s already hopping): [FYI] Sun stopped funding of SwingX.