Archives for: February 2008

The rush to Blu-Ray

With Toshiba’s decision to abandon HD-DVD, the HD disc war ends with a victory for Blu-Ray. Lost in the “horse race” style reporting, however, is the fascinating truth behind the breakthroughs that led to this endgame.

The little-reported problem is that sales of regular DVDs actually declined in 2007, and are expected to fall further in 2008. Warner cited that as a reason for picking Blu-Ray, with an eye to ending the format war.

“We saw evidence that the format war was actually hurting standard definition,” [Warner Home Entertainment president Kevin] Tsujihara said. “The industry had very high expectations for the fourth quarter. The summer was the highest box office quarter in history. We ended up the year somewhere down 2 percent or a little bit more than 2 percent. That was a little disappointing, given the summer we had.”

As FORTUNE summarizes, “Consumers who bought HDTVs were so afraid of backing the wrong high-definition movie format that they decided not to buy movies at all.” And isn’t that a fascinating side-effect?

As a curious and highly debatable aside, Warner also claims high gas prices drove down DVD sales.

The other factor that seemingly has to come into effect here is that the studios have now released most of their back catalog, at least the viable parts of it, on DVD already. It’s genuinely hard to come up with a movie that’s not on DVD, harder still if it’s something that could actually make money (no fair saying The Day The Clown Cried). The Disney animated features, Star Wars, the Godfathers, and everything else that matters is already out, and you only get to make that money once, absurd repackaging notwithstanding. DVD also created a new market in television box sets, something that wasn’t practical with VHS (I once had two grocery sacks that contained episodes 1-60 of Robotech, two episodes per tape). But from here on out, the revenue potential for DVD seems to be limited to just the new movies that get home versions a few months after their theatrical runs.

Now here’s what I’m waiting to see. What’s the appetite going to be for buying all those movies again, in high-def? Particularly with upconverting players making standard-def discs look “pretty nice” on HDTV? Even though Blu-Ray’s quality is undeniably better than DVD, will it be enough to get people to buy entirely new players and software just a few years after adopting DVD? Blu-Ray offers more opportunities for extras, particularly given the capabilities of BD-J and the presence of internet connectivity in newer players, but is any software making genuine use of those features yet?

Maybe Blu-Ray’s competition was never HD-DVD, or even digital downloads (though those may take off… it’s just too early to tell). The real rival is actually the old DVD.

A feature request for Leopard stacks

I’m pretty conservative about new OS X UI features, tending to think a lot of them come from “change for change’s sake” thinking. But sometimes I look at my workflow needs and realize that a new feature is exactly what I need to be more efficient.

Or at least would be, if they’d implemented it better.

For my editing work, I have some text clippings on my desktop that I drop into e-mails as appropriate: one for the format of an article proposal, another with terms and payment info, one explaining where to find my feedback in HTML comments, etc. Right now, they’re arranged on the left side of my desktop:

jn clippings on desktop

This kind of sucks, because they get covered up by other windows (jIRCii, mostly), meaning I have to play a little game of window management to get to them and drag them into outgoing e-mails.

It occurred to me that Leopard Stacks could be a nice way to clean these up. I’d just put them all in a folder, then put that on the dock, so they’d be out of the way and only pop out when I need them, without requiring me to play shuffleboard with my windows or navigate through the filesystem. So, easy enough to try:

jn text clippings as stack

Except that when I drag one into Mail, instead of inserting the text into the message (as dragging from the Finder would do), it ends up as an attachment to the message:

jn text clipping attached to e-mail

…which is crap at best, because the whole point of text clippings is that they’re not typically treated as files (except by the Finder), and instead want to be drag-and-droppable text.

Is this really worth filing as an RFE? My stuff never gets fixed, but still, this works in a way nobody would actually want, right?

An MPEG Streamclip bonus

Interesting: MPEG Streamclip lets you get video and sound out of MPEG-1 and MPEG-2 files when exporting to a QuickTime movie. Let me explain why this is a fairly big deal.

MPEG media (and we’re talking -1 and -2 here, not -4, which is a totally different creature) has always been kind of problematic for QuickTime. Most MPEG-1 and MPEG-2 files are muxed, meaning that audio and video samples are in the same stream. No big deal, conceptually… most movie files are distributed that way to make them practical to read off the disk and decode at playback time. The problem is that when you open an MPEG file in QuickTime, it doesn’t demux the video and audio into separate streams; it treats the whole thing as this (arguably half-assed) media type called “MPEG media” (as opposed to “audio media” and “video media”). And it can play that media, but it can’t do much else with it. MPEG media is such a weird special case, that when you use the QuickTime API in an application, instead of looking in your imported movies for audio or video tracks (or more accurately, tracks whose media is of type SoundMediaType or VideoMediaType), you often have to look for media with the VisualMediaCharacteristic or AudioMediaCharacteristic, as MPEG media will match both (more details in a very old technote: TN1087, and in a Google Books page from my QTJ book).
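If that’s hard to picture, here’s a toy model (in Python, with made-up names; this is not the QuickTime API, just the shape of the problem) of why a search by media type comes up empty on an imported MPEG file while a search by characteristic finds the muxed track:

```python
# Toy model only: these names are made up for illustration and are NOT
# the real QuickTime constants or calls (the real API is C, e.g.
# GetMovieIndTrackType with VisualMediaCharacteristic).

SOUND_MEDIA = "soun"
VIDEO_MEDIA = "vide"
MPEG_MEDIA = "MPEG"

# Which characteristics each media type exhibits; muxed MPEG media
# exhibits both, even though its type is neither sound nor video.
CHARACTERISTICS = {
    SOUND_MEDIA: {"audio"},
    VIDEO_MEDIA: {"visual"},
    MPEG_MEDIA: {"audio", "visual"},
}

def tracks_of_type(movie, media_type):
    """Search by exact media type -- this is what misses MPEG media."""
    return [t for t in movie if t == media_type]

def tracks_with_characteristic(movie, characteristic):
    """Search by characteristic -- this finds the muxed track too."""
    return [t for t in movie if characteristic in CHARACTERISTICS[t]]

movie = [MPEG_MEDIA]  # a typical imported MPEG-1/-2 file: one muxed track

print(tracks_of_type(movie, VIDEO_MEDIA))           # []
print(tracks_with_characteristic(movie, "visual"))  # ['MPEG']
```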

This becomes obvious when you try to export your MPEG-1 or MPEG-2 file to another format, and QuickTime disables the sound portion of the export dialog:

QT pro export dialog for MPEG-1 file

This ends up being really frustrating, because it’s the default behavior you get from MovieExportDoUserDialog.

Somehow, it didn’t really register at first what was so weird about MPEG Streamclip’s export to QuickTime dialog, until I realized that this app is providing its own export dialog:

MPEG Streamclip export dialog for MPEG-1

Aside from some interesting options for deinterlacing, the significant thing here is that you get both video and audio, though not with the complete set of QuickTime audio codecs. Presumably, instead of handing off to QuickTime’s MovieExportToFile function, MPEG Streamclip is reading each source sample, optionally doing some graphics work (zoom, crop, rotate, de-interlace), and then encoding each sample into a new .mov container…

…which is fricking awesome, because now you can get MPEG-1s onto your iPod with a little work, though at this point I find I’m still making an intermediate movie file and then letting QuickTime Pro do an “export to iPod” (iPod video is a specific set of MPEG-4 bitrates and options [specifically the low-complexity H.264 baseline profile… see the iPod tech specs], one I haven’t tried to duplicate in QT Pro or other encoding apps).

You know what would be even more awesome? If MPEG Streamclip were AppleScript-able, so you could automate conversion of a bunch of MPEG files. Still, it’s nice to be able to get these files out of MPEG-1 at all.

Commoditizing embedded video: the HTML5 video tag

Surfin’ Safari notes initial support for the HTML5 <video> and <audio> tags in their latest nightly builds.

Indeed, if you have a browser that supports the video tag, then you (hopefully) can see an autoplaying video here:

There’s been a little bit of controversy over the fact that calls for inclusion of the Ogg formats have been removed in more recent versions of the spec. The section “Video and audio codecs for video elements” currently reads:

It would be helpful for interoperability if all browsers could support the same codecs. However, there are no known codecs that satisfy all the current players: we need a codec that is known to not require per-unit or per-distributor licensing, that is compatible with the open source development model, that is of sufficient quality as to be usable, and that is not an additional submarine patent risk for large companies. This is an ongoing issue and this section will be updated once more information is available.

That more or less matches my take on Ogg, which is that it poses an unknown patent liability risk: the /. mob insists it’s patent-free, but how the hell do they know? They don’t; they just want it to be so, because it suits their worldview. And Ogg may indeed be patent-free, but I don’t think anybody knows for sure, and even so, proving it would be expensive. To top it all off, Ogg just isn’t that popular or useful outside the warm bubble of Linux zealotry.

Still, there’s a huge need for at least one video and audio codec to be available more or less everywhere, or at least for one class of devices: i.e., one codec you can expect all desktops to have, one for all phones, etc. To just dump out to “whatever QuickTime supports on the Mac, whatever Windows Media supports on Windows, etc.” ends up moving the problem, either to the web author (who has to sniff the OS from the user-agent and write the tag on the fly… to say nothing of hosting multiple encodings of every clip) or to the end-user.
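For illustration, here’s the kind of server-side sniffing hack that authors get stuck with in the meantime (a Python sketch; the filenames and user-agent substrings are made up, not a real compatibility table):

```python
# Hypothetical server-side sniffing: pick an encoding based on the
# User-Agent and hope you guessed right. The filenames and UA
# substrings are illustrative, not a real compatibility table.

def pick_source(user_agent):
    ua = user_agent.lower()
    if "mac os x" in ua:
        return "clip.mov"  # gamble that QuickTime is installed
    if "windows" in ua:
        return "clip.wmv"  # gamble that Windows Media is installed
    return "clip.flv"      # punt to Flash for everyone else

print(pick_source("Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_5)"))
# -> clip.mov (and you still have to host all three encodings)
```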

It’s funny, because while the HTML5 <video> tag should displace Flash as the only practical option for web video — something that’s become screamingly obvious in the two years or so since YouTube launched — it might not, if it gets tangled up in codec hassles. The remarkable thing about Flash Video isn’t that it’s good (it’s not), but that it’s consistent and available on all Flash-enabled desktops.

That’s turned out to be a much bigger deal than the quality of competitors like H.264 and WMV, or the fact that other approaches could support many more codecs. Flash doesn’t try to support every codec under the sun, or even offer extension points for third parties to do so, but it doesn’t matter — with a known-viable video codec, content providers can just push their content with the package-deal of FLV and the Flash plug-in. Sure, the QuickTime plugin, a QuickTime for Java applet (or even a JMF applet, fercryinoutloud) could support more formats and codecs than Flash, but the typical use is not a general-purpose “play arbitrary content” application; the web-embedded player is usually meant to play the content from a single content provider, who’s perfectly happy to use a single, sub-optimal format, if the alternative is having to encode everything a dozen ways from Sunday to support the various OSes and devices.

Which makes me think that Flash’s ubiquity as a web-embedded video player won’t be threatened by HTML5, so long as there is neither a de jure nor de facto ubiquitous video codec for HTML5. Ironically, while H.264 might be the best candidate for that, Flash is already supporting it too.

Which leads me to an idea: if you were writing a browser on a platform without H.264 support from the native multimedia library, but you had Flash available, could you just pull the Flash player into service on the fly and have it play the H.264?

Congress’ IP double-standard

Noted this week: Sen. Arlen Specter’s (R-PA) bill to exempt churches that hold mass Super Bowl viewings from copyright law. From the Broadcasting and Cable write-up:

Specter conceded that a strict reading of the law makes those exhibitions, and virtually any large group, a violation, but also pointed out that there is an exemption for those local bars and food establishments and that he thinks there should be one for religious establishments, as well.

Specter himself is quoted as saying:

“In a time when our country is divided by war and anxious about a fluctuating economy, these type of events give people a reason to come together in the spirit of camaraderie.”

Surely I’m not the only one to see the double-standard here. While the legality of $9,250-per-song damages for copying music has been upheld by the Justice department and not questioned by Congress, this bill would make it OK for a politically-favored group — organized religion — to get a pass on copyright law and violate the NFL’s IP rights over use of the broadcast.

So why stop at the Super Bowl parties? Should church youth groups be allowed to set up file-trading rings for their favorite music and movies on the church’s servers, “in the spirit of camaraderie”?

Funny how important it is for politicians to hold a strict line on copyright… until it isn’t.

Arsenic and old interlace

Rather than continue to post comments to my previous blog about trying to rip a DVD, de-interlace it, and convert it to an editing-friendly codec, I’m posting the followup as its own message.

The overnight re-encode with JES Interlacer failed just like the previous attempt that made the 40GB file: it got stuck on one frame of the complete video track and padded out the last 40% of the movie with that. At least with a fixed data rate of 3000 Kbps, the broken file wasn’t 40 frickin’ gig…

So then I opened the full-length video track m2v file with QuickTime Pro (which pinwheeled for like 10 minutes) and, noticing that the Pixlet export dialog had a “deinterlace” checkbox, tried using that for my export.

QTPro export and transcode

Unfortunately, it too ended up getting stuck on the same frame.

So, plan D (or was I on “E” by this point?) was to go back to MPEG Streamclip, open the demuxed m2v file (which, remember, was created by Streamclip in the first place, from all the VOB files) and do the “export to QuickTime” from there, again depending on the Pixlet exporter to handle the deinterlace. The 3000 Kbps export from before looked like ass, so I went up to 10 Mbps.

MPEG Streamclip export to Pixlet

Ah, finally! After a three-hour encode, I finally got the whole thing exported to Pixlet, with deinterlace:

Exported DVD rip with Pixlet video

The file is about 10 GB, and there are still some artifacts in a few high-contrast places (e.g., panning across dense black-on-white text). However, it scrubs like a dream, something you don’t get with a codec meant for playback, like H.264. We tend to forget how you need different codecs for different reasons, like whether you can afford asymmetry in encoding and decoding (e.g., a movie disc that is encoded once, and played millions of times, is a scenario which tolerates a slow and expensive encode so long as the decode is fast and cheap). In the case of editing, you want codecs that allow you to access frames quickly and cheaply in either direction, which tends to rule out temporal compression. The Ishitori tutorial’s editing codecs page recommends the RLE-based Animation codec, Pixlet, Photo-JPEG, or the commercial SheerVideo codec.
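A quick back-of-envelope sketch (Python; the GOP sizes are just illustrative numbers) of why temporal compression hurts scrubbing: seeking into a temporally-compressed stream means decoding forward from the previous keyframe, while an intraframe codec decodes exactly the frame you asked for.

```python
# Back-of-envelope model of scrubbing cost: how many frames must be
# decoded to display frame n, given a keyframe every gop_size frames.
# GOP sizes here are illustrative, not taken from any particular disc.

def frames_to_decode(target_frame, gop_size):
    # decode from the previous keyframe forward to the target
    return (target_frame % gop_size) + 1

# Intraframe codec (Pixlet, Photo-JPEG): every frame is a keyframe,
# so any seek costs exactly one decode.
print(frames_to_decode(299, 1))   # 1

# Temporally-compressed stream with a 15-frame GOP: a worst-case
# seek decodes 15 frames just to show one.
print(frames_to_decode(299, 15))  # 15
```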

I might encode at an even higher bitrate for my second AMV project, but for now, I’m not super keen on burning up the last 100 GB of my drive space, so I’m OK with the current quality/size tradeoff.

Now to rip discs 2-5 of His and Her Circumstances and either start storyboarding, or at least write out a rudimentary shot sheet (smart), or just start throwing down edits like I know what I’m doing (dumb… but probably what I’ll do, since this is more experimental than anything).

Adventures in deinterlacing…

So, a while back, I mentioned wanting to edit an AMV. I started laying the groundwork for that today, and so far, it’s an uphill climb. I’m working from Ishitori’s guide to making AMVs on the Mac, a helpful reference since most of the easily-found AMV guides are for Windows users.

So far, though, just ripping the DVDs into an editing-friendly format is a struggle. I had originally thought it would be as simple as going through Handbrake and making sure to account for interlacing. Problem is, Handbrake is far more inclined to give you a playback-oriented transcode (e.g., H.264) than something amenable to scrubbing in Final Cut. Ishitori’s guide suggests using MacTheRipper to de-CSS the files and get plain ol’ VOBs on your hard drive. Check. To do anything with them, of course, you need the QuickTime MPEG-2 Component, which I had a copy of like 8 years ago at Pathfire, but ended up re-buying today.

The next couple steps involve getting the footage in shape for Final Cut. That means:

  • Deinterlace
  • Convert to square pixels
  • Demux (actually, AMVs generally only need the video track)
  • Transcode to Motion-JPEG, Pixlet, or some other editing codec

Ishitori suggests using Avidemux for deinterlacing and general image filtering, but I found both the X11 and Qt versions to be completely unusable. The X11 version won’t open a file unless you run it as root from the command line, and even then it seems to mis-read its plugin files. The Qt version just crashes a lot, and can’t work with any drive other than the boot volume (hilarious). Tried building from the latest sources in subversion, but that failed too, and I wasn’t really inclined to go on a wild dependency chase.

Plan B: I found JES Deinterlacer. Cool! Oh wait, it won’t read later VOB segments from a long rip. Not so cool, unless you only want to deinterlace the menus and not the main program.

Plan C: Use MPEG Streamclip to pull all the VOBs together, and demux the entire video stream into another file.

OK, this might work…
MPEG Streamclip demux preview

Takes about 5 minutes…
MPEG Streamclip progress

Next, open it in JES Deinterlacer. Not the most intuitive GUI ever, but all these video tools have a million options and read like a brick.

JES Deinterlacer open panel

Downside #1: JES Deinterlacer pegs the CPU and puts up a SPOD (“spinning pinwheel of death”) for about 10 minutes while opening the file.

Anyways, JES Deinterlacer can transcode as you go, so I figure I’ll save a step and export my deinterlaced video to Pixlet.

JES Deinterlacer progress

Only two problems here: first, I didn’t set a bitrate and figured I’d let QuickTime decide. That defaulted to the highest possible quality (about 30 Mbps) and a 40 GB file. Kind of overkill, considering the source was 5 Mbps MPEG-2. The other problem is that at some point, about 2/3 of the way through the file, it got stuck on one image and encoded the rest of the file with that same image. Niiiice.

Anyways, I’m letting it run again with a saner output bitrate for Pixlet (6 Mbps, which should give me a 6-7 GB file), and hoping that it doesn’t get locked on one frame again. It started about an hour ago, and looks to be about 25% done (on the dual 1.8 G5… would be interesting to see how it performs on a Core 2 Duo).
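For what it’s worth, these file-size guesses are just bitrate-times-duration arithmetic; a quick sketch (Python; the 2.5-hour runtime is my assumption for the episodes on the disc):

```python
# File size is just bitrate x duration; the runtime is an assumption
# (call it 2.5 hours of episodes on the disc).

def size_gb(megabits_per_sec, seconds):
    return megabits_per_sec * seconds / 8 / 1000.0  # Mb -> MB -> GB

runtime = 2.5 * 60 * 60  # seconds, assumed

print(size_gb(30, runtime))  # 33.75 -- in the ballpark of that 40 GB file
print(size_gb(6, runtime))   # 6.75  -- matches the 6-7 GB estimate
```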

So, maybe I’ll be ready to edit when I get up tomorrow, or maybe I’ll be really pissed off.

Speak to me…

So, one of Keagan’s favorite things at school is the text-to-speech on the special ed computers. It’s meant for seriously autistic kids who don’t speak (not one of Keagan’s problems), but then again, what kid doesn’t like text-to-speech? When I was helping teach computer camp courses on the TI-99/4A during high school summer breaks, the easiest, highest-engagement part of the week was always the morning spent doing little one-off, two-line BASIC programs to use the 99’s Speech Synthesizer.

I went looking at our existing apps to see if it would be easy enough to tell Keagan to just type some text into Mariner Write and then do Edit -> Sound -> Speak All Text, but that seemed obtuse. I figured it wouldn’t be that hard to whip up my own trivial text-to-speech app for him.

It wasn’t. In fact, it took about 30 minutes, during which I also had to repeatedly get up and help Quinn color in Jakers in a little Flash app.

Anyways, the implementation file is like 50 lines total, so I suppose I can just dump it in here:

//  KeagySpeechController.m
//  KeagySpeech1
//  Created by Chris Adamson on 2/5/08.
//  Copyright 2008 Subsequently & Furthermore, Inc. All rights reserved.

#import "KeagySpeechController.h"

@implementation KeagySpeechController
- (void) handleSpeak:(id)sender {
	NSString *text = (NSString*) [[textView textStorage] string];
	[synth startSpeakingString: text];
	[stopButton setEnabled: YES];
	[speakButton setEnabled: NO];
}

- (void) handleStop:(id)sender {
	[synth stopSpeaking];
}

- (void)awakeFromNib {
	synth = [[NSSpeechSynthesizer alloc] init];
	[synth setDelegate: self];
	[speakButton setEnabled: YES];
	[stopButton setEnabled: NO];
	// select all text in the NSTextView so it's typed over by default
	NSString *text = (NSString*) [[textView textStorage] string];
	NSRange textRange;
	textRange.location = 0;
	textRange.length = [text length];
	[textView setSelectedRange: textRange];
}

// NSSpeechSynthesizer delegate methods
- (void)speechSynthesizer:(NSSpeechSynthesizer *)sender
		didFinishSpeaking:(BOOL)finishedSpeaking {
	[speakButton setEnabled: YES];
	[stopButton setEnabled: NO];
}

// no need to provide implementations of these:
// - (void)speechSynthesizer:willSpeakWord:ofString:
// - (void)speechSynthesizer:willSpeakPhoneme:

@end


KeagySpeech1 screenshot

App icon:
KeagySpeech1 app icon


A few thoughts:

  • It was pretty easy because I didn’t need to stray far from the simplest examples shown in Apple’s Introduction to Speech
  • Handling the “speech finished” is a nice simple example of delegates. In Java, you’d have some interface to implement (like SpeechSynthesisListener) and you’d be obligated to implement all of its methods, no-op’ing the ones that are irrelevant to you. In Obj-C, you can just ignore the methods that don’t interest you, and rely on the right method being found at runtime. I suppose the downside is that if you misspell the callback method, nothing in the compiler is going to catch your mistake, and you’ll be left wondering why your method isn’t being called.
  • It’s good exercise to periodically start new Xcode projects and wire them up in IB. With the changes in Leopard, I sometimes forget that instead of creating the class in IB, I need to create it in Xcode (at least the outlets and actions in the header), and then create a new NSObject in IB, setting its class to whatever I just created in Xcode.
  • One thing I don’t like, coming from Java, is having to switch back and forth between OO and straight-C idioms. NSRange is not an object, it’s just a struct, so initializing it with values to set the NSTextView selection looks and feels completely different from working with real Obj-C objects. I understand why it’s this way — anything straight-C is also suitable for C++ and Carbon — and it’s probably pretty comfortable for those coming from C/C++ backgrounds. But compared to Java’s syntax, it does feel kludgy.
  • That said, whipping up the GUI in IB is still unparalleled. The Matisse editor in NetBeans is quite good, and probably the only practical choice for building Java GUIs, but it’s still not nearly as nice.
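Since the delegate point is worth lingering on, here’s a rough Python analogue of Obj-C’s “implement only what you care about” informal protocol (the Obj-C runtime does the equivalent check with respondsToSelector:). Note how a misspelled callback name fails silently, which is exactly the downside of the pattern:

```python
# Rough Python analogue of the informal delegate protocol: the caller
# checks whether the delegate implements a callback before invoking it
# (Obj-C does the equivalent with respondsToSelector:). Misspell the
# method name and nothing complains; the callback just never runs.

class Synthesizer:
    def __init__(self, delegate=None):
        self.delegate = delegate

    def finish(self):
        # only call the delegate method if it actually exists
        cb = getattr(self.delegate, "did_finish_speaking", None)
        if callable(cb):
            cb(self)

class Controller:
    def __init__(self):
        self.done = False

    def did_finish_speaking(self, sender):
        self.done = True

c = Controller()
Synthesizer(c).finish()
print(c.done)  # True

Synthesizer(object()).finish()  # no such method: silently ignored
```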

Anyways, I gotta get the kids to bed in 15 minutes, so I hope I covered everything… guess we’ll find out tomorrow if Keagan’s actually willing to use it.

C4[1] videos available

Wolf Rentzsch, one of the rock stars of the old MacHack conference, launched the C4 conference in its absence. It’s already the premier indie Mac developer conference, and I hope to go this year.

This week, he announced that videos of the 2007 sessions are being rolled out, one a week.

It’s great, but I just don’t see myself having time to sit and watch online for that length of time. Given the periodic release of the sessions, this would ideal as a video podcast. Hopefully someone is transcoding from FLV to low-bandwidth iPod-targeted M4V and setting up a feed right now…