
Archives for: finalcut

No launch-day Final Cut Pro X for me

Well, this was an unpleasant surprise:

Mac App Store rejects my purchase of Final Cut Pro X due to inadequate video card

I’ve never really given a lot of thought to my video card… I usually just get the default option for the Mac Pro when I buy it. It’s not like I do any gaming on my Mac (that’s what the Wii, PS2, iPhone, and iPad are for).

So here’s what System Profiler tells me I have:


ATI Radeon HD 2600 XT:

  Chipset Model:	ATI Radeon HD 2600
  Type:	GPU
  Bus:	PCIe
  Slot:	Slot-1
  PCIe Lane Width:	x16
  VRAM (Total):	256 MB
  Vendor:	ATI (0x1002)
  Device ID:	0x9588
  Revision ID:	0x0000
  ROM Revision:	113-B1480A-252
  EFI Driver Version:	01.00.252
  Displays:
    SMEX2220:
      Resolution:	1920 x 1080 @ 60 Hz
      Pixel Depth:	32-Bit Color (ARGB8888)
      Main Display:	Yes
      Mirror:	Off
      Online:	Yes
      Rotation:	Supported
      Television:	Yes
    Display Connector:
      Status:	No Display Connected

Guess I’m in the market for a better video card for an Early 2008 Mac Pro… what should I get, given that all I need it for is FCP?

Philip Hodgetts on Final Cut Pro rumors

I retweeted this already, but if you care about FCP, read Philip Hodgetts’ A new 64 bit Final Cut Pro? for some excellent analysis about what this could possibly be, given the respective capabilities and release timing of FCP, QTKit, AV Foundation, and Lion:

My biggest doubt was the timing. I believed a rewritten 64 bit Final Cut Pro would require a rewritten 64 bit QuickTime before it can be developed and clearly that wasn’t a valid assumption. Speculating wildly – to pull off a fully rewritten, 64 bit pure Cocoa Final Cut Pro – would require building on AVFoundation (the basis of iMovie for iPhone), which is coming to OS X in 10.7 Lion.

Connecting the Dots

Philip Hodgetts e-mailed me yesterday, having found my recent CocoaHeads Ann Arbor talk on AV Foundation and searched from there to find my blog. The first thing this brings up is that I’ve been slack about linking my various online identities and outlets… anyone who happens across my stuff should be able to get to the rest of it easily. As a first step, behold the “More of This Stuff” box at the right, which links to my slideshare.net presentations and my Twitter feed. The former is updated less frequently than the latter, but also contains fewer obscenities and references to anime.

Philip co-hosts a podcast about digital media production, and their latest episode is chock-full of important stuff about QuickTime and QTKit that more people should know (frame rate doesn’t have to be constant!), along with wondering aloud about where the hell Final Cut stands given the QuickTime/QTKit schism on the Mac and the degree to which it is built atop the 32-bit legacy QuickTime API. FWIW, between reported layoffs on the Final Cut team and their key programmers working on iMovie for iPhone, I do not have a particularly good feeling about the future of FCP/FCE.

Philip, being a Mac guy and not an iOS guy, blogged that he was surprised my presentation wasn’t an NDA violation. Actually, AV Foundation has been around since iPhone OS 2.2, but only became a document-based audio/video editing framework in iOS 4. The only thing that’s NDA is what’s in iOS 4.1 (good stuff, BTW… hope we see it Wednesday, even though I might have to race out some code and a blog entry to revise this beastly entry).

He’s right in the podcast, though, that iPhone OS / iOS has sometimes kept some of its video functionality away from third-party developers. For example, Safari could embed a video, but through iPhone OS 3.1, the only video playback option was the MPMoviePlayerController, which takes over the entire screen when you play the movie. 3.2 provided the ability to get a separate view… but recall that 3.2 was iPad-only, and the iPad form factor clearly demands the ability to embed video in a view. In iOS 4, it may make more sense to ditch MPMoviePlayerController and leave MediaPlayer.framework for iPod library access, and instead do playback by getting an AVURLAsset and feeding it to an AVPlayer.
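
For what it’s worth, the AV Foundation route looks something like this. It’s a minimal iOS 4 sketch, assuming you already have a movieURL pointing at something playable and a view controller whose self.view hosts the layer; real code would also observe the player item’s status before playing, and handle errors:

#import <AVFoundation/AVFoundation.h>

// Minimal iOS 4 playback: wrap the URL in an asset, the asset in a player item,
// hand the item to an AVPlayer, and render with an AVPlayerLayer instead of
// letting MPMoviePlayerController take over the whole screen.
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:movieURL options:nil];
AVPlayerItem *item = [AVPlayerItem playerItemWithAsset:asset];
AVPlayer *player = [AVPlayer playerWithPlayerItem:item];

AVPlayerLayer *playerLayer = [AVPlayerLayer playerLayerWithPlayer:player];
playerLayer.frame = self.view.bounds;       // size the video layer to the hosting view
[self.view.layer addSublayer:playerLayer];  // embed it like any other CALayer

[player play];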

One slide Philip calls attention to in his blog is where I compare the class and method counts of AV Foundation, android.media, QTKit, and QuickTime for Java. A few notes on how I spoke to this slide when I gave my presentation:

  • First, notice that AV Foundation is already larger than QTKit. But also notice that while it has twice as many classes, it only has about 30% more methods. This is because AV Foundation had the option of starting fresh, rather than wrapping the old QuickTime API, and thus could opt for a more hierarchical class structure. AVAssets represent anything playable, while AVCompositions are movies that are being created and edited in-process, and many of these classes have separate mutable subclasses (there’s a minimal sketch of this asset/composition split right after this list). By comparison, QTKit’s QTMovie class has over 100 methods; it just has to be all things to all people.

  • Not only is android.media smaller than AV Foundation, it also represents the alpha and omega of media on that platform, so while it’s mostly provided as a media player and capture API, it also includes everything else media-related on the platform, like ringtone synthesis and face recognition. While iOS doesn’t do these, keep in mind that on iOS, there are totally different frameworks for media library access (MediaPlayer.framework), low-level audio (Core Audio), photo library access (AssetsLibrary.framework), in-memory audio clips (System Sounds), etc. By this analysis, media support on iOS is many times more comprehensive than what’s currently available in Android.

  • Don’t read too much into my inclusion of QuickTime for Java. It was deprecated at WWDC 2008, after all. I put it in this chart because its use of classes and methods offered an apples-to-apples comparison with the other frameworks. Really, it’s there as a proxy for the old C-based QuickTime API. If you counted the number of functions in QuickTime, I’m sure you’d easily top 10,000. After all, QTJ represented Apple’s last attempt to wrap all of QuickTime with an OO layer. In QTKit, there’s no such ambition to be comprehensive. Instead, QTKit feels like a calculated attempt to include the stuff that the most developers will need. This allows Apple to quietly abandon unneeded legacies like Wired Sprites and QuickTime VR. But quite a few babies are being thrown out with the bathwater — neither QTKit nor AV Foundation currently has equivalents for the “get next interesting time” functions (which could find edit points or individual samples), or the ability to read/write individual samples with GetMediaSample() / AddMediaSample().
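
To make the asset/composition split from the first bullet concrete, here’s a minimal iOS 4 sketch that copies a source asset’s video track into a mutable composition you could then keep editing. The sourceURL is assumed, and error checking is skipped:

#import <AVFoundation/AVFoundation.h>
#import <CoreMedia/CoreMedia.h>

// AVAsset is the read-only "anything playable"; AVMutableComposition is the
// in-process, editable movie assembled from pieces of other assets.
AVURLAsset *sourceAsset = [AVURLAsset URLAssetWithURL:sourceURL options:nil];
AVAssetTrack *sourceVideoTrack =
    [[sourceAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];

AVMutableComposition *composition = [AVMutableComposition composition];
AVMutableCompositionTrack *compVideoTrack =
    [composition addMutableTrackWithMediaType:AVMediaTypeVideo
                             preferredTrackID:kCMPersistentTrackID_Invalid];

NSError *error = nil;
// Copy the entire source video track into the composition; further edits
// (inserting, removing, or scaling time ranges) work on the mutable track.
[compVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, sourceAsset.duration)
                        ofTrack:sourceVideoTrack
                         atTime:kCMTimeZero
                          error:&error];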

One other point of interest is one of the last slides, which quotes a macro seen throughout AVFoundation and Core Media in iOS 4:


__OSX_AVAILABLE_STARTING(__MAC_10_7,__IPHONE_4_0);

Does this mean that AV Foundation will appear on Mac OS X 10.7 (or hell, does it mean that 10.7 work is underway)? IMHO, not enough to speculate, other than to say that someone was careful to leave the door open.

Update: Speaking of speaking on AV Foundation, I should mention again that I’m going to be doing a much more intense and detailed Introduction to AV Foundation at the Voices That Matter: iPhone Developer Conference in Philadelphia, October 16-17. $100 off with discount code PHRSPKR.

Video Editing with Haddocks

News.com.com.com.com, on evidence of a new Apple video format in iMovie 8.0.5

Dubbed iFrame, the new video format is based on industry standard technologies like H.264 video and AAC audio. As expected with H.264, iFrame produces much smaller file sizes than traditional video formats, while maintaining its high-quality video. Of course, the smaller file size increases import speed and helps with editing video files.

Saying smaller files are easier to edit is like saying cutting down the mightiest tree in the forest is easier with a haddock than with a chainsaw, as the former is lighter to hold.

The real flaw with this is that H.264, while a lovely end-user distribution format, uses heavy temporal compression, potentially employing both P-frames (“predicted” frames, meaning they require data from multiple earlier frames), and B-frames (“bidirectionally predicted” frames, meaning they require data from both earlier and subsequent frames). Scrubbing frame-by-frame through H.264 is therefore slowed by sometimes having to read in and decompress multiple frames of data in order to render the next one. And in my Final Cut experience, scrubbing backwards through H.264 is particularly slow; shuttle a few frames backwards and you literally have to let go of the wheel for a few seconds to let the computer catch up. For editing, you see a huge difference when you use a format with only I-frames (“intra” frames, meaning every frame has all the data it needs), such as M-JPEG or Pixlet.
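
A toy model makes the scrubbing cost visible. This isn’t any real codec’s API, just arithmetic: with a keyframe every gopSize frames, stepping backward means decoding from the previous keyframe up to the frame you want, while an all-I-frame codec always decodes exactly one frame. (It also ignores B-frame reordering, which only makes the real situation worse.)

#include <stdio.h>

// Toy model, not a real codec API: how many frames must be decoded to display
// frame `target` when keyframes arrive every `gopSize` frames.
static int framesToDecode(int target, int gopSize) {
    int lastKeyframe = (target / gopSize) * gopSize;   // most recent I-frame at or before target
    return target - lastKeyframe + 1;
}

int main(void) {
    const int gopSize = 30;   // one keyframe per second at 30 fps, a typical delivery setting
    // Step backward through ten frames, the way a scrub wheel would.
    for (int target = 899; target >= 890; target--) {
        printf("frame %3d: decode %2d frame(s) with temporal compression, 1 with all I-frames\n",
               target, framesToDecode(target, gopSize));
    }
    return 0;
}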

You can use H.264 in an all-I-frame mode (which makes it more or less M-JPEG), but then you’re not getting the small file sizes meant for end-user distribution. I’ll bet that iFrame employs H.264 P- and B-frames, being aimed at the non-pro user whose editing consists of just a handful of cuts, and who won’t mind the disk grinding as they identify the frame to cut on.

But for more sophisticated editing, having your source in H.264 would be painful.

This also speaks to a larger point of Apple seemingly turning its back on advanced media creatives in favor of everyday users with simpler needs. I’ve been surprised at CocoaHeads meetings to hear that I’m not the only one who bemoans the massive loss of functionality from the old 32-bit C-based QuickTime API to the easier-to-use but severely limited QTKit. That said, everyone else expects that we’ll see non-trivial editing APIs in QTKit eventually. I hope they’re right, but everything I see from Apple, including iFrame’s apparent use of H.264 as a capture-time and therefore edit-time format, makes me think otherwise.

iPhone 3GS vs. the World

First iPhone 3GS nit: refuses to charge when connected to the USB 2.0 port of the Bella USA Final Cut Keyboard:
Screenshot 2009.06.19 13.23.07

Can’t wait to see if it balks at connecting to the “Built for iPod / Works with iPhone” car radio I bought four months ago.

My emerging mental media taxonomy

Back when we did the iPhone discussion on Late Night Cocoa, I made a point of distinguishing the iPhone’s media frameworks, specifically Core Audio and friends (Audio Queue Services, Audio Session, etc.), from “document-based” media frameworks like QuickTime.

This reflects some thinking I’ve been doing over the last few months, and I don’t think I’m done, but it does represent a significant change in how I see things, and it invalidates some of what I’ve written in the past.

Let me explain the breakdown. In the past, I saw a dichotomy between simple media playback frameworks and those that could do more: mix, record, edit, etc. While there are lots of media frameworks that could enlighten me (I’m admittedly pretty ignorant of both Flash and Windows’ media frameworks), I’m now organizing things into three general classes of media framework:

  • Playback-only – this is what a lot of people expect when they first envision a media framework: they’ve got some kind of audio or audio/video source and they just care about rendering to screen and speakers. As generally implemented, the source is opaque, so you don’t have to care about the contents of the “thing” you’re playing (AVI vs. MOV? MP3 vs. AAC? DKDC!), but you also can’t do much with the source other than play it. Your control may be limited to play (perhaps at a variable rate), stop, jump to a time, etc.

  • Stream-based – In this kind of API, you see the media as a stream of data, meaning that you act on the media as it’s being processed or played. You generally get the ability to mix multiple streams, and add your own custom processing, with the caveat that you’re usually acting in realtime, so anything you do has to finish quickly for fear you’ll drop frames. It makes a lot of sense to think of audio this way, and this model fits two APIs I’ve done significant work with: Java Sound and Core Audio (there’s a minimal Core Audio sketch of this model right after this list). Conceptually, video can be handled the same way: you can have a stream of A/V data that can be composited, effected, etc. Java Media Framework wanted to be this kind of API, but it didn’t really stick. I suspect there are other examples of this that work; the Slashdot story NVIDIA Releases New Video API For Linux describes a stream-based video API in much the same terms: ‘The Video Decode and Presentation API for Unix (VDPAU) provides a complete solution for decoding, post-processing, compositing, and displaying compressed or uncompressed video streams. These video streams may be combined (composited) with bitmap content, to implement OSDs and other application user interfaces.’.

  • Document-based – No surprise, in this case I’m thinking of QuickTime, though I strongly suspect that a Flash presentation uses the same model. In this model, you use a static representation of media streams and their relationships to one another: rather than mixing live at playback time, you put information about the mix into the media document (this audio stream is this loud and panned this far to the left, that video stream is transformed with this matrix and here’s its layer number in the Z-axis), and then a playback engine applies that mix at playback time. The fact that so few people have worked with such a thing recalls my example of people who try to do video overlays by hacking QuickTime’s render pipeline rather than just authoring a multi-layer movie the way an end user would.
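
Here’s the Core Audio sketch promised above: a render callback of the kind you’d register on an output unit with kAudioUnitProperty_SetRenderCallback. The unit setup is omitted, the stream format is assumed to be non-interleaved 32-bit float, and the sine wave is just a stand-in for whatever mixing or processing you’d really do. The point is the shape of the model: Core Audio calls you on a real-time thread, and you fill the buffers and get out fast.

#include <AudioUnit/AudioUnit.h>
#include <math.h>

// Called by Core Audio on a real-time thread whenever the output unit needs
// more samples: fill ioData and return quickly, or frames get dropped.
static OSStatus MyRenderCallback(void                        *inRefCon,
                                 AudioUnitRenderActionFlags  *ioActionFlags,
                                 const AudioTimeStamp        *inTimeStamp,
                                 UInt32                      inBusNumber,
                                 UInt32                      inNumberFrames,
                                 AudioBufferList             *ioData)
{
    double *phase = (double *)inRefCon;                          // caller-owned state
    const double phaseIncrement = 2.0 * M_PI * 440.0 / 44100.0;  // 440 Hz at 44.1 kHz

    for (UInt32 buf = 0; buf < ioData->mNumberBuffers; buf++) {
        Float32 *samples = (Float32 *)ioData->mBuffers[buf].mData;
        double localPhase = *phase;                               // same waveform in each channel
        for (UInt32 frame = 0; frame < inNumberFrames; frame++) {
            samples[frame] = (Float32)sin(localPhase);
            localPhase += phaseIncrement;
        }
    }
    *phase += phaseIncrement * inNumberFrames;                    // advance for the next callback
    return noErr;
}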

I used to insist that Java needed a media API that supported the concept of “media in a stopped state”… clearly that spoke to my bias towards document-based frameworks, specifically QuickTime. Having reached this mental three-way split, I can see that a sufficiently capable stream-based media API would be powerful enough to be interesting. If you had to have a document-based API, you could write one that would then use the stream API as its playback/recording engine. Indeed, this is how things are on the iPhone for audio: the APIs offer deep opportunities for mixing audio streams and for recording, but doing something like audio editing would be a highly DIY option (you’d basically need to store edits, mix data, etc., and then perform that by calling the audio APIs to play the selected file segments, mixed as described, etc.).
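
For illustration only, the “document” in that DIY approach could be as simple as an array of entries like this hypothetical struct. Nothing here is in the iPhone SDK; it’s just the kind of state you would have to persist yourself, then realize at playback time by scheduling each segment into the stream APIs with its stored mix settings:

#include <CoreFoundation/CoreFoundation.h>

// Hypothetical edit-list entry for a DIY "document" layered over stream APIs.
typedef struct {
    CFURLRef sourceFile;      // audio file this segment comes from
    Float64  sourceStartSec;  // where the segment begins within that file, in seconds
    Float64  durationSec;     // how long the segment runs
    Float64  insertAtSec;     // where it lands on the output timeline
    Float32  gain;            // mix level to apply at playback
    Float32  pan;             // stereo position, -1.0 (left) to 1.0 (right)
} EditListEntry;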

But I don’t think it’s enough anymore to have a playback-only API, at least on the desktop, for the simple reason that HTML5 and the <video> tag commoditize video playback. On JavaPosse #217, the guys were impressed by a blog claiming that a JavaFX media player had been written in just 15 lines. I submit that it should take zero lines to write a JavaFX media player: since JavaFX uses WebKit, and WebKit supports the HTML5 <video> tag (at least on Mac and Windows), you should be able to score video playback by just putting a web view and an appropriate bit of HTML5 in your app.
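
The Cocoa equivalent is nearly as short. This is a minimal Mac WebKit sketch, assuming a webView outlet already wired up in the nib and a made-up example.com movie URL; whether it actually plays depends, of course, on the WebKit build knowing what to do with <video>:

#import <WebKit/WebKit.h>

// Let WebKit's HTML5 <video> support do the playback work: load a one-tag
// page into an existing WebView outlet.
NSString *html =
    @"<video src=\"http://www.example.com/movie.m4v\" controls autoplay></video>";
[[webView mainFrame] loadHTMLString:html baseURL:nil];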

One other thing that grates on me is the maxim that playback is what matters the most because that’s all that the majority of media apps are going to use. You sort of see this thinking in QTKit, the new Cocoa API for QuickTime, which currently offers very limited access to QuickTime movies as documents: you can cut/copy/paste, but you can’t access samples directly, insert modifier tracks like effects and tweens, etc.

Sure, 95% of media apps are only going to use playback, but most of them are trivial anyways. If we have 99 hobbyist imitations of WinAmp and YouTube for every one Final Cut, does that justify ignoring the editing APIs? Does it really help the platform to optimize the API for the trivialities? They can just embed WebKit, after all, so make sure that playback story is solid for WebKit views, and then please Apple, give us grownups segment-level editing already!

So, anyways, that’s the mental model I’m currently working with: playback, stream, document. I’ll try to make this distinction clear in future discussions of real and hypothetical media APIs. Thanks for indulging the big think, if anyone’s actually reading.

Slideshows aren’t movies (unless they are)

Since returning to GRR, I’ve let the Mac Pro run overnight, downloading 30-some WWDC 2008 videos, which just became available on Friday. At an average of 500 MB each, I’m probably burning through 15 GB of data, which means that under some bandwidth-rationing regimes, I’d be closing in on a bandwidth cap. I ranted about this before, but this is a textbook case of why the US’ broadband oligopoly is going to hurt the country in the long run: somewhere out there, there’s an iPhone developer on Comcast or AT&T who can’t get needed iPhone development info because he or she has hit an arbitrary bandwidth cap. American developers are at a disadvantage, relative to the rest of the world, thanks to the crooked arrangement of just having one or two providers (if any) in a given area, enjoying a government-protected monopoly while not being expected to provide any specific level of service.

We don’t have a single provider taking care of everyone in the public interest, nor do we enjoy the benefits of genuine competition. What we’ve got is good old-fashioned US-style cronyism, something that will only get worse as the government frantically borrows more money (WTF?) to buy more of the private economy (WTF?).

Having said that… why the heck are all these videos a half gig anyways?

If you look at the previous years’ WWDC videos or similar ADC on iTunes content (Leopard Tech Talks, for example), you’ll notice that by and large, the video portion of the file is just the slides. Except for the transitions and any demos, the video portion of the presentation doesn’t move. Yet if you zoom in on the text, you’ll see a little bit of an artifact that jumps every second or so as it hits a keyframe, meaning this non-moving content has been encoded as if it were natural, moving video.

And it’s a massive waste of bandwidth. Since QuickTime supports variable frame rates, you could have a single frame (i.e., a slide) that stays up for 10 or 20 or 60 seconds, and only need the data for that frame once. Then for the next slide, you’d only need one sample, however long it is. There are slideshow-movie-maker examples for QuickTime that do exactly this (I think I even did one in QTJ a long time ago, but it’s not in the book [should have been] and I don’t know where I posted it, if I did). I suspect you could do the slides with a lossless codec, like PNG or even Animation (which is just RLE), and still get a huge space saving over the current process of re-encoding that keyframe every second, and providing B-frames (deltas) that don’t convey any information because the image doesn’t actually change.
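
Here’s roughly what such a slideshow-movie maker looks like in QTKit. It’s a minimal sketch, assuming an array of slide-image paths and a fixed 30 seconds per slide, and assuming the ‘png ’ four-character code is accepted as the QTAddImageCodecType value (error handling omitted). Each slide goes in as one long sample instead of thirty re-encoded frames per second:

#import <QTKit/QTKit.h>

// Build a variable-frame-rate slideshow movie: one sample per slide.
NSError *error = nil;
QTMovie *movie = [[QTMovie alloc] initToWritableFile:@"/tmp/slides.mov" error:&error];
NSDictionary *attrs =
    [NSDictionary dictionaryWithObject:@"png " forKey:QTAddImageCodecType];

for (NSString *path in slidePaths) {
    NSImage *slide = [[NSImage alloc] initWithContentsOfFile:path];
    [movie addImage:slide
        forDuration:QTMakeTime(30, 1)     // 30 seconds, held as a single sample
     withAttributes:attrs];
    [slide release];
}
[movie updateMovieFile];
[movie release];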

In fact, you could still use natural video for your demos by simply having a second video track that would contain samples only for those times when you’re actually doing video. Moral of story: QuickTime’s creative abilities remain freaking awesome.

So why not do this? Well, obviously, the videos need to be playable on iPods and iPhones, which only support H.264 (though I’d be interested to know if H.264 can do variable frame rates… I assume it can’t, at least not in the Low Complexity profile, but it’d be a real nice feature for exactly this kind of thing). You could use the variable length samples in production — drop some PNGs on a video track in your Final Cut timeline and stretch their durations and you’re doing exactly that — but once you export, you’ll get the same constant-frame-rate H.264 that you would if you were exporting your kids’ Halloween videos. Alas.

First 40 seconds of my first AMV

A while back, I mentioned planning to start work on an anime music video (AMV), as a means of improving my Final Cut skills, and thereby getting my developer head more in line with what’s needed by actual users of media software. I also blogged a few times (1, 2, 3) about how the process of ripping, de-interlacing, and re-encoding the video from DVD was going.

So, update. Earlier in the month, at the Java Posse Roundup, I did a five-minute lightning talk on AMVs (hey, the topics were wide open, and this has genuine geek culture relevance), so I wanted to get mine started before then, to show it as a work in progress. Joe Nuxoll of the Posse and Dianne Marsh of the Ann Arbor JUG and CodeMash recorded the talks by hand with a digital camera, so you can see my AMV mini-talk on YouTube.

Also, I’ve exported what I’ve got done as an MP4:

Note: the above video uses the HTML5 <video> tag, falling back on the QuickTime plugin if that’s not available. Works in Safari, WebKit nightly, and Firefox… haven’t tried anything else.

A couple thoughts so far:

  • I think I spent 5-10 hours just logging, creating subclips for use later.
  • The Bella Final Cut Keyboard is a massive time saver for editing. I did the last three edits with mouse and keyboard while travelling and it was burdensome compared with the ease of just jogging to the needed frame and clicking the in- or out-point button.
  • Some of the source video is a little jumpy (the second edit might merit a redo, because I cut into it right on a jump). Going frame by frame through the source material, there also seems to be a little bit of frame damage at the bottom of every frame preceding an edit… presumably evidence of its being hand-edited film, from the time before anime production techniques went all-digital.
  • After tightly timing the first few edits to the guitar “ping”s, I allowed them to get looser during the guitar intro. It’s probably too loose, too vague, for some of the “Miyazawa vanity montage” (losing the source video’s dissolve to the flower background shot, and finding a different way into the “towering above the crowd” shot, might help).
  • Probably want to lip synch Miyazawa in the doorway on that first line of lyrics.

Update: Ah, I’m my own worst critic. Looking at it again, I should at least chalk up credits for the parts I think work:

  • The establishing shots on the guitar “pings” work, and gradually establish the location by getting closer each shot.
  • Continuity holds up, as the sequence gradually moves down the hallway and into the classroom. Most of those shots are from the same sequence in the first episode, but I remember one (Arima’s reaction maybe?) was actually borrowed from much later. There are lots of great shots I didn’t use here because of continuity (location and costume, mostly). I imagine each sequence will largely use video from a narrow period of time, so shots match.
  • The idea for this first sequence is to establish Miyazawa, and I think it’s going in the right direction. Her crazy moments in the first few episodes should work with the lyrics, up through and including the chorus “I’m bad news, baby I’m bad news/ I’m just bad news, bad news, bad news”, cutting to another outrage (like her punching Arima) on each “bad news”. General road map after that:
    • Break and second verse: establish Miyazawa/Arima romance
    • Bridge (“I’m just damage control…”, etc.): inner monologue sequence (His and Her Circumstances has lots of these).
    • Instrumental break (“‘cuz we’ll all need/ portions for foxes”): Maybe super Miyazawa into the solarized effects shots of the classrooms, hallways (there’s a spinning shot from like episode 20 that could cap such a sequence), then back into the establishing locations for the reprise of the opening pings that gets us into the…
    • Third verse: Switch to Arima’s POV (“there’s a pretty young thing in front of you/ and she’s real pretty and she’s real into you”)
    • Third chorus: Back to Miyazawa’s POV (“you’re bad news/ my friends tell me to leave you”)
    • Final chorus: Joyous romantic shots (“you’re bad news/ that’s OK, I like you”) get us to big conclusion and out

Still, it’s good to have it started. Of course, now I’ve committed a massive amount of time to iPhone projects, so I don’t expect to look at this again until sometime after June. I’d originally planned to do this AMV — Rilo Kiley’s “Portions for Foxes” audio with His and Her Circumstances video — as my “learning experience”, then move on to a second video for which I have distinctly more concrete plans. But Anime Weekend Atlanta’s cut-off for the AMV expo is usually in mid-August, so I’ll be lucky if I can even get this first one done in time to enter it in the expo.

Arsenic and old interlace

Rather than continue to post comments to my previous blog about trying to rip a DVD, de-interlace it, and convert it to an editing-friendly codec, I’m posting the followup as its own message.

The overnight re-encode with JES Deinterlacer failed just like the previous attempt that made the 40 GB file: it got stuck on one frame of the complete video track and padded out the last 40% of the movie with that. At least with a fixed data rate of 3000 Kbps, the broken file wasn’t 40 frickin’ gig…

So then I opened the full-length video track m2v file with QuickTime Pro (which pinwheeled for like 10 minutes) and, noticing that the Pixlet export dialog had a “deinterlace” checkbox, tried using that for my export.

QTPro export and transcode

Unfortunately, it too ended up getting stuck on the same frame.

So, plan D (or was I on “E” by this point?) was to go back to MPEG Streamclip, open the demuxed m2v file (which, remember, was created by Streamclip in the first place, from all the VOB files) and do the “export to QuickTime” from there, again depending on the Pixlet exporter to handle the deinterlace. The 3000 Kbps export from before looked like ass, so I went up to 10 Mbps.

MPEG Streamclip export to Pixlet

Ah, finally! After a three-hour encode, I got the whole thing exported to Pixlet, with deinterlace:

Exported DVD rip with Pixlet video

The file is about 10 GB, and there are still some artifacts in a few high-contrast places (e.g., panning across dense black-on-white text). However, it scrubs like a dream, something you don’t get with a codec meant for playback, like H.264. We tend to forget how you need different codecs for different reasons, like whether you can afford asymmetry in encoding and decoding (e.g., a movie disc that is encoded once, and played millions of times, is a scenario which tolerates a slow and expensive encode so long as the decode is fast and cheap). In the case of editing, you want codecs that allow you to access frames quickly and cheaply in either direction, which tends to rule out temporal compression. The Ishitori tutorial’s editing codecs page recommends the RLE-based Animation codec, Pixlet, Photo-JPEG, or the commercial SheerVideo codec.

I might encode at an even higher bitrate for my second AMV project, but for now, I’m not super keen on burning up the last 100 GB of my drive space, so I’m OK with the current quality/size tradeoff.

Now to rip discs 2-5 of His and Her Circumstances and either start storyboarding or at least write out a rudimentary shot sheet (smart), or just throw down edits like I know what I’m doing (dumb… but probably what I’ll do, since this is more experimental than anything).

Adventures in deinterlacing…

So, a while back, I mentioned wanting to edit an AMV. I started laying the groundwork for that today, and so far, it’s an uphill climb. I’m working from Ishitori.net’s guide to making AMVs on the Mac, a helpful reference since most of the easily-found AMV guides are for Windows users.

So far, though, just ripping the DVDs to an editing-friendly format is a struggle. I had originally thought it would be as simple as going through Handbrake and making sure to account for interlacing. Problem is, Handbrake is far more inclined to give you a playback-oriented transcode (e.g., H.264) than something amenable to scrubbing in Final Cut. Ishitori’s guide suggests using MacTheRipper to de-CSS the files and get plain ol’ VOBs on your hard drive. Check. To do anything with them, of course, you need the QuickTime MPEG-2 Component, which I had a copy of like 8 years ago at Pathfire, but ended up re-buying today.

The next couple steps involve getting the footage in shape for Final Cut. That means:

  • Deinterlace
  • Convert to square pixels
  • Demux (actually, AMVs generally only need the video track)
  • Transcode to Motion-JPEG, Pixlet, or some other editing codec

Ishitori suggests using Avidemux for deinterlacing and general image filtering, but I found both the X11 and Qt versions to be completely unusable. The X11 version won’t open a file unless you run it as root from the command line, and even then it seems to misread its plugin files. The Qt version just crashes a lot, and can’t work with any drive other than the boot volume (hilarious). Tried building from the latest sources in Subversion, but that failed too, and I wasn’t really inclined to go on a wild dependency chase.

Plan B: I found JES Deinterlacer. Cool! Oh wait, it won’t read later VOB segments from a long rip. Not so cool, unless you only want to deinterlace the menus and not the main program.

Plan C: Use MPEG Streamclip to pull all the VOBs together, and demux the entire video stream into another file.

OK, this might work…
MPEG Streamclip demux preview

Takes about 5 minutes…
MPEG Streamclip progress

Next, open it in JES Deinterlacer. Not the most intuitive GUI ever, but all these video tools have a million options and read like a brick.

JES Deinterlacer open panel

Downside #1: JES Deinterlacer pegs the CPU and puts up a SPOD (“spinning pinwheel of death”) for about 10 minutes while opening the file.

Anyways, JES Deinterlacer can transcode as you go, so I figure I’ll save a step and export my deinterlaced video to Pixlet.

JES Deinterlacer progress

Only two problems here: first, I didn’t set a bitrate and figured I’d let QuickTime decide. That defaulted to the highest possible quality (about 30 Mbps) and a 40 GB file. Kind of overkill, considering the source was 5 Mbps MPEG-2. The other problem is that at some point about 2/3 of the way through the file, it got stuck on one image and encoded the rest of the file with that same image. Niiiice.

Anyways, I’m letting it run again with a saner output bitrate for Pixlet (6 Mbps, which should give me a 6-7 GB file), and hoping that it doesn’t get locked on one frame again. It started about an hour ago, and looks to be about 25% done (on the dual 1.8 G5… would be interesting to see how it performs on a Core 2 Duo).

So, maybe I’ll be ready to edit when I get up tomorrow, or maybe I’ll be really pissed off.