
Archives for: video

Conferences Are Coming

Fall’s coming, evenings are getting shorter (sorry, Southern Hemisphere, but roll with me here), and the Fall conferences are gearing up. Last week was CocoaConf Portland, and here’s me teaching the iPad Productivity APIs class (from the CocoaConf Flickr):

Chris Adamson in iPad Productivity workshop

One thing I need to get in before the jump: CocoaConf organizer Dave Klein was on episode 15 of the My Appventure podcast, and the show notes page linked above has a 20% off code for the next three CocoaConfs: Columbus, OH (Sept. 27-28), Boston, MA (Oct. 25-26), and Atlanta, GA (Nov. 15-16). I’m teaching all-day classes the day before each of these: iPad Productivity in Columbus, and Core Audio in Boston and Atlanta.


CocoaConf Tour (Late 2013) and Core Audio video

A couple speaky/selly things real quick…

As mentioned in earlier posts, I’m speaking at all four of the upcoming CocoaConfs. I’m reprising my all-day tutorials:

  • iPad Productivity (UIDocument, autosave, iCloud, PDF/printing, inter-app doc exchange) in Portland (August) and Columbus (September)
  • Core Audio in Boston (October) and Atlanta (November)

I’m also doing two regular hour-long sessions, on Audiobus and A/V encoding. For Audiobus, feel free to let go of any angst that this much-loved third-party tool for inter-application audio will be rendered obsolete by Apple’s announced inter-app audio framework in iOS 7. The Audiobus team has announced that Audiobus will adopt Apple’s new APIs when running under iOS 7, meaning you’ll get compatibility both with Audiobus-enabled apps and with apps that use Apple’s new APIs. So it’s still well worth learning about if you’re into audio; I’m working on some demo code to show it off. I’m thinking I might bring back the Dalek ring modulator code from 360iDev a few years back and wrap it as an Audiobus effect (Hi Janie!).


Never Mind the WWDC, Here’s the CocoaConfs

So now CocoaConf Alt isn’t happening. A story at Loop Insight lays the blame more clearly at Apple’s feet for pressuring the Intercontinental hotel, which apparently has some contractual relationship with Apple during WWDC (it’s not said what… possibly housing Apple employees or off-site partner meetings?), a contract that forbids the hotel from hosting a “competing” event.

I’ve spoken at nearly all the CocoaConf conferences, and I have no reason to doubt Dave’s version of these events. Indeed, while some commenters would like to portray this as a spat solely between the Intercontinental and CocoaConf – and leave Apple out of it – that position doesn’t square with the facts. If the Intercontinental knew they were contractually prohibited from hosting CocoaConf Alt, they wouldn’t have signed a contract with CocoaConf in the first place, right? Daniel Jalkut makes the best case for letting Apple off the hook, suggesting that someone at either the Intercontinental or Apple got trigger-happy and killed the event when they didn’t necessarily need to. That the Intercontinental realized it was in a conflict-of-interest scenario after the fact is possible, but it’s no more plausible than the idea that Apple doesn’t like anyone riding on their coattails and sent the hotel management a nastygram.

For what it’s worth, that latter scenario is the one that rings true to me. (Or, to haul out a tag I haven’t used in a while, nefarious skullduggery!).


Point and Laugh: The End of WebM

As hinted last week, Mozilla has finally seen the writing on the wall and decided to support H.264 in the <video> tag in Firefox. Probably inevitable given that

  1. Mozilla is making a new push into mobile
  2. The default standard for web video has become H.264 <video> for mobile and Flash for the desktop
  3. Adobe is abandoning Flash for mobile

Taken together, these three points mean that continuing to reject H.264 <video> (and implicitly depend on Flash to support the video content that actually exists in the wild) might well leave mobile Firefox unable to play any meaningful amount of video at all. And certainly somebody somewhere must have realized that H.264 is the most popular encoding format within Flash movies, meaning that Mozilla’s attempt to push developers into other codecs simply wasn’t working.

Not that <video> is a cure-all; Streaming Media’s Jan Ozer makes the case that it’s not there, and probably never will be, at least not for non-trivial uses.

But as Streaming Media’s Troy Dreier points out in today’s article, at least this will rid us of the distraction of WebM: “This move drives a major nail in the WebM coffin, making WebM only slightly more relevant than Ogg Theora.”

Google fans among my Tweeps and +1’ers argue that WebM only failed because Google didn’t push it hard enough, but I think they’re wrong: WebM was always a dumb idea, one whose only advantage was political correctness among a holier-than-thou group of developers, few of whom had particularly deep knowledge of digital media beyond patent and licensing politics. Looking back at my earlier anti-WebM screed, my only regret is not slamming it even harder than I did.

Guess it’s just as well that I never created a webm category on the blog… turns out I wouldn’t ever need it.

No launch-day Final Cut Pro X for me

Well, this was an unpleasant surprise:

Mac App Store rejects my purchase of Final Cut Pro X due to inadequate video card

I’ve never really given a lot of thought to my video card… I usually just get the default option for the Mac Pro when I buy it. It’s not like I do any gaming on my Mac (that’s what the Wii, PS2, iPhone, and iPad are for).

So here’s what System Profiler tells me I have:


ATI Radeon HD 2600 XT:

  Chipset Model:	ATI Radeon HD 2600
  Type:	GPU
  Bus:	PCIe
  Slot:	Slot-1
  PCIe Lane Width:	x16
  VRAM (Total):	256 MB
  Vendor:	ATI (0x1002)
  Device ID:	0x9588
  Revision ID:	0x0000
  ROM Revision:	113-B1480A-252
  EFI Driver Version:	01.00.252
  Displays:
    SMEX2220:
      Resolution:	1920 x 1080 @ 60 Hz
      Pixel Depth:	32-Bit Color (ARGB8888)
      Main Display:	Yes
      Mirror:	Off
      Online:	Yes
      Rotation:	Supported
      Television:	Yes
    Display Connector:
      Status:	No Display Connected

Guess I’m in the market for a better video card for an Early 2008 Mac Pro… what should I get, given that all I need it for is FCP?

Wrap up from Voices That Matter iPhone, Spring 2011

Ugh, this is twice in a row that I’ve done a talk for the Voices That Matter: iPhone Developer Conference and been able neither to get all my demos working perfectly in time nor to cover all the important material in 75 minutes. Yeah, doing a 300-level talk will do that to you, but still…

This weekend’s talk in Seattle was “Advanced Media Manipulation with AV Foundation”, sort of a sequel to the intro talk I did at VTM:i Fall 2010 (Philly), but since the only people who would have been at both conferences are speakers and organizers, I spent about 25 minutes recapping material from the first talk: AVAssets, the AVPlayer and AVPlayerLayer, AVCaptureSession, etc.

Aside: AVPlayerLayer brings up an interesting point, given that it is a subclass of CALayer rather than UIView, which is what the view property of MPMoviePlayerController provides. What’s the big difference between a CALayer and a UIView, and why does it matter for video players? The difference is that UIView subclasses UIResponder and therefore responds to touch events (the player in the Media Player framework has its own pop-up controls, after all), whereas a CALayer, and thus AVPlayerLayer, does not respond to touch input at all: it’s purely visual.
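
Here’s a minimal sketch (mine, not the talk’s) of what that difference means in practice: the AVPlayerLayer only draws the video, so any touch handling has to live on a UIView or gesture recognizer layered on top of it. It assumes a view controller with a retained player property; movieURL is whatever asset you want to play.

```objc
#import <AVFoundation/AVFoundation.h>
#import <UIKit/UIKit.h>

- (void)showMovieAtURL:(NSURL *)movieURL inView:(UIView *)containerView {
    self.player = [AVPlayer playerWithURL:movieURL];

    // The AVPlayerLayer is purely visual: just add it to the view's layer tree.
    AVPlayerLayer *playerLayer = [AVPlayerLayer playerLayerWithPlayer:self.player];
    playerLayer.frame = containerView.bounds;
    playerLayer.videoGravity = AVLayerVideoGravityResizeAspect;
    [containerView.layer addSublayer:playerLayer];

    // Touches belong to the UIView/UIResponder world, so if you want
    // tap-to-pause behavior, attach a gesture recognizer to the view.
    UITapGestureRecognizer *tap = [[[UITapGestureRecognizer alloc]
        initWithTarget:self action:@selector(togglePlayback:)] autorelease];
    [containerView addGestureRecognizer:tap];

    [self.player play];
}

- (void)togglePlayback:(UITapGestureRecognizer *)sender {
    // rate is 0.0 when the player is paused.
    if (self.player.rate == 0.0) {
        [self.player play];
    } else {
        [self.player pause];
    }
}
```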

So anyways, on to the new stuff. What has interested me for a while in AV Foundation is the classes added in 4.1 for sample-level access, AVAssetWriter and AVAssetReader. An earlier blog entry, From iPod Library to PCM Samples in Far Fewer Steps Than Were Previously Necessary, exercises both of these, reading from an iPod Library song with an AVAssetReader and writing to a .caf file with an AVAssetWriter.
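
That post has the full listing, but to give a flavor of the reader half, pulling decoded PCM out of a song looks roughly like the following. This is a hedged sketch of my own, not the post’s actual code: error handling is omitted, and the output settings are just one reasonable choice.

```objc
#import <AVFoundation/AVFoundation.h>
#import <CoreMedia/CoreMedia.h>
#import <CoreAudio/CoreAudioTypes.h>

- (void)readPCMFromAsset:(AVURLAsset *)songAsset {
    NSError *error = nil;
    AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:songAsset error:&error];

    // Grab the song's audio track and ask the reader to decode it to linear PCM.
    AVAssetTrack *audioTrack =
        [[songAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
    NSDictionary *outputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
        [NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
        nil];
    AVAssetReaderTrackOutput *trackOutput =
        [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:audioTrack
                                                   outputSettings:outputSettings];
    [reader addOutput:trackOutput];
    [reader startReading];

    // Pull sample buffers until the reader runs dry. In the full pipeline,
    // each buffer gets handed to an AVAssetWriterInput (via -appendSampleBuffer:)
    // that's writing the .caf file.
    CMSampleBufferRef sampleBuffer = NULL;
    while ((sampleBuffer = [trackOutput copyNextSampleBuffer]) != NULL) {
        // ...inspect or forward the raw PCM here...
        CFRelease(sampleBuffer);
    }
}
```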

Before showing that, I did a new example, VTM_ScreenRecorderTest, which uses AVAssetWriter to make an iOS screen recorder for your application. Basically, it runs an onscreen clock (so that something onscreen is changing), and then uses an NSTimer to periodically do a screenshot and then write that image as a video sample to the single video track of a QuickTime .mov file. The screenshot code is copied directly from Apple’s Technical Q&A 1703, and the conversion from the resulting UIImage to the CMSampleBufferRef needed for writing raw samples is greatly simplified with the AVAssetWriterInputPixelBufferAdaptor.
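
The interesting part is getting each screenshot UIImage into the writer with a sensible timestamp. Here’s roughly what that step looks like, as a sketch of my own rather than the demo’s actual code, and under a few assumptions: the writer session has already been started, the adaptor was created with pixel buffer attributes so its pool exists, and the usual UIKit/Quartz coordinate flip is glossed over.

```objc
#import <AVFoundation/AVFoundation.h>
#import <CoreVideo/CoreVideo.h>
#import <UIKit/UIKit.h>

- (void)appendScreenshot:(UIImage *)screenshot
               toAdaptor:(AVAssetWriterInputPixelBufferAdaptor *)adaptor
             frameNumber:(int64_t)frameNumber
         framesPerSecond:(int32_t)fps {
    if (!adaptor.assetWriterInput.readyForMoreMediaData) {
        return; // drop the frame rather than block the timer
    }

    // Borrow a pixel buffer from the adaptor's pool and draw the UIImage into it.
    CVPixelBufferRef pixelBuffer = NULL;
    CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault,
                                       adaptor.pixelBufferPool,
                                       &pixelBuffer);
    if (pixelBuffer == NULL) {
        return;
    }
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(pixelBuffer),
                                                 CVPixelBufferGetWidth(pixelBuffer),
                                                 CVPixelBufferGetHeight(pixelBuffer),
                                                 8,
                                                 CVPixelBufferGetBytesPerRow(pixelBuffer),
                                                 colorSpace,
                                                 (CGBitmapInfo)kCGImageAlphaNoneSkipFirst);
    CGContextDrawImage(context,
                       CGRectMake(0, 0, screenshot.size.width, screenshot.size.height),
                       screenshot.CGImage);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

    // Each frame is stamped with a presentation time derived from the frame count.
    CMTime presentationTime = CMTimeMake(frameNumber, fps);
    [adaptor appendPixelBuffer:pixelBuffer withPresentationTime:presentationTime];
    CVPixelBufferRelease(pixelBuffer);
}
```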

In the Fall in Philly, I showed a cuts-only movie editor that just inserted segments up at the AVMutableComposition level. For this talk, I wanted to do multiple video tracks, with transitions between them and titles. I sketched out a very elaborate demo project, VTM_AVEffects, which was meant to perform the simple effects I used for the Running Start (.m4v download) movie that I often use as an example. In other words, I needed to overlay titles and do some dissolves.

About 10 hours into coding my example, I realized I was not going to finish this demo, and settled for getting the title and the first dissolve. So if you’re going to download the code, please keep in mind that this is badly incomplete code (the massive runs of commented-out misadventures should make that clear), and it is neither production quality nor copy-and-paste quality. And it most certainly has memory leaks and other unresolved bugs. Oh, and all the switches and text fields? They do nothing. The only things that work are tapping “perform” and then “play” (or the subsequent “pause”). Scrubbing the slider and setting the rate field mostly work, but have bugs, particularly in the range late in the movie where there are no valid video segments but the :30 background music is still valid.

Still, I showed it and will link to it at the end of this blog because there is some interesting working code worth discussing. Let’s start with the dissolve between the first two shots. You’ll notice in the code that I go with Apple’s recommendation of working back and forth between two tracks (“A” and “B”, because I learned on analog equipment and always think of it as A/B Roll editing). The hard part — and by hard, I mean frustrating, soul-draining, why-the-frack-isn’t-this-goddamn-thing-working hard — is providing the instructions that describe how the tracks are to be composited together. In AV Foundation, you provide an AVVideoComposition that describes the compositing of every region of interest in your movie (oh, I’m sorry, in your AVComposition… which is in no way related to the AVVideoComposition). The AVVideoComposition has an array of AVVideoCompositionInstructions, each covering a specific timeRange, and each containing AVVideoCompositionLayerInstructions that describe the opacity and affine transform (static or animated) of each video track. Describing it like that, I probably should have included a diagram… maybe I’ll whip one up in OmniGraffle and post it later. Anyways, this is fairly difficult to get right, as your various instructions need to account for all time ranges across all tracks, with no gaps or overlaps, and line up exactly with the duration of the AVComposition. Like I said, I got exactly one fade-in working before I had to go pencils-down on the demo code and start preparing slides. Maybe I’ll be able to fix it later… but don’t hold me to that, OK?
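
To make that structure a little more concrete than my prose, here’s a bare-bones sketch (my own names and simplifications, not the demo code) of a video composition with a single instruction that dissolves from track A to track B over their overlap. As noted above, real code also needs pass-through instructions for every non-overlapping range, or playback and export will misbehave.

```objc
#import <AVFoundation/AVFoundation.h>

- (AVMutableVideoComposition *)dissolveCompositionFromTrack:(AVMutableCompositionTrack *)trackA
                                                    toTrack:(AVMutableCompositionTrack *)trackB
                                               overlapRange:(CMTimeRange)overlapRange
                                                 renderSize:(CGSize)renderSize {
    AVMutableVideoComposition *videoComposition = [AVMutableVideoComposition videoComposition];
    videoComposition.renderSize = renderSize;
    videoComposition.frameDuration = CMTimeMake(1, 30); // 30 fps

    // One instruction for the time range where tracks A and B overlap.
    AVMutableVideoCompositionInstruction *instruction =
        [AVMutableVideoCompositionInstruction videoCompositionInstruction];
    instruction.timeRange = overlapRange;

    // Track A fades out across the overlap; track B sits underneath at full opacity.
    AVMutableVideoCompositionLayerInstruction *layerA =
        [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:trackA];
    [layerA setOpacityRampFromStartOpacity:1.0 toEndOpacity:0.0 timeRange:overlapRange];

    AVMutableVideoCompositionLayerInstruction *layerB =
        [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:trackB];
    [layerB setOpacity:1.0 atTime:overlapRange.start];

    instruction.layerInstructions = [NSArray arrayWithObjects:layerA, layerB, nil];
    videoComposition.instructions = [NSArray arrayWithObject:instruction];
    return videoComposition;
}
```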

The other effect I knew I had to show off was titles. AV Foundation has a curious way to do this. Rather than adding your titles and other overlays as new video tracks, as you’d do in QuickTime, AVF ties into Core Animation and has you do your image magic there. By using an AVSynchronizedLayer, you can create sublayers whose animations get their timing from the movie, rather than from the system clock. It’s an interesting idea, given how powerful Quartz and Core Animation are. But it’s also deeply weird to be creating content for your movie that is not actually part of the movie, but is rather just loosely coupled to the player object by way of the AVPlayerItem (and this leads to some ugliness when you want to export the movie and include the animations in the export). I also noticed that when I scrubbed past the fade-out of the title and then set the movie playback rate to a negative number to run it backward, the title did not fade back in as expected… which makes me wonder if there are assumptions in UIKit or Core Animation that time always runs forward, which is of course not true when AV Foundation controls animation time via the AVSynchronizedLayer.
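
For the record, the basic shape of the trick looks something like this. It’s a hedged sketch with made-up values rather than the demo code: the CATextLayer’s fade-out is scheduled in movie time, because its parent is an AVSynchronizedLayer tied to the AVPlayerItem.

```objc
#import <AVFoundation/AVFoundation.h>
#import <QuartzCore/QuartzCore.h>

- (void)addTitle:(NSString *)title
  overPlayerItem:(AVPlayerItem *)playerItem
         inLayer:(CALayer *)hostLayer {
    // The synchronized layer gets its timing from the player item, not the system clock.
    AVSynchronizedLayer *syncLayer =
        [AVSynchronizedLayer synchronizedLayerWithPlayerItem:playerItem];
    syncLayer.frame = hostLayer.bounds;

    CATextLayer *titleLayer = [CATextLayer layer];
    titleLayer.string = title;
    titleLayer.fontSize = 36.0;
    titleLayer.alignmentMode = kCAAlignmentCenter;
    titleLayer.frame = CGRectMake(0.0, 20.0, hostLayer.bounds.size.width, 50.0);

    // Fade the title out between 3 and 4 seconds of *movie* time; beginTime is
    // interpreted on the player item's timeline.
    CABasicAnimation *fadeOut = [CABasicAnimation animationWithKeyPath:@"opacity"];
    fadeOut.fromValue = [NSNumber numberWithFloat:1.0];
    fadeOut.toValue = [NSNumber numberWithFloat:0.0];
    fadeOut.beginTime = 3.0;
    fadeOut.duration = 1.0;
    fadeOut.removedOnCompletion = NO;
    fadeOut.fillMode = kCAFillModeForwards;
    [titleLayer addAnimation:fadeOut forKey:@"titleFadeOut"];

    [syncLayer addSublayer:titleLayer];
    [hostLayer addSublayer:syncLayer];
}
```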

My code is badly incomplete and buggy, and anyone interested in a solid demo of AV Foundation editing would do well to check out the AVEditDemo from Apple’s WWDC 2010 sample code. Still, I said I would post what I’ve got, so there you go. No complaints from you people, or the next sample code you get from me will be another goddamned webapp server written in Java.

Oh yeah, at one point, I dreamed of having enough time to write a demo that would process A/V capture data in real time, using AVCaptureVideoDataOutput and AVCaptureAudioDataOutput, maybe showing a live audio waveform or doing an FFT. But that demo didn’t even get written. Maybe next conference.

For a speaker on “advanced” AV Foundation, I find I still have a lot of unanswered questions about this framework. I’m not sure how well it supports saving an AVComposition that you’re editing — even if the AVF classes implemented NSCoding and could therefore be persisted with keyed archiving, that doesn’t address how you’d persist the associated Core Animation animations. I also would have to think hard about how to make edits undoable and redoable… I miss QuickTime’s MovieEditState already. And to roll an edit, do you dig into a track’s segments and dick with their timeRanges, or do you have to remove and reinsert the segment?

And what else can I do with AVSynchronizedLayer? I don’t see particularly compelling transitions in AVF — just dissolves and trivial push wipes (i.e., animating the affine transform) — but if I could render whatever I like in a CALayer and pick up the timing from the synchronized layer, is that how I roll my own Quartz-powered goodness? Speaker Cathy Shive and I were wondering about this idea over lunch, trying to figure out whether we would subclass CAAnimation or CALayer in hopes of getting a callback along the lines of “draw your layer for time t”, which would be awesome if only either of us were enough of a Core Animation expert to pull it off.

So, I feel like there’s a lot more for me to learn on this, which is scary because some people think I’m an expert on the topic… for my money, the experts are the people in the AV Foundation dev forums (audio, video), since they’re the ones really using it in production and providing feedback to Apple. Fortunately, these forums get a lot of attention from Apple’s engineers, particularly bford, so that sets a lot of people straight about their misconceptions. I think it’s going to be a long learning curve for all of us.

If you’re keen to start, here are the slides and demo code:

An Encoder Is Not A State Machine

I’m pleasantly surprised that Google’s removal of H.264 from Chrome in favor of WebM has been greeted with widespread skepticism. You’d think that removing popular and important functionality from a shipping product would be met with scorn, but when Google wraps itself with the “open” buzzword, they often seem to get a pass.

Ars Technica’s “Google’s dropping H.264 from Chrome a step backward for openness” has been much cited as a strong argument against the move. It makes the important point that video codecs extend far beyond the web, and that H.264’s deep adoption in satellite, cable, physical media, and small devices makes it clearly inextricable, no matter how popular WebM might get on the web (which, thus far, is not much). It concludes that this move makes Flash more valuable and viable as a fallback position.

And while I agree with all of this, I still find that most of the discussion has been written from the software developer’s point of view. And that’s a huge mistake, because it overlooks the people who are actually using video codecs: content producers and distributors.

And have they been clamoring for a new codec? One that is more “open”? No, no they have not. As Streaming Media columnist Jan Ozer laments in “Welcome to the Two-Codec World”:

I also know that whatever leverage Google uses, they still haven’t created any positive reason to distribute video in WebM format. They haven’t created any new revenue opportunities, opened any new markets or increased the size of the pie. They’ve just made it more expensive to get your share, all in the highly ethereal pursuit of “open codec technologies.” So, if you do check your wallet, sometime soon, you’ll start to see less money in it, courtesy of Google.

I’m grateful that Ozer has called out the vapidity of WebM proponents gushing about the “openness” of the VP8 codec. It reminds me of John Gruber’s jab (regarding Android) that Google was “drunk on its own keyword”. What’s most atrocious to me about VP8 is that open-source has trumped clarity, implementability, and standardization. VP8 apparently only exists as a code-base, not as a technical standard that could, at least in theory, be re-implemented by a third party. As the much-cited first in-depth technical analysis of VP8 said:

The spec consists largely of C code copy-pasted from the VP8 source code — up to and including TODOs, “optimizations”, and even C-specific hacks, such as workarounds for the undefined behavior of signed right shift on negative numbers. In many places it is simply outright opaque. Copy-pasted C code is not a spec. I may have complained about the H.264 spec being overly verbose, but at least it’s precise. The VP8 spec, by comparison, is imprecise, unclear, and overly short, leaving many portions of the format very vaguely explained. Some parts even explicitly refuse to fully explain a particular feature, pointing to highly-optimized, nigh-impossible-to-understand reference code for an explanation. There’s no way in hell anyone could write a decoder solely with this spec alone.

Remember that even Microsoft’s VC-1 was presented and ratified as an actual SMPTE standard. One can also contrast the slop of code that is VP8 with the strategic designs of MPEG with all their codecs, standardizing decoding while permitting any encoder that produces a compliant stream that plays on the reference decoder.

This matters because of something that developers have a hard time grasping: an encoder is not a state machine. That is, there need not be, and probably should not be, a single base encoder. An obvious example of this is the various use cases for video. A DVD or Blu-Ray disc is encoded once and played thousands or millions of times. In this scenario, it is perfectly acceptable for the encode process to require expensive hardware, a long encode time, a professional encoder, and so on, since those costs are easily recouped and are only incurred once. By contrast, video used in a video-conferencing-style application requires fairly modest hardware, real-time encoding, and can make few if any demands of the user. Under the MPEG-LA game plan, the market optimizes for both of these use cases. But when there is no standard other than the code, it is highly unlikely that any implementation will vary much from that code.

Developers also don’t understand that professional encoding is something of an art: codecs and different encoding software and hardware have distinct behaviors that can be mastered and exploited. In fact, early Blu-Ray discs were often authored with MPEG-2 rather than the more advanced H.264 and VC-1 because the encoders — both the devices and the people operating them — had deeper support for and a better understanding of MPEG-2. Assuming that VP8 is equivalent to H.264 on any technical basis overlooks these human factors: people now know how to get the most out of H.264, and have little reason to achieve a similar mastery of VP8.

Also, MPEG rightly boasts that ongoing encoder improvements over time allow for users to enjoy the same quality at progressively lower bitrates. It is not likely that VP8 can do the same, so while it may be competitive (at best) with H.264, it won’t necessarily stay that way.

Furthermore, is the MPEG-LA way really so bad? Here’s a line from a review of VP8 on the Discrete Cosine blog, back when On2 was still trying to sell it as a commercial product:

On2 is advertising VP8 as an alternative to the mucky patent world of the MPEG licensing association, but that process isn’t nearly as difficult to traverse as they imply, and I doubt the costs to get a license for H.264 are significantly different than the costs to license VP8.

The great benefit of ISO standards like VC-1 and H.264 is that anyone can go get a reference encoder or reference decoder, with the full source code, and hack on their own product. When it comes time to ship, they just send the MPEG-LA a dollar (or whatever) for each copy and everyone is happy.

It’s hard to understand what benefits the “openness” of VP8 will ever really provide. Even if it does end up being cheaper than licensing H.264 from MPEG-LA — and even if the licensing body would have demanded royalty payments had H.264 not been challenged by VP8 — proponents overlook the fact that the production and distribution of video is always an enormously expensive endeavor. 15 years ago, I was taught that “good video starts at a thousand dollars a minute”, and we’d expect the number is at least twice that today, just for a minimal level of technical competence. Given that, the costs of H.264 are a drop in the bucket, too small to seriously affect anyone’s behavior. And if not cost, then what else does “openness” deliver? Is there value in forking VP8, to create another even less compatible codec?

In the end, maybe what bugs me is the presumption that software developers like the brain trust at Google know what’s best for everyone else. But assuming that “open source” will be valuable to video professionals is like saying that the assembly line should be great for software development because it worked for Henry Ford.

A Big Bet on HTTP Live Streaming

So, Apple announced yesterday that they’ll stream today’s special event live, and everyone immediately assumed the load would crash the stream, if not the whole internet, myself included. But then I got thinking: they wouldn’t even try it if they weren’t pretty damn sure it would work. So what makes them think this will work?

HTTP Live Streaming, that’s why. I banged out a series of tweets (1, 2, 3, 4, 5, 6, 7, 8, 9) spelling out why the nature of HTTP Live Streaming (which I worked with briefly on a fix-up job last year) makes it highly plausible for such a use.

To summarize the spec: a client retrieves a playlist (an .m3u8, which is basically a UTF-8’ed version of the old WinAmp playlist format) that lists segments of the stream as flat files (often .m4a’s for audio, and .ts for video, which is an MPEG-2 transport stream, though Apple’s payload is presumably H.264/AAC). The client downloads these flat files and sends them to its local media player, and refreshes the playlist periodically to see if there are new files to fetch. The sizing and timing are configurable, but I think the defaults are something like a 60-second refresh cycle on the playlist and segments of about 10 seconds each.
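
For a rough idea of what the client keeps re-fetching, a live playlist looks something like this (an illustrative example with made-up URLs and sequence numbers, not anything Apple actually serves). The absence of an end-of-list tag is what tells the client the stream is still live and worth polling again.

```
#EXTM3U
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:2680
#EXTINF:10,
http://edge.example.com/event/segment2680.ts
#EXTINF:10,
http://edge.example.com/event/segment2681.ts
#EXTINF:10,
http://edge.example.com/event/segment2682.ts
```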

This can scale for a live broadcast by using edge servers, which Apple has long depended on Akamai (and others?) for. Apple vends you a playlist URL at a local edge server, and its contents are all on the edge server, so the millions of viewers don’t pound Apple with requests — the load is pushed out to the edge of the internet, and largely stays off the backbone. Also, all the local clients will be asking for the same handful of segment files at the same time, so these could be in in-memory caches on the edge servers (since they’re only 10 seconds of video each). All these are good things.

I do wonder if local 3G cells will be a point of failure, if the bandwidth on a cell gets saturated by iPhone clients receiving the files. But for wired internet and wifi LANs, I suspect this is highly viable.

One interesting point brought up by TUAW is the dearth of clients that can handle HTTP Live Streaming. So far, it’s iOS devices and Macs with QuickTime X (i.e., running Snow Leopard). The Windows version of QuickTime doesn’t support HTTP Live Streaming (being based on the “old” 32-bit QuickTime on the Mac, it may effectively be in maintenance mode). Open standard or not, there are no handy HTTP Live Streaming clients for other OSes, though MacRumors’ VNC-based workaround (which requires you to manually download the .m3u8 playlist and do the refresh yourself) suggests it would be pretty easy to get it running elsewhere, since you already have the ability to play a playlist of segments and just need to automate the playlist refresh.

Dan Leehr tweeted back that Apple has talked a good game on HTTP Live Streaming but hasn’t really shown much. Maybe this event is meant to change that. Moreover, you can’t complain about the adoption — last December, the App Store terms added a new fiat that any streaming video app must use HTTP Live Streaming (although a February post seems to ratchet this back to apps that stream for more than 10 minutes over the cellular network), so any app you see with a video streaming feature almost certainly uses HLS. At WWDC, Apple boasted about the MLB app using HLS, and it’s a safe bet that most/all other iOS video streaming apps (Netflix, Crunchyroll, etc.) use it too.

And one more thing to think about… MLB and Netflix aren’t going to stream without DRM, right? That’s the other piece that nobody ever talks about with HTTP Live Streaming: the protocol allows for encryption of the media files. See section 5 of the spec. As much as Apple and its fanboys talk up HTML5 as a rival to and replacement for Flash, this is the thing that should really worry Adobe: commoditizing DRM’ed video streaming.

Ogg: The “Intelligent Design” of digital media

Well, another go ’round with this: HTML5 won’t mandate the Ogg codecs as universally supported, and the freetards are on a tear. I was going to follow up on a JavaPosse thread about this, but I hurled enough abuse onto their list last week.

It’s abundantly clear in this blog that I don’t think Ogg is the solution that its supporters want it to be: I have a whole tag for all the posts where I dismiss Vorbis, Theora, and friends. Among these reasons:

  • I don’t think it’s technically competitive.

  • It certainly isn’t competitive in terms of expertise and mindshare, which is vitally important in media codecs: there’s a much deeper pool of shared knowledge about the MPEG codecs, which leads to chip-level support, competition among encoders, compressionists who understand the formats and how to get the most out of them, etc.

  • Its IP status remains unclear. With even MPEG-4, following a lengthy and formal patent pooling process, attacked by AT&T’s claim of a submarine patent, I have no reason to think that Ogg wouldn’t face similar claims, legitimate or not, if there was any money behind it, which there isn’t.

  • If I go to my former colleagues at CNN or in Hollywood and say “you guys should use Ogg because…”, there are no words in the English language that plausibly complete the sentence and appeal to the rational self-interest of the other party.

On this last point, I’ve got an ugly analogy: just as proponents of “Intelligent Design” are people who don’t really care about biology beyond the point at which it intrudes on their religious belief, so too do I think Ogg advocates generally don’t know much about media, but have become interested because the success of patent-encumbered formats and codecs is an affront to their open-source religion.

Ogg’s value is in its compatibility with the open source religion. It has little to offer beyond that, so it’s no surprise that it has zero traction outside of the Linux zealot community. Even ESR realized that continually shouting “everything should be in Ogg” was a losing strategy, and he said that three years ago.

I think the open source community would like to use HTML5 to force Ogg on the web community, but it’s not going to work. As others have pointed out, there’s little reason to think that IE will ever support HTML5. Even if they do, the <video> tag is not going to replace Flash or Silverlight plug-ins for video. Despite my initial enthusiasm for the <video> tag commoditizing video, I see nothing in the spec that would support DRM, and it’s hard to imagine Big Content putting their stuff on web pages without DRM anytime soon. And while you can put multiple media files in a <video> tag easily enough, having to encode/transcode to multiple formats is one reason that Big Content moved away from the Real/WMP/QuickTime switch to the relative simplicity of works-for-everyone Flash.

I’m tired of being lectured by computer people about media; it’s as ludicrous as being lectured about computers by my old boss at Headlines. Just because you use YouTube, doesn’t make you an expert, any more than my knowing how to use a username and password means I understand security (seriously, I don’t, and doubt I ever will). Kirill Grouchnikov pretty much nailed what computer people think good video is with this tweet. I’ll add this: there are probably a thousand people at Sun who understand 3D transformations in OpenGL, and maybe five who know what an Edit Decision List is. So they go with what they know.

A couple years back, I gave a JavaOne BoF in which I made a renewed call for a Java Media library which would support sample-level access and some level of editing, arguing that enabling users to take control of their own media was a manifest requirement of future media frameworks. By a show of hands, most of the developers in the audience thought it would be “enough” to just support playback of some modern codecs. JavaFX now provides exactly that. Happy now?

People who actually work in media don’t mind paying for stuff, and don’t mind not owning/sharing the IP. Video production professionals are so accustomed to standardizing on commercial products that many of them become generic nouns in industry jargon: “chyron” for character generators, “grass valley” for switchers, “teleprompters”, “betacam” tape, etc. Non-free is not a problem here. And if your argument for open source is “you’re free to fix it if it doesn’t do what you want it to,” the person who has 48 shows a day to produce is going to rightly ask “why would I use something that doesn’t work right on day one?”

The open source community doesn’t get media. Moreover, it doesn’t get that it doesn’t get media. The Ogg codecs placate the true believers, and that’s the extent of their value.

ComposiCoaxiaComponent cables at Target

Maybe all the laid-off Circuit City staffers can get jobs helping customers at Target, which apparently could use the help:

Click the image and look closely: this sign (from a Grand Rapids store, but presumably the same everywhere) identifying analog A/V connections gets composite and coaxial cables backwards. Composite is the yellow RCA cable, which carries luminance and chrominance on the same wire (as opposed to, say, S-Video), and is completely independent of an audio signal, while coaxial is the twist-on cable with the little core wire sticking out from the white insulator.

I know, you’d rather not have to use either, though my Wii is actually on composite right now because I’ve used up my component inputs with PS2 and the DirecTV HD DVR. I should probably move the Wii to S-Video at least, but someday when I have a little money, the old tube HDTV will move out of the basement and we’ll get a flatscreen with HDMI and switch all the inputs to that. Or whatever the next crazy cable standard is.