
Archives for: November 2007

LOLWGA

Parody photo removed by request… original at Entertainment Weekly

As a one-time wannabe screenwriter, you'd think it would be easy for me to take sides in the writers’ strike. Not really. I’m not read-in enough to know whether or not they’re really getting a raw deal from the producers (I should probably re-subscribe to Variety, but jeeez is it expensive), and moreover, I just don’t think it matters.

Several commentators I like have scoffed at the idea of creatives going on strike — This Week in Media‘s Alex Lindsay recalls what happened to the strike-ridden steel industry in his hometown of Pittsburgh, and Fake Steve offered a profanity-laced tirade against the writers. Fake Steve’s hyperbole aside, they’re both largely on the mark that the strike won’t end well, but I don’t think either quite captures the full pointlessness of it.

To wit: while Hollywood is arguing about divvying up the money when someone watches Two And A Half Men on the Internet, they’re conveniently ignoring the fact that nobody with access to everything on the Internet would waste their time watching Two And A Half Men.

To me, the strike scenario is as if Broadway playwrights had gone on strike in 1955 to angle for more money from television broadcasts of their plays, ignorant of how the new medium would develop new forms and render Broadway inert and irrelevant within a generation (prima facie evidence: the existence of Legally Blonde: The Musical).

The writers may be able to get a better percentage from the producers, but who’s to say the producers can deliver? Both sides derive their power from scarcity: of distribution channels, of talent, of production capacity, etc. Of course, the net is all about abundance, not scarcity. As TV viewership drops year over year, more eyes are moving to a medium that has so far resisted attempts to control or limit it.

Do the writers or producers really realize what they’re ultimately up against? There are thousands, perhaps millions of people generating content in various forms on the net. Not all of it is good, maybe most of it isn’t, but do you really want to bet your career on the premise that you’re funnier or more dramatic than 100,000 rivals?

Developing entertainment for large, broad audiences made sense when there were only three broadcast networks, but now, niches rule. And the successful creators may be the ones who thrive best finding small, intensely loyal audiences.

Adamson’s Second Law is that technological media revolutions proceed or repeat in order of bandwidth: data, text, still images, audio, and video. Take a phenomenon and you can see how this works. What can you store on your mobile phone? First just some speed-dial numbers, then an address book (text), then wallpapers and photos (images), then MP3s, and now video. What can you send over a network, store on a sneakernet device, or manipulate in a web context? Every time, it goes in this order.

So here’s a revolution for you: making a living off internet-delivered content. We already have bloggers who support themselves off their writing (text), and a few webcomics creators do the same (Penny Arcade and Megatokyo, for example). There are podcasters and other audio creators paying the bills entirely with their online work (Brian Ibbott of Coverville is one; Leo Laporte may be another example, depending on how much of his work is in podcasting and how much is as talent in traditional electronic media). If there aren’t already people supporting themselves with online video — Homestar Runner arguably counts — there will be soon. It’s inevitable.

Do I think the future of entertainment media is all Homestar Runner home businesses? No, at least not entirely. But I think the cat’s out of the bag, and the traditional producers don’t have the market cornered on eyeballs anymore. The new world is going to be a lot more varied, a lot more anarchic, and given the idea that unions are effectively collectives to monopolize labor in order to drive up its price, the online media world is going to route around that like the damage it is.

Touring QTKit capture devices

Working on QTKitCaptureExperiment2. The goal for this one is to have multiple windows, each potentially capturing from a different source (something QTKit can do and QuickTime’s old SequenceGrabber can’t). Once I got the device iteration working and populating the NSPopUpButton, I plugged in some of my cameras.

Turns out some combinations work, and some don’t. Here’s how things look with my FireWire iSight and a UVC Logitech cam:

QTKitCaptureExperiment2 with 2 video capture devices

Nifty, huh? Now plug in a Canon camcorder to the other FireWire port:

QTKitCaptureExperiment2 with 3 capture devices

I just got this sort of working, and it’s already past bedtime, so I haven’t had time to figure it out further: whether it’s just the ZR25, whether it’s having two DV devices on the FireWire bus, or something else.

Here’s the relevant code for creating the NSDictionary that populates the popup menu:

- (void)buildNamesToDevicesDictionaryForMediaType: (NSString *)mediaType 
				includeMuxDevices:(BOOL)includeMux {
	if (namesToDevicesDictionary == nil) {
		namesToDevicesDictionary = [[NSMutableDictionary alloc] init];
	}
	[namesToDevicesDictionary removeAllObjects];
	
	// add the default device first, guarding against nil
	// (setObject:forKey: throws if the object is nil)
	QTCaptureDevice *defaultDevice =
		[self getDefaultCaptureDeviceForMediaType: mediaType];
	if (defaultDevice != nil) {
		NSString *defaultDeviceName =
			[NSString stringWithFormat: @"Default (%@)",
				[defaultDevice localizedDisplayName]];
		[namesToDevicesDictionary setObject: defaultDevice
				forKey: defaultDeviceName];
	}
	
	// then find the rest, honoring the includeMux flag
	NSArray* devicesWithMediaType =
		[QTCaptureDevice inputDevicesWithMediaType: mediaType];
	NSMutableSet* devicesSet = [NSMutableSet setWithArray: devicesWithMediaType];
	if (includeMux) {
		// fold in muxed devices (e.g., DV camcorders) only when asked to
		NSArray* devicesWithMuxType =
			[QTCaptureDevice inputDevicesWithMediaType: QTMediaTypeMuxed];
		[devicesSet addObjectsFromArray: devicesWithMuxType];
	}

	// add all devices from set to dictionary
	NSEnumerator *enumerator = [devicesSet objectEnumerator];
	id value;
	while ((value = [enumerator nextObject])) {
		QTCaptureDevice *device = (QTCaptureDevice*) value;
		[namesToDevicesDictionary setObject: device
				forKey: [device localizedDisplayName]];
	}

}


- (QTCaptureDevice*)getDefaultCaptureDeviceForMediaType: (NSString *)mediaType {
	NSLog (@"getDefaultCaptureDevice");
	// set up the default device
	QTCaptureDevice *foundDevice =
		[QTCaptureDevice defaultInputDeviceWithMediaType: mediaType];
	NSLog (@"got default device %@", foundDevice);
	// try for a muxed device (eg, a dv camcorder) if that was nil
	if (foundDevice == nil)
		foundDevice =
			[QTCaptureDevice defaultInputDeviceWithMediaType: QTMediaTypeMuxed];
	return foundDevice;
}
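For completeness, here’s a sketch of the controller glue that isn’t shown above — how the dictionary might feed the popup. The devicesPopUp outlet and the repopulateDevicesPopUp method name are hypothetical (my experiment’s actual nib wiring isn’t reproduced here), but the NSPopUpButton calls are standard AppKit:

```objc
// Hypothetical controller method: rebuild the popup from the
// names-to-devices dictionary built above.
- (void)repopulateDevicesPopUp {
	[self buildNamesToDevicesDictionaryForMediaType: QTMediaTypeVideo
				includeMuxDevices: YES];
	[devicesPopUp removeAllItems];
	// sort the display names so the menu order is stable across rebuilds
	NSArray *sortedNames = [[namesToDevicesDictionary allKeys]
		sortedArrayUsingSelector: @selector(caseInsensitiveCompare:)];
	[devicesPopUp addItemsWithTitles: sortedNames];
}
```

Since the menu titles are the dictionary keys, looking up the selected device later is just `[namesToDevicesDictionary objectForKey: [devicesPopUp titleOfSelectedItem]]`.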

Update (10 minutes later): OK, the issue seems to be the iSight and the Canon camcorder fighting over the FireWire bus. Unplugging the iSight lets me see the others, and bring each up as the default device. Here’s the little notebook cam with a shot from the floor (the laptop-size cable wasn’t long enough to get it into position for a decent shot):

QTKitCaptureExperiment2 with UVC laptop cam and camcorder

And here’s the view when just the camcorder is plugged in:

QTKitCaptureExperiment2 with only a camcorder plugged in

Some days I really hate Subversion

svn: Directory '/Users/cadamson/dev/svn-experiments/QTKitCaptureExperiment2/build/QTKitCaptureExperiment2.build/QTKitCaptureExperiment2.pbxindex/strings.pbxstrings/.svn' containing working copy admin area is missing

Delete the project and check it out again, and this particular .svn directory is still missing. svn:ignore the whole build directory, and it still complains. Try to remove the entire project from svn, and it still complains. This stupid error has apparently poisoned the repository forever, with no obvious recourse. Thanks, Subversion. Real frickin’ great.
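Rant aside, the usual escape hatch for a poisoned build directory is to stop versioning it entirely; everything under build/ is regenerable by Xcode, so there’s nothing worth saving. A sketch, assuming the rest of the working copy is still intact enough for svn to operate on (the path is the one from the error above):

```shell
cd ~/dev/svn-experiments/QTKitCaptureExperiment2

# If build/ was ever added to version control, remove it from the
# repository. --force also deletes the local copy, which is fine,
# since Xcode regenerates everything under build/ on the next build.
svn delete --force build
svn commit -m "stop versioning Xcode build products"

# Then tell Subversion to ignore it so it never gets added again:
svn propset svn:ignore build .
svn commit -m "ignore the Xcode build directory"
```

Note that svn:ignore only applies to unversioned files, which is why setting it on an already-versioned build directory (as I tried above) doesn’t help; the delete has to happen first.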

ADC’s “Legacy Document” layer hack

Check out the weirdness when you find documentation on ADC for deprecated APIs, like this page about SoundManager:

ADC legacy-document overlay example
Click for full-size image

Assuming you can’t read the overlay — you pretty much have to scroll around until you can get some white under it — it says: “Important: Sound Input Manager is deprecated as of Mac OS X v10.5. For new audio development in Mac OS X, use Core Audio. See the Audio page in the ADC Reference Library.”

The idea is fine, but the implementation is atypically bad, since you can’t really read it. Isn’t there a way to sneak a semi-opaque background under the overlay text?

There must be a better way to do this (of course there is, and I discovered it after the fact)

So, with the obscure musical tastes and all: a few months ago, I noticed that a prog rock band I really like, Be Bop Deluxe, was featured in a vintage 1978 concert performance on XM. I recorded it off DirecTV onto my PowerBook, but I wanted to listen to it in iTunes like the rest of my music. Moreover, I wanted to split up the songs like any other album in iTunes. So, begin with the end in mind:

Be Bop Deluxe concert cut into separate iTunes tracks

How do you cut the single big AIFF into separate AIFFs, frame-accurately, since the live concert will segue one song into the next, and won’t be tolerant of sloppy edits?

I don’t think QuickTime Player Pro would be a good choice: the scrubber doesn’t give you much accuracy in setting your in and out points, and converting your last out point into a new in point is easily botched.

I used Soundtrack, but my technique turned out to be far less efficient than it could have been. What I did was to use the blade tool to slice the source AIFF at each song break, then put each song in its own track, like this:

Slicing a long AIFF into individual tracks in Soundtrack

I then used the mute buttons to solo each track, set the end-of-song right at the end of the track, and exported to AIFF.

This is still error-prone if you don’t line the playhead up right at the end of the track, or worse yet, you do, but then forget to actually do “Set end-of-song”. Being off by a pixel either way, even at high zoom levels, creates an audible glitch (either an up-cut or a tiny dropout). Fortunately, Soundtrack is non-destructive, so botched edits are easily remedied. But it would be nice if it weren’t so easy to botch them in the first place.

So, here’s a better practice. Blade the edit points as before, but don’t move the segments to separate tracks. Instead, select each segment, right-click, and do “Save Clip As…”:

Cutting up a long AIFF and exporting the segments in place

Provided you don’t move any of the segments, this should eliminate any problem with bad cut-offs.

With either approach, you then take the resulting AIFFs, import them into iTunes, tag them with the album title, artist, “track X of Y” values, and whatever art you can scrounge, and convert to AAC or MP3.

Well, right way or wrong, it worked, and it’ll go onto my iPod with the next sync. And yes, I know this is technically illegal, and as soon as the King Biscuit rights-holders care to release this concert on CD (I have their 10cc in Concert CD), I’ll be happy to trash my version and buy theirs. All the work to capture, cut up, and tag this concert is a good reminder that piracy is actually a hassle, and most of us will gladly pay for someone else to do the work for us.