
Developers should be content experts too

I spent much of the weekend at Anime Weekend Atlanta, which is more or less my annual holiday. I love the enthusiastic crowd, the smart panels, the excitement of the new and novel, etc. It’s also nice being in a crowd that’s mostly young and has gotten thoroughly gender-balanced over the years.

It’s also interesting as a dive into the content side, as the whole point of the exercise is a massive indulgence in media viewing and production. I attended a podcasting panel that was supposed to feature the Anime World Order podcast, but they were simultaneously scheduled to record another session, so they had the Ninja Consultants fill in. Which is fine, because I like Erin and Noah from the NC better anyways. Afterwards, we had a good chat, and I mentioned that I had un-podfaded by resuming the Fullmetal Alchemist podcast that I’ve done off and on for a year and a half.

It’s not like I have tons of time for the podcast, mind you, but as I’ve been reorganizing my thinking around putting more of my cycles into media development, I realized something: being a media content developer makes me a better media software developer.

In doing the podcast, I’ve used a couple different tools: for the first episode, I used Final Cut Express because it and iMovie were the only apps I had that supported multitrack audio (even though they were clearly inappropriate for audio-only production). I then moved on to GarageBand for a long time, and then moved up to Soundtrack (which came with FCE HD), which is what I use now.

And in using GB and Soundtrack, I started seriously leaning on segment-level editing and volume envelopes… which of course led me to think about how those features are implemented in software. The volume envelope — the little “connect the dots” line under a track’s waveform, which can be used to raise or lower volume over a period of time — can be accomplished with tweening, and that’s what got me interested enough to dig into tweens and port Apple’s QuickTime volume tween example to QuickTime for Java.
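
To make the tween idea a little more concrete, here's a bare-bones sketch (plain Java, not the QuickTime tween API, and the class and method names are just mine) of the linear interpolation a volume tween performs between two envelope points:

// Not the QuickTime tween API -- just the linear interpolation a volume
// tween performs between two envelope points. Names are for illustration only.
public class VolumeEnvelopeSketch {

    // returns the interpolated volume at time t, given an envelope point
    // (t0, v0) and the next point (t1, v1); volumes run from 0.0 to 1.0
    public static float volumeAt (long t, long t0, float v0, long t1, float v1) {
        if (t <= t0) return v0;
        if (t >= t1) return v1;
        float fraction = (float) (t - t0) / (float) (t1 - t0);
        return v0 + fraction * (v1 - v0);
    }

    public static void main (String[] args) {
        // fade from full volume to silence over 2000 ms, sampled every 500 ms
        for (long t = 0; t <= 2000; t += 500)
            System.out.println (t + " ms -> " + volumeAt (t, 0, 1.0f, 2000, 0.0f));
    }
}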

Similarly, as I moved segments around the tracks, I wondered how these were being managed within their container. After some thinking, I realized that it could all be done with track edits, but the QuickTime API doesn’t seem to provide straightforward access to a track’s edit list (short of touring the atoms yourself, with a copy of the QuickTime File Format documentation by your side). But as part of a consulting gig to help a company that wanted to do some server-side editing, and wanting to prove that managing your movies in memory is a good approach, I finally dug in enough to find a good way to tour the edit list: you call GetTrackNextInterestingTime, with the behavior flag nextTimeTrackEdit to indicate that what you’re looking for are edits.

Here’s a QuickTime for Java routine that exposes all the edits you’ve made in a track, presumably after a number of calls to Track.insertSegment():

// simple routine to count the number of edits in the
// target video track.
private int countTargetEdits () throws QTException {
    if (! targetMovieDirty)
        return 0;
    int edits = 0;
    int currentTime = 0;
    TimeInfo ti;
    // get the first edit (args are flags, start time, and search rate)
    ti = targetVideoTrack.getNextInterestingTime
        (StdQTConstants.nextTimeTrackEdit,
         currentTime,
         1);
    while (ti.time != -1) {
        System.out.println ("adding edit. time = " + ti.time +
                            ", duration = " + ti.duration);
        edits++;
        currentTime = ti.time;

        // get the next edit
        ti = targetVideoTrack.getNextInterestingTime 
            (StdQTConstants.nextTimeTrackEdit, 
             currentTime,
             1);
    }
    return edits;
}

In my QTJ book, I conclude the chapter on editing by showing how to do a low-level edit (i.e., a segment insert), but I really don’t show the point of it, and I think I leave the impression that copy-and-paste is more valuable. But having used the pro apps at a deeper level, I’ve got a greater appreciation for the value of the low-level editing API.
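
Just to show the kind of call I mean, here's a rough sketch of a low-level segment insert in QTJ. I'm assuming the four-int form of Track.insertSegment() here (source in-point, source duration, and destination insert time, all in the movie's time scale), so verify the exact signature against the QTJ javadocs rather than taking my memory for it:

// Sketch only: copy a stretch of one track into another with a low-level
// edit, instead of a movie-level copy-and-paste.
// Track comes from quicktime.std.movies, QTException from quicktime.
private void insertSegmentIntoTarget (Track sourceTrack, Track targetTrack,
                                      int srcIn, int srcDuration, int dstIn)
        throws QTException {
    // pull srcDuration units of sourceTrack, starting at srcIn, and
    // insert them into targetTrack at time dstIn
    sourceTrack.insertSegment (targetTrack, srcIn, srcDuration, dstIn);
}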

And that’s the lesson I take away from this: there’s only so much you can learn from reading others’ documentation. I see this in the Java media world, where so many people write the same things over and over again about using Java Sound to load and play a Clip, and never get into how to handle audio that’s too big to load entirely into memory (because Java Sound makes that really fricking hard, and few people have actually done it). Similarly, people who write about JMF and its lovely state machine of unrealized-realized-prefetched-started are probably working from the docs, and haven’t done enough actual development with JMF to realize that it doesn’t do anything useful.
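
For contrast, here's a minimal sketch of the streaming approach you'd want once the audio is too big for a Clip, assuming an uncompressed PCM file that the default mixer can handle: pull buffers from an AudioInputStream and push them through a SourceDataLine (error handling and format conversion omitted).

import java.io.File;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.SourceDataLine;

public class StreamPlayer {
    public static void main (String[] args) throws Exception {
        // open the file as a stream instead of loading it all into a Clip
        AudioInputStream ais = AudioSystem.getAudioInputStream (new File (args[0]));
        AudioFormat format = ais.getFormat();
        DataLine.Info info = new DataLine.Info (SourceDataLine.class, format);
        SourceDataLine line = (SourceDataLine) AudioSystem.getLine (info);
        line.open (format);
        line.start();

        // pump small buffers from the stream to the line
        byte[] buffer = new byte[format.getFrameSize() * 4096];
        int read;
        while ((read = ais.read (buffer, 0, buffer.length)) != -1)
            line.write (buffer, 0, read);

        line.drain();
        line.close();
        ais.close();
    }
}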

Reading the docs only gets you so far. If you’re going to write serious code, and write about coding, I think it really helps to be a content expert, and that means using the same kinds of apps that you intend to write. It gives you a deep affinity for your customers’ likely needs. And that’s why I’m podcasting again.

Oh, and I just found a nice technique for cross-fading my ambient room noise to hide the audio edits in the parts where I’m talking for like five minutes straight…

