
Archives for: avfoundation

Brain Dump: Capturing from an iOS Device in Wirecast

So, with the book nearly done (currently undergoing copy-editing and indexing), I’m using some of my time to get my livestreaming plans together. What I’m likely to do is give the “build” section of the show over to working through examples from the book, so those will be archived as video lessons. Then, along with the interstitials of conference updates, fun videos from the anime fan community, and a read-through of the Muv-Luv visual novels, I’ll be doing a bunch of Let’s Plays of mostly iOS games.

I did this with the first two test episodes: Tanto Cuore in Test Episode 1 and Love Live! School Idol Project in Test Episode 2. To do this, I need to be able to capture video from an iOS device and ingest it into Wirecast, so I can stream it.
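
As background for the approaches after the jump: one trick that several of them rely on is that, since OS X 10.10 and iOS 8, a Lightning-connected iOS device can show up on the Mac as an ordinary capture device, but only for processes that opt in via CoreMediaIO. If you were writing your own capture code rather than using an app that does this for you, the opt-in is roughly the following sketch (real property constants, but treat the whole thing as an illustration, not part of the Wirecast setup itself):

    import CoreMediaIO

    // Opt this process in to seeing iOS devices as screen-capture sources.
    // Without this, AVCaptureDevice enumeration won't list connected devices.
    var address = CMIOObjectPropertyAddress(
        mSelector: CMIOObjectPropertySelector(kCMIOHardwarePropertyAllowScreenCaptureDevices),
        mScope: CMIOObjectPropertyScope(kCMIOObjectPropertyScopeGlobal),
        mElement: CMIOObjectPropertyElement(kCMIOObjectPropertyElementMaster))
    var allow: UInt32 = 1
    CMIOObjectSetPropertyData(
        CMIOObjectID(kCMIOObjectSystemObject),
        &address, 0, nil,
        UInt32(MemoryLayout<UInt32>.size), &allow)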

Over the years, I’ve used different techniques for this, and decided to take some time today to figure out which works best on Wirecast for Mac. So, after the jump, behold the results of this project, plus instructions on how to configure each approach.

Continue Reading >>

Spring 2017 Conferences

Quick note, before Early Bird pricing ends. I’m speaking at two conferences this Spring.

I’ll be at Forward Swift in San Francisco on March 2. There, I’m doing a talk called “Audio Frameworks and Swift: This Is Fine”. The idea of the talk is to look at how well Swift does and doesn’t work as a language for calling the iOS and Mac audio frameworks. This covers things like how to call the C-based frameworks (Audio Toolbox and the other higher-level parts of Core Audio) from Swift, where you run into some real mismatches between the two languages, and what to do about them. I covered this phenomenon on the blog a while back in Radio on the TV.

My plan is to write an audio reverser app to demo this, as I don’t think there’s a good way to do that in AV Foundation, meaning you’d want to use either Audio Converter Services or Extended Audio Files from Audio Toolbox. Plus, playing music backwards should make for a fun demo.
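
For what it’s worth, here’s the shape I have in mind for the Extended Audio File route, as a rough Swift sketch rather than the actual demo: the function and error names are mine, it cheats by pulling the whole file into memory as float PCM, and it writes a CAF rather than re-encoding.

    import AudioToolbox
    import Foundation

    enum ReverseError: Error { case osStatus(OSStatus) }
    func check(_ status: OSStatus) throws {
        guard status == noErr else { throw ReverseError.osStatus(status) }
    }

    // Read the whole file as interleaved Float32 PCM, flip the frame order,
    // and write out a CAF. Fine for a song; don't try it on a two-hour movie.
    func reverseAudioFile(input inURL: URL, output outURL: URL) throws {
        var maybeIn: ExtAudioFileRef?
        try check(ExtAudioFileOpenURL(inURL as CFURL, &maybeIn))
        guard let inFile = maybeIn else { throw ReverseError.osStatus(-1) }
        defer { ExtAudioFileDispose(inFile) }

        // Discover the source format so we keep its rate and channel count.
        var fileFormat = AudioStreamBasicDescription()
        var size = UInt32(MemoryLayout<AudioStreamBasicDescription>.size)
        try check(ExtAudioFileGetProperty(inFile,
            kExtAudioFileProperty_FileDataFormat, &size, &fileFormat))

        let channels = Int(fileFormat.mChannelsPerFrame)
        let bytesPerFrame = channels * MemoryLayout<Float>.size
        var clientFormat = AudioStreamBasicDescription(
            mSampleRate: fileFormat.mSampleRate,
            mFormatID: kAudioFormatLinearPCM,
            mFormatFlags: kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked,
            mBytesPerPacket: UInt32(bytesPerFrame), mFramesPerPacket: 1,
            mBytesPerFrame: UInt32(bytesPerFrame),
            mChannelsPerFrame: UInt32(channels), mBitsPerChannel: 32,
            mReserved: 0)
        try check(ExtAudioFileSetProperty(inFile,
            kExtAudioFileProperty_ClientDataFormat,
            UInt32(MemoryLayout<AudioStreamBasicDescription>.size), &clientFormat))

        var lengthFrames: Int64 = 0
        size = UInt32(MemoryLayout<Int64>.size)
        try check(ExtAudioFileGetProperty(inFile,
            kExtAudioFileProperty_FileLengthFrames, &size, &lengthFrames))
        let frameCount = Int(lengthFrames)

        var samples = [Float](repeating: 0, count: frameCount * channels)
        var framesRead = 0
        try samples.withUnsafeMutableBytes { raw in
            while framesRead < frameCount {   // reads can come up short; loop
                var frames = UInt32(frameCount - framesRead)
                var abl = AudioBufferList(mNumberBuffers: 1, mBuffers: AudioBuffer(
                    mNumberChannels: UInt32(channels),
                    mDataByteSize: UInt32((frameCount - framesRead) * bytesPerFrame),
                    mData: raw.baseAddress! + framesRead * bytesPerFrame))
                try check(ExtAudioFileRead(inFile, &frames, &abl))
                if frames == 0 { break }      // end of file
                framesRead += Int(frames)
            }
        }

        // The actual "reverse": swap frames end-for-end, keeping each frame's
        // channels together.
        for i in 0 ..< framesRead / 2 {
            let j = framesRead - 1 - i
            for ch in 0 ..< channels {
                samples.swapAt(i * channels + ch, j * channels + ch)
            }
        }

        var maybeOut: ExtAudioFileRef?
        try check(ExtAudioFileCreateWithURL(outURL as CFURL, kAudioFileCAFType,
            &clientFormat, nil, AudioFileFlags.eraseFile.rawValue, &maybeOut))
        guard let outFile = maybeOut else { throw ReverseError.osStatus(-1) }
        defer { ExtAudioFileDispose(outFile) }

        try samples.withUnsafeMutableBytes { raw in
            var abl = AudioBufferList(mNumberBuffers: 1, mBuffers: AudioBuffer(
                mNumberChannels: UInt32(channels),
                mDataByteSize: UInt32(framesRead * bytesPerFrame),
                mData: raw.baseAddress))
            try check(ExtAudioFileWrite(outFile, UInt32(framesRead), &abl))
        }
    }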

I’ll also be covering v3 Audio Units, which specifically prohibit you from using Swift in the “kernel” of your AU, since that’s called on a realtime thread and there are all sorts of ways that Swift is not quite ready for that kind of use yet, even though it’s billed as a systems programming language. I’ll try to make this talk more about the language — what it can and can’t/shouldn’t do, what it’s good and bad at — than the frameworks, to try to make it more approachable. I don’t want this to be a draw only for the people who’ve read the Core Audio book and happen to be in SF that week (if I wanted that, we could just get a table at Super Duper and chat over burgers and beer).

Forward Swift early bird registration ends tomorrow, so hop on it if you’re so inclined.

I’ll be doing this talk again at CocoaConf Chicago on April 21-22, along with the Firebase talk I did at CocoaConfs DC and San Jose last Fall.

CocoaConf’s early bird ends on February 25.

Hope to see you at one or both of these.

Entropy

Confession: I have no idea whether the code examples from Learning Core Audio work on El Capitan and iOS 9. Maybe? Probably most of them? But I’m in a really conflicted state with where that book is.

The book came out in early 2012, which now makes it about four years old. It took about two years, off and on, to write (2010 and 2011), with a big push to wrap it up at the end of 2011, because our editor was leaving Pearson to go to Apple. Looking at my mail history, I was approached about replacing Mike Lee on the book in late 2009, so the small amount of material that he and Kevin Avila wrote probably dates back to earlier in that year.

The point of all this being: the book is old now. The stated system requirements are Xcode 4.2, Lion (Mac OS X 10.7), and iOS 5. The examples in the first few chapters that use Foundation instead of Core Foundation actually use manual retain/release and NSAutoreleasePool, because the book largely pre-dates ARC (we did finally ARC-ify those examples in the April 2014 update to the downloadable sample code, at the cost of no longer matching the written material in the book).

So now what?

Continue Reading >>

AV WWDC, part 2: Fair Is Pretty Foul

Next up on our tour of WWDC 2015 media sessions is the innocently-titled Content Protection for HTTP Live Streaming. Sounds harmless, but I think there’s reason for worry.

For content protection, HLS has always had a story: transport segments get one-time AES encryption and can be served from a dumb HTTP server (at CocoaConf a few years back, I demoed serving HLS from Dropbox, before it went https-always). You’re responsible for guarding the keys and delivering them only to authenticated users. AV Foundation can fetch the keys, decrypt the segments, and play them, with no client-side effort beyond handling the authentication. It’s a neat system because it’s easy to deploy on content delivery networks: you’re largely just dropping off a bunch of flat files, and the part you protect on your own server is tiny.
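
To make that concrete, a protected playlist looks something like the following hypothetical sketch (the hostnames, IV, and paths are made up). Everything here can live on a dumb CDN except the key URI, which is the one part you serve yourself, behind authentication:

    #EXTM3U
    #EXT-X-VERSION:3
    #EXT-X-TARGETDURATION:10
    #EXT-X-MEDIA-SEQUENCE:0
    #EXT-X-KEY:METHOD=AES-128,URI="https://keys.example.com/asset42/key1",IV=0x1a2b3c4d5e6f708192a3b4c5d6e7f801
    #EXTINF:10.0,
    seg00000.ts
    #EXTINF:10.0,
    seg00001.ts
    #EXT-X-ENDLIST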

So what’s “FairPlay Streaming”, then?

Continue Reading >>

AV WWDC, part 1: Hot Dog… The AVMovie

I attended WWDC for the first time since 2011, thanks largely to the fact that working for Rev means I need to go out to the office in San Francisco every 6 weeks anyways, so why not make it that week and put my name in the ticket lottery. I probably won’t make a habit of returning to WWDC, and the short supply of tickets makes that a given anyways, but it was nice to be back just this once.

Being there for work, my first priority was making use of unique-to-attendee resources, like the one-on-one UI design reviews and the developers in the labs. The latter can be hit-or-miss based on your problem… we didn’t get any silver bullet for our graphics code, but scored a crucial answer in Core Audio. We’ve found we have to fall back to the software encoder because the hardware encoder (kAppleHardwareAudioCodecManufacturer) would cause ExtAudioFileWrite() to sometimes fail with OSStatus -66570 (kExtAudioFileError_AsyncWriteBufferOverflow). So I asked about that and was told “oh yeah, we don’t support hardware encoding anymore… the new devices don’t need it and the property is just ignored”. I Slacked this to my boss and his reaction was “would be nice if that were in the documentation!” True enough, but at least that’s one wall we can stop banging our head against.
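
For anyone hitting the same wall, the fallback amounts to a one-property change. A sketch (the helper name is mine, and per the headers this should be set before the client data format):

    import AudioToolbox

    // Steer an ExtAudioFile's encoder to Apple's software codec instead of
    // the (now-ignored) hardware one that was failing for us with -66570.
    func useSoftwareEncoder(_ file: ExtAudioFileRef) -> OSStatus {
        var manufacturer: UInt32 = kAppleSoftwareAudioCodecManufacturer
        return ExtAudioFileSetProperty(file,
            kExtAudioFileProperty_CodecManufacturer,
            UInt32(MemoryLayout<UInt32>.size),
            &manufacturer)
    }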

Speaking of media, now that everyone’s had their fill of “Crusty” and the Protocol-Oriented Programming session, I’m going to post a few blogs about media-related sessions.

Continue Reading >>

AVMutableCocoaConfPresentationInstruction

I’m speaking at three of the five CocoaConfs for early 2014, teaching an all-day AV Foundation Film School class and a regular session on Stupid Video Tricks, which is also all about AV Foundation. (In DC, I also reprised Get on the Audiobus to fill in for another speaker).

UPDATE: I’m also going to do “Stupid Video Tricks” at next week’s Ann Arbor CocoaHeads.

I first taught the class in Chicago, and then added one more project for DC and San Jose based on how the timing worked out. To speed things up, I created starter projects that dealt with all the storyboard connections and drudge-work, leaving big holes in the code that say // TODO: WRITE IN CLASS for the stuff we do as a code-along. The class projects are:

  1. Play back a video file from a URL (a minimal sketch of this one follows the list)
  2. Capture into a video file (and play back in another tab, with the code from 1)
  3. Edit together clips and export as a new .m4v file, first as a cuts-only edit (easy), and then with cross-dissolves (quite painful, and clearly marked as an hour of outright drudgery)
  4. Process video frames at capture time with Core Image
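
If you’re curious what the first project boils down to, here’s a minimal Swift sketch of the same idea (the URL and class name are placeholders; the real starter projects handle this setup through the storyboard):

    import UIKit
    import AVFoundation

    // Minimal sketch: point an AVPlayer at a URL and show it with an
    // AVPlayerLayer.
    class PlaybackViewController: UIViewController {
        private let player = AVPlayer(
            url: URL(string: "https://example.com/movie.m4v")!) // hypothetical
        private lazy var playerLayer = AVPlayerLayer(player: player)

        override func viewDidLoad() {
            super.viewDidLoad()
            view.layer.addSublayer(playerLayer)
        }

        override func viewDidLayoutSubviews() {
            super.viewDidLayoutSubviews()
            playerLayer.frame = view.bounds   // keep the video filling the view
        }

        override func viewDidAppear(_ animated: Bool) {
            super.viewDidAppear(animated)
            player.play()
        }
    }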

Continue Reading >>

DVDivvy

So a while back, you might remember me bitching about AV Foundation, and presenting, as my use-case for where the AVF-based QuickTime Player X comes up lacking, the technique by which I pull individual episode files out of DVD rips that produce a single two-hour title.

After my epic bitch-fest, I wrote:

But I also have a list on my whiteboard of fun projects I’m saving up to do as livestream code-alongs someday, and one is an AV Foundation based episode-splitter that would replace my cut-copy-paste routine from way above. Because really, it would be pretty simple to write an app that just lets me razor-slice the big file at each episode break, and then mass export them into separate files using some programmatic file-numbering system.

So, since writing that, I’ve allowed new anime DVDs to pile up without using my old QuickTime copy-and-paste technique, because I’ve wanted to actually write this app. Which means that True Tears and Lupin the Third: The Woman Called Fujiko Mine are sitting unwatched on the shelf, because they’re just DVDs and not .m4v‘s compatible with iPad and Apple TV. Not cool! I need my tragic schoolgirls and super sexy thieves!

So, on and off over the last week, I wrote DVDivvy.
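
The heart of the razor-slice-and-export idea is small enough to sketch here (this is not DVDivvy’s actual code; the names and the passthrough-preset choice are my own assumptions):

    import AVFoundation

    // Slice one episode out of a long asset without re-encoding, via a
    // passthrough export over a time range.
    func exportEpisode(from asset: AVAsset,
                       start: CMTime, duration: CMTime,
                       to outputURL: URL,
                       completion: @escaping (Bool) -> Void) {
        guard let session = AVAssetExportSession(
            asset: asset, presetName: AVAssetExportPresetPassthrough) else {
            completion(false)
            return
        }
        session.outputURL = outputURL
        session.outputFileType = .m4v
        session.timeRange = CMTimeRange(start: start, duration: duration)
        session.exportAsynchronously {
            completion(session.status == .completed)
        }
    }

Call something like that in a loop over the episode-break times, numbering the output files as you go, and you have the mass export.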

[Screenshot: DVDivvy splitting up a ripped title]

Continue Reading >>

CocoaConf Tour (Late 2013) and Core Audio video

A couple speaky/selly things real quick…

As mentioned in earlier posts, I’m speaking at all four of the upcoming CocoaConfs. I’m reprising my all-day tutorials:

  • iPad Productivity (UIDocument, autosave, iCloud, PDF/printing, inter-app doc exchange) in Portland (August) and Columbus (September)
  • Core Audio in Boston (October) and Atlanta (November)

I’m also doing two regular hour-long sessions, on Audiobus and A/V encoding. For Audiobus, feel free to abandon any angst that this much-loved third-party tool for inter-application audio will be obsoleted and abandoned by Apple’s announced introduction of an inter-app audio framework in iOS 7. The Audiobus team has announced that Audiobus will adopt Apple’s new APIs when running under iOS 7, meaning you’ll get compatibility both with Audiobus-enabled apps and with those that use Apple’s new APIs. So it’s still well worth learning about if you’re into audio; I’m working on some demo code to show it off. I’m thinking I might bring back the Dalek ring modulator code from 360iDev a few years back and wrap it as an Audiobus effect. (Hi Janie!)
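
If you’ve never met a ring modulator: the whole effect is multiplying the input signal by a low-frequency carrier, famously a sine around 30 Hz for the Dalek voice. A sketch of the idea, offline and array-in/array-out rather than as a realtime Audiobus filter (the original demo’s exact settings may differ):

    import Foundation

    // Multiply each input sample by a sine carrier. A ~30 Hz carrier is the
    // classic Dalek setting.
    func ringModulate(_ input: [Float], sampleRate: Float,
                      carrierHz: Float = 30) -> [Float] {
        var phase: Float = 0
        let increment = 2 * Float.pi * carrierHz / sampleRate
        return input.map { sample in
            let out = sample * sin(phase)
            phase += increment
            if phase > 2 * Float.pi { phase -= 2 * Float.pi } // keep phase bounded
            return out
        }
    }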

Continue Reading >>

AV Foundation and the void

Yesterday I streamed some WWDC sessions while driving to meet with a client. At a stop, I posted this pissy little tweet:

[embedded tweet]

It got enough quizzical replies (and a couple favorites), I figured I should elaborate as best I can, while staying away from all things NDA.

Part of what I’m reacting to comes from a habit of mine of deliberately seeking the unseen, which I picked up either from Musashi’s Book of Five Rings or from Bastiat’s essay Ce qu’on voit et ce qu’on ne voit pas (“What Is Seen and What Is Unseen”), because of course with me it’s going to be either samurai or economics, right? Anyways, the idea is to seek truth not in what you encounter, but in what is obvious by its absence. It’s something I try to do when editing: don’t focus only on what’s there in the document; also figure out whether anything should be there, and isn’t.

And when I look at AV Foundation on iOS and especially on OS X, I feel like there are a lot of things missing.

Continue Reading >>

2013 CocoaConf Tutorial Survey

Earlier this year, I wondered aloud about the habit tech conferences have of offering a beginner-oriented all-day tutorial, and whether it would make sense to also have something for intermediate-to-advanced developers. Among the benefits of this approach would be preventing a bifurcation of the attendees, where the beginners have started to get to know each other after the first day, when suddenly “everyone else” arrives (and they all know one another anyways). Plus, it’s good for the conference and the hotel to sell an extra night of rooms. So, in that spirit, we did an all-day Core Audio tutorial at CocoaConf in Columbus, Portland, and Raleigh.

The first two were well attended, particularly Portland, where one attendee had come all the way from Denmark. Raleigh was much smaller, possibly for any or all of the following reasons:

  • Overall turnout for CocoaConf Raleigh was lower than the other 2012 CocoaConfs
  • Competition for advanced developers from Bill Dudney’s Core Graphics all-day tutorial
  • Exhaustion of the small pool of developers seriously interested in Core Audio development

If it’s mostly the first two, then it may be worth doing Core Audio again in the early 2013 CocoaConfs, for the benefit of those who missed it this time. On the other hand, if the demand has already been sated, then maybe it’s time to put the Core Audio tutorial away and try something else.

But what else? As an iOS media programming guy, AV Foundation seems like an obvious choice. The downside is that I haven’t used AVF in anger (i.e., for a paying client), so my depth of knowledge is basically what I’ve learned in order to do sessions on the topic and write some experimental code of my own. I got dinged on a CocoaConf Raleigh feedback form in 2011 for sounding like I was mostly repeating the developer documentation, and that’s not entirely unfair (I hate the thought of just repeating the official line on a technology… why bother?). What I do bring to AVF is an understanding of the problem domain, expertise in encoding and production, and a deep knowledge of the QuickTime concepts that have carried over to AVF. In fact, I think what I don’t like about AVF are the places where it’s more limiting and less imaginative than the wild-and-woolly QuickTime. Put another way, void*’s actually make me happy in a media API, because they’re placeholders for future functionality, and AVF doesn’t have as many of them.

What else could I pitch? The client work I’ve done for the last two years instead of AVF is all about iPad productivity, something I’ve wanted to push in 2013, as I think we’ve lost the thread of “iPad as creative device” a little bit. So: documents, files, inter-app communication, copy/paste, undo/redo… good, useful stuff, though none of it is necessarily stuff you couldn’t figure out yourself (i.e., it’s not a problem domain like media, where there’s knowledge you have to master outside of the APIs).

And of course, I’m on this livestreaming kick now, so maybe an all-day tutorial on livestream production, which would be part programming, part video production tutorial: competent lighting and sound, compression and bandwidth concerns, production software, server-side strategies, business concerns, client playback APIs, etc. About half of it would be like my grad school days, TA’ing the introductory video production class. So that would be very different and probably very nichey, but also probably crazy fun.

So those are the ideas I’m kicking around now. What I need is some idea of which of these — or something else — would sell enough seats to be worth my time and CocoaConf’s. What would the readership of this blog and Twitter/ADN feed be interested in? Let’s find out:

[Embedded survey form: https://docs.google.com/spreadsheet/embeddedform?formkey=dHdrMDZ3Ri15cHVzSHBmMkJmdWQ2U3c6MQ]