Archives for: July 2010

From iPhone Media Library to PCM Samples in Dozens of Confounding, Potentially Lossy Steps

iPhone SDK 3.0 provided limited access to the iPod Music Library on the device, allowing third-party apps to search for songs (and podcasts and audiobooks, but not video), inspect the metadata, and play items, either independently or in concert with the built-in media player application. But it didn’t provide any form of write access — you couldn’t add items or playlists, or alter metadata, from a third-party app. And it didn’t allow third-party apps to do anything with the songs except play them… you couldn’t access the files, convert them to another format, run any kind of analysis on the samples, and so on.

So a lot of us were surprised by the WWDC keynote when iMovie for iPhone 4 was shown importing a song from the iPod library for use in a user-made video. We were even more surprised by the subsequent claim that everything in iMovie for iPhone 4 was possible with public APIs. Frankly, I was ready to call bullshit on it because of the iPod Library issue, but was intrigued by the possibility that maybe you could get at the iPod songs in iOS 4. A tweet from @ibeatmaker confirmed that it was possible, and after some clarification, I found what I needed.

About this time, a thread started on coreaudio-api about whether Core Audio could access iPod songs, so that’s what I set out to prove one way or another. My goal: determine whether or not you could get raw PCM samples from songs in the device’s music library.

The quick answer is: yes. The interesting answer is: it’s a bitch, using three different frameworks, coding idioms that are all over the map, a lot of file-copying and possibly some expensive conversions.

It’s Just One Property; It Can’t Be That Hard

The big secret of how to get to the Music Library isn’t much of a secret. As you might expect, it’s in the Media Player framework (MediaPlayer.framework) that you use to interact with the library. Each song/podcast/audiobook is an MPMediaItem, and has a number of interesting properties, most of which are user-managed metadata. In iOS 4, there’s a sparkling new addition to the list of “General Media Item Property Keys”: MPMediaItemPropertyAssetURL. Here’s what the docs say:

A URL pointing to the media item, from which an AVAsset object (or other URL-based AV Foundation object) can be created, with any options as desired. Value is an NSURL object.

The URL has the custom scheme of ipod-library. For example, a URL might look like this:

ipod-library://item/item.m4a?id=12345
OK, so we’re off and running. All we need to do is to pick an MPMediaItem, get this property as an NSURL, and we win.

Or not. There’s an important caveat:

Usage of the URL outside of the AV Foundation framework is not supported.

OK, so that’s probably going to suck. But let’s get started anyways. I wrote a throwaway app to experiment with all this stuff, adding to it piece by piece as stuff started working. I’m posting it here for anyone who wants to reuse my code… all my classes are marked as public domain, so copy-and-paste as you see fit.

Note that this code must be run on an iOS 4 device and cannot be run in the Simulator, which doesn’t support the Media Library APIs.

The app just starts with a “Choose Song” button. When you tap it, it brings up an MPMediaPickerController as a modal view to make you choose a song. When you do so, the -mediaPicker:didPickMediaItems: delegate method gets called. At this point, you could get the first MPMediaItem and get its MPMediaItemPropertyAssetURL media item property. I’d hoped that I could just call this directly from Core Audio, so I wrote a function to test if a URL can be opened by CA:

BOOL coreAudioCanOpenURL (NSURL* url) {
	OSStatus openErr = noErr;
	AudioFileID audioFile = NULL;
	openErr = AudioFileOpenURL((CFURLRef) url,
		 kAudioFileReadPermission,
		 0,
		 &audioFile);
	if (audioFile) {
		AudioFileClose (audioFile);
	}
	return openErr ? NO : YES;
}

Getting a NO back from this function more or less confirms the caveat from the docs: the URL is only for use with the AV Foundation framework.

AV for Vendetta

OK, so plan B: we open it with AV Foundation and see what that gives us.

AV Foundation — setting aside the simple player and recorder classes from 3.0 — is a strange and ferocious beast of a framework. It borrows from QuickTime and QTKit (the capture classes have an almost one-to-one correspondence with their QTKit equivalents), but builds on some new metaphors and concepts that will take the community a while to digest. For editing, it has a concept of a composition, which is made up of tracks, which you can create from assets. This is somewhat analogous to QuickTime’s model that “movies have tracks, which have media”, except that AVFoundation’s compositions are themselves assets. Actually, reading too much QuickTime into AV Foundation is a good way to get in trouble and get disappointed; QuickTime’s most useful functions, like AddMediaSample() and GetMediaNextInterestingTime() are antithetical to AV Foundation’s restrictive design (more on that in a later blog) and therefore don’t exist.

Back to the task at hand. The only thing we can do with the media library URL is to open it in AVFoundation and hope we can do something interesting with it. The way to do this is with an AVURLAsset.

NSURL *assetURL = [song valueForProperty:MPMediaItemPropertyAssetURL];
AVURLAsset *songAsset = [AVURLAsset URLAssetWithURL:assetURL options:nil];

If this were QuickTime, we’d have an object that we could inspect the samples of. But in AV Foundation, the only sample-level access afforded is a capture-time opportunity to get called back with video frames. There’s apparently no way to get to video frames in a file-based asset (except for a thumbnail-generating method that operates on one-second granularity), and no means of directly accessing audio samples at all.

What we can do is to export this URL to a file in our app’s documents directory, hopefully in a format that Core Audio can open. AV Foundation’s AVAssetExportSession has a class method exportPresetsCompatibleWithAsset: that reveals what kinds of formats we can export to. Since we’re going to burn the time and CPU of doing an export, it would be nice to be able to convert the compressed song into PCM in some kind of useful container like a .caf, or at least an .aif. But here’s what we actually get as options:

compatible presets for songAsset: (
    AVAssetExportPresetLowQuality,
    AVAssetExportPresetHighestQuality,
    AVAssetExportPreset640x480,
    AVAssetExportPresetMediumQuality,
    AVAssetExportPresetAppleM4A
)
So, no… there’s no “output to CAF”. In fact, we can’t even use AVAssetExportPresetPassthrough to preserve the encoding from the music library: we either have to convert to an AAC (in an .m4a container), or to a QuickTime movie (represented by all the presets ending in “Quality”, as well as the “640×480”).

This Deal is Getting Worse All the Time!

So, we have to export to AAC. That’s not entirely bad, since Core Audio should be able to read AAC in an .m4a container just fine. But it sucks in that it will be a lossy conversion from the source, which could be MP3, Apple Lossless, or some other encoding.

In my GUI, an “export” button appears when you pick a song, and the export is kicked off in the event-handler handleExportTapped. Here’s the UI in mid-export:

MediaLibraryExportThrowaway1 UI in mid-export

To do the export, we create an AVAssetExportSession and provide it with an outputFileType and outputURL.

AVAssetExportSession *exporter = [[AVAssetExportSession alloc]
		initWithAsset: songAsset
		presetName: AVAssetExportPresetAppleM4A];
NSLog (@"created exporter. supportedFileTypes: %@", exporter.supportedFileTypes);
exporter.outputFileType = @"com.apple.m4a-audio";
NSString *exportFile = [myDocumentsDirectory()
		stringByAppendingPathComponent: @"exported.m4a"];
// export fails if the target file already exists
myDeleteFile (exportFile);
[exportURL release];
exportURL = [[NSURL fileURLWithPath:exportFile] retain];
exporter.outputURL = exportURL;

A few notes here. The docs say that if you set the outputURL without setting outputFileType, the exporter will make a guess based on the file extension. In my experience, the exporter prefers to just throw an exception and die, so set the damn type already. You can get a list of possible values from the exporter’s supportedFileTypes property, as logged above; the only supported value for the AAC export is @"com.apple.m4a-audio". Also note the call to a myDeleteFile() function; the export will fail if the target file already exists.

Aside: I did experiment with exporting as a QuickTime movie rather than an .m4a; the code is in the download, commented out. Practical upshot is that it sucks: if your song isn’t AAC, then it gets converted to mono AAC at 44.1 KHz. It’s also worth noting that AV Foundation doesn’t give you any means of setting export parameters (bit depths, sample rates, etc.) other than using the presets. If you’re used to the power of frameworks like Core Audio or the old QuickTime, this is a bitter, bitter pill to swallow.

Block Head

The code gets really interesting when you kick off the export. You would probably expect the export, a long-lasting operation, to be nice and asynchronous. And it is. You might also expect to register a delegate to get asynchronous callbacks as the export progresses. Not so fast, Bucky. As a new framework, AV Foundation adopts Apple’s latest technologies, and that includes blocks. When you export, you provide a completion handler: a no-argument block that the exporter calls when the export finishes, fails, or is cancelled.

Here’s what mine looks like.

// do the export
[exporter exportAsynchronouslyWithCompletionHandler:^{
	int exportStatus = exporter.status;
	switch (exportStatus) {
		case AVAssetExportSessionStatusFailed: {
			// log error to text view
			NSError *exportError = exporter.error;
			NSLog (@"AVAssetExportSessionStatusFailed: %@",
				exportError);
			errorView.text = exportError ?
				[exportError description] : @"Unknown failure";
			errorView.hidden = NO;
			break;
		}
		case AVAssetExportSessionStatusCompleted: {
			NSLog (@"AVAssetExportSessionStatusCompleted");
			fileNameLabel.text =
				[exporter.outputURL lastPathComponent];
			// set up AVPlayer
			[self setUpAVPlayerForURL: exporter.outputURL];
			[self enablePCMConversionIfCoreAudioCanOpenURL:
				exporter.outputURL];
			break;
		}
		case AVAssetExportSessionStatusUnknown: {
			NSLog (@"AVAssetExportSessionStatusUnknown"); break;}
		case AVAssetExportSessionStatusExporting: {
			NSLog (@"AVAssetExportSessionStatusExporting"); break;}
		case AVAssetExportSessionStatusCancelled: {
			NSLog (@"AVAssetExportSessionStatusCancelled"); break;}
		case AVAssetExportSessionStatusWaiting: {
			NSLog (@"AVAssetExportSessionStatusWaiting"); break;}
		default: { NSLog (@"didn't get export status"); break;}
	}
}];
This kicks off the export, passing in a block with code to handle all the possible status values. The completion handler doesn’t have to take any arguments (nor do we have to set up a “user info” object for the exporter to pass to it), because a block captures anything in its enclosing scope. That means the exporter doesn’t need to be passed in as a parameter: it’s a local variable that the block can access directly and whose state it can inspect via its properties.

The two messages I handle in my block are AVAssetExportSessionStatusFailed, which dumps the error to a previously-invisible text view, and AVAssetExportSessionStatusCompleted, which sets up an AVPlayer to play the exported audio, which we’ll get to later.

After starting the export, my code runs an NSTimer to fill a UIProgressView. Since the exporter has a progress property that returns a float, it’s pretty straightforward… check the code if you haven’t already done this a bunch of times. Files that are already AAC export almost immediately, while MP3s and Apple Lossless (ALAC) take a minute or more. Files in the old .m4p format, from back when the iTunes Store put DRM on all the songs, fail with an error.

The Invasion of Time

Kind of as a lark, I added a little GUI to let you play the exported file. AVPlayer was the obvious choice for this, since it should be able to play whatever kind of file you export (.m4a, .mov, whatever).

This brings up the whole issue of how to deal with the representation of time in AV Foundation, which turns out to be great for everyone who ever used the old C QuickTime API (or possibly QuickTime for Java), and all kinds of hell for everyone else.

AV Foundation uses Core Media’s CMTime struct for representing time. In turn, CMTime uses QuickTime’s brilliant but tricky concept of time scales. The idea, in a nutshell, is that your units of measurement for any particular piece of media are variable: pick one that suits the media’s own timing needs. For example, CD audio is 44.1 KHz, so it makes sense to measure time in 1/44100 second intervals. In a CMTime, you’d set the timescale to 44100, and then a given value would represent some number of these units: a single sample would have a value of 1 and would represent 1/44100 of a second, exactly as desired.

I find it’s easier to think of Core Media (and QuickTime) timescales as representing “nths of a second”. One of the clever things you can do is to choose a timescale that suits a lot of different kinds of media. In QuickTime, the default timescale is 600, as this is a common multiple of many important frame-rates: 24 fps for film, 25 fps for PAL (European) TV, 30 fps for NTSC (North America and Japan) TV, etc. Any number of frames in these systems can be evenly and exactly represented with a combination of value and timescale.
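To make the idea concrete, here’s a toy plain-C sketch of the value/timescale pair. The names (ToyTime, frameDuration) are mine, for illustration only; the real type is Core Media’s CMTime.

```c
#include <assert.h>
#include <stdint.h>

/* Toy stand-in for Core Media's value/timescale pair. */
typedef struct {
    int64_t value;     /* how many units */
    int32_t timescale; /* units per second ("nths of a second") */
} ToyTime;

/* Duration of one frame at `fps`, expressed in the given timescale.
   Only exact when the timescale is a multiple of fps -- which is the
   whole point of picking 600. */
static ToyTime frameDuration (int32_t fps, int32_t timescale) {
    ToyTime t;
    t.value = timescale / fps;
    t.timescale = timescale;
    return t;
}
```

With timescale 600, one frame of 24 fps film is exactly 25 units, one PAL frame (25 fps) is 24 units, and one NTSC-ish frame (30 fps) is 20 units; at timescale 44100, one CD-audio sample is exactly 1 unit.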

Where it gets tricky is when you need to work with values measured in different timescales. This comes up in AV Foundation, as your player may use a different timescale than the items it’s playing. It’s pretty easy to write out the current time label:

CMTime currentTime = player.currentTime;
UInt64 currentTimeSec = currentTime.value / currentTime.timescale;
UInt32 minutes = currentTimeSec / 60;
UInt32 seconds = currentTimeSec % 60;
playbackTimeLabel.text = [NSString stringWithFormat:
		@"%02d:%02d", minutes, seconds];

But it’s hard to update the slider position, since the AVPlayer and the AVPlayerItem it’s playing can (and do) use different time scales. Enjoy the math.

if (player && !userIsScrubbing) {
	CMTime currentTime = player.currentTime;
	CMTime endTime = CMTimeConvertScale (player.currentItem.asset.duration,
			currentTime.timescale,
			kCMTimeRoundingMethod_RoundHalfAwayFromZero);
	if (endTime.value != 0) {
		double slideTime = (double) currentTime.value /
				(double) endTime.value;
		playbackSlider.value = slideTime;
	}
}
Basically, the key here is that I need to get the duration of the item being played, but to express that in the time scale of the player, so I can do math on them. That gets done with the CMTimeConvertScale() call. Looks simple here, but if you don’t know that you might need to do a timescale-conversion, your math will be screwy for all sorts of reasons that do not make sense.
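Under the hood, the conversion boils down to integer rescaling with rounding. Here’s my own toy plain-C version of the “round half away from zero” case, purely to show the arithmetic; real code should call CMTimeConvertScale().

```c
#include <assert.h>
#include <stdint.h>

/* Re-express `value` (counted in 1/fromScale-second units) in
   1/toScale-second units, rounding half away from zero. Toy sketch of
   what CMTimeConvertScale() does with
   kCMTimeRoundingMethod_RoundHalfAwayFromZero. */
static int64_t rescaleTime (int64_t value, int32_t fromScale, int32_t toScale) {
    int64_t numerator = value * (int64_t) toScale;
    int64_t half = fromScale / 2;
    /* integer division truncates toward zero, so bias by half a unit
       in the direction of the result before dividing */
    if (numerator >= 0) {
        return (numerator + half) / fromScale;
    }
    return (numerator - half) / fromScale;
}
```

For example, a 30-second duration in an audio timescale (1,323,000 units at 44100) becomes 18,000 units at timescale 600, and a single 1/600 unit becomes 74 units at 44100 (73.5, rounded away from zero).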

Oh, you can drag the slider too, which means doing the same math in reverse.

-(IBAction) handleSliderValueChanged {
	CMTime seekTime = player.currentItem.asset.duration;
	seekTime.value = seekTime.value * playbackSlider.value;
	seekTime = CMTimeConvertScale (seekTime, player.currentTime.timescale,
			kCMTimeRoundingMethod_RoundHalfAwayFromZero);
	[player seekToTime:seekTime];
}
One other fun thing about all this that I just remembered from looking through my code. The time label and slider updates are called from an NSTimer. I set up the AVPlayer in the completion handler block that’s called by the exporter. This call seems not to be on the main thread, as my update timer didn’t work until I forced its creation over to the main thread with performSelectorOnMainThread:withObject:waitUntilDone:. Good times.

Final Steps

Granted, all this AVPlayer stuff is a distraction. The original goal was to get from iPod Music Library to decompressed PCM samples. We used an AVAssetExportSession to produce an .m4a file in our app’s Documents directory, something that Core Audio should be able to open. The remaining conversion is a straightforward use of CA’s Extended Audio File Services: we open an ExtAudioFileRef on the input .m4a, set a “client format” property representing the PCM format we want it to convert to, read data into a buffer, and write that data back out to a plain AudioFileID. It’s C, so the code is long, but hopefully not too hard on the eyes:

-(IBAction) handleConvertToPCMTapped {
	NSLog (@"handleConvertToPCMTapped");
	// open an ExtAudioFile
	NSLog (@"opening %@", exportURL);
	ExtAudioFileRef inputFile;
	CheckResult (ExtAudioFileOpenURL((CFURLRef)exportURL, &inputFile),
				 "ExtAudioFileOpenURL failed");
	// prepare to convert to a plain ol' PCM format
	AudioStreamBasicDescription myPCMFormat;
	myPCMFormat.mSampleRate = 44100; // todo: or use source rate?
	myPCMFormat.mFormatID = kAudioFormatLinearPCM;
	myPCMFormat.mFormatFlags = kAudioFormatFlagsCanonical;
	myPCMFormat.mChannelsPerFrame = 2;
	myPCMFormat.mFramesPerPacket = 1;
	myPCMFormat.mBitsPerChannel = 16;
	myPCMFormat.mBytesPerPacket = 4;
	myPCMFormat.mBytesPerFrame = 4;
	CheckResult (ExtAudioFileSetProperty(inputFile,
			kExtAudioFileProperty_ClientDataFormat,
			sizeof (myPCMFormat), &myPCMFormat),
		  "ExtAudioFileSetProperty failed");

	// allocate a big buffer. size can be arbitrary for ExtAudioFile.
	// you have 64 KB to spare, right?
	UInt32 outputBufferSize = 0x10000;
	void* ioBuf = malloc (outputBufferSize);
	UInt32 sizePerPacket = myPCMFormat.mBytesPerPacket;
	UInt32 packetsPerBuffer = outputBufferSize / sizePerPacket;
	// set up output file
	NSString *outputPath = [myDocumentsDirectory()
			stringByAppendingPathComponent: @"export-pcm.caf"];
	NSURL *outputURL = [NSURL fileURLWithPath:outputPath];
	NSLog (@"creating output file %@", outputURL);
	AudioFileID outputFile;
	CheckResult (AudioFileCreateWithURL((CFURLRef)outputURL,
			kAudioFileCAFType,
			&myPCMFormat,
			kAudioFileFlags_EraseFile,
			&outputFile),
		  "AudioFileCreateWithURL failed");
	// start convertin'
	UInt32 outputFilePacketPosition = 0; //in bytes
	while (true) {
		// wrap the destination buffer in an AudioBufferList
		AudioBufferList convertedData;
		convertedData.mNumberBuffers = 1;
		convertedData.mBuffers[0].mNumberChannels = myPCMFormat.mChannelsPerFrame;
		convertedData.mBuffers[0].mDataByteSize = outputBufferSize;
		convertedData.mBuffers[0].mData = ioBuf;

		UInt32 frameCount = packetsPerBuffer;

		// read from the extaudiofile
		CheckResult (ExtAudioFileRead(inputFile,
				&frameCount,
				&convertedData),
			 "Couldn't read from input file");
		if (frameCount == 0) {
			printf ("done reading from file\n");
			break;
		}
		// write the converted data to the output file
		CheckResult (AudioFileWritePackets(outputFile,
			   false,
			   frameCount * myPCMFormat.mBytesPerPacket,
			   NULL,
			   outputFilePacketPosition / myPCMFormat.mBytesPerPacket,
			   &frameCount,
			   convertedData.mBuffers[0].mData),
			 "Couldn't write packets to file");
		NSLog (@"Converted %u bytes", outputFilePacketPosition);

		// advance the output file write location
		outputFilePacketPosition +=
			(frameCount * myPCMFormat.mBytesPerPacket);
	}
	// clean up
	ExtAudioFileDispose (inputFile);
	AudioFileClose (outputFile);
	free (ioBuf);

	// GUI update omitted
}
Note that this uses a CheckResult() convenience function that Kevin Avila wrote for our upcoming Core Audio book… it just looks to see if the return value is noErr and tries to convert it to a readable four-char-code if it seems amenable. It’s in the example file too.
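The four-char-code part of that trick is easy to sketch in plain C: an OSStatus is just a signed 32-bit integer, and many Core Audio errors pack four ASCII bytes into it (kAudioFileUnsupportedFileTypeError, for instance, is 'typ?'). This is my own minimal version of the idea, not the book’s CheckResult():

```c
#include <assert.h>
#include <ctype.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Render a status code as a quoted four-char code when all four bytes
   are printable ASCII, otherwise as a plain decimal number. */
static void statusToString (int32_t status, char *buf, size_t bufLen) {
    unsigned char c[4];
    c[0] = (status >> 24) & 0xFF;
    c[1] = (status >> 16) & 0xFF;
    c[2] = (status >>  8) & 0xFF;
    c[3] =  status        & 0xFF;
    if (isprint (c[0]) && isprint (c[1]) && isprint (c[2]) && isprint (c[3])) {
        snprintf (buf, bufLen, "'%c%c%c%c'", c[0], c[1], c[2], c[3]);
    } else {
        snprintf (buf, bufLen, "%d", (int) status);
    }
}
```

So 0x7479703F renders as 'typ?', while a code with non-printable bytes, like -50 (the classic paramErr), falls back to its decimal value.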

Is It Soup Yet?

Does all this work? Rather than inspecting the AudioStreamBasicDescription of the resulting file, let’s do something more concrete. With Xcode’s “Organizer”, you can access your app’s sandbox on the device. So we can just drag the Application Data to the Desktop.

In the resulting folder, open the Documents folder to find export-pcm.caf. Drag it to QuickTime Player to verify that you do, indeed, have PCM data.

So there you have it. In several hundred lines of code, we’re able to get a song from the iPod Music Library, export it into our app’s Documents directory, and convert it to PCM. With the raw samples, you could now draw an audio waveform view (something you’d think would be essential for video editors who want to match video to beats in the music, but which Apple seems dead-set against letting us do with AV Foundation or QTKit), you could perform analysis or effects on the audio, you could bring it into a Core Audio AUGraph and mix it with other sources… all sorts of possibilities open up.

Clearly, it could be a lot easier. It’s a ton of code, and two file exports (library to .m4a, and .m4a to .caf), when some apps might be perfectly happy to read from the source URL itself and never write to the filesystem… if only they could. Having spent the morning writing this blog, I may well spend the afternoon filing feature requests with Apple. I’ll update this blog with OpenRadar numbers for the following requests:

  • Allow Core Audio to open URLs provided by the Media Player framework’s MPMediaItemPropertyAssetURL
  • AV Foundation should allow passthrough export of Media Library items
  • AV Foundation export needs finer-grained control than just presets
  • Provide sample-level access for AVAsset

Still, while I’m bitching and whining, it is remarkable that iOS 4 opens up non-DRM’ed items in the iPod library for export. I never thought that would happen. Furthermore, the breadth and depth of the iOS media APIs remain astonishing. Sometimes terrifying, perhaps, but compared to the facile and trite media APIs that the other guys and girls get, we’re light-years ahead on iOS.

Have fun with this stuff!

Update: This got easier in iOS 4.1. Please forget everything you’ve read here and go read From iPod Library to PCM Samples in Far Fewer Steps Than Were Previously Necessary instead.