
Archives for: April 2009

An iPhone Core Audio brain dump

Twitter user blackbirdmobile just wondered aloud when the Core Audio stuff I’ve been writing about is going to come out. I have no idea, as the client has been commissioning a lot of work from a lot of iPhone/Mac writers I know, but has a lengthy review/rewrite process.

Right now, I’ve moved on to writing some beginner stuff for my next book, and will be switching from that to iPhone 3.0 material for the first book later today. And my next article is going to be on OpenAL. My next chance for some CA comes whenever I get time to work on some App Store stuff I’ve got planned.

So, while the material is still a little fresh, I’m going to post a stream-of-consciousness brain-dump of stuff that I learned along the way or found important to know in the course of working on this stuff.

  • It’s hard. Jens Alfke put it thusly:

    “Easy” and “CoreAudio” can’t be used in the same sentence. 😛 CoreAudio is very powerful, very complex, and under-documented. Be prepared for a steep learning curve, APIs with millions of tiny little pieces, and puzzling things out from sample code rather than reading high-level documentation.

  • That said, tweets like this one piss me off. Media is intrinsically hard, and the typical way to make it easy is to throw out functionality, until you’re left with a play method and not much else.

  • And if that’s all you want, please go use the HTML5 <video> and <audio> tags (hey, I do).

  • Media is hard because you’re dealing with issues of hardware I/O, real-time, threading, performance, and a pretty dense body of theory, all at the same time. Webapps are trite by comparison.

  • On the iPhone, Core Audio has three levels of opt-in for playback and recording, depending on your needs, listed here in increasing order of complexity/difficulty:

    1. AVAudioPlayer – File-based playback of DRM-free audio in Apple-supported codecs. Cocoa classes, called with Obj-C. iPhone 3.0 adds AVAudioRecorder (wasn’t sure if this was NDA, but it’s on the WWDC marketing page).
    2. Audio Queues – C-based API for buffered recording and playback of audio. Since you supply the samples, it would work for a net radio player, and for your own formats and/or DRM/encryption schemes (decrypt in memory before handing off to the queue). Inherent latency due to the use of buffers.
    3. Audio Units – Low-level C-based API. Very low latency, as little as 29 milliseconds. Mixing, effects, near-direct access to input and output hardware.
  • Other important Core Audio APIs not directly tied to playback and recording: Audio Session Services, for communicating your app’s audio needs to the system, defining interaction with things like the background iPod player and the ring/silent switch, and getting audio hardware metadata; Audio File Services, for reading and writing files; Audio File Stream Services, for dealing with audio data in a network stream; Audio Converter Services, for converting between PCM and compressed formats; and Extended Audio File Services, for combining file and converter services (e.g., given PCM, write out to a compressed AAC file).

  • You don’t get AVAudioPlayer or AVAudioRecorder on the Mac because you don’t need them: you already have QuickTime, and the QTKit API.
  • The Audio Queue Services Programming Guide is sufficient to get you started with Audio Queues, though it is unfortunate that its code excerpts are not pulled together into a complete, runnable Xcode project.

  • Lucky for you, I wrote one for the Streaming Audio chapter of the Prags’ iPhone book. Feel free to download the book’s example code. But do so quickly — the Streaming Audio chapter will probably go away in the 3.0 rewrite, as AVAudioRecorder obviates the need for most people to go down to the Audio Queue level. We may find some way to repurpose this content, but I’m not sure what form that will take. Also, I think there’s still a bug in the download where it can record with impunity, but can only play back once.

  • The Audio Unit Programming Guide is required reading for using Audio Units, though you have to filter out the stuff related to writing your own AUs with the C++ API and testing their Mac GUIs.

  • Get comfortable with pointers, the address-of operator (&), and maybe even malloc.

  • You are going to fill out a lot of AudioStreamBasicDescription structures. It drives some people a little batty.

  • Always clear out your ASBDs, like this:

    
    AudioStreamBasicDescription myASBD;
    memset (&myASBD, 0, sizeof (myASBD));
    

    This zeros out any fields that you haven’t set, which is important if you send an incomplete ASBD to a queue, audio file, or other object to have it filled in.

  • Use the “canonical” format — 16-bit integer PCM — between your audio units. It works, and is far easier than trying to dick around bit-shifting 8.24 fixed point (the other canonical format).
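
    For example, picking up the myASBD cleared out a couple of bullets back, a 16-bit integer PCM description might be filled out like this (a sketch; the 44.1 kHz, interleaved stereo values are my assumptions, not anything mandated):

    myASBD.mSampleRate       = 44100.0;
    myASBD.mFormatID         = kAudioFormatLinearPCM;
    myASBD.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    myASBD.mChannelsPerFrame = 2;                                   // stereo
    myASBD.mBitsPerChannel   = 16;
    myASBD.mBytesPerFrame    = myASBD.mChannelsPerFrame * sizeof (SInt16);
    myASBD.mFramesPerPacket  = 1;                                   // uncompressed PCM: 1 frame per packet
    myASBD.mBytesPerPacket   = myASBD.mBytesPerFrame;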

  • Audio Units achieve most of their functionality through setting properties. To set up a software renderer to provide a unit with samples, you don’t call some sort of setRenderer() method; you set the kAudioUnitProperty_SetRenderCallback property on the unit, providing an AURenderCallbackStruct as the property value.
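
    For example, something along these lines (a sketch; someUnit, MyRenderCallback, and myPlayerState are hypothetical names):

    AURenderCallbackStruct callbackStruct;
    callbackStruct.inputProc       = MyRenderCallback;    // your AURenderCallback function
    callbackStruct.inputProcRefCon = &myPlayerState;      // "user data" handed back to the callback
    OSStatus err = AudioUnitSetProperty (someUnit,
                                         kAudioUnitProperty_SetRenderCallback,
                                         kAudioUnitScope_Global,
                                         0,                // bus
                                         &callbackStruct,
                                         sizeof (callbackStruct));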

  • Setting a property on an audio unit requires declaring the “scope” that the property applies to. Input scope is audio coming into the AU, output is going out of the unit, and global is for properties that affect the whole unit. So, if you set the stream format property on an AU’s input scope, you’re describing what you will supply to the AU.

  • Audio Units also have “elements”, which may be more usefully thought of as “buses” (at least if you’ve ever used pro audio equipment, or mixing software that borrows its terminology). Think of a mixer unit: it has multiple (perhaps infinitely many) input buses, and one output bus. A splitter unit does the opposite: it takes one input bus and splits it into multiple output buses.

  • Don’t confuse buses with channels (i.e., mono, stereo, etc.). Your ASBD describes how many channels you’re working with, and you set the input or output ASBD for a given scope-and-bus pair with the stream format property.
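
    So, to tell a unit what you’ll be feeding its input bus 0, you might write something like this (a sketch, reusing the hypothetical myASBD and someUnit from above):

    OSStatus err = AudioUnitSetProperty (someUnit,
                                         kAudioUnitProperty_StreamFormat,
                                         kAudioUnitScope_Input,
                                         0,                // bus (element)
                                         &myASBD,
                                         sizeof (myASBD));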

  • Make the RemoteIO unit your friend. This is the AU that talks to both input and output hardware. Its use of buses is atypical and potentially confusing. Enjoy the ASCII art:

    
                             -------------------------
                             | i                   o |
    -- BUS 1 -- from mic --> | n    REMOTE I/O     u | -- BUS 1 -- to app -->
                             | p      AUDIO        t |
    -- BUS 0 -- from app --> | u       UNIT        p | -- BUS 0 -- to speaker -->
                             | t                   u |
                             |                     t |
                             -------------------------
    

    Ergo, the stream properties for this unit are

                      Bus 0                                      Bus 1
    Input scope:      Set ASBD to indicate what you're           Get ASBD to inspect the audio
                      providing for play-out                     format being received from H/W
    Output scope:     Get ASBD to inspect the audio              Set ASBD to indicate what format
                      format being sent to H/W                   you want your units to receive
  • That said, setting up the callbacks for providing samples to or getting them from a unit takes global scope, as their purpose is implicit in the property names: kAudioOutputUnitProperty_SetInputCallback and kAudioUnitProperty_SetRenderCallback.

  • Michael Tyson wrote a vital blog post on recording with RemoteIO that is required reading if you want to set callbacks directly on RemoteIO.

  • Apple’s aurioTouch example also shows off audio input, but is much harder to read because of its ambition (it shows an oscilloscope-type view of the sampled audio, and optionally performs an FFT to find common frequencies), and because it is written in Objective-C++, mixing C, C++, and Objective-C idioms.

  • Don’t screw around in a render callback. I had correct code that didn’t work because it also had NSLogs, which were sufficiently expensive that I missed the real-time thread’s deadlines. When I commented out the NSLog, the audio started playing. If you don’t know what’s going on, set a breakpoint and use the debugger.

  • Apple has a convention of providing a “user data” or “client” object to callbacks. You set this object when you set up the callback, and its parameter type in the callback function is void*, which you’ll have to cast back to whatever type your user data object is. If you’re using Cocoa, you can just use a Cocoa object: in simple code, I’ll have a view controller set the user data object as self, then cast back to MyViewController* on the first line of the callback. That’s OK for audio queues, but the overhead of Obj-C message dispatch is fairly high, so with Audio Units, I’ve started using plain C structs.
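
    For an Audio Unit render callback, that cast is typically the first line of the function (a sketch; MyPlayerState matches the hypothetical user-data struct registered earlier):

    static OSStatus MyRenderCallback (void                       *inRefCon,
                                      AudioUnitRenderActionFlags *ioActionFlags,
                                      const AudioTimeStamp       *inTimeStamp,
                                      UInt32                      inBusNumber,
                                      UInt32                      inNumberFrames,
                                      AudioBufferList            *ioData)
    {
        // cast the user-data pointer back to whatever you registered (here, a plain C struct)
        MyPlayerState *playerState = (MyPlayerState *) inRefCon;
        // ... fill ioData with inNumberFrames frames of samples ...
        return noErr;
    }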

  • Always set up your audio session stuff. For recording, you must use kAudioSessionCategory_PlayAndRecord and call AudioSessionSetActive(true) to get the mic turned on for you. You should probably also look at the properties to see if audio input is even available: it’s always available on the iPhone, never on the first-gen touch, and may or may not be on the second-gen touch.
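
    A minimal session setup for recording might look like this (a sketch; error handling and an interruption listener are omitted):

    AudioSessionInitialize (NULL, NULL, NULL, NULL);        // no interruption listener in this sketch
    UInt32 category = kAudioSessionCategory_PlayAndRecord;
    AudioSessionSetProperty (kAudioSessionProperty_AudioCategory,
                             sizeof (category), &category);
    UInt32 inputAvailable = 0;
    UInt32 size = sizeof (inputAvailable);
    AudioSessionGetProperty (kAudioSessionProperty_AudioInputAvailable,
                             &size, &inputAvailable);       // check before you count on the mic
    AudioSessionSetActive (true);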

  • If you are doing anything more sophisticated than connecting a single callback to RemoteIO, you may want to use an AUGraph to manage your unit connections, rather than setting up everything with properties.

  • When creating AUs directly, you set up an AudioComponentDescription and use the audio component manager to get the AUs. With an AUGraph, you hand the description to AUGraphAddNode to get back an AUNode. You can get the Audio Unit wrapped by this node with AUGraphNodeInfo if you need to set some properties on it.
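
    A sketch of that sequence, reusing the variable names from the snippets below (the graph itself comes from NewAUGraph):

    AUGraph  auGraph;
    AUNode   remoteIONode;
    OSStatus setupErr = NewAUGraph (&auGraph);

    AudioComponentDescription ioUnitDesc;
    ioUnitDesc.componentType         = kAudioUnitType_Output;
    ioUnitDesc.componentSubType      = kAudioUnitSubType_RemoteIO;
    ioUnitDesc.componentManufacturer = kAudioUnitManufacturer_Apple;
    ioUnitDesc.componentFlags        = 0;
    ioUnitDesc.componentFlagsMask    = 0;

    setupErr = AUGraphAddNode (auGraph, &ioUnitDesc, &remoteIONode);
    setupErr = AUGraphOpen (auGraph);    // opening the graph instantiates the units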

  • Get used to providing pointers as parameters and having them filled in by function calls:

    
    AudioUnit remoteIOUnit;
    setupErr = AUGraphNodeInfo(auGraph, remoteIONode, NULL, &remoteIOUnit);
    

    Notice how the return value is an error code, not the unit you’re looking for, which instead comes back in the fourth parameter. We send the address of the remoteIOUnit local variable, and the function populates it.

  • Also notice the convention for parameter names in Apple’s functions. inSomething is input to the function, outSomething is output, and ioSomething does both. The latter two take pointers, naturally.

  • In an AUGraph, you connect nodes with a simple one-line call:

    
    setupErr = AUGraphConnectNodeInput(auGraph, mixerNode, 0, remoteIONode, 0);
    

    This connects the output of the mixer node’s only bus (0) to the input of RemoteIO’s bus 0, which goes through RemoteIO and out to hardware.

  • AUGraphs make it really easy to work with the mic input: create a RemoteIO node and connect its bus 1 to some other node.
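
    For example, feeding captured mic audio into a mixer can be a single call (a sketch, assuming a mixerNode created the same way as the RemoteIO node above):

    // RemoteIO output bus 1 (from mic)  -->  mixer input bus 0
    setupErr = AUGraphConnectNodeInput (auGraph, remoteIONode, 1, mixerNode, 0);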

  • RemoteIO does not have a gain or volume property. The mixer unit has volume properties on all input buses and its output bus (0). Therefore, setting the mixer’s output volume property could be a de facto volume control, if it’s the last thing before RemoteIO. And it’s somewhat more appealing than manually multiplying all your samples by a volume factor.
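
    In code, that volume is adjusted with AudioUnitSetParameter; a minimal sketch, assuming the MultiChannelMixer's kMultiChannelMixerParam_Volume parameter and a mixerUnit retrieved from its node with AUGraphNodeInfo:

    AudioUnitSetParameter (mixerUnit,
                           kMultiChannelMixerParam_Volume,
                           kAudioUnitScope_Output,
                           0,        // bus
                           0.5,      // linear gain, 0.0 to 1.0
                           0);       // buffer offset in frames (0 = apply immediately)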

  • The mixer unit adds amplitudes. So if you have two sources that can hit maximum amplitude, and you mix them, you’re definitely going to clip.

  • If you want to do both input and output, note that you can’t have two RemoteIO nodes in a graph. Once you’ve created one, just make multiple connections with it. The same node will be at both the front and the end of the graph in your mental model or on your diagram, but that’s OK: the captured audio comes in on bus 1, and at some point you’ll connect that to a different bus (maybe as you pass through a mixer unit), eventually getting the audio to RemoteIO’s bus 0 input, which goes out to headphones or speakers.

I didn’t come up with much (any?) of this myself. It’s all about good references. Here’s what you should add to your bookmarks (or Together, where I throw any Core Audio pages I find useful):

Dance Dance Fail

So, to explain this morning’s angry tweet.

My ASD 6-year-old son has a few obsessions, one of which is the Dance Dance Revolution series of video games. In the car, the DDR soundtracks are pretty much the only thing he wants to listen to, and he’s reasonably competent at the Beginner and even Basic skill levels when he wants to clear them (he’ll sometimes fail songs on purpose too, which is really not a lot of fun for me when he’s at the arcade and spending real money).

Last month, he was trying to play Dance Dance Revolution Extreme and I could hear him screaming. I went to the PS2 and saw he was getting the message “System Data is corrupt”. In other words, the settings and progress on the memory card couldn’t be read, and all our unlocks had been lost. It took a long time to talk him off the ledge and get him to switch to another game. Over the course of the next two weeks, I played Extreme every morning and many nights to earn back all the unlocks… all the more annoying because the US version of Extreme is easily the worst of the series: terrible UI, terrible music, and a useless workout mode (strangely, the Japanese version, which we have and could play on our old “fat” PS2, was one of the best, meant as a possible “last hurrah” for the series).

I bought a second memory card and copied over the data from all our essential games to it, including all the other DDRs we own, which is every DDR released for PS2.

So wouldn’t you know it, this morning he goes to play Dance Dance Revolution Supernova and gets the “System data corrupted” message again. OK, calm down, I say… I’ll just copy over the data from the backup card. Except that doesn’t work. So I move the backup card over to slot 1… and that doesn’t work.

OK, WTF? Since the timestamp on the file is from three weeks ago, I’m looking at two unlikely scenarios: either I managed to back up the file right after it became corrupted (not knowing it was corrupted, since we would have found out before then) and he hasn’t played it since then, or the hardware is failing to read from and/or write to the memory cards consistently.

What can I do? I pick option #2 and buy a new PS2 this morning. Thank goodness they’ve dropped to $100.

Except this doesn’t work either, so I’ve bought a PS2 I don’t need. Guess the old one goes up to my parents’ place at Torch Lake.

The lucky thing is that by digging through memory cards, I found a DDR Supernova save from two years ago that seems to have most of our unlocks, so I copied that over to our main memory card and it works. So at least I’ve defused Keagan for now.

But seriously, what the hell? Two file corruptions in a month? From the same series of games?

So here’s something to chew on. I searched ddrfreak.com for “corruption” and found lots of threads with other users complaining about data corruption. Many of the others on the boards lectured the posters about the usual thing: don’t turn off your power or remove a card when saving… the kind of patronizing BS you’d expect from tech support.

But it doesn’t wash for me. I’ve owned PlayStations since 1997 and I’ve never had data loss except for these two games. And here’s something else. I searched the forums of two Final Fantasy sites (Eyes on FF and Final Fantasy Forums) to look for threads about data corruption. After all, if the rates of hardware failure or user incompetence are consistent, then we should see many more complaints on the FF boards, as that series is far more popular than DDR. And yet, there seem to be no complaints of lost memory card data on those boards.

So maybe it’s time to stop assuming that the corruption in this case is media failure. What if the problem is that DDR gets into a state in which it writes data that it can’t read? What if it corrupts its own data? As far as I can tell, this hypothesis is most consistent with the evidence.

And, as you might expect, it pisses me off. It might sound like a reckless boast for a developer to make, but I think that software should never lose user data. In 2009, with automated backups, redundancy, and simple common sense, there’s just no damn excuse for it. Software that’s known to inadvertently destroy user data should be pulled from the market, rated F by reviewers, and deleted en masse from hard drives and download servers until such a time as the people behind it can get their act together.

It’s quietly acknowledged among developers that software engineering does not aspire to the level of dependability and quality of other engineering disciplines. We think it’s too hard. This despite the fact that we gallivant in a fantasy world of total unreality, while other engineers have to deal with real physics, real chemistry, real biology. Our standards, practiced in other fields, would be unconscionably negligent, if not criminal.

And yet, somehow we get a pass.

I don’t get it. And yet, I’ll probably buy the next DDR for Keagan when it comes out. Even though the series, and seemingly only this series, has proven its inability to take care of my data. And even though the manufacturer, Konami, can’t even get physical media together — a chunk broke off our Supernova 2 disc while I was putting it back in the case (center ring, near the ESRB rating logo):

[Photo: broken Supernova 2 DVD]

…and even though we were in the 90-day warranty period, Konami refused to replace it. It’s pretty amazing to see a company with such contempt for its customers (well, outside of the US airline industry, anyway), but there you have it.

Still pissed, but I think I’ve said enough about this.

Testing WP-Poll

Just installed WP-Poll, and I’m now testing it out.
[poll id=”2″]

Doesn’t seem to entirely agree with my theme (the extra bullet points next to the radio buttons are unhelpful), but it’ll do.

Update: After a few hours, it seems this plug-in spins a “Wait” cursor and does little else. It does not seem to process votes or show results. Fail.

Later still (5/15/09): It’s crap. Deactivating the plug-in.

Again with The Six Steps

About 10 minutes into Java Posse 241, an unconference session on design, Joe Nuxoll almost pulls the usual “developers talking design” chat into a breathtakingly new perspective. Really close. Picking up on an anecdote about 37signals’ creation of Rails as a tool for what they wanted to do, he points out the idea of going back to first principles:

  • What do I want to accomplish?
  • What can I do that will accomplish that?
  • How do I do that?

He points out that not everything has to be a webapp or a library; that there could be a human process that’s more practical (cheaper, effective, etc.) than building something electronic.

And then the conversation goes another direction. But this so reminded me of Scott McCloud’s exceptional metaphor and discussion of “The Six Steps” of the creative process, in Understanding Comics. McCloud presents a linear progression of choices and concerns:

  1. Idea/purpose – What am I trying to accomplish?
  2. Form – What form will my effort take (a comic book, a website, etc.)?
  3. Idiom/genre – How do I address the recipient (fiction vs. nonfiction, online community vs. news site)? McCloud says this is “the ‘school’ of art, the vocabulary of styles or gestures or subject matter.”
  4. Structure – How the work is arranged and composed, “what to include, what to leave out” (the latter being a far more important choice than is often realized).
  5. Craft – The actual work of getting the thing done: problem solving, use of skills, etc.
  6. Surface – The immediately-perceivable traits, like polish or production values.

What’s perhaps most breathtaking the first time you read Understanding Comics is McCloud’s narrative which portrays the reality that almost nobody starts with step 1 and proceeds to step 6. In fact, it’s far more common to go backwards: to see just the surface gloss of something and try to mimic that, with no understanding at all of the decisions that inform the rest of the work, and how they depend on each other.

In our realm, this pathology is highly obvious. If McCloud’s version is a kid tracing Wolverine onto lined notebook paper and declaring “I can draw as well as a professional!”, then surely our equivalent is the UI “skin”, like the hackery that can make a Windows or Linux desktop look like Mac OS X, but can’t change the fact that everything below the surface is built up with the respective ideas of those operating systems.

This is why I’m convinced Linux can never succeed as a desktop OS. When I used GNOME as my desktop for a year back in 2001, I commonly complained that the desktop gave me at least a dozen places to set my visual theme, but setting the damned clock still required me to jump into xterm and do sudo date -s ... I haven’t used it since then, but I wonder if they’ve even gotten inter-application copy-and-paste working (to say nothing of drag-and-drop). McCloud’s narrative shows genuine artists eventually working all the way back to step 1 or 2, asking “why am I doing this”, and proceeding forward, making informed and purposeful decisions about idiom, structure, and craft. It’s hard to imagine the Linux community having the wherewithal and discipline to see through such a process, when they’ve proven themselves willing to fork their code at the drop of a hat (or, more accurately, the outbreak of a Kirk-versus-Picard flamewar). The result is something that’s so baroque and self-contradictory it isn’t even necessarily ideal for hackers, and there’s little hope of this community ever deciding to build something their moms can use.

A lot of projects make bad choices of “form”, and doom themselves from the start. In the 90’s, there seemed to be people who believed that every activity worth undertaking should be done in the form of a startup company. As it turned out, fairly few endeavors are well-suited to that approach.

Today, we see people falling over themselves to make everything into an iPhone app, even when it’s not appropriate. If most of the value of your project comes from data provided via servers you own or operate, it’s likely that a webapp is more appropriate than a genuine iPhone app. Clever use of iPhone CSS can produce webapps that behave much like native apps (the Facebook webapp is still as good as its App Store equivalent), can be updated for all users at any time, and can be adapted to other devices without starting over at square one (compare to the challenge of rewriting your iPhone app for Android, Blackberry, Windows Mobile, etc.).

If anything, many of the great iPhone apps are heralding a resurgence of the “productivity app”, which Broderbund founder Doug Carlston defined to Tim O’Reilly as “any application where the user’s own data matters more to him than the data we provide.” Any iPhone app worth writing is going to involve some significant user data, whether that’s classic user data like their mail, their files, audio they’ve recorded… but also data as simple as where the user is when they run the app. After all, the hundreds of thousands of records in the restaurant-finder app’s database are useless to me; what matters is finding what I like that’s close to where I am. In other words, the data has no value until it gets those crucial inputs: where I am and what I want to eat.

Now here’s an interesting mental exercise: where would what we consider to be “design” fall in McCloud’s six steps? At first, I made a cursory decision that it was just part of step 5 (craft), as it was part of the doing of the work. That’s clearly wrong, as what we think of as design (and I’m mentally considering varying creative exercises I understand, like book writing, screenwriting, TV production, podcasting, software development, etc.) clearly encompasses the step 4 (structure) decisions of how to arrange the content. And it probably touches on step 6 too: deciding whether to use the brushed metal or plain UI look, whether to shoot in SD, HD, or on film, is a design decision as well. But would we consider step 3 (genre/idiom) to be part of design? I think not. I think that’s a decision that precedes design, one that informs just what it is we’re going to be designing (an educational game versus an electronic series of lessons, historical fiction versus documentary, etc.).

Still, I think it’s important not to make too early, too easy assumptions about the major decisions at the front of the process. “Why am I doing this” is the most important question to ask yourself: it shouldn’t be skipped.

With great power comes… not a lot of guidance

Haven’t blogged a lot here about technical matters since leaving java.net and going entirely indie. Right now, it’s all about the writing: I’m working on another Core Audio article, to be immediately followed by an intense push to rework the Prags’ iPhone book, such that it will launch as an iPhone SDK 3.0 book. Rather than just tacking on a “new in 3.0” chapter, we’re trying to make sure that the entire book is geared to the 3.0 developer. How serious are we about this? Well, as a first step, I’m throwing out an entire chapter, the one that was hardest to write, and much of another, because 3.0 provides simpler alternatives for the majority of developers. Don’t worry, there’s a whole new chapter coming on that topic, and another on a closely related topic… ugh, this is going to make a lot more sense when the NDA drops and I can point to APIs directly.

There’s a second book coming, approved by the prags but currently not more than an outline and some good vibes. I think I’ll be doing my first editorial meeting about it with Daniel later today. More on that when the time comes, but here’s a teasing description: it’s an iPhone book that’s widely needed, though few people realize it yet.

Time permitting, I’d like to get some real work done on a podcast studio app for iPhone, something I mentioned back on that Late Night Cocoa interview last year. There’s one feature in 3.0 that could make this a jaw-dropper, but it’s going to take a lot of experimentation (and, quite probably, the purchase of one or more new devices).

Right now, though, I’m really stuck in the mire of Core Audio, for the second of a series of articles on the topic. I’d been racing along just fine, converting an audio synthesis example from the first article into a version that uses the AUGraph and MultiChannelMixer, but now I’m suffering trying to incorporate input from the microphone. Using analogies to the Mac version of CoreAudio, the CAPlayThrough example suggests that samples must be manually copied from an input callback and buffered until a render callback requests them, but a tantalizing hint from Bill made me think that such low-level drudgery was unnecessary. Unfortunately, I cannot make it work, and after several days, I am now reverting to moving the samples between render callbacks myself.

Core Audio is hard. Jens Alfke nailed it in a reply on coreaudio-api:

“Easy” and “CoreAudio” can’t be used in the same sentence. 😛 CoreAudio is very powerful, very complex, and under-documented. Be prepared for a steep learning curve, APIs with millions of tiny little pieces, and puzzling things out from sample code rather than reading high-level documentation.

The fact that it’s underdocumented is, of course, why there are opportunities for people like me to write about it. Drudgery and dozens of failed experiments aside, that is what makes it interesting: it’s a fascinating problem domain, with powerful tools that do awesome stuff, if you can just get everything lined up just right.

One of the things that makes it hard to learn is that, like a lot of Apple’s C APIs, it doesn’t make its functionality obvious through function names. To keep things flexible and expandable, much of the functionality comes from components that you create dynamically (to wit, you say “give me a component of type output, subtype RemoteIO, made by Apple” to get the AudioUnit that abstracts the audio hardware). These largely untyped components don’t have feature-specific function names, and accomplish a lot of their functionality by getting and setting properties. As a result, the API documentation doesn’t really tell you much of what’s possible: all you see is functions for discovering components and setting their properties, not how these combine to process audio. The real secrets, and the potential, are only revealed by working through what documentation is available (much of it for the Mac, which is about 90% applicable to the iPhone), and by looking for guidance from sample code. Oh, and Googling… though sometimes you find articles that are more wrong than right.

Core Audio is what was on my mind while writing my penultimate java.net blog about insularity. Despite the struggles of Core Audio, it’s intellectually fascinating and rewarding (like QuickTime was for me a few years prior), and that’s in sharp contrast to the Java pathologies of reinventing the same things over and over again (holy crap, how many app server projects do we need?), and getting off on syntactic sugar (closures, scripting languages on the VM), all of which increasingly seem to me like an intellectual circle jerk. What’s fundamentally new that you can do in Java? For a media guy like me, there’s nothing. JavaFX’s media API is far shallower than the abandoned Java Media Framework, and you have to learn a new language to use it. Just so you can use an API where the only method in the Track class is name.

By comparison, the iPhone OS 3.0 Sneak Peek presentation not only announced a peer-to-peer voice chat API, but also indicated that the Audio Unit used by that feature will be usable directly. In other words, it’s theoretically possible to incorporate the voice chat system with all the other powerful functionality Core Audio provides. The potential is enormous.

And the only problem is figuring out how to connect AUs in a way that doesn’t throw back one of the various bad -10xxx response codes, which seem virtually arbitrary at times (but often do make sense later if you manage to work through them).

And with that, I’m back to figuring out what’s wrong with my code to pass the AudioBufferList from RemoteIO’s input callback to the MultiChannelMixer’s render callback.

She’s Got Issues

I saw one of these Microsoft “PCs are cheaper” ads during the basketball game last night. It’s probably best for me to leave the advocacy to those who are good at it (e.g., Daring Fireball), but even setting aside tiresome evangelism, this campaign still seems like an odd duck:

  • One of the classic rules of advertising is that #2 trashes #1, but never vice versa. Avis says “we try harder” to catch Hertz, but Hertz never even acknowledges Avis. As the market leader for decades, it doesn’t have to. So why does Microsoft, still enjoying at least a 10-to-1 advantage over Mac in market share, feel the need to take potshots?
  • And did you notice the fallacy with that point? It’s that Microsoft isn’t even advertising its own product, which is the operating system. They’re forced into telling you how great PC hardware in general is, not why Windows is great. I suppose the Linux community could expect a free ride off this campaign, if it works, because it too benefits from a “buy a cheap PC” message.
  • The big question is, how much do price and feature set matter? If they were the only things that mattered, then the iPod never had a chance against the Zen Nomad.
  • That said, there is a perception that Macs are more expensive, largely driven by the fact that Apple doesn’t even bother making zero-margin el cheapo computers. Saying that you’re “paying $500 for a logo” is rubbish, but I think some people will buy it.
  • But is it really just about styling? The ads seem to make the point that Macs are “sexy” – are they admitting that most PCs are ugly? – but I don’t know how many Mac users really pay that much heed to appearance. If it’s just about the sexy, then why would people try so hard to get OS X running on admittedly ugly-ass PCs?

Finally, couldn’t Microsoft use this exact same line of reasoning in selling the Xbox 360 against the PlayStation 3? The cheapest PS3 is double the price of the cheapest 360, yet Microsoft hesitates to make that argument, even though the 360 is something they actually make and sell (as opposed to PCs, which they do not).

Maybe the difference is that – for this console generation and in North America at least – they know they have Sony beat. But can’t we say the same for desktop operating systems? I mean come on, it’s still 10-to-1 right? Maybe, but there’s a sense that a lot of innovators have switched to Mac, as Fortune noted in a recent article about Boxee. If cool new stuff is all on the web, is multi-platform, or (heaven forbid) is Mac first, then Microsoft’s classic advantages are lost.

But if that’s the case, then unless “Lauren” from the ads is a developer – oops, wait, she’s an actress – then it’s hard to see how selling her a cheap-ass laptop does much for Microsoft.