Archives for: career

prepareForSegue()

tl;dr: I’m starting a full-time job today doing iOS development at rev.com. I’m not moving out to California. I’ll still be talking at conferences and possibly doing more books.

Any Port in a Storm

There goes another one.

That’s @jonathanpenn, as he heads off to Apple. He follows a number of top indie developers/authors/speakers who have headed to the mothership in the last few months, including Patrick Burleson, Kevin Hoctor, and, if we go back a little over a year, my former iOS SDK Development co-author Bill Dudney.

This is causing a bit of angst among those of us who hate to see our friends decamp from our communities to California, and prompting suggestions that maybe indie iOS development/writing/speaking isn’t tenable. Janie Clayton-Hasz, whom I’m working with on a soon-to-be-announced project, expresses this from the POV of a newcomer to development life in her latest blog post.

Taking C Seriously

Dennis Ritchie, a co-creator of Unix and C, passed away a few weeks ago, and was honored with many online tributes this weekend for a Dennis Ritchie Day advocated by Tim O’Reilly.

It should hardly be necessary to state the importance of Ritchie’s work. C is the #2 language in use today according to the TIOBE rankings (which, while criticized in some quarters, are at least the best system we currently have for gauging such things). In fact, TIOBE’s preface to the October 2011 rankings predicted that a slow but consistent decline in Java will likely make C the #1 language when this month’s rankings come out.

Keep in mind that C was developed between 1969 and 1973, making it nearly 40 years old. I make this point often, but I can’t help saying it again: when Paul Graham considered the possible traits of The Hundred-Year Language, the one we might be using 100 years from now, he overlooked the fact that C had already made an exceptionally good start on a century-long reign.

And yet, despite being so widely used and so important, C is widely disparaged. It is easy, and popular, and eminently tolerated, to bitch and complain about C’s primitiveness.

I’ve already had my say about this, in the PragPub article Punk Rock Languages, in which I praised C’s lack of artifice and abstraction, its directness, and its ruthlessness. I shouldn’t repeat the major points of that article — buried as they are under a somewhat affected style — so instead, let me get personal.

As an 80’s kid, my first languages were various flavors of BASIC for whatever computers the school had: first Commodore PETs, later Apple IIs. Then came Pascal for the AP CS class, as well as a variety of languages that were part of the ACSL contests (including LISP, which reminds me I should offer due respect to the recent passing of its renowned creator, John McCarthy). I had a TI-99 computer at home (hey, it’s what was on sale at K-Mart) and its BASIC was godawful slow, so I ended up learning assembly for that platform, just so I could write programs that I could stand to run.

C was the language of second-year Computer Science at Stanford, and I would come back to it throughout college for various classes (along with LISP and a ruinous misadventure in Prolog), and some Summer jobs. The funny thing is that at the time, C was considered a high-level language. Back then, abstracting away the CPU was sufficient to count as “high-level”; granted, we also drew a distinction between “assembly language” and “machine language”, presumably because there was still someone somewhere without an assembler who was thus forced to provide the actual opcodes. Today, C is considered a low-level language. In my CodeMash 2010 talk on C, I postulated that a high-level language is now expected to abstract away not only the CPU, but memory as well. In Beyond Java, Bruce Tate predicted we’d never see another mainstream language that doesn’t run in a VM and offer the usual benefits of that environment, like memory protection and garbage collection, and I suspect he’s right.

But does malloc() make C “primitive”? I sure didn’t think so in 1986. In fact, it did a lot more than the other languages of the time. Dynamic memory allocation was not actually common then — the various flavors of BASIC had stack variables only, no heap. To have, say, a variable number of enemies in your BASIC game, you probably had to declare arrays at some maximum size and use only as many of their elements as you needed. And of course relative to assembly language, where you’re directly exposed to the CPU and RAM, C’s abstractions are profound. If you haven’t had that experience, you don’t appreciate that a = b + c involves loading b and c into CPU registers, invoking an “add” opcode, and then copying the result from a register out to memory. One line of C, many lines of assembly.
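
To make the contrast concrete, here’s a minimal sketch in C (the Enemy struct and the counts are invented purely for illustration): the fixed-size array is the best most 80’s BASICs could offer, while malloc() lets you size the allocation at runtime and hand the memory back when you’re done.

```c
#include <stdlib.h>

/* Hypothetical game entity, purely for illustration. */
typedef struct {
    int x, y;
    int hitPoints;
} Enemy;

int main(void) {
    /* The 80's BASIC approach: size everything for the worst case up front. */
    Enemy fixedEnemies[100];          /* wasted space if a level only needs 10 */
    (void)fixedEnemies;

    /* The C approach: ask for exactly as much memory as this level needs. */
    int enemyCount = 37;              /* decided at runtime */
    Enemy *enemies = malloc(enemyCount * sizeof(Enemy));
    if (enemies == NULL) {
        return 1;                     /* allocation can fail; check for it */
    }

    /* ... use enemies[0] through enemies[enemyCount - 1] ... */

    free(enemies);                    /* give it back when the level ends */
    return 0;
}
```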

There is a great blog post from a few years ago assessing the Speed, Size, and Dependability of Programming Languages. It represents the relationship between code size and performance as a 2-D plot, where an ideal language has high performance with little code, and an obsolete language demands lots of work and is still slow. These two factors are a classic trade-off, and the other two quadrants are named after the traditional categorization: slow but expressive languages are “script”, fast but wordy are “system”. Go look up gcc on the plot – it’s clearly the fastest, and its wordiness is really not that bad.

Perhaps the reason C has stuck around so long is that its bang for the buck really is historically remarkable, and unlikely to be duplicated. For all the advantages over assembly, it maintains furious performance, and the abstractions then built atop C (with the arguable exception of Java, whose primary sin is being a memory pig) sacrifice performance for expressiveness. We’ve always known this, of course, but it takes a certain level of intellectual honesty to really acknowledge how many extra CPU cycles we burn by writing code in something like Ruby or Scala. If I’m going to run that slow, I think I’d at least want to get out of curly-brace / function-call hell and adopt a different style of thinking, like LISP.

I was away from C for many years… after college, I went on a different path and wrote for a living, not coming back to programming until the late 90’s. At that point, I learned Java, building on my knowledge of C and other programming languages. But it wasn’t until the mid-2000’s that I revisited C, when I tired of the dead-end that was Java media and tried writing some JNI calls to QuickTime and QTKit (the lloyd and keaton projects). I never got very far with these, as my C was dreadfully rusty, and furthermore I didn’t understand the conventions of Apple’s C-based frameworks, such as QuickTime and Core Foundation.

It’s only in immersing myself in iOS and Mac since 2008 that I’ve really gotten good with calling C in anger again, because on these platforms, C is a first-class language. At the lower levels — including any framework with “Core” in its name — C is the only language.

And at the Core level, I’m sometimes glad to only have C. For doing something like signal processing in a Core Audio callback, handing me a void* is just fine. In the higher level media frameworks, we have to pass around samples and frame buffers and such as full-blown objects, and sometimes it feels heavier than it needs to. If you’re a Java/Swing programmer, have you ever had to deal with a big heavy BufferedImage and had to go look through the Raster object or whatever and do some conversions or lookups, when what you really want is to just get at the damn pixels already? Seems to happen a lot with media APIs written in high-level languages. I’m still not convinced that Apple’s AV Foundation is going to work out, and I gag at having to look through the docs for three different classes with 50-character names when I know I could do everything I want with QuickTime’s old GetMediaNextInterestingTime() if only it were still available to me.
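
To show what I mean by “handing me a void*”: the callback below has the shape of Core Audio’s AURenderCallback; the state struct and the 16-bit interleaved sample format are assumptions I’m making just for the sketch. You get your context back as a raw pointer, cast it, and you’re at the samples.

```c
#include <AudioUnit/AudioUnit.h>

/* Hypothetical state struct you'd hang off the callback's refCon. */
typedef struct {
    double phase;
    double frequency;
} MySynthState;

/* The AURenderCallback shape: Core Audio hands your context back as a
   plain void*; you cast it and get straight to the sample buffers. */
static OSStatus MyRenderCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData) {
    MySynthState *state = (MySynthState *)inRefCon;         /* just a cast, no object to unwrap */
    SInt16 *samples = (SInt16 *)ioData->mBuffers[0].mData;  /* assuming 16-bit interleaved samples */
    for (UInt32 frame = 0; frame < inNumberFrames; frame++) {
        /* ... fill samples[frame] from state->phase and state->frequency ... */
        samples[frame] = 0;
    }
    (void)state;
    return noErr;
}
```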

C is underappreciated as an application programming language. Granted, there’s definitely a knack to writing C effectively, but it’s not just the language. Actually, it’s more the idioms of the various C libraries out there. OpenGL code is quite unlike Core Graphics / Quartz, just like OpenAL is unlike Core Audio. And that’s to say nothing of the classic BSD and other open-source libraries, some of which I still can’t crack. Much as I loathe NSXMLParser, my attempt to switch to libxml for the sake of a no-fuss DOM tree ended after about an hour. So maybe it’s always going to be a learning process.

But honestly, I don’t mind learning. In fact, it’s why I like this field. And the fact that a 40-year-old language can still be so clever, so austere and elegant, and so damn fast, is something to be celebrated and appreciated.

So, thanks Dennis Ritchie. All these years later, I’m still enjoying the hell out of C.

When the client’s happy…

Brief note about the project I worked on late last year, and in dribs and drabs since then. The XanEdu iPad app for reading the company’s digital course packs picked up a major award, selected as one of 10 Campus Technology 2011 Innovators Awards.

It’s only useful to (and usable by) college and grad school students whose schools use XanEdu as a vendor, so it’s never going to make the iTunes charts, but it’s nice to see it’s going over well. Mostly I worked on the reader functionality, including the profoundly tricky highlighting feature seen here:

Video: http://www.youtube.com/watch?v=VgT-HYu8bWs

If you wondered why I was complaining bitterly about JavaScript and DOM a few months back, now you know. In fact, our requirements for this feature were harder than iBooks’ similar highlighting, because we have to tolerate overlapping highlights (highlight and comment areas can be shared between users).

Life Beyond The Browser

“Client is looking for someone who has developed min. of 1 iPhone/iPad app.  It must be in the App Store no exceptions.  If the iPhone app is a game, the client is not interested in seeing them.” OK, whatever… I’ll accept that a game isn’t necessarily a useful prerequisite. But then this e-mail went on: “The client is also not interested in someone who comes from a web background or any other unrelated background and decided to start developing iPhone Apps.”

Wow. OK, what the hell happened there? Surely there’s a story behind that, one that probably involved screaming, tears, and a fair amount of wasted money. But that’s not what I’m interested in today.

What surprised me about this was the open contempt for web developers, at least those who have tried switching over to iOS development. While I don’t think we’re going to see many of these “web developer need not apply” posts, I’m still amazed to have seen one at all.

Because really, for the last ten years or so, it’s all been about the web. Most of the technological innovation in the last decade arrived in the confines of the browser window, and we have been promised a number of times that everything would eventually move onto the web (or, in a recent twist, into the cloud).

But this hasn’t fully panned out, has it? iOS has been a strong pull in the other direction, and not because Apple wanted it that way. When the iPhone was introduced and the development community given webapps as the only third-party development platform, the community reaction was to jailbreak the device and reverse-engineer iPhone 1.0’s APIs.

And as people have come over, they’ve discovered that things are different here. While the 90’s saw many desktop developers move to the web, the 10’s are seeing a significant reverse migration. In the forums for our iPhone book, Bill and I found the most consistently flustered readers were the transplanted web developers (and to a lesser degree, the Flash designers and developers).

Part of this was language issues. Like all the early iPhone books, ours had the “we assume you have some exposure to a C-based curly-brace language” proviso in the front. Unfailingly, what tripped people up was the lurking pointer issues that Objective-C makes no attempt to hide. EXC_BAD_ACCESS is exactly what it says it is: an attempt to access a location in memory you have no right to touch, and almost always the result of following a busted pointer (which in turn often comes from an object over-release). But if you don’t know what a pointer is, this might as well be in Greek.
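
For anyone who has never chased one of these, here’s the shape of the bug in plain C, deliberately broken so the comments can point at the problem; in Objective-C the free() is typically an object over-release rather than an explicit call.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    char *name = malloc(32);
    if (name == NULL) return 1;
    strcpy(name, "busted pointer demo");
    free(name);              /* the C analogue of an object over-release */

    /* 'name' is now dangling. Dereferencing it is undefined behavior;
       on the iPhone this is the kind of access that surfaces as EXC_BAD_ACCESS. */
    printf("%s\n", name);    /* may crash, may print garbage */
    return 0;
}
```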

And let’s think about languages for a minute. There has been a lot of innovation around web programming languages. Ruby and Python have (mercifully) replaced Perl and PHP in a lot of the conventional wisdom about web programming languages, while the Java Virtual Machine provides a hothouse for new language experimentation, with Clojure and Scala gaining some very passionate adherents.

And yet, none of these seem to have penetrated desktop or device programming to any significant degree. If the code is user-local, then it’s almost certainly running in some curly-brace language that’s not far from C. On iOS, Obj-C/C/C++ is the only provided and only practical choice. On the Mac, Ruby and Python bindings to Cocoa were provided in Leopard, but templates for projects using those languages no longer appear in Xcode’s “New Project” dialog in Snow Leopard. And while I don’t know Windows, it does seem like Visual Basic has finally died off, replaced by C#, which seems like C++ with the pointers taken out (i.e., Java with a somewhat different syntax).

So what’s the difference? It seems to me that the kinds of tasks relevant to each kind of programming are more different than is generally acknowledged. In 2005’s Beyond Java, Bruce Tate argued that a primary task of web development was doing the same thing over and over again: connecting a database to a web page. You can snipe at the specifics, but he’s got a point: you say “putting an item in the user’s cart”, I say “writing a row to the orders table”.

If you buy this, then you can see how web developers would flock to new languages that make their common tasks easier — iterating over collections of fairly rich objects in novel and interesting ways has lots of payoff for parsing tree structures, order histories, object dependencies and so on.

But how much do these techniques help you set up a 3D scene graph, or perform signal processing on audio data captured from the mic? The things that make Scala and Ruby so pleasant for web developers may not make much of a difference in an iOS development scenario.

The opposite is also true, of course. I’m thrilled by the appearance of the Accelerate framework in iOS 4, and Core MIDI in 4.2… but if I were writing a webapp, a hardware-accelerated Fast Fourier Transform function likely wouldn’t do me a lot of good.

I’m surprised how much math I do when I’m programming for the device. And not just for signal processing. Road Tip involved an insane amount of trigonometry, as do a lot of excursions into Core Animation.
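
For a sense of what that trigonometry looks like in practice, here’s a minimal sketch; the coordinates and the item count are made up, but laying things out around a circle or arc is the bread-and-butter case.

```c
#include <math.h>
#include <stdio.h>

/* Hypothetical example: place 12 items evenly around a circle, the kind
   of arithmetic that comes up constantly in Core Animation work. */
int main(void) {
    const double centerX = 160.0, centerY = 240.0;   /* made-up screen center */
    const double radius  = 100.0;
    const int    count   = 12;

    for (int i = 0; i < count; i++) {
        double angle = (2.0 * M_PI * i) / count;     /* radians, not degrees */
        double x = centerX + radius * cos(angle);
        double y = centerY + radius * sin(angle);
        printf("item %2d -> (%6.1f, %6.1f)\n", i, x, y);
    }
    return 0;
}
```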

The different needs of the different platforms create different programmers. Here’s a simple test: which have you used more in the last year, regular expressions or trigonometry? If it’s the former, you’re probably a web developer; the latter, device or desktop. (If you’ve used neither, you’re a newbie, and if you’ve used both, then you’re doing something cool that I probably would like to know about.)

Computer Science started as a branch of mathematics… that’s the whole “compute” part of it, after all. But times change; a CS grad today may well never need to use a natural logarithm in his or her work. Somebody — possibly Brenda Laurel in Computers as Theatre (though I couldn’t find it in there) — noted that the French word for computer, ordinateur, is a more accurate name today, being derived from the root word for “organize” rather than “compute”.

Another point I’d like to make about webapps is that they’ve sort of dominated thinking about our field for the last few years. The kind of people you see writing for O’Reilly Radar are almost always thinking from a network point of view, and you see a lot of people take the position that devices are useful only as a means of getting to the network. Steve Ballmer said this a year ago:

Let’s face it, the Internet was designed for the PC. The Internet is not designed for the iPhone. That’s why they’ve got 75,000 applications — they’re all trying to make the Internet look decent on the iPhone.

Obviously I disagree, but I bring it up not for easy potshots but to bolster my claim that there’s a lot of thinking out there that it’s all about the network, and only about the network.

And when you consider a speaker’s biases regarding the network versus devices operating independently, you can notice some other interesting biases. To wit: I’ve noticed that enthusiasm for open-source software is significantly correlated with working on webapps. The most passionate OSS advocates I know — the ones who literally say that all software that matters will and must eventually go open-source (yes, I once sat next to someone who said exactly that) — are webapp developers. Device and desktop developers tend to have more of a nuanced view of OSS… for me, it’s a mix of “I can take it or leave it” and “what have you done for me lately?” And for non-programmers, OSS is more or less irrelevant, which is probably a bad sign: OSS’ arrival was heralded by big talk of transparency and quality (because so many eyes would be on the code), yet there’s no sense that end-users go out of their way to use OSS for those reasons, meaning the reasons either don’t matter or aren’t true.

It makes sense that webapp developers would be eager to embrace OSS: it’s not their ox that’s being gored. Since webapps generally provide a service, not a product, it’s convenient to use OSS to deliver that service. Webapp developers can loudly proclaim the merits of giving away your stuff for free, because they’re not put in the position of having to do so. It’s not like you can go to code.google.com and check out the source to AdWords, since no license used by Google requires them to make it available. Desktop and device developers may well be less sanguine about the prospect, as they generally deliver a software product, not a service, and thus don’t generally have a straightforward means of reconciling open source and getting paid for their work. Some of the OSS advocates draw on webapp-ish counter-arguments — “sell ads!”, “sell t-shirts!”, “monetize your reputation” (whatever the hell that means) — but it’s hard to see a strategy that really works. Java creator James Gosling nails it:

One of the key pieces of the linux ideology that has been a huge part of the problem is the focus on “free”. In extreme corners of the community, software developers are supposed to be feeding themselves by doing day jobs, and writing software at night. Often, employers sponsor open-source work, but it’s not enough and sometimes has a conflict-of-interest. In the enterprise world, there is an economic model: service and support. On the desktop side, there is no similar economic model: desktop software is a labor of love.

A lot of the true believers disagree with him in the comments. Then again, in searching the 51 followups, I don’t see any of the gainsayers beginning their post with “I am a desktop developer, and…”

So I think it’s going to be interesting to see how the industry’s consensus and common wisdom change in the next few years, as more developers move completely out of webapps and onto the device, the desktop, and whatever we’re going to call the things in between (like the iPad). That the open source zealots need to take a hint about their precarious relevance is only the tip of the iceberg. There’s lots more in play now.

Adventures in Qualitude

Current project is in a crunch and needs to be submitted to Apple today. This means that my iDevBlogADay for this week will be short. It also means that I’ve been testing a lot of code, and working through bug reports for the last week or so. And there’s a lot to be said about good and bad QA practices.

In the iOS world, we tend to see small developers, and a lot of work-for-hire directly for clients, as opposed to the more traditional model of large engineering staffs. As a result, you may not have a real QA team. This can be a mixed blessing. Right now, I’m getting bug reports from the client, who has an acute awareness of how the application will be used in real life. This is a huge advantage, because all too often, professional testers are hired late in the development cycle and come in when things are nearly done and ready to test. The problem is that if the nature of the application is non-intuitive, the testers will need time to develop a grasp of the business problem the app solves.

What they do in the meantime, inevitably, is pound on the user interface. Along with a lot of arguments about aesthetics, you tend to get lots of bug reports that start with the phrase “If I click the buttons really fast…” These bugs are rarely 100% reproducible, but they are 100% annoying. Here’s the thing: I’m not happy if, say, you can pound really fast on my app and get it into some visually inconsistent state. That’s not right. But if the only way to get there is to use the application in a gratuitously non-useful way, does ending up in a non-useful state really matter?

The real sin of this kind of testing approach is that the important bugs — whether the business problem is really solved, and how well — don’t get discovered until much later. I had one project where the testers were happy to repeatedly re-open a bug involving a progress bar that incremented in stages — because it was waiting on indeterminately long responses from a web service — rather than moving at a consistent rate. In the meantime, they missed catastrophic bugs like a metadata update that, if sent to clients, would delete all their files in our application.

It sounds like I’m ripping on QA, and there’s often an adversarial relationship between testers and developers. But that’s not necessarily the way things should be. Good QA is one of the best things a developer can hope for. People who understand the purpose of your code, and can communicate where the code does and doesn’t achieve its goals, are actually pretty rare. In one of the best QA relationships I’ve ever been in, we actually had QA enforcing the development process and doing the build engineering (make files, source control, etc.). This is actually a huge relief, because it lets the engineers concentrate on code.

One thing I’ve learned from startups is that they tend to hire salespeople too early, and QA too late. It’s easy to understand why you’d do that — you want to get revenue as early as possible, whereas you wouldn’t seem to need QA until the product is almost done. But it turns out this is backwards — you don’t have anything to sell until later (and for some kinds of startups, salespeople can’t get their foot in the door and the management team does all the selling anyways). And a well-informed QA staff, working from the beginning alongside the programmers, gives you a better chance of having something worth shipping.

Of course, like I said in the beginning, a lot of us in iOS land don’t even have a QA staff. Still, some of the same rules apply: your clients are likely your de facto testers, and provided they understand what a work-in-progress is like, they can get you quality feedback early.

It’s like “Glee” with coding instead of singing

Like a lot of old programmers — “when I was your age, we used teletypes, and line numbers, and couldn’t rely on the backspace key” and so on — I sometimes wonder how different it is growing up as a young computer programmer today. Back in the 80’s we had BBSs, but no public internet… a smattering of computer books, but no O’Reilly… and computer science as an academic discipline, but further removed from what you’d actually do with what you’d learned.

Developers my age grew up on some kind of included programming environment. Prior to the Mac, every computer came with some kind of BASIC, none of which had much to do with each other beyond PRINT, GOTO, and maybe GOSUB. After about the mid-80’s, programming became more specialized, and “real” developers would get software development kits to write “real” applications, usually in some variant of C or another curly-brace language (C++, C#, Java, etc.).

But it’s not like most people start with the formal tools and the hard stuff, right? In the 80’s and 90’s, there were clearly a lot of young people who picked up programming by way of HyperCard and other scripting environments. But those have largely disappeared too.

So what do young people use? When I was editing for O’Reilly’s ONJava website, our annual poll of readers revealed that our under-18 readership was effectively zero, which meant that young people either weren’t reading our site, or weren’t programming in Java. There has to be some Java programming going on at that age — it is the language for the Advanced Placement curriculum in American high schools, after all — but there’s not a lot of other evidence of widespread Java coding by the pre-collegiate set.

I’ve long assumed that where young people really get their start today is in the most interesting and most complete programming environment provided on every desktop computer: the web browser. I don’t want to come off like a JavaScript fanboy — my feelings about it are deeply mixed — but the fact remains that it is freely and widely available, and delivers interesting results quickly. Whereas 80’s kids would write little graphics programs in Applesoft BASIC or the obligatory 10 PRINT "CHRIS IS GREAT" 20 GOTO 10, these same kinds of early programming experiences are probably now being performed with the <canvas> tag and document.write(), respectively. In fact, the formal division of DOM, CSS, and JavaScript may lead the young programmer to a model-view-controller mindset a lot sooner than was practical in your local flavor of BASIC.

The other difference today is that developers are much better connected, thanks to the internet. We didn’t used to have that, so the programmers you knew were generally the ones you went to school with. I was lucky in this respect in that the guys in the class above me were a) super smart, and b) very willing to share. So, 25 years later, this will have to do as a belated thank you to Jeff Dauber, Dean Drako, Drew Shell, Ed Anderson, Jeff Sorenson, and the rest of the team.

Did I say “team”? Yeah, this is the other thing we used to do. We had a formal computer club as an activity, and we participated in two forms of programming contests. The first was the American Computer Science League — which I’m relieved to see still exists — which coordinated a nation-wide high school computer science discovery and competition program, based on written exams and proctored programming contests. The curriculum has surely changed, but at least in the 80’s, it was heavily math-based, and required us to learn non-obvious topics like LISP programming and hexadecimal arithmetic, both of which served me well later on.

Our school also participated in a monthly series of programming contests with other schools in the suburban Detroit area. Basically it worked like this: each team would bring one Apple II and four team members and be assigned to a classroom. At the start of the competition, each team would be given 2-4 programming assignments, with some sample data and correct output. We’d then be on the clock to figure out the problems and write up programs, which would then be submitted on floppy to the teachers running the contest. Each finished program scored 100 points, minus 10 points for every submission that failed with the secret test data, and minus 1 point for every 10 minutes that elapsed.
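
Just for fun, that scoring rule as a function; whether scores could actually go negative I no longer remember, so the floor at zero is a guess.

```c
#include <stdio.h>

/* Contest scoring as described above: 100 points per finished program,
   minus 10 per failed submission, minus 1 per 10 elapsed minutes. */
static int contestScore(int failedSubmissions, int minutesElapsed) {
    int score = 100 - (10 * failedSubmissions) - (minutesElapsed / 10);
    return score > 0 ? score : 0;   /* assumed floor; the real rule may have differed */
}

int main(void) {
    /* e.g., one failed submission, solved at the 47-minute mark: 100 - 10 - 4 = 86 */
    printf("%d\n", contestScore(1, 47));
    return 0;
}
```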

I have no idea if young people still do this kind of thing, but it was awesome. It was social, it was practical, it was competitive… and it ended with pizza from Hungry Howie’s, so that’s always a win.

Maybe we don’t need these kinds of experiences for young programmers today. Maybe a contrived contest is irrelevant when a young person can compete with the rest of the world by writing an app and putting it on the App Store, or by putting up a web page with all manner of JavaScript trickery and bling. Still, it’s dangerous to get too tied to the concretes of today, the specifics of CSS animations and App Store code-signing misery. Early academic exercises like learning to count in hex, even if it’s just to score points on a quiz, will likely pay off later.

A Supposedly Fun Thing I’ll Never Do Again

So, I spent the first five months of this year on a grueling, panic-driven-development project on Mac OS X. As my longest single Mac engagement, it couldn’t help but wear down my enthusiasm for the platform.

It doesn’t help that what I was working on was well into the edge-case realm: our stuff needed to silently update itself in the background, which gets into the management of daemons, starting and stopping them at will. This isn’t bad in theory, but getting it to work consistently across 10.4 through 10.6 is grueling. An always-on daemon sets a RunAtLoad property in its /Library/LaunchDaemons plist in order to come up immediately and stay up. Uninstalling and/or replacing such a daemon is tricky, as launchd keeps track of what daemons it has loaded and thinks are still running. The worst thing to do is to just kill the process… instead, you need to use launchctl to load or unload the daemon as needed. And, as a root-owned process, you need to perform this launchctl as root. Oh, and you’d better not delete the plist before you unload the daemon, since launchctl takes the path to the plist, so doing things in the wrong order can leave you with rogue daemons you can’t unload. And if you don’t unload the old daemon, a new launchctl load does nothing, as launchd thinks the original daemon is still running.

Now throw in some user agents. These are like daemons, except that they run for each user at login, and are owned by the user. So to uninstall or update, you need to launchctl unload as each user. Which is possible with sudo -u username in a shell script, unless you’re on 10.4 and you get the user list from /usr/bin/users, as the 10.4 version truncates user names to 8 characters, breaking the sudo.

Oh, and who’s doing these unloads? A script in an .mpkg installer. Which is a whole ‘nother bundle of fun, given that Package Maker, Apple’s utility for creating .pkg and .mpkg installers, is fabulously broken. Package Maker crashes frequently, doesn’t consistently persist settings for file ownership (especially when the project is under source control), and creates broken installers when invoked by the command-line utility /usr/bin/packagemaker, making it utterly unsuitable for use in Makefiles or other automated build processes. IMHO, Package Maker is as big an ass-muffin as I’ve ever seen come out of Cupertino, at least since that quickly-pulled iTunes 2 release that reformatted the host drive.

Working with all these taped-together technologies — desperately trying shell script voodoo in an .mpkg post-install step to make things right — eventually wore me out. And granted, this is an edge case: most Mac apps can be distributed as app bundles without installers, and most installers can require the user to restart if it’s making radical changes like installing or updating daemons (not an option for me because the installer runs in the background, called by another daemon). Still, it’s enough to make you long for the “curated”, locked-down walled garden of the App Store, whose distribution and update system really does work remarkably well.

With the Mac all but expelled from this week’s WWDC, some pundits are happy to declare the death of the Mac. That’s a silly overstatement, but it is sensible to accept that after a decade of improvement and marketing, the Mac is a mature platform, likely secure at its 10%-or-so market share. What else would we want Apple to do with the Mac? I’d like to see things fixed and cleaned up, dubious legacies cleared out and things set right. And Apple has done that. Problem is, the result is not Mac OS X 10.7, it’s iOS 4.

But then again, if the iPad replaces the laptop for some number of users, this is a brilliant way to grow Apple’s share — nibbling away at traditional computers cannibalizes the Mac somewhat, but chows away mostly at Windows.

Suffice to say I’m very happy to be re-orienting myself to iPhone/iPad/iPod-touch development. It’s got the feel of a somewhat clean start, based on proven technology, but not burdened by old breakage. Now I’ve got some catching up to do to adapt to all the changes in iOS 4.

With great power comes… not a lot of guidance

Haven’t blogged a lot here about technical matters since leaving java.net and going entirely indie. Right now, it’s all about the writing: I’m working on another Core Audio article, to be immediately followed by an intense push to rework the Prags’ iPhone book, such that it will launch as an iPhone SDK 3.0 book. Rather than just tacking on a “new in 3.0” chapter, we’re trying to make sure that the entire book is geared to the 3.0 developer. How serious are we about this? Well, as a first step, I’m throwing out an entire chapter, the one that was hardest to write, and much of another, because 3.0 provides simpler alternatives for the majority of developers. Don’t worry, there’s a whole new chapter coming on that topic, and another on a closely related topic… ugh, this is going to make a lot more sense when the NDA drops and I can point to APIs directly.

There’s a second book coming, approved by the prags but currently not more than an outline and some good vibes. I think I’ll be doing my first editorial meeting about it with Daniel later today. More on that when the time comes, but here’s a teasing description: it’s an iPhone book that’s widely needed, though few people realize it yet.

Time permitting, I’d like to get some real work done on a podcast studio app for iPhone, something I mentioned back on that Late Night Cocoa interview last year. There’s one feature in 3.0 that could make this a jaw-dropper, but it’s going to take a lot of experimentation (and, quite probably, the purchase of one or more new devices).

Right now, though, I’m really stuck in the mire of Core Audio, for the second of a series of articles on the topic. I’d been racing along just fine, converting an audio synthesis example from the first article into a version that uses the AUGraph and MultiChannelMixer, but now I’m suffering trying to incorporate input from the microphone. Reasoning by analogy to the Mac version of Core Audio, the CAPlayThrough example suggests that samples must be manually copied from an input callback and buffered until a render callback requests them, but a tantalizing hint from Bill made me think that such low-level drudgery was unnecessary. Unfortunately, I cannot make it work, and after several days, I am now reverting to moving the samples between render callbacks myself.
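
For the record, the approach I’m falling back to looks roughly like this sketch: in the input callback registered on the RemoteIO unit (via kAudioOutputUnitProperty_SetInputCallback), call AudioUnitRender() against input bus 1 to pull the captured samples into your own AudioBufferList, then stash them somewhere the mixer’s render callback can copy them from. The struct and the buffering strategy here are placeholders, not working code from the article.

```c
#include <AudioUnit/AudioUnit.h>

/* Hypothetical state shared between the input callback and the mixer's
   render callback. A real version needs a thread-safe ring buffer. */
typedef struct {
    AudioUnit        rioUnit;         /* the RemoteIO unit */
    AudioBufferList *capturedBuffer;  /* pre-allocated to a worst-case size */
} PassThroughState;

/* Input callback: RemoteIO tells us samples are available on bus 1. */
static OSStatus InputAvailableCallback(void *inRefCon,
                                       AudioUnitRenderActionFlags *ioActionFlags,
                                       const AudioTimeStamp *inTimeStamp,
                                       UInt32 inBusNumber,
                                       UInt32 inNumberFrames,
                                       AudioBufferList *ioData) {
    PassThroughState *state = (PassThroughState *)inRefCon;

    /* Pull the captured samples from RemoteIO's input element into our own
       buffer list; ioData is NULL for input callbacks, so we supply the buffers. */
    OSStatus err = AudioUnitRender(state->rioUnit,
                                   ioActionFlags,
                                   inTimeStamp,
                                   1,                     /* bus 1 = input from the mic */
                                   inNumberFrames,
                                   state->capturedBuffer);
    if (err != noErr) return err;

    /* ... enqueue state->capturedBuffer's samples so the MultiChannelMixer's
       render callback can copy them out when it fires ... */
    return noErr;
}
```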

Core Audio is hard. Jens Alfke nailed it in a reply on coreaudio-api:

“Easy” and “CoreAudio” can’t be used in the same sentence. 😛 CoreAudio is very powerful, very complex, and under-documented. Be prepared for a steep learning curve, APIs with millions of tiny little pieces, and puzzling things out from sample code rather than reading high-level documentation.

The fact that it’s underdocumented is, of course, why there are opportunities for people like me to write about it. Drudgery and dozens of failed experiments aside, that is what makes it interesting: it’s a fascinating problem domain, with powerful tools that do awesome stuff, if you can just get everything lined up just right.

One of the things that makes it hard to learn is that, like a lot of Apple’s C APIs, it doesn’t make its functionality obvious through function names. To keep things flexible and expandable, much of the functionality comes from components that you create dynamically (to wit, you say “give me a component of type output, subtype RemoteIO, made by Apple” to get the AudioUnit that abstracts the audio hardware). These largely untyped components don’t have feature-specific function names, and accomplish much of their functionality by getting and setting properties. As a result, the API documentation doesn’t really tell you much of what’s possible: all you see is functions for discovering components and setting their properties, not how these combine to process audio. The real secrets, and the potential, are only revealed by working through what documentation is available (much of it for the Mac, which is about 90% applicable to the iPhone), and looking for guidance from sample code. Oh, and Googling… though sometimes you find articles that are more wrong than right.
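
Here’s what that “give me a component of type output, subtype RemoteIO, made by Apple” dance looks like in code; error handling is omitted, and everything after this point really is done through property calls.

```c
#include <AudioUnit/AudioUnit.h>

/* Describe and instantiate the output unit that wraps the device's audio
   hardware ("RemoteIO"). Error handling omitted for brevity. */
static AudioUnit CreateRemoteIOUnit(void) {
    AudioComponentDescription desc = {0};
    desc.componentType         = kAudioUnitType_Output;
    desc.componentSubType      = kAudioUnitSubType_RemoteIO;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    /* Find a matching component, then create an instance of it. From here on,
       nearly everything is done by getting and setting properties on the unit. */
    AudioComponent comp = AudioComponentFindNext(NULL, &desc);
    AudioUnit rioUnit = NULL;
    AudioComponentInstanceNew(comp, &rioUnit);
    return rioUnit;
}
```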

Core Audio is what was on my mind while writing my penultimate java.net blog about insularity. Despite the struggles of Core Audio, it’s intellectually fascinating and rewarding (like QuickTime was for me a few years prior), and that’s in sharp contrast to the Java pathologies of reinventing the same things over and over again (holy crap, how many app server projects do we need?), and getting off on syntactic sugar (closures, scripting languages on the VM), all of which increasingly seem to me like an intellectual circle jerk. What’s fundamentally new that you can do in Java? For a media guy like me, there’s nothing. JavaFX’s media API is far shallower than the abandoned Java Media Framework, and you have to learn a new language to use it. Just so you can use an API where the only method in the Track class is name.

By comparison, the iPhone OS 3.0 Sneak Peek presentation not only announced a peer-to-peer voice chat API, but indicated that the Audio Unit used by that feature will be usable directly. In other words, it’s theoretically possible to incorporate the voice chat system with all the other powerful functionality Core Audio provides. The potential is enormous.

And the only problem is figuring out how to connect AUs in a way that doesn’t throw back one of the various bad -10xxx response codes that seem virtually arbitrary at times (but often do make sense later if you manage to work through them).
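
My coping mechanism is a little checking helper, adapted from a pattern you see in Core Audio sample code (this particular variant is mine, not an Apple API): it prints the OSStatus as a four-character code when it’s printable, which makes those -10xxx values a lot easier to look up.

```c
#include <stdio.h>
#include <stdlib.h>
#include <ctype.h>
#include <CoreFoundation/CoreFoundation.h>

/* Print a failing OSStatus either as a four-character code (if printable)
   or as a plain integer, then bail. Purely a debugging convenience. */
static void CheckError(OSStatus error, const char *operation) {
    if (error == noErr) return;

    char code[20];
    /* Interpret the 32-bit value as a big-endian four-char code. */
    *(UInt32 *)(code + 1) = CFSwapInt32HostToBig((UInt32)error);
    if (isprint((unsigned char)code[1]) && isprint((unsigned char)code[2]) &&
        isprint((unsigned char)code[3]) && isprint((unsigned char)code[4])) {
        code[0] = code[5] = '\'';
        code[6] = '\0';
    } else {
        snprintf(code, sizeof(code), "%d", (int)error);
    }
    fprintf(stderr, "Error: %s (%s)\n", operation, code);
    exit(1);
}
```

Wrap every call in it, e.g. CheckError(AudioUnitInitialize(rioUnit), "couldn't initialize RemoteIO"), and at least you know which call blew up.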

And with that, I’m back to figuring out what’s wrong with my code to pass the AudioBufferList from RemoteIO’s input callback to the MultiChannelMixer’s render callback.

The iPhone recruiting comedy begins

Companies are staffing up iPhone projects, and as always happens with new technologies, they’re requiring levels of expertise that are highly implausible, if not downright impossible, for such new stuff.

Here’s a recent example:

1. 3-5 years previous experience developing application using Apple’s implementation of Objective C.
2. Provide at least two business applications that were developed for the Mac using Objective C.
3. At least 1 year experience development experience on the iPhone device.
4. Provide either a prototype or delivered iPhone resident application that displays data pulled from a remote Java class running on either WEBLogic or WebShere app server leveraging a backend Oracle DB.

Um, yeah. Thoughts on these points:

  1. What is this fixation that people have with Objective-C? The relevant skill-set that can be transferred from the Mac is Cocoa and the other essential frameworks. The syntax of Obj-C can be mastered in a day by any professional developer. If you’re looking for experience, then what takes time to learn is finding your way around the various frameworks, and developing a feel for the Cocoa way of doing things (design patterns like notification and KVC/KVO, etc.). The one key Obj-C thing you have to learn is reference-counting memory management, and OS X developers can now opt out of that via garbage collection.

  2. Narrowing your focus to people who’ve delivered two shipping business apps for OS X probably gets you into the low thousands, if not hundreds, of eligible developers, world-wide. This one is just silly.

  3. The iPhone SDK was released March 6, 2008, meaning Friday is the first anniversary. Until then, the only people who can answer yes to this requirement are jailbreakers or liars.

  4. You’ve heard of web apps, right? They generally exchange data in HTML, XML, or JSON (or just with the HTTP request itself), so it generally doesn’t matter what language, app server, or database is running on the backend. In fact, I think most architects today would consider it a mistake for a client to know, care, or depend on what was actually running on the backend.

So, um, good luck with the recruiting, guys. If wishes were horses…