Life Beyond The Browser

“Client is looking for someone who has developed min. of 1 iPhone/iPad app.  It must be in the App Store no exceptions.  If the iPhone app is a game, the client is not interested in seeing them.” OK, whatever… I’ll accept that a game isn’t necessarily a useful prerequisite. But then this e-mail went on: “The client is also not interested in someone who comes from a web background or any other unrelated background and decided to start developing iPhone Apps.”

Wow. OK, what the hell happened there? Surely there’s a story behind that, one that probably involved screaming, tears, and a fair amount of wasted money. But that’s not what I’m interested in today.

What surprised me about this was the open contempt for web developers, at least those who have tried switching over to iOS development. While I don’t think we’re going to see many of these “web developers need not apply” posts, I’m still amazed to have seen one at all.

Because really, for the last ten years or so, it’s all been about the web. Most of the technological innovation in the last decade arrived in the confines of the browser window, and we have been promised a number of times that everything would eventually move onto the web (or, in a recent twist, into the cloud).

But this hasn’t fully panned out, has it? iOS has been a strong pull in the other direction, and not because Apple wanted it that way. When the iPhone was introduced and the development community was given webapps as the only third-party development platform, the community’s reaction was to jailbreak the device and reverse-engineer iPhone 1.0’s APIs.

And as people have come over, they’ve discovered that things are different here. While the ’90s saw many desktop developers move to the web, the ’10s are seeing a significant reverse migration. In the forums for our iPhone book, Bill and I found the most consistently flustered readers were the transplanted web developers (and to a lesser degree, the Flash designers and developers).

Part of this came down to language issues. Like all the early iPhone books, we had the “we assume you have some exposure to a C-based curly-brace language” proviso in the front. Unfailingly, what tripped people up were the lurking pointer issues that Objective-C makes no attempt to hide. EXC_BAD_ACCESS is exactly what it says it is: an attempt to access a location in memory you have no right to touch, and almost always the result of following a busted pointer (which in turn often comes from an object over-release). But if you don’t know what a pointer is, this might as well be in Greek.
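To make the failure mode concrete, here’s a minimal sketch in manual retain/release Objective-C (the array and its contents are arbitrary placeholders, not from any real project):

    #import <Foundation/Foundation.h>

    int main(void) {
        // alloc gives us an object with a retain count of 1
        NSMutableArray *items = [[NSMutableArray alloc] init];
        [items addObject:@"first"];
        [items release];            // balances the alloc; the array is
                                    // deallocated, but the pointer survives
        [items addObject:@"oops"];  // a message through the now-dangling
                                    // pointer: EXC_BAD_ACCESS, if you're lucky
        return 0;
    }

Nothing in the syntax flags that last line as dangerous; you have to know that items now points at freed memory, which is exactly the mental model a web developer has never had to build.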

And let’s think about languages for a minute. There has been a lot of innovation in web programming languages. Ruby and Python have (mercifully) replaced Perl and PHP in a lot of the conventional wisdom about web programming, while the Java Virtual Machine provides a hothouse for new language experimentation, with Clojure and Scala gaining some very passionate adherents.

And yet, none of these seem to have penetrated desktop or device programming to any significant degree. If the code is user-local, then it’s almost certainly running in some curly-brace language that’s not far from C. On iOS, Obj-C/C/C++ is the only provided and only practical choice. On the Mac, Ruby and Python bindings to Cocoa were provided in Leopard, but templates for projects using these languages no longer appear in Xcode’s “New Project” dialog in Snow Leopard. And while I don’t know Windows, it does seem like Visual Basic has finally died off, replaced by C#, which seems like C++ with the pointers taken out (i.e., Java with a somewhat different syntax).

So what’s the difference? It seems to me that the kinds of tasks relevant to each kind of programming are more different than is generally acknowledged. In 2005’s Beyond Java, Bruce Tate argued that the primary task of web development was doing the same thing over and over again: connecting a database to a web page. You can snipe at the specifics, but he’s got a point: you say “putting an item in the user’s cart”, I say “writing a row to the orders table”.

If you buy this, then you can see how web developers would flock to new languages that make their common tasks easier — iterating over collections of fairly rich objects in novel and interesting ways has lots of payoff for parsing tree structures, order histories, object dependencies and so on.

But how much do these techniques help you set up a 3D scene graph, or perform signal processing on audio data captured from the mic? The things that make Scala and Ruby so pleasant for web developers may not make much of a difference in an iOS development scenario.

The opposite is also true, of course. I’m thrilled by the appearance of the Accelerate framework in iOS 4, and Core MIDI in 4.2… but if I were writing a webapp, a hardware-accelerated Fast Fourier Transform function likely wouldn’t do me a lot of good.
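For a sense of what that kind of work looks like, here’s a minimal sketch of a forward FFT over a buffer of mic samples using Accelerate’s vDSP calls (the function name and the fixed 1024-sample buffer are illustrative assumptions, not anything from a shipping app):

    #include <Accelerate/Accelerate.h>

    // Illustrative only: run a forward, in-place, real-to-complex FFT
    // on 1024 time-domain samples, say from the mic.
    void analyze_samples(float samples[1024]) {
        const vDSP_Length log2n = 10;   // 2^10 = 1024 samples
        FFTSetup setup = vDSP_create_fftsetup(log2n, kFFTRadix2);

        // vDSP wants real input packed into split-complex form:
        // even-indexed samples in realp, odd-indexed in imagp.
        float real[512], imag[512];
        DSPSplitComplex split = { real, imag };
        vDSP_ctoz((const DSPComplex *)samples, 2, &split, 1, 512);

        // After this call, split holds the packed frequency-domain data.
        vDSP_fft_zrip(setup, &split, 1, log2n, kFFTDirection_Forward);

        vDSP_destroy_fftsetup(setup);
    }

Not a line of that would be recognizable to someone whose daily work is templates and ORM queries, and that’s rather the point.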

I’m surprised how much math I do when I’m programming for the device. And not just for signal processing. Road Tip involved an insane amount of trigonometry, as do a lot of excursions into Core Animation.
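As a small example of the kind of trigonometry that keeps coming up (the helper function, the layer, and the radius and count are all made up for illustration):

    #import <QuartzCore/QuartzCore.h>
    #include <math.h>

    // Hypothetical helper: lay a ring of dots out around a center point.
    // The sin/cos pair that turns an angle and radius into a position is
    // the sort of thing device code does constantly.
    void layoutRing(CALayer *parentLayer) {
        CGPoint center = CGPointMake(160.0f, 240.0f);
        CGFloat radius = 100.0f;
        NSUInteger count = 12;
        for (NSUInteger i = 0; i < count; i++) {
            CGFloat angle = (CGFloat)(2.0 * M_PI * i / count);  // radians
            CALayer *dot = [CALayer layer];
            dot.bounds = CGRectMake(0.0f, 0.0f, 20.0f, 20.0f);
            dot.position = CGPointMake(center.x + radius * cosf(angle),
                                       center.y + radius * sinf(angle));
            [parentLayer addSublayer:dot];
        }
    }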

The different needs of the different platforms create different programmers. Here’s a simple test: which have you used more in the last year, regular expressions or trigonometry? If it’s the former, you’re probably a web developer; the latter, device or desktop. (If you’ve used neither, you’re a newbie, and if you’ve used both, then you’re doing something cool that I probably would like to know about.)

Computer Science started as a branch of mathematics… that’s the whole “compute” part of it, after all. But times change; a CS grad today may well never need to use a natural logarithm in his or her work. Somebody — possibly Brenda Laurel in Computers as Theatre (though I couldn’t find it in there) — noted that the French word for computer, ordinateur, is a more accurate name today, being derived from a root meaning “organize” rather than “compute”.

Another point I’d like to make about webapps is that they’ve sort of dominated thinking about our field for the last few years. The people you see writing for O’Reilly Radar are almost always thinking from a network point of view, and you see a lot of people take the position that devices are useful only as a means of getting to the network. Steve Ballmer said this a year ago:

Let’s face it, the Internet was designed for the PC. The Internet is not designed for the iPhone. That’s why they’ve got 75,000 applications — they’re all trying to make the Internet look decent on the iPhone.

Obviously I disagree, but I bring it up not for easy potshots but to bolster my claim that there’s a lot of thinking out there that it’s all about the network, and only about the network.

And when you consider a speaker’s biases regarding the network versus devices operating independently, you can notice some other interesting biases. To wit: I’ve noticed enthusiasm for open-source software is significantly correlated with working on webapps. The most passionate OSS advocates I know — the ones who literally say that all software that matters will and must eventually go open-source (yes, I once sat next to someone who said exactly that) — are webapp developers. Device and desktop developers tend to have more of a nuanced view of OSS; for me, it’s a mix of “I can take it or leave it” and “what have you done for me lately?” And for non-programmers, OSS is more or less irrelevant, which is probably a bad sign. OSS’s arrival was heralded by big talk of transparency and quality (because so many eyes would be on the code), yet there’s no sense that end-users go out of their way to use OSS for those reasons, which suggests those benefits either don’t matter to users or aren’t real.

It makes sense that webapp developers would be eager to embrace OSS: it’s not their ox that’s being gored. Since webapps generally provide a service, not a product, it’s convenient to use OSS to deliver that service. Webapp developers can loudly proclaim the merits of giving away your stuff for free, because they’re not put in the position of having to do so. It’s not like you can go to code.google.com and check out the source to AdWords, since no license used by Google requires them to make it available. Desktop and device developers may well be less sanguine about the prospect, as they generally deliver a software product, not a service, and thus don’t generally have a straightforward means of reconciling open source and getting paid for their work. Some of the OSS advocates draw on webapp-ish counter-arguments — “sell ads!”, “sell t-shirts!”, “monetize your reputation” (whatever the hell that means) — but it’s hard to see a strategy that really works. Java creator James Gosling nails it:

One of the key pieces of the linux ideology that has been a huge part of the problem is the focus on “free”. In extreme corners of the community, software developers are supposed to be feeding themselves by doing day jobs, and writing software at night. Often, employers sponsor open-source work, but it’s not enough and sometimes has a conflict-of-interest. In the enterprise world, there is an economic model: service and support. On the desktop side, there is no similar economic model: desktop software is a labor of love.

A lot of the true believers disagree with him in the comments. Then again, in searching the 51 followups, I don’t see any of the gainsayers beginning their posts with “I am a desktop developer, and…”

So I think it’s going to be interesting to see how the industry’s consensus and common wisdom change in the next few years, as more developers move completely out of webapps and onto the device, the desktop, and whatever we’re going to call the things in between (like the iPad). That the open-source zealots need to take a hint about their precarious relevance is only the tip of the iceberg. There’s lots more in play now.
