
Archives for: opensource

Fleeting Moments of Android Envy

They don’t happen often, but sometimes I do get envious of things over on Android, though I don’t imagine for an instant that there aren’t Android developers thinking the same of iOS (heaven help you if you try to do media on that platform). A few recent examples…

Continue Reading >>

Thing Is, APIs *Should* Be Copyrightable

A bunch of my friends, particularly on the F/OSS and Android side, are issuing a new call to the barricades to make the case that APIs should not be copyrightable. In particular, the EFF wants developers to send in stories of how they’ve reimplemented APIs for reasons of competition, interoperability, innovation, etc. The issue is heating up again because a three-judge Federal Circuit panel is going to revisit Judge Alsup’s ruling in Oracle v. Google, where the jury found that Google willfully infringed Oracle’s copyright on the Java APIs, but the judge found that APIs aren’t copyrightable in the first place, rendering the jury decision moot.

This isn’t the slam dunk some people think it is. During the trial, Florian Mueller pulled up relevant case law to show that copyright law has traditionally considered the design of computer code (and, implicitly, its public interfaces) to be protected.

Furthermore, the case against copyrightability of APIs strikes me as quite weak. If software deserves copyright at all — and there are good arguments against it, but that’s not what we’re talking about here — then drawing the line at published interfaces doesn’t hold up.

There are basically two arguments I’ve heard against API copyrightability. Here’s why I think they’re bunk:

Continue Reading >>

An Encoder Is Not A State Machine

I’m pleasantly surprised that Google’s removal of H.264 from Chrome in favor of WebM has been greeted with widespread skepticism. You’d think that removing popular and important functionality from a shipping product would be met with scorn, but when Google wraps itself with the “open” buzzword, they often seem to get a pass.

Ars Technica’s “Google’s dropping H.264 from Chrome a step backward for openness” has been much cited as a strong argument against the move. It makes the important point that video codecs extend far beyond the web, and that H.264’s deep adoption in satellite, cable, physical media, and small devices makes it clearly inextricable, no matter how popular WebM might get on the web (which, thus far, is not much). It concludes that this move makes Flash more valuable and viable as a fallback position.

And while I agree with all of this, I still find that most of the discussion has been written from the software developer’s point of view. And that’s a huge mistake, because it overlooks the people who are actually using video codecs: content producers and distributors.

And have they been clamoring for a new codec? One that is more “open”? No, no they have not. As Streaming Media columnist Jan Ozer laments in Welcome to the Two-Codec World,

I also know that whatever leverage Google uses, they still haven’t created any positive reason to distribute video in WebM format. They haven’t created any new revenue opportunities, opened any new markets or increased the size of the pie. They’ve just made it more expensive to get your share, all in the highly ethereal pursuit of “open codec technologies.” So, if you do check your wallet, sometime soon, you’ll start to see less money in it, courtesy of Google.

I’m grateful that Ozer has called out the vapidity of WebM proponents gushing about the “openness” of the VP8 codec. It reminds me of John Gruber’s jab (regarding Android) that Google was “drunk on its own keyword”. What’s most atrocious to me about VP8 is that open-source has trumped clarity, implementability, and standardization. VP8 apparently only exists as a code-base, not as a technical standard that could, at least in theory, be re-implemented by a third party. As the much-cited first in-depth technical analysis of VP8 said:

The spec consists largely of C code copy-pasted from the VP8 source code — up to and including TODOs, “optimizations”, and even C-specific hacks, such as workarounds for the undefined behavior of signed right shift on negative numbers. In many places it is simply outright opaque. Copy-pasted C code is not a spec. I may have complained about the H.264 spec being overly verbose, but at least it’s precise. The VP8 spec, by comparison, is imprecise, unclear, and overly short, leaving many portions of the format very vaguely explained. Some parts even explicitly refuse to fully explain a particular feature, pointing to highly-optimized, nigh-impossible-to-understand reference code for an explanation. There’s no way in hell anyone could write a decoder solely with this spec alone.

Remember that even Microsoft’s VC-1 was presented and ratified as an actual SMPTE standard. One can also contrast the slop of code that is VP8 with the strategic design MPEG applies to all its codecs: standardize the decoder, and permit any encoder that produces a compliant stream that plays on the reference decoder.

This matters because of something that developers have a hard time grasping: an encoder is not a state machine. There need not be, and probably should not be, a single base encoder. An obvious example of this is the various use cases for video. A DVD or Blu-Ray disc is encoded once and played thousands or millions of times. In this scenario, it is perfectly acceptable for the encode process to require expensive hardware, a long encode time, a professional encoder, and so on, since those costs are easily recouped and are only incurred once. By contrast, video used in a video-conferencing-style application requires fairly modest hardware, real-time encoding, and can make few if any demands of the user. Under the MPEG-LA game plan, the market optimizes for both of these use cases. But when there is no standard other than the code, it is highly unlikely that any implementation will vary much from that code.

Developers also don’t understand that professional encoding is something of an art, that codecs and different encoding software and hardware have distinct behaviors that can be mastered and exploited. In fact, early Blu-Ray discs were often authored with MPEG-2 rather than the more advanced H.264 and VC-1 because the encoders — both the devices and the people operating them — had deeper support for and a better understanding of MPEG-2. Assuming that VP8 is equivalent to H.264 on any technical basis overlooks these human factors: people now know how to get the most out of H.264, and have little reason to achieve a similar mastery of VP8.

Also, MPEG rightly boasts that ongoing encoder improvements allow users to enjoy the same quality at progressively lower bitrates over time. It is not likely that VP8 can do the same, so while it may be competitive (at best) with H.264 today, it won’t necessarily stay that way.

Furthermore, is the MPEG-LA way really so bad? Here’s a line from a review of VP8 on the Discrete Cosine blog, back when On2 was still trying to sell it as a commercial product:

On2 is advertising VP8 as an alternative to the mucky patent world of the MPEG licensing association, but that process isn’t nearly as difficult to traverse as they imply, and I doubt the costs to get a license for H.264 are significantly different than the costs to license VP8.

The great benefit of ISO standards like VC-1 and H.264 is that anyone can go get a reference encoder or reference decoder, with the full source code, and hack on their own product. When it comes time to ship, they just send the MPEG-LA a dollar (or whatever) for each copy and everyone is happy.

It’s hard to understand what benefits the “openness” of VP8 will ever really provide. Even if it does end up being cheaper than licensing H.264 from MPEG-LA — and even if the licensing body would have demanded royalty payments had H.264 not been challenged by VP8 — proponents overlook the fact that the production and distribution of video is always an enormously expensive endeavor. Fifteen years ago, I was taught that “good video starts at a thousand dollars a minute”, and I’d expect the number to be at least twice that today, just for a minimal level of technical competence. Given that, the costs of H.264 are a drop in the bucket, too small to seriously affect anyone’s behavior. And if not cost, then what else does “openness” deliver? Is there value in forking VP8, to create another even less compatible codec?

In the end, maybe what bugs me is the presumption that software developers like the brain trust at Google know what’s best for everyone else. But assuming that “open source” will be valuable to video professionals is like saying that the assembly line should be great for software development because it worked for Henry Ford.

The Dark Depths of iOS

CodeMash starts Wednesday in Sandusky, with half-day iOS tutorials from Daniel Steinberg and myself, followed by two days of sessions. My Thursday session is The Dark Depths of iOS, and is a rapid-fire tour of the non-obvious parts of the iOS APIs.

Researching this has proven an interesting exercise. I got the idea for the talk from the occasional dive into Core Foundation for functionality that is not exposed at higher levels of the iOS and Mac stacks. A simple example is CFUUID, the universally unique identifier defined by RFC 4122. More than once, I’ve needed an arbitrary unique ID (other than, say, the device ID), and I’m happy to use the industry standard. But it’s not defined in Foundation or Cocoa, so you need to use Core Foundation and its C API.
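As a minimal sketch (the function name is mine, and the caller owns the returned string), creating a UUID string with the Core Foundation C API looks something like this:

```c
#include <CoreFoundation/CoreFoundation.h>

// Create an RFC 4122 UUID and return it as a CFStringRef (toll-free bridged to NSString).
// Per the Core Foundation "Create" rule, the caller must CFRelease the result.
CFStringRef CopyNewUUIDString(void) {
    CFUUIDRef uuid = CFUUIDCreate(kCFAllocatorDefault);
    CFStringRef uuidString = CFUUIDCreateString(kCFAllocatorDefault, uuid);
    CFRelease(uuid);
    return uuidString;
}
```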

Another example I knew about before starting this talk was the CFNetwork sub-framework, which provides a much more complete networking stack than is available in Cocoa’s URL Loading System. CFNetwork allows you to make arbitrary socket connections, work with hosts (e.g., to do DNS lookups), accept connections, etc. Basically, if what you need from the network can’t be expressed as a URL, you need to drop down at least to this level. It has an advantage over traditional BSD sockets in that it integrates with the Core Foundation view of the world, most importantly in that its reading and writing APIs use the asynchronous callback design patterns common to Apple’s frameworks, rather than blocking as the traditional C APIs would.
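As a rough sketch of that callback-driven style (the host, port, and callback body here are placeholders of my own), opening a raw TCP connection with CFStream looks something like this:

```c
#include <CoreFoundation/CoreFoundation.h>
#include <CFNetwork/CFNetwork.h>

// Called on the run loop when the socket has bytes waiting, rather than blocking in recv().
static void ReadCallback(CFReadStreamRef stream, CFStreamEventType event, void *info) {
    if (event == kCFStreamEventHasBytesAvailable) {
        UInt8 buffer[1024];
        CFIndex bytesRead = CFReadStreamRead(stream, buffer, sizeof(buffer));
        // ... do something with bytesRead bytes from buffer ...
    }
}

void OpenConnection(void) {
    CFReadStreamRef readStream = NULL;
    CFWriteStreamRef writeStream = NULL;
    // Placeholder host and port; anything that isn't expressible as a URL works just as well.
    CFStreamCreatePairWithSocketToHost(kCFAllocatorDefault, CFSTR("example.com"), 7000,
                                       &readStream, &writeStream);
    CFStreamClientContext context = { 0, NULL, NULL, NULL, NULL };
    CFReadStreamSetClient(readStream, kCFStreamEventHasBytesAvailable, ReadCallback, &context);
    CFReadStreamScheduleWithRunLoop(readStream, CFRunLoopGetCurrent(), kCFRunLoopCommonModes);
    CFReadStreamOpen(readStream);
}
```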

Stuff like that, along with Accelerate and Keychain, is what I knew I wanted to talk about, and I did the first few slides on Core Media and Core Services by pointing out reasonably findable information from Apple’s architecture docs.

The eye-openers for me were down in the “System” level of Core OS… the various C APIs that iOS inherits from its open-source roots in FreeBSD and NetBSD (i.e., Darwin). These aren’t substantially documented in Xcode (though they do enjoy syntax highlighting and sometimes code-completion). The documentation for these is in the man pages, which of course makes sense for long-time Unix programmers, but maybe less so today… if I’m writing in Xcode for a separate iOS device, the Mac command-line is a rather counterintuitive location to look for API documentation, isn’t it?

So the trick with calling the standard C libraries is finding out just what’s available to you. Apple has links to iOS Manual Pages that collect a lot of these APIs in one place, but they are largely unordered (beyond the historical Unix man page sections), and being programmatically generated, Apple can only offer a warning that while some APIs known to be unavailable on iOS have been filtered out, the list isn’t guaranteed to be completely accurate. There’s also a consideration that some of these APIs, while they may exist, are not particularly useful given the limitations under which third-party iOS applications operate. For example, all the APIs involving processes (e.g., getting your PID and EUID) and inter-process communication are presumably only of academic interest — the iPhone is not the campus timeshare mainframe from 1988. Similarly, ncurses is probably not going to do much for you on a touch display. OK, maybe if you’re writing an ssh client. Or if you really need to prove that Angry Birds could have worked on a VT100.

Another way of figuring out what’s there — if less so how to actually call it — is to go spelunking down in <iOS_SDK>/usr/lib and <iOS_SDK>/usr/include to get an idea of the organization and packaging of the standard libraries. Had I not done this, I might not have realized that there is a C API for regular expression matching (regex.h), XML parsing with libxml2 (both DOM and SAX), zip and tar, MD5 and SHA (in CommonCrypto/) and other interesting stuff.
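For instance, here is a minimal sketch (the pattern and function name are mine) of using the POSIX regex.h API that ships in the SDK:

```c
#include <regex.h>

// Compile and run a POSIX extended regular expression against a C string.
// Returns 1 on a match, 0 otherwise.
int MatchesSevenDigitNumber(const char *candidate) {
    regex_t re;
    if (regcomp(&re, "^[0-9]{3}-[0-9]{4}$", REG_EXTENDED | REG_NOSUB) != 0) {
        return 0;  // pattern failed to compile
    }
    int matched = (regexec(&re, candidate, 0, NULL, 0) == 0);
    regfree(&re);
    return matched;
}
```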

On the other hand, there are cases where code libraries are available but don’t have public headers. For example, libbz2.dylib and libtidy.dylib are in usr/lib but don’t seem to have corresponding entries in usr/include. That raises the question of whether you could call into these BZip and Tidy libraries and remain “App Store safe”, given the apparent lack of a “public” API, even though you could easily get the headers for these open-source libraries from their host projects. Heck, kebernet pointed out to me that tidy.h and bzlib.h are available in the iPhoneSimulator.platform path, just not iPhoneOS.platform.

It would be nice if there were better visibility into these libraries, though their nature as unrepentant C probably scares off a lot of developers, who will be just as well off scouring Google Code for a nice Objective-C alternative (provided the license they find is compatible with their own). My takeaway is that there’s even more functionality at these low levels than I expected to find. I’ll probably at least consider using stuff like libxml2 and regex.h in the future.

Life Beyond The Browser

“Client is looking for someone who has developed min. of 1 iPhone/iPad app.  It must be in the App Store no exceptions.  If the iPhone app is a game, the client is not interested in seeing them.” OK, whatever… I’ll accept that a game isn’t necessarily a useful prerequisite. But then this e-mail went on: “The client is also not interested in someone who comes from a web background or any other unrelated background and decided to start developing iPhone Apps.”

Wow. OK, what the hell happened there? Surely there’s a story behind that, one that probably involved screaming, tears, and a fair amount of wasted money. But that’s not what I’m interested in today.

What surprised me about this was the open contempt for web developers, at least those who have tried switching over to iOS development. While I don’t think we’re going to see many of these “web developer need not apply” posts, I’m still amazed to have seen one at all.

Because really, for the last ten years or so, it’s all been about the web. Most of the technological innovation in the last decade arrived in the confines of the browser window, and we have been promised a number of times that everything would eventually move onto the web (or, in a recent twist, into the cloud).

But this hasn’t fully panned out, has it? iOS has been a strong pull in the other direction, and not because Apple wanted it that way. When the iPhone was introduced and the development community given webapps as the only third-party development platform, the community reaction was to jailbreak the device and reverse-engineer iPhone 1.0’s APIs.

And as people have come over, they’ve discovered that things are different here. While the 90’s saw many desktop developers move to the web, the 10’s are seeing a significant reverse migration. In the forums for our iPhone book, Bill and I found the most consistently flustered readers were the transplanted web developers (and to a lesser degree, the Flash designers and developers).

Part of this was a matter of language. Like all the early iPhone books, we had the “we assume you have some exposure to a C-based curly-brace language” proviso in the front. Unfailingly, what tripped people up were the lurking pointer issues that Objective-C makes no attempt to hide. EXC_BAD_ACCESS is exactly what it says it is: an attempt to access a location in memory you have no right to touch, and almost always the result of following a busted pointer (which in turn often comes from an object over-release). But if you don’t know what a pointer is, this might as well be in Greek.
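For readers who have never had to think about memory directly, here is a contrived C sketch of a busted pointer, the plain-C analogue of an over-released Objective-C object:

```c
#include <stdlib.h>
#include <string.h>

int main(void) {
    char *name = malloc(32);
    strcpy(name, "dangling");
    free(name);        // the C analogue of one release too many
    name[0] = 'D';     // undefined behavior: exactly the kind of access that
                       // surfaces at runtime as EXC_BAD_ACCESS
    return 0;
}
```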

And let’s think about languages for a minute. There has been a lot of innovation around web programming languages. Ruby and Python have (mercifully) replaced Perl and PHP in a lot of the conventional wisdom about web programming languages, while the Java Virtual Machine provides a hothouse for new language experimentation, with Clojure and Scala gaining some very passionate adherents.

And yet, none of these seem to have penetrated desktop or device programming to any significant degree. If the code is user-local, then it’s almost certainly running in some curly-brace language that’s not far from C. On iOS, Obj-C/C/C++ is the only provided and only practical choice. On the Mac, Ruby and Python bindings to Cocoa were provided in Leopard, but templates for projects using those languages no longer appear in Xcode’s “New Project” dialog in Snow Leopard. And while I don’t know Windows, it does seem like Visual Basic has finally died off, replaced by C#, which seems like C++ with the pointers taken out (i.e., Java with a somewhat different syntax).

So what’s the difference? It seems to me that the kinds of tasks relevant to each kind of programming are more different than is generally acknowledged. In 2005’s Beyond Java, Bruce Tate argued that web development was mostly about doing the same thing over and over again: connecting a database to a web page. You can snipe at the specifics, but he’s got a point: you say “putting an item in the user’s cart”, I say “writing a row to the orders table”.

If you buy this, then you can see how web developers would flock to new languages that make their common tasks easier — iterating over collections of fairly rich objects in novel and interesting ways has lots of payoff for parsing tree structures, order histories, object dependencies and so on.

But how much do these techniques help you set up a 3D scene graph, or perform signal processing on audio data captured from the mic? The things that make Scala and Ruby so pleasant for web developers may not make much of a difference in an iOS development scenario.

The opposite is also true, of course. I’m thrilled by the appearance of the Accelerate framework in iOS 4, and Core MIDI in 4.2… but if I were writing a webapp, a hardware-accelerated Fast Fourier Transform function likely wouldn’t do me a lot of good.
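To make that concrete, here is a minimal sketch (the function name, packing, and lack of error handling are mine) of the kind of vDSP call Accelerate makes possible:

```c
#include <Accelerate/Accelerate.h>
#include <math.h>

// Run a forward FFT over n real samples (n must be a power of two).
void ForwardFFT(float *samples, vDSP_Length n) {
    vDSP_Length log2n = (vDSP_Length)log2f((float)n);
    FFTSetup setup = vDSP_create_fftsetup(log2n, kFFTRadix2);

    // vDSP wants the real input packed into split-complex form.
    float real[n / 2], imag[n / 2];
    DSPSplitComplex split = { real, imag };
    vDSP_ctoz((DSPComplex *)samples, 2, &split, 1, n / 2);

    // Results land in split.realp / split.imagp (vDSP applies its own scaling).
    vDSP_fft_zrip(setup, &split, 1, log2n, kFFTDirection_Forward);

    vDSP_destroy_fftsetup(setup);
}
```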

I’m surprised how much math I do when I’m programming for the device. And not just for signal processing. Road Tip involved an insane amount of trigonometry, as do a lot of excursions into Core Animation.
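A trivial example of the kind of trig that keeps coming up (the function name is mine) is computing the angle needed to point a view at a target, say before building a rotation transform:

```c
#include <math.h>

// Angle (in radians) to rotate something centered at (cx, cy) so it faces (tx, ty).
double HeadingTowardPoint(double cx, double cy, double tx, double ty) {
    return atan2(ty - cy, tx - cx);
}
```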

The different needs of the different platforms create different programmers. Here’s a simple test: which have you used more in the last year, regular expressions or trigonometry? If it’s the former, you’re probably a web developer; the latter, device or desktop. (If you’ve used neither, you’re a newbie, and if you’ve used both, then you’re doing something cool that I probably would like to know about.)

Computer Science started as a branch of mathematics… that’s the whole “compute” part of it, after all. But times change; a CS grad today may well never need to use a natural logarithm in his or her work. Somebody — possibly Brenda Laurel in Computers as Theatre (though I couldn’t find it in there) — noted that the French word for computer, ordinateur, is a more accurate name today, being derived from the root word for “organize” rather than “compute”.

Another point I’d like to make about webapps is that they’ve sort of dominated thinking about our field for the last few years. The people you see writing for O’Reilly Radar are almost always thinking from a network point of view, and you see a lot of people take the position that devices are useful only as a means of getting to the network. Steve Ballmer said this a year ago:

Let’s face it, the Internet was designed for the PC. The Internet is not designed for the iPhone. That’s why they’ve got 75,000 applications — they’re all trying to make the Internet look decent on the iPhone.

Obviously I disagree, but I bring it up not for easy potshots but to bolster my claim that there’s a lot of thinking out there that it’s all about the network, and only about the network.

And when you consider a speaker’s biases regarding the network versus devices operating independently, you can notice some other interesting biases. To wit: I’ve noticed that enthusiasm for open-source software is significantly correlated with working on webapps. The most passionate OSS advocates I know — the ones who literally say that all software that matters will and must eventually go open-source (yes, I once sat next to someone who said exactly that) — are webapp developers. Device and desktop developers tend to have more of a nuanced view of OSS… for me, it’s a mix of “I can take it or leave it” and “what have you done for me lately?” And for non-programmers, OSS is more or less irrelevant, which is probably a bad sign, since OSS’ arrival was heralded by big talk of transparency and quality (because so many eyes would be on the code), yet there’s no sense that end-users go out of their way to use OSS for any of these reasons, meaning those reasons either don’t matter or aren’t true.

It makes sense that webapp developers would be eager to embrace OSS: it’s not their ox that’s being gored. Since webapps generally provide a service, not a product, it’s convenient to use OSS to deliver that service. Webapp developers can loudly proclaim the merits of giving away your stuff for free, because they’re not put in the position of having to do so. It’s not like you can go to code.google.com and check out the source to AdWords, since no license used by Google requires them to make it available. Desktop and device developers may well be less sanguine about the prospect, as they generally deliver a software product, not a service, and thus don’t generally have a straightforward means of reconciling open source and getting paid for their work. Some of the OSS advocates draw on webapp-ish counter-arguments — “sell ads!”, “sell t-shirts!”, “monetize your reputation” (whatever the hell that means) — but it’s hard to see a strategy that really works. Java creator James Gosling nails it:

One of the key pieces of the linux ideology that has been a huge part of the problem is the focus on “free”. In extreme corners of the community, software developers are supposed to be feeding themselves by doing day jobs, and writing software at night. Often, employers sponsor open-source work, but it’s not enough and sometimes has a conflict-of-interest. In the enterprise world, there is an economic model: service and support. On the desktop side, there is no similar economic model: desktop software is a labor of love.

A lot of the true believers disagree with him in the comments. Then again, in searching the 51 followups, I don’t see any of the gainsayers beginning their post with “I am a desktop developer, and…”

So I think it’s going to be interesting to see how the industry’s consensus and common wisdom change in the next few years, as more developers move completely out of webapps and onto the device, the desktop, and whatever we’re going to call the things in between (like the iPad). That the open source zealots need to take a hint about their precarious relevance is only the tip of the iceberg. There’s lots more in play now.

shoes[1].drop();

I’m late to the party blogging about Apple deprecating its Java for Mac OS X, with particularly good posts up from Matt Drance and Lachlan O’Dea. And I’ve already posted a lot of counterpoints on the Java Posse’s Google group (see here, here, and here). So let me lead with the latest: there’s now a petition calling on Apple to contribute its Mac Java sources to OpenJDK. This won’t work, for at least four reasons I can think of:

  • Petitions never work.
  • Apple doesn’t listen.
  • Apple is a commercial licensee of Java. The terms in their contract with Sun/Oracle almost certainly prohibit open-sourcing their Sun-derived code.
  • Even if it could be open-sourced, Apple’s code was developed for years with the assumption that it was proprietary. The company would want nothing less than an absolutely forensic code analysis to ensure there are no loopholes, stray imports or links, or anything else that a creative FSF lawyer could use to claim that Mac OS X links against the GPL’ed OpenJDK and must therefore itself be GPL’ed. Apple has nothing to gain and everything to lose by open-sourcing its JDK, so don’t hold your breath.

It’s also frustratingly typical that many of the signatories of this petition have spent the last week blogging, tweeting, podcasting, and posting that they will never buy another Apple product (and they’re gonna go tell all their friends and relations that Apple sucks now, just for good measure). Here’s the thing: companies generally try to do right by their customers. When you assert that you will never be their customer again, you remove any reason the company would have to listen to your opinion. Really, this much should be common sense, right?

Buried in all the denunciations of “control freak Steve Jobs” and his nefarious skullduggery is a wake-up call that Oracle and the Java community need to hear: one of your biggest commercial licensees, the second biggest US corporation by market cap, doesn’t think licensing Java will help them sell computers anymore. Why does nobody take this screamingly obvious hint?

I imagine part of the reason that the activists want Apple to contribute its code instead of letting it go to waste is that they’ve realized what a tall order it would be for the community to attempt a port on its own. Java is huge, and its native dependencies are terribly intractable: even the vaunted Linux community couldn’t get Blackdown Java up to snuff, and Sun came to the rescue with an official version for Linux. How likely is it that we’ll find enough people who know Java and Mac programming well enough — and who care to contribute their time — to do all this work for free? It may well be a non-starter. Landon Fuller got the headless bits of JDK 6 ported to OS X in a few weeks in the form of the Soy Latte project, but had to settle for an X11-based UI, and his announcement asked for help with Core Audio sound support and OS X integration in general… and I don’t believe any help ever materialized.

Aside: when volunteers aren’t enough, the next step is to get out the checkbook and call in mercenaries. I looked at javax.sound yesterday and estimated it would take me 4-6 weeks, full-time, to do a production-quality port using Core Audio. I’ll bid the project out at $20,000. Sign a contract and I’ll contribute the sources to any project you like (OpenJDK, Harmony, whatever). You know where to reach me. And I’m not holding my breath.

The real problem with porting Java is that Java’s desktop packages – AWT, Swing, javax.sound, etc. – are very much a white elephant, one which perfectly fits Wikipedia’s definition:

A white elephant is an idiom for a valuable possession of which its owner cannot dispose and whose cost (particularly cost of upkeep) is out of proportion to its usefulness or worth.

As I’ve established, Java’s desktop packages are egregiously expensive. In fact, with Apple’s exit, it’s not clear that there’s anybody other than Oracle delivering a non-X11 AWT/Swing implementation for any platform: it’s just too much cost and not enough value. End-user Desktop Java applications are rare and get rarer every day, displaced largely by browser-based webapps, but also by Flash and native apps.

We know the big use for AWT and Swing, and it’s a terrible irony: measured by app launches or time spent in an app, the top AWT/Swing apps are surely NetBeans and IntelliJ, IDEs used for creating… other Java applications! The same can be said of the SWT toolkit, which powers the Eclipse IDE and not much else. This is what makes this white elephant so difficult to dispose of: all the value of modern-day Java is writing for the server, but nearly all the Java developers are using the desktop stuff to do so, making them the target market (and really the only market) for Desktop Java. If there were other viable Java applications on the desktop, used by everyday end-users, Apple couldn’t afford to risk going without Java. There aren’t, and it can.

There’s lots of blame to go around, not the least of which should be directed at Sun for abandoning desktop Java right after Swing 1.0 came out. It’s embarrassing that my five-year-old Swing book is more up-to-date with its technology than my year-old iPhone book is, but it illustrates the fact that Apple has continued to evolve iOS, while Sun punted on client-side Java for years, and then compounded the problem by pushing aside the few remaining Swing developers in favor of the hare-brained JavaFX fiasco. Small wonder that nearly all the prominent desktop Java people I know are now at Google, working on Android.

Other languages are easier to port and maintain because Ruby, Python, and the like don’t try to port their own entire desktop API with them from platform to platform. Again, the irony is that this stuff is so expensive in Java, and little related to the server-side work where Java provides nearly all of its value. I’m not the first to say it, but Oracle would do itself a huge favor by finding a way to decouple all this stuff from the parts of Java that actually get used, likely as a result of Project Jigsaw. Imagine something like this:

[Diagram: splitting the desktop packages out of Java SE]

In this hypothetical arrangement, the desktop packages are migrated out of Java SE, which now contains the baseline contents of Java that everybody uses: collections, I/O, language utilities, etc. These are the parts that EE actually uses, and that dependency stays in place. I’ve colored those boxes green to indicate that they don’t require native widgets to be coded. What does require that is the new Java “Desktop Edition” (“Java DE”), which is SE plus AWT, Swing, Java2D, javax.sound, etc. For the rare case where you need UI stuff on the server, like image manipulation as a web service, combine EE and DE to create “Java OE”, the “Omnibus edition”. Also hanging off SE are ME (which is SE minus some features, and with a new micro UI), as well as non-Sun/Oracle products that derive from SE today, such as Eclipse’s quasi-platform, and Android. Of course, it’s easy to play armchair architect: the devil is in the details, and Mark Reinhold mentioned on Java Posse 325 that there are some grievously difficult dependencies created by the unfortunate java.beans package. Still, when the cost of the desktop packages is so high, and their value so low, they almost certainly need to be moved off the critical path of Java’s evolution somehow. The most important thing Oracle needs to provide in Java 7 or 8 is an exit strategy from desktop Java.

Instead, we’ll continue to hear about what an utter rat-bastard Steve Jobs is, and how the deprecation of Apple’s Java is part of a scheme to “force developers to use Objective-C”, as if there were a flood of useful and popular Java applications being shoved off the Mac platform. Many of the ranters insist on dredging up Jobs’ vow to “make the Mac the best Java platform”, in determined ignorance of the fact that this statement was made over ten years ago, at JavaOne 2000. For context: when Jobs was on the JavaOne stage, the US President was Bill Clinton, and Sun was #150 on the Fortune 500, ahead of Oracle and Apple (and Google, which didn’t even enter the list until 2005). If TV’s Teen Titans taught us anything, it’s that Things Change.

And maybe, for a while, the Mac really was the best Java platform: unlike Windows (legally enjoined from shipping Java by Microsoft’s attempts to subvert it) and Linux (where many distros turned up their noses at un-free Java for years), Apple shipped Java as a core part of the OS for years. Not an option: it was installed as part of the system and could not practically be removed. Apple’s Mac look-and-feel for Swing was also widely praised. Do you know who said the following?

I use the MAC because it’s a great platform. One of the nice things about developing in Java on the Mac is that you get to develop on a lovely machine, but you don’t cut yourself off from deploying on other platforms. It’s a fast and easy platform to develop on. Rock solid. I never reboot my machine… Really! Opening and closing the lid on a Powerbook actually works. The machine is up and running instantly when you open it up. No viruses. Great UI. All the Java tools work here: NetBeans and JEdit are the ones I use most. I tend to think of OSX as Linux with QA and Taste.

It was Java creator James Gosling, in a 2003 blog entry. As you might imagine, Apple dumping Java seven years later has him singing a different tune.

A lot of us who’ve developed Java on the Mac have expected this for years, and this week’s reactions are all a lot of sound and fury, signifying nothing. The Java crowd will keep doing what it does — making web sites — but they’ll have to make a choice: either use Windows or Linux for development, or get away from the IDEs and write their code with plain text editors and the command line on the Mac. If I were still doing Java, I probably would barely notice, as I wrote nearly all my Java code with emacs after about 2000 or so, and never got hooked on NetBeans or Eclipse. But with the most appealing desktops and the most popular mobile devices ditching Java, it’s time for the Java community to face up to the fact that Desktop Java is a ruinously expensive legacy that they need to do something about.

All the angry screeds against Steve Jobs won’t change the fact that this is the “ball and chain” that’s pulling Java below the waves.

Maybe they should call the next version of Ubuntu “Compulsive Coyote”

Fanaticism consists in redoubling your effort when you have forgotten your aim.
George Santayana, Life of Reason (1905) vol. 1, Introduction

A few weeks back, I asked the Twitterverse “When did user-facing Linux become completely and officially irrelevant? Seems like a fait accompli, doesn’t it?”. With the OSS community’s reaction to the iPad, I’m now sure of it.

And by reaction, I’m of course thinking of the response by the FSF’s John Sullivan to Steve Jobs’ much-discussed Thoughts on Flash. Sullivan’s piece is a straightforward “pox on both your houses” take, damning Apple and Adobe alike for not adopting the One True Faith of Open Source Software.

I had allowed myself to hope that the OSS community would take the hugely successful launch of the iPad and the flight of developers to the iPhone OS platform, even with Apple’s cravenly self-serving App Store policies, as an opportunity to take a good, long look in the mirror. With iPad sales already estimated at over a million, you have to wonder how soon use of this nascent platform will exceed that of desktop Linux, if it hasn’t already. What should the open source community think of this profound preference for the unfree over the free? I would think they’d see it as a very public and very obvious repudiation of the party line, suggestive of an urgent need for some soul-searching and time to re-examine how the OSS community is engaging the world at large.

Sullivan’s piece, sadly, shows no such self-awareness. It’s the usual total detachment from real people that we’ve come to expect from True Believers. Indeed, the last third of it is the all-too-familiar zealous denunciation of all proprietary software, calling on readers to adopt Linux and Ogg Theora.

Seriously, Ogg? They’re still clinging to the “everything should be in Ogg” kick? OK, let’s ask the people who should know on this one, content professionals, to see if they buy in at all. DV magazine’s website logs zero hits in searches for Theora and for Ogg, StreamingMedia.com’s 11 hits (versus 248 for H.264) are largely related to codec politics rather than mentions in training articles or product reviews, and legendary retailer B & H assumes the search term “theora” is a misspelling of “theory”. No matter how many essays and insults the zealots spew forth, they seem unable to get video professionals to know or care about Theora.

In effect, the FSF’s position is to tell the world “screw you for not wanting the stuff we produce”, oblivious to the fact that the marketplace of ideas is very clearly telling them “screw you for not producing the stuff we want.”

The quote at the top of this article was often cited by Chuck Jones in describing the character of Wile E. Coyote, who was never afforded a moment of clarity to reconsider his hunting regimen, the inherent dangers therein, and the prospects for a way of life that didn’t involve falling off cliffs every 30 seconds. I fear the OSS community is so wrapped up in their own self-righteousness, they are similarly immune to equally-needed moments of humility, clarity, and perspective.

Eek! A Patent!

Mike Shaver’s blog post about Mozilla forgoing H.264 support in the HTML <video> tag is getting a lot of play, predictably from the /. crowd.

It’s a shame, because it’s a childish and facile argument. Here’s the gist of it.

For Mozilla, H.264 is not currently a suitable technology choice. In many countries, it is a patented technology, meaning that it is illegal to use without paying license fees to the MPEG-LA. Without such a license, it is not legal to use or distribute software that produces or consumes H.264-encoded content.

In short, the very idea that something is patented and requires licensing is prima facie proof that it is intolerable. Going on:

These license fees affect not only browser developers and distributors, but also represent a toll booth on anyone who wishes to produce video content. And if H.264 becomes an accepted part of the standardized web, those fees are a barrier to entry for developers of new browsers, those bringing the web to new devices or platforms, and those who would build tools to help content and application development.

Yeah, can you imagine if any other technology were encumbered by patents? They’d have to pass on the costs to customers too! Imagine if television were patented, or if automobiles involved 100,000 patents… surely those products could never exist and never be affordable.

There is a case to be made against patents and intellectual property as a whole, but this blog doesn’t make it. Instead, it blithely refuses to acknowledge that we do live in a world of IP, decrying its costs as if they are out of the ordinary or unjust. Ultimately, it flees back to the intellectual dead-end of “everything should be in Ogg”, a stance so untenable that even ESR conceded it was a non-starter, four years ago.

A final irony: by refusing to support H.264, Mozilla bolsters the primary alternative for video on the web: the Flash plug-in, which is not just patent-encumbered but proprietary, available only from Adobe and only in those environments where it serves the company’s strategic ends. Shaver, to be fair, admits that proprietary plug-ins are bad too, but declines to say they’re far worse than patent-encumbered standards. Instead, he holds out for a pollyannaish vision of completely free web video technology, naming Ogg as his moral standard-bearer, without acknowledging Ogg’s lamentable but obvious feet of clay.

Did Open Source Kill Sun?

Some in the Java community are linking to Sun Chairman and Co-Founder Scott McNealy’s comments in the Oracle OpenWorld keynote, in which he wistfully looks back at the soon-to-be-gone Sun and boasts that they:

Kicked butt, had fun, didn’t cheat, loved our customers, changed computing forever

Sorry to bust the warm fuzzies, but we should append that history with a few more words:

and failed anyways

Sun lost money for most of this decade, as its stock fell more than 95%, reaching the point late last year where its market valuation equaled its cash and investments, meaning the market considered the company’s business as having no value whatsoever.

As engineers, we can romanticize Sun’s “good guy” behavior over fine craft beers all night, but at the end of the day, the company ceased to be viable, destroying a great deal of wealth in the process. Sometimes, it seemed like Sun wanted to be the non-profit FSF instead of a publicly-traded company. At least they got the “non-profit” part right.

And clearly, understanding Sun’s failure matters, because the kinds of things that Sun did are now going to be considered liabilities. Sun tried like crazy to win over the open source community. The community demanded that Sun support Linux, even though Sun would presumably favor its own flavor of Unix, Solaris. But they went along with it… giving companies a reason not to buy Sun hardware and instead lash together cheap Linux boxes, or buy heavy iron from Linux-loving Sun rival IBM. The community demanded that Java be open sourced and, after a series of fits and starts, it finally was, with the ultra-hippie GPL license no less. Ultimately, the community came to believe it had a blank check written against Sun’s engineering resources, as typified in the infamous “changing of the guard” episode of the Java Posse and the somewhat testy reply.

But what did all these giveaways accomplish? The next time a community goads a company into open sourcing its crown jewels, the critical response may well be “yeah, that worked great for Sun.” In fact, that was pretty much Fake Steve’s take on Sun over the course of Sun’s decline, mocking the company’s giveaways, as it frittered into irrelevance. At the end of the day, how is FSJ not right on this one?

It’s ironic that Sun’s love of the open source community was largely unrequited. As late as 2007, Slashdot founder Rob “CmdrTaco” Malda was still expressing his eternal hatred of Java, and even in GPL form, Java has been slow to win acceptance from the F/OSS types. In an even more ironic twist, Slashdot’s tone has softened lately. For example, a recent article on Android game development quoted its source as saying “While iPhone apps are written in Objective C, the Android SDK uses relatively more programmer-friendly Java.” Why the sudden love for Java? Because it powers Android, the most plausible rival to the iPhone, now telephona non grata to the Slashdot community. In other words, the enemy of my enemy is my friend.

Too late for Sun though, and it’s not clear that a greater acceptance from a community that, by definition, doesn’t like to pay for stuff would even matter anyways. Perhaps the takeaway is that we all need a more realistic attitude about what individuals and companies need to do to continue their existence. Charity is swell, but it’s not necessarily a viable business model.

Again with The Six Steps

About 10 minutes into Java Posse 241, an unconference session on design, Joe Nuxoll almost pulls the usual “developers talking design” chat into a breathtakingly new perspective. Really close. Picking up on an anecdote about 37signals’ creation of Rails as a tool for what they wanted to do, he points out the idea of going back to first principles:

  • What do I want to accomplish?
  • What can I do that will accomplish that?
  • How do I do that?

He points out that not everything has to be a webapp or a library; that there could be a human process that’s more practical (cheaper, effective, etc.) than building something electronic.

And then the conversation goes another direction. But this so reminded me of Scott McCloud’s exceptional metaphor and discussion of “The Six Steps” of the creative process, in Understanding Comics. McCloud presents a linear progression of choices and concerns:

  1. Idea/purpose – What am I trying to accomplish?
  2. Form – What form will my effort take? (A comic book, a website, etc.)
  3. Idiom/genre – How do I address the recipient? (Fiction vs. nonfiction, online community vs. news site.) McCloud says this is “the ‘school’ of art, the vocabulary of styles or gestures or subject matter.”
  4. Structure – How the work is arranged and composed, “what to include, what to leave out” (the latter being a far more important choice than is often realized).
  5. Craft – The actual work of getting the thing done: problem solving, use of skills, etc.
  6. Surface – The immediately-perceivable traits, like polish or production values.

What’s perhaps most breathtaking the first time you read Understanding Comics is McCloud’s narrative which portrays the reality that almost nobody starts with step 1 and proceeds to step 6. In fact, it’s far more common to go backwards: to see just the surface gloss of something and try to mimic that, with no understanding at all of the decisions that inform the rest of the work, and how they depend on each other.

In our realm, this pathology is highly obvious. If McCloud’s version is a kid tracing Wolverine onto lined notebook paper and declaring “I can draw as well as a professional!”, then surely our equivalent is the UI “skin”, like the hackery that can make a Windows or Linux desktop look like Mac OS X, but can’t change the fact that everything below the surface is built up with the respective ideas of those operating systems.

This is why I’m convinced Linux can never succeed as a desktop OS. When I used GNOME as my desktop for a year back in 2001, I commonly complained that the desktop gave me at least a dozen places to set my visual theme, but setting the damned clock still required me to jump into xterm and do sudo date -s ... I haven’t used it since then, but I wonder if they’ve even gotten inter-application copy-and-paste working (to say nothing of drag-and-drop). McCloud’s narrative shows genuine artists eventually working all the way back to step 1 or 2, asking “why am I doing this”, and proceeding forward, making informed and purposeful decisions about idiom, structure, and craft. It’s hard to imagine the Linux community having the wherewithal and discipline to see through such a process, when they’ve proven themselves willing to fork their code at the drop of a hat (or, more accurately, the outbreak of a Kirk-versus-Picard flamewar). The result is something that’s so baroque and self-contradictory it isn’t even necessarily ideal for hackers, and there’s little hope of this community ever deciding to build something their moms can use.

A lot of projects make bad choices of “form”, and doom themselves from the start. In the 90’s, there seemed to be people who believed that every activity worth undertaking should be done in the form of a startup company. As it turned out, fairly few endeavors are well-suited to that approach.

Today, we see people falling over themselves to make everything into an iPhone app, even when it’s not appropriate. If most of the value of your project comes from data provided via servers you own or operate, it’s likely that a webapp is more appropriate than a genuine iPhone app. Clever use of iPhone CSS can produce webapps that behave much like native apps (the Facebook webapp is still as good as its App Store equivalent), can be updated for all users at any time, and can be adapted to other devices without starting over at square one (compare to the challenge of rewriting your iPhone app for Android, Blackberry, Windows Mobile, etc.).

If anything, many of the great iPhone apps are heralding a resurgence of the “productivity app”, which Broderbund founder Doug Carlston defined to Tim O’Reilly as “any application where the user’s own data matters more to him than the data we provide.” Any iPhone app worth writing is going to involve some significant user data, whether that’s classic user data like their mail, their files, audio they’ve recorded… but also data as simple as where the user is when they run the app. After all, the hundreds of thousands of records in the restaurant-finder app’s database are useless to me; what matters is finding what I like that’s close to where I am. In other words, the data has no value until it gets those crucial inputs: where I am and what I want to eat.

Now here’s an interesting mental exercise: where would what we consider to be “design” fall in McCloud’s six steps? At first, I made a cursory decision that it was just part of step 5 (craft), as it was part of the doing of the work. That’s clearly wrong, as what we think of as design (and I’m mentally considering various creative exercises I understand, like book writing, screenwriting, TV production, podcasting, software development, etc.) clearly encompasses the step 4 (structure) decisions of how to arrange the content. And it probably touches on step 6 too: deciding whether to use the brushed metal or plain UI look, or whether to shoot in SD, HD, or on film, is a design decision as well. But would we consider step 3 (genre/idiom) to be part of design? I think not. I think that’s a decision that precedes design, one that informs just what it is we’re going to be designing (an educational game versus an electronic series of lessons, historical fiction versus documentary, etc.).

Still, I think it’s important not to make too early, too easy assumptions about the major decisions at the front of the process. “Why am I doing this” is the most important question to ask yourself: it shouldn’t be skipped.