Archives for: java

Any Port in a Storm

There goes another one.

That’s @jonathanpenn, as he heads off to Apple. He follows a number of top indie developers/authors/speakers who have headed to the mothership in the last few months, including Patrick Burleson, Kevin Hoctor, and, if we go back a little over a year, we can throw in my former iOS SDK Development co-author Bill Dudney.

This is causing a little bit of angst among those of us who hate to see our friends decamp from our communities to California, and leading some to suggest that maybe indie iOS development/writing/speaking isn’t tenable. Janie Clayton-Hasz, whom I’m working with on a soon-to-be-announced project, expresses this from the POV of a newcomer to development life in her latest blog.

Continue Reading >>

Thing Is, APIs *Should* Be Copyrightable

A bunch of my friends, particularly on the F/OSS and Android side, are issuing a new call to the barricades to make the case that APIs should not be copyrightable. In particular, the EFF wants developers to send in stories of how they’ve reimplemented APIs for reasons of competition, interoperability, innovation, etc. The issue is heating up again because a three-judge Federal Circuit panel is going to revisit Judge Alsup’s ruling in Oracle v. Google, where the jury found that Google willfully infringed Oracle’s copyright on the Java APIs, but the judge found that APIs aren’t copyrightable in the first place, rendering the jury decision moot.

This isn’t the slam dunk some people think it is. During the trial, Florian Mueller pulled up relevant case law to show that copyright has traditionally considered the design of computer code (and, implicitly, its public interfaces) to be protected.

Furthermore, the case against copyrightability of APIs strikes me as quite weak. If software deserves copyright at all — and there are good arguments against it, but that’s not what we’re talking about here — then drawing the line at published interfaces doesn’t hold up.

There are basically two arguments I’ve heard against API copyrightability. Here’s why I think they’re bunk:

Continue Reading >>

Bittersweet Ending

Somewhere between the Happily Ever After and the Downer Ending, the Bittersweet Ending happens when victory comes at a harsh price, when, for whatever reason, the heroes cannot fully enjoy the reward of their actions, when some irrevocable loss has happened during the course of events, and nothing will ever be the same again.

So, Apple wins big in their patent case against Samsung, and reactions are pretty much not what you’d expect. While the Fandroids console themselves with a straw-man claim that “Apple patented rounded rectangles”, writers in iOS circles are hardly delighted. Pre-verdict, Matt Drance wrote “However the verdict falls, I feel like there are no winners here in the long term — certainly not us.” And a day after the verdict, John Gruber’s Daring Fireball hasn’t even mentioned the outcome.

Matt’s concern is giving Apple too much power to control the market, and the verdict likely does that. Following along with The Verge’s liveblog, I noticed that a few devices were found as non-infringing. That’s got to be even worse news for Samsung and Google, because the jury is effectively saying that it is possible to make smartphones without copying Apple, and Samsung largely (and willfully) chose not to. Combine this with the speed at which they reached their conclusions and it’s utterly damning.

And yet, on the other hand, what we’re discussing is patents like “slide to unlock”, which many/most of us think is unworthy of patentability in the first place. And that’s what makes this so uncomfortable: Android’s and particularly Samsung’s copying of Apple was egregious and shameless, but since that itself is not illegal (and how could you even codify that as law?), then does settling for a victory over stuff that probably shouldn’t even be patentable count as a victory at all? Making things worse, the jury had the option of invalidating patents on both sides, and declined to do so on every count.

Then again, what do I know? I thought ripping off the Java programming language and practically the entire API of Java SE was a lot worse, but the court said that was OK. So I guess stealing is bad, except when it’s not.


The Whiny Little Bitch Contingent meets iBooks Author

Let me introduce you to the “Whiny Little Bitch Contingent”. This was a term I coined in the late 2000’s to cover the Java developers who cried and moaned about the slow decline in Apple’s support for Java: the deprecation of the Cocoa-Java bridge, the long wait for Java 6 on Mac OS X, its absence from iOS, etc. Every time there was news on this front, they could be reliably counted on to dredge up Steve Jobs’ pledge at the JavaOne 2000 keynote to make the Mac the best Java programming environment… and to bring this up in seeming ignorance of the passage of many years, the changes in the tech world, the abject failure of Desktop Java, other companies’ broken promises (Sony’s pledge of Java on the PlayStation 2, the Java-based Phantom gaming console), etc.

The obvious trait of the Whiny Little Bitch Contingent is their sense of entitlement: companies like Apple owe us stuff. The more subtle trait is their ignorance of the basic principle that people, organizations, and companies largely operate in their own self-interest. Apple got interested in Java when it seemed like a promising way to write Mac apps (or, a promising way to get developers to write Mac apps). When that failed, they had understandably little interest in providing developers a means of writing apps for other platforms. I’m sure I’m not the only person to write a Java webapp on the Mac that I knew my employer would block Mac clients from actually using. By 2008, when Apple entered the mobile market with the iPhone, there was nothing about supporting Java that would appeal to Apple’s self-interest, outside of a small number of hardware sales to Java developers.

That’s what defines the WLBC to me: sense of entitlement, and an ignorance of other parties’ self-interest (which leads to an expectation of charity and thus the sense of entitlement).

So, yesterday, Apple holds an event to roll out their whole big deal with Textbooks on the iPad. They look pretty, they’ve got an economic model that may make some sense for publishers (i.e., it may be in the publishers’ self-interest), etc. Also, there’s a tool for creating textbooks in Apple’s format.

And this is where the Whiny Little Bitch Contingent goes ape-shit. Because there’s a clause in the iBooks Author EULA that says if you’re going to charge for your books, you can only publish to Apple’s iBookstore.

So, let’s back up a second. The only point of this software is to feed Apple’s content chain. The only reason it is being offered, free, is to lure authors and publishers to use Apple’s stuff… which in turn sells more iPads and gives Apple a 30% cut. If you are not going to put stuff on Apple’s store, why do you even care about this? Hell, I don’t develop for Microsoft’s platforms, so if they see the need to turn Visual Studio into an adventure game… hey that’s their problem.

If you’re not authoring for Apple’s iBookstore, why do you even care what iBooks Author does, or what’s in its EULA?

In decrying the “cold cynicism” of Apple’s iBook EULA, Marshall Kirkpatrick writes:

It’s hard to wrap my brain around the cold cynicism of Apple’s releasing a new tool to democratize the publishing of eBooks today, only to include in the tool’s terms and conditions a prohibition against selling those books anywhere but through Apple’s own bookstore

“Democratize the publishing of eBooks”? Where the hell did he get that? Maybe he watched the video and fell for the grandiosity and puffery… I never actually watch these Apple dog-and-pony shows anymore, as following the Twitter discussion seems to give me the info I need. But thinking that Apple is in the business of democratizing anything is nuts: they’re in the business of selling stuff, and the only reason they’d give out a free tool is to get you to help them sell more of that stuff.

I didn’t download iBooks Author, even though you’d expect an Apple-skewing author like me to be one of the first onboard. Frankly, I’m pretty tired of writing, as the last two books have been difficult experiences, and the thought of starting another book, even with a 70% royalty instead of 5%, is not that appealing. A year ago I thought about self-publishing a book on AV Foundation, but right now I lack the will (also, I’ve failed to fall in love with AV Foundation, and blanch at its presumptions, limitations, and lack of extensibility… I much prefer the wild and wooly QuickTime or Core Audio).

So, if we’re going to talk about iBooks Author, let me know how it holds up for long documents: if it’s pretty on page 1, is it still usable when you’re 200 pages in? Does it offer useful tools for managing huge numbers of assets? Does it provide its own revision system and change tracking, or does it at least play nicely with Subversion and Git? Can it be used in a collaborative environment? These are interesting questions, at least to people who plan to use the tool to publish books on the iBookstore.

But if Apple’s not giving you a pretty, free tool you can use to write .mobi files that Amazon can sell Kindles with? Sorry, Whiny Little Bitch Contingent, I’ve got zero sympathy for you there. Call it a third party opportunity. Or just put on your big boy underwear and do it yourself.

Take it, Geddy:

You don’t get something for nothing
You can’t have freedom for free
You won’t get wise
With the sleep still in your eyes
No matter what your dreams might be

Five Years, That’s All We’ve Got

As mentioned before, I’m not a big fan of panels at developer conferences, and whenever I’m in one, I deliberately look for points of contention or disagreement, so that we don’t end up as a bunch of nodding heads who all toe a party line. Still, at last week’s CocoaConf in Raleigh, I may have outdone myself.

When an iOS developer in the audience asked if he should get into Mac development, most of the panel said “go for it”, but I said “don’t bother: the Mac is going to be gone in 5-10 years. And what’s going to kill it is iOS.”

This is wrong, but not for the reason I’ll initially be accused of.

First, am I overstating it? Not in the least. Just looking at where things stand today, and the general trends in computing, I can’t see the Mac disappearing in less than five years, but I also can’t imagine it prevailing more than another 10.

This isn’t the first time I’ve put an unpopular prediction out there: in 2005, back when I was with O’Reilly and right after the Intel transition announcement, I predicted that Mac OS X 10.6 would come out in 2010 and be Intel-only. This was called “questionable”, “dumb”, and “ridiculous advice”. Readers said I had fooled myself, claimed I was recommending never upgrading because something better is always coming (i.e., a straw man argument), argued Intel wouldn’t be that big a deal, and predicted that PPC machines would still be at least as common as Intel when 10.6 came out. For the record, 10.6 came out in August, 2009 and was indeed Intel-only. Also, Intel Macs had already displaced PowerPC in the installed base by 2008.

So, before you tell me I’m an idiot, keep in mind that I’ve been told that lots of times before.

Still, punditry has been in the “Mac is doomed” prediction business for 20 years now… so why am I getting on board now?

Well, the fact that I’m writing this blog on my iPad is a start. And the fact that I never take my MacBook when I travel anymore, just the iPad. And the issue that Lion sucks and is ruining so much of what’s valuable about the Mac. But that’s all subjective. Let’s look at facts and trends.

iOS Devices are Massively Outselling Mac OS X

One easy place to go for numbers is Apple’s latest financial reports. For the quarter that ended Sept. 30, 2011 (4Q by Apple’s financial calendar), the company sold 4.89 million Macs, 11.12 million iPads, and 17.07 million iPhones. That means the iPad is outselling all models of Mac by a factor of more than 2 to 1.

The numbers also mean that iOS devices are outselling Mac OS X devices by a ratio of at least 5.7 to 1… and it must be much more, since Apple also sold 6.62 million iPods (since these aren’t broken down by model, we don’t know which are iOS devices and which aren’t).
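As a quick sanity check on those ratios, using just the unit figures quoted above:

```javascript
// Unit sales from Apple’s 4Q 2011 results, in millions (figures cited above).
const macs = 4.89;
const ipads = 11.12;
const iphones = 17.07;

// iPad alone vs. every model of Mac: a bit better than 2 to 1.
console.log((ipads / macs).toFixed(2));              // "2.27"

// iPhone + iPad vs. Mac, before counting any iOS-based iPods: about 5.8 to 1.
console.log(((ipads + iphones) / macs).toFixed(2));  // "5.76"
```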

While the Mac is growing, iOS is growing much faster, so this gap is only going to continue to grow.

Goin’ Mobile

Let’s also think about just which Macs are selling. The answer is obvious: laptops. The Mac Unit Sales chart in this April 2011 MacWorld article shows Apple laptops outselling desktops by a factor of nearly 3-to-1 for the last few quarters. Things are so cold for desktops that there is open speculation about whether the Mac Pro will ever be updated, or if it is destined to go the way of the Xserve.

Now consider: Apple has a whole OS built around mobility, and it’s not Mac OS X. If the market is stating a clear preference for mobile devices, iOS suits that better than the Mac, and the number of buyers who prefer a traditional desktop (and thus the traditional desktop OS) is dwindling.

Replace That Laptop!

Despite the fact that it’s only about 28% of Apple laptop sales, I would argue the definitive Mac laptop at this point is the MacBook Air. It’s the most affordable MacBook, and has gotten more promotion this year than any other Mac (when was the last time you saw an iMac ad?). It also epitomizes Apple’s focus on thinness, lightness, and long battery life.

But on any of these points, does it come out ahead of an iPad 2? It does not. And it costs twice as much. The primary technical advantages of the Air over the iPad 2 are CPU power, RAM, and storage.

Is there $500 worth of win in having bigger SSD options or a faster CPU?

Few People Need Trucks…

“But”, you’re saying, “I can do real work on a Mac”. Definitely true… but how much of it can you really not do on an iPad? For last month’s Voices That Matter conference, I wrote slides for two talks while on the road, using Keynote, OmniGraffle, and Textastic… the same as I would have used on my Mac Pro, except for using Xcode to work with code instead of Textastic. I also delivered the talk via Keynote off the iPad with the VGA cable, and ran the demos via the default mirroring over the cable.

I’ve gone three straight trips without the laptop and really haven’t missed it. And I’m not the only one. Technologizer’s Harry McCracken posted a piece the other week on How the iPad 2 Became My Favorite Computer.

Personally, the only thing that I think I’m really missing on my iPad is Xcode, so that I could develop on the road. I suspect we’ll actually see an Xcode for iPad someday… the Xcode 4 UI changes and its single-window model made the app much more compatible with iPad UI conventions (the hide-and-show panes could become popovers, for example). And lest you argue the iPad isn’t up to the challenge, keep in mind that when Xcode 1.0 was first released in Fall, 2003, a top-of-the-line Power Mac G5 had two single-core 2.0GHz processors and 512 MB RAM, whereas the current iPad 2 has a dual-core A5 running at 1.0GHz and 512 MB RAM, meaning the iPad is already in the ballpark.

…and Nobody Needs a Bad Truck

“But Mac OS X is a real OS,” someone argues. Sure, but two things:

  • How much does that matter – a “real OS” means what? Access to a file system instead of having everything adjudicated by apps? I’d say that’s a defining difference, maybe the most important one. But it matters a lot less than we might think. Most files are only used in the context of one application, and many applications make their persistence mechanism opaque. If your mail was stored in flat files or a database, how would you know? How often do you really need or want the Finder? With a few exceptions, the idea of apps as the focus of our attention has worked quite well.

    What exceptions? Well, think of tasks where you need to combine information from multiple sources. Building a web page involves managing HTML, CSS, JavaScript, and graphic files: seemingly a good job for a desktop OS and bad job for iOS. But maybe it argues for a task-specific app: there’s one thing you want to do (build a web page), and the app can manage all the needed files.

  • For how long will Mac OS X be a “real OS”? – Anyone who follows me on Twitter knows I don’t like Lion. And much of what I don’t like about it is the ham-handed way iOS concepts have been shoehorned into the Mac, making for a good marketing message but a lousy user experience. Some of it is just misguided, like the impractical and forgettable LaunchPad.

    But there’s a lot of concern about the Mac App Store, and the limitations being put on apps sold through the MAS. This starts with sandboxing, which makes it difficult or impossible for applications to damage one another or the system as a whole. As Andy Ihnatko pointed out, this utterly emasculates AppleScript and Automator. Daniel Steinberg, in his “Mac for iOS Programmers” talk at CocoaConf, also wondered aloud if inter-application communication via the NSDistributedNotificationCenter will be the next thing to go. And plenty of developers fear that in time, Apple will prohibit third-party software installs outside of the MAS.

Steve Jobs once made a useful analogy that many people need cars and only a few need trucks, implying that the traditional PC as we’ve known it is a “truck”, useful to those with advanced or specific needs. And that would be fine… if they were willing to let the Mac continue to be the best truck. But instead, the creep of inappropriate iPad-isms and the iOS-like limitations being put on Mac apps are encroaching on the advanced abilities that make the truck worth owning in the first place.

The Weight of the Evidence

Summarizing these arguments:

  • iOS devices are far outselling Mac OS X, and the gulf is growing
  • The iPad and the MacBook (the only Mac that matters) are converging on the same place on the product diagram: an ultra-light portable computing device with long battery life
  • iOS is already capable of doing nearly anything that Mac OS X can do, and keeps getting better.
  • Mac OS X shows signs of becoming less capable, through deliberate crippling of applications by the OS.

Taken together, the trends seem to me like they argue against the future of Mac OS X. Of the things that matter to most people, iOS generally does them better, and the few things the Mac does better seem like they’re being actively subverted.

Some people say the two platforms will merge. That’s certainly an interesting possibility. Imagine, say, an iOS laptop that’s just an iPad in a clamshell with a hardware keyboard, likely still cheaper than the Air, and the case for the MacBook gets weaker still.

Given all this, I think OS X becomes less necessary to Apple — and Apple users — with each passing year. When does it reach a point where OS X doesn’t make sense to Apple anymore? That’s what I’m mentally pencilling in: 5-10 years, maybe after one or two more releases of OS X.


But at the beginning of this post, I said I was wrong to say that an iOS developer shouldn’t get into Mac programming because the Mac is doomed. And here’s why that’s wrong: you shouldn’t let popularity determine what you choose to work with. Chasing a platform or a language just because it’s popular is always a sucker’s bet. For one thing, times change, and the new hotness becomes old news quickly. Imagine jumping into Go, which in 2009 grew fast enough to become the “language of the year” on the TIOBE programming language index. It’s currently in 34th, just behind Prolog, ahead of Visual Basic .NET, and well behind FORTRAN (seriously!).

Moreover, making yourself like something just because it’s popular doesn’t work. As Desktop Java faded into irrelevance, I studied C++ and Flash as possible areas of focus, and found that I really didn’t like either of them. Only when the iPhone OS came along did I find a platform with ideas that appealed to me.

Similarly, I greatly value the time I spent years ago studying Jini, Sun’s mis-marketed and overblown self-networking technology. Even though few applications were ever shipped with it — Jini made the fatal mistake of assuming a Java ubiquity that did not actually exist outside of Sun’s labs — the ideas it represented were profound. Up to that point, I had never seen an API that presented such a realistic view of networking, one that kept in mind the eight fallacies of distributed computing and made them manageable, largely by treating failure as a normal state, rather than an exceptional one. In terms of thinking about hard problems and how stuff works (or doesn’t work) in the real world, learning about Jini was hugely valuable to me at the time I encountered it.

And had I known Jini was doomed, would I have been better off studying something more popular, like Java Message Service (which my colleagues preferred, since it “guaranteed” message delivery rather than making developers think about failure)? I don’t think so. And so, even if I don’t think the Mac has a particularly bright future, that’s no reason for me to throw cold water on someone’s interest in learning about the platform. There are more than 20 years of really good ideas in the Mac, and if some of them enlighten and empower you, why not go for it?

Secret APIs

Discussing Apple’s Java deprecation, Java creator James Gosling blogged about the background of Java on the Mac, saying “the biggest obstacle was their use of secret APIs. Yes, OS X has piles of secret APIs. Just like the ones that Microsoft had that contributed to their antitrust problems.”

In a recent Q&A at Google, available on YouTube, he elaborates further, around 43 minutes in (embedded YouTube clip will take you right there, otherwise read the blockquote):


At Sun, we had worked with them to try to take it over. But there were all kinds of issues, and it was mostly things like, you know, to integrate properly into the Mac OS, there were a bunch of secret APIs. And in their integration, there were all these secret APIs, and they wouldn’t tell us what they were, we just knew they were there. And then, you know, it’s sort of like half their brain wanted to give us the code, half their brain is like “no no no no no, we can’t”. So, nyah, that was all kind of spastic.

The fact that Dr. Gosling brings up “secret APIs” repeatedly when talking about the subject makes me think he really wants to make the point that Apple’s use of secret APIs and its intransigence have been a major problem for Java on the Mac.

But… is it true? How big a deal are secret APIs in OSX and iOS anyway?

Nobody denies that there are undocumented and otherwise secret APIs throughout both OSX and iOS. They are easily found through techniques such as reverse-engineering and method swizzling. On OSX, they can be called, provided you can figure out their proper usage without documentation. Technically, this is also possible on iOS, although use of non-public APIs will get your app rejected by the App Store, so it’s largely pointless.
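For anyone who hasn’t run into the term, method swizzling means swapping a method’s implementation at runtime via the Objective-C runtime. As a rough cross-language sketch of the idea — this is a JavaScript monkey-patch analogy, not how the Objective-C runtime actually works, and the swizzle helper is made up for illustration:

```javascript
// Toy analog of method swizzling: replace an object’s method at runtime,
// keeping a handle to the original so the replacement can call through.
function swizzle(obj, name, replacement) {
  const original = obj[name];
  obj[name] = function (...args) {
    // The replacement receives the original implementation as its first argument.
    return replacement.call(this, original.bind(this), ...args);
  };
}

const greeter = { greet: (who) => "Hello, " + who };
swizzle(greeter, "greet", (orig, who) => orig(who) + "!");
console.log(greeter.greet("world")); // "Hello, world!"
```

This interposition trick is also roughly what reverse-engineering relies on: once you can get between the caller and the method table, you can observe calls whether they’re documented or not.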

The benign explanation for secret APIs is that they’re used internally but haven’t been fully vetted for use by third parties. We’ve all written code we’re not proud of and wouldn’t want others calling, or at least written utility functions and methods that were only thought through for certain uses and aren’t known to be appropriate for general use. An interesting example is iOS’ UIGetScreenImage function. As a devforums thread indicates, Apple started allowing use of this private API in 2009 because there wasn’t a good public alternative, with the proviso that its use would be disallowed once a suitable public API was released. This occurred with the arrival of AV Foundation in iOS 4.0, and direct calls to UIGetScreenImage are again grounds for App Store rejection.

Aside from technical grounds, another reason for secret APIs is legal entanglements. There was an example of this in one of my earliest blogs: Apple licensed AAC encoding for OS X and for its own apps on Windows (iTunes, QuickTime Player), but not for third-party apps on Windows. According to Technical Q&A QA1347, a developer who wanted to provide this functionality on Windows would need to license the AMR encoding separately from VoiceAge, then provide proof of that license to Apple in order to get an SDK that would allow their code to make the secret call into QuickTime’s encoder.

But what can we say about Dr. Gosling’s complaints about secret APIs and Java? Certainly it plays well to the passions and politics of the Java community, but I’m not yet convinced. We know that most of Java actually ports to the Mac pretty easily: Landon Fuller’s “Soy Latte” project ported JDK 6 to the Mac in just a few person-weekends, and was later incorporated into OpenJDK’s BSD Ports subproject. But that left out some hard parts with intense native entanglements: sound, and the UI (Soy Latte, like most vanilla Java ports, relies on X11). Gosling acknowledges this in his blog, saying of these secret APIs that “the big area (that I’m aware of) where these are used is in graphics rendering.”

However, does this seriously mean that porting the Java graphics layer — Java2D, AWT, and Swing — is impractical or impossible without access to these secret APIs? It can’t be. After all, SWT exists for Mac as well, as a third-party creation, and it does the same things as these missing pieces of OpenJDK. In fact, SWT is more tightly coupled to native code, as its whole approach is to bind Java objects to native peers (originally in Carbon, later in Cocoa), while Swing is all about avoiding native entanglements and instead painting look-alike widgets. Furthermore, I think Java’s rendering pipeline was switched over to an OpenGL implementation a while back, and that’s a public API that exists on OSX. So this raises the question: what does Java need that isn’t provided by a public API? It doesn’t seem like graphics can be the problem.

The conspiracy theorists could argue that Apple has its own APIs that are more performant than the public APIs. Maybe, but what would be the point? Microsoft was roundly criticized for this in the 90’s, but Microsoft had more cases where their own products competed directly with third parties, and therefore could have incentive for their OS team to give a secret hand to the applications team. With Apple, software is their second-smallest revenue segment, and there are fewer cases where the company competes directly with a third-party rival (though there are clearly cases of this, such as Final Cut versus Premiere). Often, Apple’s software serves a strategic role – iLife may be more useful for selling Macs than for selling itself on DVD to existing Mac owners. So sure, Apple could be using secret APIs to give itself a leg up on competitors, but it’s hard to see how that would really be in their self-interest.

Having said all this, I’m still thwarted by a private API I needed this Summer: the “suck into a point” animation isn’t exposed by a Cocoa API on OSX, and asking for help on cocoa-unbound didn’t turn up an answer. Apparently, it’s possible on iOS, but via an undocumented method. Why this isn’t public on OSX or iOS, I can’t imagine, particularly given that Apple’s apps have made it a fairly standard behavior, meaning users will expect it when you use the round close button on a free-floating view. Oversight? Not ready for public consumption? Apple just being dicks? Who knows!

Of course, that brings up the last point about secret APIs. At the end of the day, they’re almost always conveniences. If something is possible at all, you could probably just do it yourself. I don’t know exactly what transforms are involved in the suck-to-close animation, but it’s surely possible to create a reasonably close approximation with Core Animation. Similarly, instead of calling QuickTime’s secret AAC encoder on Windows, you could license some other library or framework, or write your own. It might not be easy or practical, but if Apple can move the bits in some specific way, it must at least be possible for a third-party to do the same.

It’s like “Glee” with coding instead of singing

Like a lot of old programmers — “when I was your age, we used teletypes, and line numbers, and couldn’t rely on the backspace key” and so on — I sometimes wonder how different it is growing up as a young computer programmer today. Back in the 80’s we had BBSs, but no public internet… a smattering of computer books, but no O’Reilly… and computer science as an academic discipline, but further removed from what you’d actually do with what you’d learned.

Developers my age grew up on some kind of included programming environment. Prior to the Mac, every computer came with some kind of BASIC, none of which had much to do with each other beyond PRINT, GOTO, and maybe GOSUB. After about the mid-80’s, programming became more specialized, and “real” developers would get software development kits to write “real” applications, usually in some variant of C or another curly-brace language (C++, C#, Java, etc.).

But it’s not like most people start with the formal tools and the hard stuff, right? In the 80’s and 90’s, there were clearly a lot of young people who picked up programming by way of HyperCard and other scripting environments. But those have largely disappeared too.

So what do young people use? When I was editing for O’Reilly’s ONJava website, our annual poll of readers revealed that our under-18 readership was effectively zero, which meant that young people either weren’t reading our site, or weren’t programming in Java. There has to be some Java programming going on at that age — it is the language for the Advanced Placement curriculum in American high schools, after all — but there’s not a lot of other evidence of widespread Java coding by the pre-collegiate set.

I’ve long assumed that where young people really get their start today is in the most interesting and most complete programming environment provided on every desktop computer: the web browser. I don’t want to come off like a JavaScript fanboy — my feelings about it are deeply mixed — but the fact remains that it is freely and widely available, and delivers interesting results quickly. Whereas 80’s kids would write little graphics programs in Applesoft BASIC or the obligatory 10 PRINT "CHRIS IS GREAT" 20 GOTO 10, these same kinds of early programming experiences are probably now being performed with the <canvas> tag and document.write(), respectively. In fact, the formal division of DOM, CSS, and JavaScript may lead the young programmer to a model-view-controller mindset a lot sooner than was practical in your local flavor of BASIC.
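To make that concrete, here’s a toy sketch (nobody’s actual first program, obviously) of what the BASIC chestnut looks like in its browser-era form:

```javascript
// Browser-era analog of: 10 PRINT "CHRIS IS GREAT"  20 GOTO 10
// (bounded here, since a real infinite loop would just lock up the page).
function chant(line, times) {
  return Array.from({ length: times }, () => line).join("\n");
}

const output = chant("CHRIS IS GREAT", 3);
console.log(output);
// In a browser you’d dump it into the page instead, e.g.:
//   document.write(output.replace(/\n/g, "<br>"));
```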

The other difference today is that developers are much better connected, thanks to the internet. We didn’t used to have that, so the programmers you knew were generally the ones you went to school with. I was lucky in this respect in that the guys in the class above me were a) super smart, and b) very willing to share. So, 25 years later, this will have to do as a belated thank you to Jeff Dauber, Dean Drako, Drew Shell, Ed Anderson, Jeff Sorenson, and the rest of the team.

Did I say “team”? Yeah, this is the other thing we used to do. We had a formal computer club as an activity, and we participated in two forms of programming contests. The first is the American Computer Science League — which I’m relieved to see still exists — which coordinated a nation-wide high school computer science discovery and competition program, based on written exams and proctored programming contests. The curriculum has surely changed, but at least in the 80’s, it was heavily math-based, and required us to learn non-obvious topics like LISP programming and hexadecimal arithmetic, both of which served me well later on.

Our school also participated in a monthly series of programming contests with other schools in the suburban Detroit area. Basically it worked like this: each team would bring one Apple II and four team members and be assigned to a classroom. At the start of the competition, each team would be given 2-4 programming assignments, with some sample data and correct output. We’d then be on the clock to figure out the problems and write up programs, which would then be submitted on floppy to the teachers running the contest. Each finished program scored 100 points, minus 10 points for every submission that failed with the secret test data, and minus 1 point for every 10 minutes that elapsed.
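As I remember it, the scoring worked out to roughly the following. This is a back-of-the-envelope sketch from memory; the class and method names are mine, not anything the contest organizers published:

```java
// Hypothetical restatement of the contest scoring described above: each solved
// problem starts at 100 points, loses 10 per submission that failed against
// the secret test data, and loses 1 point per 10 minutes elapsed.
public class ContestScore {

    static int score(int failedSubmissions, int minutesElapsed) {
        return 100 - (10 * failedSubmissions) - (minutesElapsed / 10);
    }

    public static void main(String[] args) {
        // One failed submission, solved 35 minutes in: 100 - 10 - 3
        System.out.println(score(1, 35)); // prints 87
    }
}
```

The elapsed-time penalty is what made it a race: a clean, fast solution beat a thorough-but-slow one.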

I have no idea if young people still do this kind of thing, but it was awesome. It was social, it was practical, it was competitive… and it ended with pizza from Hungry Howie’s, so that’s always a win.

Maybe we don’t need these kinds of experiences for young programmers today. Maybe a contrived contest is irrelevant when a young person can compete with the rest of the world by writing an app and putting it on the App Store, or by putting up a web page with all manner of JavaScript trickery and bling. Still, it’s dangerous to get too tied to the concretes of today, the specifics of CSS animations and App Store code-signing misery. Early academic exercises like learning to count in hex, even if it’s to score points on a quiz, will likely pay off later.


I’m late to the party blogging about Apple deprecating its Java for Mac OS X, with particularly good posts up from Matt Drance and Lachlan O’Dea. And I’ve already posted a lot of counterpoints on the Java Posse’s Google group (see here, here, and here). So let me lead with the latest: there’s now a petition calling on Apple to contribute its Mac Java sources to OpenJDK. This won’t work, for at least four reasons I can think of:

  • Petitions never work.
  • Apple doesn’t listen.
  • Apple is a commercial licensee of Java. The terms in their contract with Sun/Oracle almost certainly prohibit open-sourcing their Sun-derived code.
  • Even if it could be open-sourced, Apple’s code was developed for years with the assumption it was proprietary code. The company would want nothing less than an absolutely forensic code analysis to ensure there are no loopholes, stray imports or links, or anything else that a creative FSF lawyer could use to claim that Mac OS X links against the GPL’ed OpenJDK and must therefore itself be GPL’ed. Apple has nothing to gain and everything to lose by open-sourcing its JDK, so don’t hold your breath.

It’s also frustratingly typical that many of the signatories of this petition have spent the last week blogging, tweeting, podcasting, and posting that they will never buy another Apple product (and they’re gonna go tell all their friends and relations that Apple sucks now, just for good measure). Here’s the thing: companies generally try to do right by their customers. When you assert that you will never be their customer again, you remove any reason the company would have to listen to your opinion. Really, this much should be common sense, right?

Buried in all the denunciations of “control freak Steve Jobs” and his nefarious skullduggery is a wake-up call that Oracle and the Java community need to hear: one of your biggest commercial licensees, the second biggest US corporation by market cap, doesn’t think licensing Java will help them sell computers anymore. Why does nobody take this screamingly obvious hint?

I imagine part of the reason that the activists want Apple to contribute its code instead of letting it go to waste is that they’ve realized what a tall order it would be for the community to attempt a port on its own. Java is huge, and its native dependencies are terribly intractable: even the vaunted Linux community couldn’t get Blackdown Java up to snuff, and Sun came to the rescue with an official version for Linux. How likely is it that we’ll find enough people who know Java and Mac programming well enough — and who care to contribute their time — to do all this work for free? It may well be a non-starter. Landon Fuller got the headless bits of JDK 6 ported to OS X in a few weeks in the form of the Soy Latte project, but had to settle for an X11-based UI, and his announcement asked for help with Core Audio sound support and OS X integration in general… and I don’t believe any help ever materialized.

Aside: when volunteers aren’t enough, the next step is to get out the checkbook and call in mercenaries. I looked at javax.sound yesterday and estimated it would take me 4-6 weeks, full-time, to do a production-quality port using Core Audio. I’ll bid the project out at $20,000. Sign a contract and I’ll contribute the sources to any project you like (OpenJDK, Harmony, whatever). You know where to reach me. And I’m not holding my breath.

The real problem with porting Java is that Java’s desktop packages – AWT, Swing, javax.sound, etc. – are very much a white elephant, one which perfectly fits Wikipedia’s definition:

A white elephant is an idiom for a valuable possession of which its owner cannot dispose and whose cost (particularly cost of upkeep) is out of proportion to its usefulness or worth.

As I’ve established, Java’s desktop packages are egregiously expensive. In fact, with Apple’s exit, it’s not clear that there’s anybody other than Oracle delivering a non-X11 AWT/Swing implementation for any platform: it’s just too much cost and not enough value. End-user Desktop Java applications are rare and get rarer every day, displaced largely by browser-based webapps, but also by Flash and native apps.

We know the big use for AWT and Swing, and it’s a terrible irony: measured by app launches or time spent in an app, the top AWT/Swing apps are surely NetBeans and IntelliJ, IDEs used for creating… other Java applications! The same can be said of the SWT toolkit, which powers the Eclipse IDE and not much else. This is what makes this white elephant so difficult to dispose of: all the value of modern-day Java is writing for the server, but nearly all the Java developers are using the desktop stuff to do so, making them the target market (and really the only market) for Desktop Java. If there were other viable Java applications on the desktop, used by everyday end-users, Apple couldn’t afford to risk going without Java. There aren’t, and it can.

There’s lots of blame to go around, not the least of which should be directed at Sun for abandoning desktop Java right after Swing 1.0 came out. It’s embarrassing that my five-year-old Swing book is more up-to-date with its technology than my year-old-iPhone book is, but it illustrates the fact that Apple has continued to evolve iOS, while Sun punted on client-side Java for years, and then compounded the problem by pushing aside the few remaining Swing developers in favor of the hare-brained JavaFX fiasco. Small wonder that nearly all the prominent desktop Java people I know are now at Google, working on Android.

Other languages are easier to port and maintain because Ruby, Python, and the like don’t try to bring their own entire desktop API with them from platform to platform. Again, the irony is that this stuff is so expensive in Java, yet has so little to do with the server-side work where Java provides nearly all of its value. I’m not the first to say it, but Oracle would do itself a huge favor by finding a way to decouple all this stuff from the parts of Java that actually get used, likely as a result of Project Jigsaw. Imagine something like this:

Splitting desktop stuff out of Java SE

In this hypothetical arrangement, the desktop packages are migrated out of Java SE, which now contains the baseline contents of Java that everybody uses: collections, I/O, language utilities, etc. These are the parts that EE actually uses, and that dependency stays in place. I’ve colored those boxes green to indicate that they don’t require native widgets to be coded. What does require that is the new Java “Desktop Edition” (“Java DE”), which is SE plus AWT, Swing, Java2D, javax.sound, etc. For the rare case where you need UI stuff on the server, like image manipulation as a web service, combine EE and DE to create “Java OE”, the “Omnibus Edition”. Also hanging off SE are ME (which is SE minus some features, and with a new micro UI), as well as non-Sun/Oracle products that derive from SE today, such as Eclipse’s quasi-platform, and Android.

Of course, it’s easy to play armchair architect: the devil is in the details, and Mark Reinhold mentioned on Java Posse 325 that there are some grievously difficult dependencies created by the unfortunate java.beans package. Still, when the cost of the desktop packages is so high, and their value so low, they almost certainly need to be moved off the critical path of Java’s evolution somehow. The most important thing Oracle needs to provide in Java 7 or 8 is an exit strategy from desktop Java.
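In Jigsaw-ish terms, the split might look something like the following. These module declarations are purely hypothetical: the names are mine, each “module” would really live in its own module-info.java file, and none of this is anything Project Jigsaw has actually promised.

```java
// Armchair-architect sketch only; invented module names.

module java.se {            // the green baseline everybody uses
    exports java.util;      // collections
    exports java.io;        // I/O
}

module java.de {            // "Desktop Edition": SE plus the native-widget packages
    requires java.se;
    exports java.awt;
    exports javax.swing;
    exports javax.sound.sampled;
}

module java.ee {            // the server stack depends only on the baseline, not on DE
    requires java.se;
}

module java.oe {            // "Omnibus Edition" for the UI-on-the-server cases
    requires java.ee;
    requires java.de;
}
```

The point of the sketch is the dependency arrows: nothing on the server side would pull in a single native widget.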

Instead, we’ll continue to hear about what an utter rat-bastard Steve Jobs is, and how the deprecation of Apple’s Java is part of a scheme to “force developers to use Objective-C”, as if there were a flood of useful and popular Java applications being shoved off the Mac platform. Many of the ranters insist on dredging up Jobs’ vow to “make the Mac the best Java platform”, in determined ignorance of the fact that this statement was made over ten years ago, at JavaOne 2000. For context: when Jobs was on the JavaOne stage, the US President was Bill Clinton, and Sun was #150 on the Fortune 500, ahead of Oracle and Apple (and Google, which didn’t even enter the list until 2005). If TV’s Teen Titans taught us anything, it’s that Things Change.

And for a while, maybe the Mac was briefly the best Java platform: unlike Windows (legally enjoined from shipping Java by their attempts to subvert it) and Linux (where many distros turned up their noses at un-free Java for years), Apple shipped Java as a core part of the OS for years. Not an option: it was installed as part of the system and could not practically be removed. Apple’s Mac look-and-feel for Swing was also widely praised. Do you know who said the following?

I use the MAC because it’s a great platform. One of the nice things about developing in Java on the Mac is that you get to develop on a lovely machine, but you don’t cut yourself off from deploying on other platforms. It’s a fast and easy platform to develop on. Rock solid. I never reboot my machine… Really! Opening and closing the lid on a Powerbook actually works. The machine is up and running instantly when you open it up. No viruses. Great UI. All the Java tools work here: NetBeans and JEdit are the ones I use most. I tend to think of OSX as Linux with QA and Taste.

It was Java creator James Gosling, in a 2003 blog entry. As you might imagine, Apple dumping Java seven years later has him singing a different tune.

A lot of us who’ve developed Java on the Mac have expected this for years, and this week’s reactions are all a lot of sound and fury, signifying nothing. The Java crowd will keep doing what it does — making web sites — but they’ll have to make a choice: either use Windows or Linux for development, or get away from the IDEs and write their code with plain text editors and the command line on the Mac. If I were still doing Java, I probably would barely notice, as I wrote nearly all my Java code with emacs after about 2000 or so, and never got hooked on NetBeans or Eclipse. But with the most appealing desktops and the most popular mobile devices ditching Java, it’s time for the Java community to face up to the fact that Desktop Java is a ruinously expensive legacy that they need to do something about.

All the angry screeds against Steve Jobs won’t change the fact that this is the “ball and chain” that’s pulling Java below the waves.

More things in Heaven and Earth

There are more things in Heaven and Earth, Horatio, than are dreamt of in your philosophy.
Hamlet, Act 1, Scene V

There’s suddenly a lot of conventional wisdom that says the rise and eventual dominance of Android is manifest, and inevitable. Some of these claims make dubious analogies to Windows’ defeat of the Mac in the 90’s, ramming square pegs through round holes to make the analogy stick (to wit: who are the hardware manufacturers this time, the handset makers or the carriers?). It may indeed come to pass, but the reasoning behind these claims is pretty shallow thus far.

Case in point: an Appcelerator survey covered in The Apple Blog story Devs Say Android is Future-Proof. iOS? Not So Much. The reasoning for Android’s perceived advantage? This article doesn’t mention Android’s license terms and widespread hardware adoption (maybe that’s taken for granted at this point?), and instead mentions only the appeal of writing apps for GoogleTV, a product that is not even out yet (meaning Adamson’s First Law applies), to say nothing of how many purported “interactive television revolutions” we’ve suffered through over the decades (Qube, videotex, WebTV, Tru2Way, etc.). Maybe it’ll be the next big thing, but history argues otherwise.

In the 90’s, the rise of Java seemed an obvious bet. Applets would make web pages far more compelling than static pages and lengthy form submits, and application developers would surely be better off with garbage collection and strong typing than with C and C++. Java was so sure to be big that Microsoft threw the full force of its dirty tricks machine at it, while Apple exposed most of the Mac’s unique libraries to Java bindings (including, at various times, QuickTime, Cocoa, Core Audio, speech, and more). But it didn’t work out that way: Java in the browser was displaced by JavaScript/Ajax, and the early attempts to write major desktop applications in Java were unmitigated disasters, with the Netscape Navigator port abandoned and Corel’s Java version of WordPerfect Office buried almost immediately after its release. 1996’s sure bet was a has-been (or a never-was) by 2001.

If you think about it, the same thing happened a few years ago with AIR. With the YouTube-powered rise of Flash, AIR seemed a perfect vehicle to bring hordes of Flash developers to the desktop. Everyone knew it would be big. Except it wasn’t. AIR applications are rare today, perhaps rarer even than Java. Admittedly, I only remembered AIR existed because I needed to download the AIR-powered Balsamiq application for a client this week… the exception that proves the rule, I guess?

My point in all this is that the conventional wisdom about platform success has a tendency to be selective in considering what factors will make or break a platform. Licensing, corporate support, community, and of course the underlying technology all play a part. Android is greatly enhanced by the fact that Google puts talented people behind it and then gives it away, but if carriers then use it to promote their own applications and crapware over third-party apps (or cripple them, as they did with JavaME), then Android’s advantage is nil. On the other hand, Apple’s iOS may have remarkable technology, but if their model requires using their corporate strength to force carriers to be dumb pipes, then they may only be able to get iPhone on weaker carriers, which will turn off consumers and retard growth of the platform.

Ultimately, it’s hard to say how this will all play out, but assuming an Android victory based on the presumed success of currently non-existent tablets and set top boxes is surely an act of faith… which probably accounts for all the evangelism.

So why am I on iOS now? Is it because I have some reason to think that it will “win”? Not at all. Mostly it’s because I like the technology. In the mid 2000’s, when user-facing Java was in terminal decline, I tried to learn Flash and Flex to give myself more options, but I just couldn’t bring myself to like it. It just didn’t click for me. But as I got into Cocoa and then the iPhone SDK, I found I liked the design patterns, and the thoughtfulness of all of it. The elegance and power appealed to me. Being a media guy, I also appreciate the platform’s extraordinary support for audio and video: iOS 4 has three major media APIs (AV Foundation, Core Audio, and Media Player), along with other points of interest throughout the stack (video out in UIKit, the low-level abstractions of Core Media, spatialized sound in OpenAL, high-performance DSP functions in the Accelerate framework, etc.). Android’s media package is quite limited by comparison, offering some canned functionality for media playback and a few other curious features (face recognition and dial tone generation, for example), but no way to go deeper. When so many media apps for Android are actually server-dependent, like speech-to-text apps that upload audio files for conversion, it says to me there’s not much of a there there, at least for the things I find interesting.

Even when I switched from journalism and failed screenwriting to programming and book-writing in the late 90’s, at the peak of the Microsoft era, I never considered for a second the option of learning Windows programming and adopting that platform. I just didn’t like their stuff, and still don’t. The point being that I, and you, don’t have to chase the market leader all the time. Go with what you like, where you’ll be the most productive and do the most interesting work.

There’s a bit in William Goldman’s Adventures in the Screen Trade (just looked in my copy, but couldn’t find the exact quote), where the famous screenwriter excuses himself from a story meeting, quitting the project by saying “Look, I am too old, and too rich, to have to put up with this shit.” I like the spirit of that. Personally, I may not be rich, but I’m certainly past the point where I’m willing to put up with someone else’s trite wisdom, or the voice of the developer mob, telling me where I should focus my skills and talents.

Jimmy Gosling Said (I’m Irrelevant When You Compile)

Much renewed hope and delight last week, after Apple pulled back from its most audacious and appalling land-grabs in its iOS developer agreement, notably the revised section 3.3.1 that prohibited any languages but C, Objective-C, and C++ for iOS development (Daring Fireball quotes the important changes in full). Whether this is the result of magnanimity or regulatory pressures in the U.S. and Europe is unknowable, and therefore not worth debating. What’s interesting is thinking about what we can do with our slightly-loosened shackles.

For example, it would be interesting to see if someone, perhaps a young man or lady with a mind for mischief or irony, could bring Google’s Go programming language to iOS development. Since the Go SDK runs on Mac and can compile for ARM, it might well be possible to have a build script call the Go compiler as needed. And of all the hot young languages, Go might be the most immediately applicable, as it is compiled, rather than interpreted.

And that brings up the other major change, the use of interpreters. Nobody seems to be noting that the change in section 3.3.2 is not just a loosening of this Spring’s anti-Flash campaign, but is in fact far more lenient than this policy has ever been. Since the public SDK came out in 2008, all forms of interpreted code have been forbidden. This is what dashed early plans to bring Java to the iPhone as an application runner, despite its absence as an applet runner in Safari. As Matt Drance has pointed out, the new policy reflects the reality on the ground that interpreters (especially Lua) have been tolerated for some time in games. The new phrasing forbids downloading of executable content, but allows for cases where the interpreter and all scripts are included in the app bundle. This has never been allowed before, and is a big deal.

Now let me stretch the definition of “interpreter” a bit, to the point where it includes virtual machines. After all, the line between the two is hard to define: a “virtual machine” is a design philosophy, not a technical trait. A VM uses an interpreter (often a byte code interpreter rather than source, but not necessarily), and presumably has more state and exposes more library APIs. But languages and their interpreters are getting bigger – Ruby I/O is in the language rather than in a library (like C or Java), but that doesn’t make Ruby a VM, does it?

You might have surmised where I’m going with this: I don’t think the revised section 3.3.2 bans a hypothetical port of the Flash or Java VMs to iOS anymore, if they’re in a bundle with the .swf or .jar files that they will execute.

I could be wrong, particularly given Steve Jobs’ stated contempt for these sorts of intermediary platforms. But if a .swf-and-Flash-VM bundle were rejected today, it would be by fiat, and not by the letter of section 3.3.2.

Whether any of this matters depends on whether anyone has stand-alone Flash applications (such as AIR apps) or Java applications that have value outside of a browser context, and are worth bringing to a mobile platform.


I can’t say why AIR never seemed to live up to its billing, but the failings of Desktop Java can in part be blamed on massive neglect by Sun, exacerbated by internecine developer skirmishes. Swing, the over-arching Java UI toolkit, was plagued by problems of complexity and performance when it was introduced in the late 90’s, problems that were never addressed. It’s nigh impossible to identify any meaningful changes to the API following its inclusion in Java 1.2 in 1998. Meanwhile, the IBM-funded Eclipse foundation tied their SWT more tightly to native widgets, but it was no more successful than Swing, at least in terms of producing meaningful apps. Each standard powers one IDE, one music-stealing client, and precious little else.

So, aside from the debatability of section 3.3.2, and wounded egos in the Flash and Java camps, the biggest impediment to using a “code plus VM” porting approach may be the fact that there just isn’t much worth porting in the first place.

Speaking of Desktop Java, the Java Posse’s Joe Nuxoll comes incredibly close to saying something that everybody in that camp needs to hear. In the latest episode, at 18:10, he says “…gaining some control over the future of mobile Java which, it’s over, it’s Android, it’s done.” He later repeats this assertion that Android is already the only form of mobile Java that matters, and gets agreement from the rest of the group (though Tor Norbye, an Oracle employee, can’t comment on this discussion of the Oracle/Google lawsuit, and may disagree). And this does seem obvious: Android, coupled with the rise of the smartphone, has rendered Java ME irrelevant (to say nothing of JavaFX Mobile, which seems stillborn at this point).

But then at 20:40, Joe makes the big claim that gets missed: “Think of it [Android] as Desktop Java for the new desktop.” Implicit in this is the idea that tablets are going to eat the lunch of traditional desktops and laptops, and those tablets that aren’t iPads will likely be Android-based. That makes Android the desktop Java API of the future, because not only have Swing, SWT, and JavaFX failed, but the entire desktop model is likely threatened by mobile devices. Two years of Android have already produced more important, high-quality, well-known apps than the desktop Java APIs did in over a decade. Joe implies, but does not say, what should be obvious: all the Java mobile and desktop APIs are dead, and any non-server Java work of any relevance in the future will be done in Android.

No wonder Oracle is suing for a piece of it.