Why Linux is failing on the desktop

I should’ve known better. I wrote a post a few days ago detailing my frustration with Linux, and suggested (admittedly in very indelicate terms) that the global effort to develop Linux into an alternative to general-use desktop OSes such as Windows and OS X was a waste of resources. I have absolutely no idea how 400 people (most of them apparently angry Linux fans, extrapolating from the comments) managed to find their way to the article within hours of my posting it. I think they must have a phone tree or something. Nonetheless, I should’ve been more diplomatic. So, as penance, I will here attempt to write a more reasonable post better explaining my skepticism of desktop Linux, and will even try to offer some constructive suggestions. I’m sure this post will get no comments, in keeping with the universal rule of the Internet that the amount of attention a post receives is inversely proportional to the thought that went into it.

Before starting, let’s just stipulate something supported by the facts of the marketplace. Desktop Linux has been a miserable failure in the OS market. If you’ve consumed so much of the purple Kool-Aid prose of the desktop Linux community that you can’t accept that, you might as well quit reading now. Every year for the past decade or so has been proclaimed “The Year Linux Takes Off.” Except it’s actually losing market share at this point.

As I pointed out (perhaps a bit rudely) in my first post, this isn’t just a bad performance, it’s a tragic waste of energy. Can you imagine the good that could’ve been done for the world if these legions of programmers hadn’t spent over a decade applying their expertise (often for free) to a failure like desktop Linux? For one, they could’ve made a lot of money with that time and donated it to their favorite charities, assuming they were as hellbent on not making money as they appear to have been. And two, it might have been nice to see what useful things they would’ve produced had they done something somebody was actually willing to pay for, as opposed to trying to ram desktop Linux down the collective throat of the world. You know, sometimes the evil capitalistic market does useful things, like keeping people from wasting their time.

Open Source community projects put innovation and scale at odds. If an Open Source project is to be large, it must rely on the input of a huge distributed network of individuals and businesses. How can a coherent vision arise for the project in such a situation? The vacuum left by having no centralized vision is usually filled by the safe and bland decision to just copy existing work. Thus, most large-scale Open Source efforts are aimed at offering an open alternative to something, like Office or Windows, because no vision is required, just a common model to follow. This is not to say that innovation is not found in the Open Source community, but it is usually on the smaller scale of single applications, like Emacs or WordPress, that can grow from the initial seed of a small group’s efforts. The Linux kernel is a thing of beauty, and is actually a small, self-contained project. But the larger distribution of a desktop OS is another matter, and here we find mostly derivative efforts.

An OS is only as good as the software written for it. One of the great things about Open Source is the tremendous power of being able to take an existing project and spawn a new one that fixes a few things you didn’t like. While this is fine for an application, it’s problematic for a piece of infrastructure software expected to serve as a reliable, standard substrate for other software. Any Linux application requiring low-level access to the OS will have to be produced in numerous versions to match all the possible distros and their various revisions. See OpenAFS for an example of how ridiculously messy this can get. For apps, do you support GNOME or KDE or both, or just go, as many do, for the lowest common denominator? And supporting hardware-accelerated 3D graphics or video is the very definition of a moving target. There are multiple competing sound systems (OSS, ALSA, PulseAudio), and none of them is nearly as clean or simple as what’s available on Windows or the Mac. The result is usually a substandard product relative to what can be done on a more standardized operating system. Compare the Linux version of Google Earth or Skype to the Windows version of the same to see what I’m talking about. (That is, if you can even get them working at all with your graphics and sound configuration.)
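The GNOME-or-KDE question infects even trivial runtime decisions. As a minimal sketch, here is how an application of that era might guess which desktop it’s running under, using session environment variables as heuristics; these variable names were common conventions, not any official API, which is rather the point:

```python
import os

def detect_desktop(env=None):
    """Guess the running desktop environment from session variables."""
    if env is None:
        env = os.environ
    # KDE sessions of that era exported KDE_FULL_SESSION=true;
    # GNOME sessions exported GNOME_DESKTOP_SESSION_ID. Neither is
    # a standard, so every app reinvents this check.
    if env.get("KDE_FULL_SESSION") == "true":
        return "kde"
    if "GNOME_DESKTOP_SESSION_ID" in env:
        return "gnome"
    return "unknown"
```

On Windows or the Mac, there is exactly one desktop, so this entire class of branching simply doesn’t exist.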

Pointing the finger at developers doesn’t solve the problem. To begin with, for the reasons I’m explaining in this essay, some of the poor quality of Linux software is the fault of the Linux community diluting its effective market share with too many competing APIs. Even without this aggravating factor, developers just can’t justify spending significant time and money maintaining a branch of their software for an OS with less than 3% market share. Because of this, the Linux version of commercial software is often of much lower quality than the Windows or Mac version. A prime example is the ubiquitous and practically required Flash Player. It consistently crashes Firefox. Is it Adobe’s fault? Maybe. But when this happens to somebody, do you think they know to blame Adobe or Firefox or just Linux? Does it even matter? It’s just one more reason for them to switch back to Windows or the Mac. And for the record, why should Adobe bother to make a good version of Flash Player for a platform with no stability and few users?

The solution to all of this is easy to state but hard to enforce. (There are downsides to freedom.) Somehow the fractious desktop Linux community must balance the ability to innovate freely with the adoption of, and adherence to, standards. Given its low market share, Linux has to be BETTER than Windows as a development target, not just as good or worse. However, one of the problems with Linux seems to be a certain arrogance on the part of its developers, who consider applications as serving Linux, and not the other way around. An OS is only as good as the programs written for it, and perhaps the worst thing about Linux is that it hinders the development of applications by constantly presenting a moving target and requiring developers to spend too much time programming and testing for so many variants.

It’s downright laughable that an OS with single-digit market share would further dilute that share by having two competing desktops. Yeah, I know KDE and GNOME are supposedly cooperating these days. But (a) it’s probably too late, (b) the cooperation is far from perfect, and (c) even if you disagree that it dilutes effective market share, it still dilutes the development effort to maintain two desktop code bases. For God’s sake, somebody kill KDE and make GNOME suck less! Yeah, I know that’s never going to happen. That’s why the title of this essay is what it is.

For all Microsoft gets wrong, the company understands one crucial thing: developers are the primary customer of an operating system. Microsoft may advertise to consumers, but it knows that at the end of the day it is developers whom it serves. The Linux community just doesn’t get this.

Unfortunately, I don’t have much hope of desktop Linux ever becoming sufficiently standardized. If the focus shifts to making Linux friendly to applications and their developers, the distributions must become so standardized as to be effectively consolidated, and the desktop frameworks so static as to negate much of their Open Source character. For Linux to become more developer-friendly, it would have to essentially become a normal operating system with a really weird economic model.

OS development actually requires physical resources. The FOSS movement is based on the idea that information is cheap to distribute, so the human effort that goes into creating it is tremendously leveraged. People write some software, and the world perpetually benefits at little marginal cost. That works beautifully for applications, but OS development, especially desktop OS development, requires tremendous continuous resources to do correctly. For one, you need tons of machines of different configurations and vintages on which to test, which costs money. And you need a large team of people to run all those tests and maintain the machines. Any respectable software company dedicates a large share of its budget to QA, fixing stupid little bugs that happen to come up on obscure hardware configurations. Linux just can’t achieve the quality control of a commercial OS. And that’s probably why, when I “upgraded” from Gutsy to Hardy, my machine no longer completes a restart on its own. Maybe this will get fixed when the planets align and somebody with the same motherboard as me who also knows how the hell to debug this runs into the same problem, but I’m starting to get weary of this, and apparently I’m not alone, judging by the declining desktop Linux market share.


The results of my annual desktop Linux survey are in: It still sucks!

Note: If you are a member of the Orthodox Church of Linux and you suffer from high blood pressure, you might want to consult a physician before reading this. In fact, you may just want to skip to my follow-up article, which presents my criticisms of Linux in a much more explanatory form.

I’m a sucker for a good story, and Linux certainly is one: millions of programmers working out of the sheer goodness of their hearts on a project to benefit humanity by providing a free operating system. Never mind that commercial operating systems only cost about $100 anyway, and represent less than 10% of the cost of a new computer. But Microsoft makes us all so angry that if it takes billions of person-hours to save us each $100 every few years, so be it. Time well spent.

So, it’s with heady optimism and hope for the future that once a year I anxiously download and install the latest consumer desktop incarnation of Linux, my eyes watering with the promise of life without Microsoft. For the past six years, I have installed Linux at some point during the year with the hope of never having to go back. And for the past six years I have used Linux for a week or so, only to inevitably capitulate after tiring of all the little things that go wrong and which require hours searching on the web for the right incantation to type in /etc/screwme.conf. While every year it gets a little bit more reliable, I am always guiltily relieved to finally get back to Windows, where there are no xorg.conf files to get corrupted or fstab files to edit.

This year, I decided to try Ubuntu 7.10. Given the hype, I had very high hopes. It installed without a hitch, and came up working fine for the most part. Just a small problem with the screen resolution being off and my second monitor not being recognized. I thought, “That should be easy to take care of. This could be the year!”


Vista firewall resetting after updates?

The third in what I’m sure will be a long series of articles on the problems and misery of Vista. I’ve noticed that after certain updates, the Windows Firewall settings are reset. So, if you notice that certain network programs aren’t working, or you can’t log in via Remote Desktop, you may have had your firewall reset by an update.

As far as I know, there is no solution to this. Resetting the firewall is apparently a necessary part of updating certain system components. Of course, if Microsoft cared about the convenience of its users, I’m sure there’s something they could do about this. Linux, I believe, manages to update security components without resetting their configuration.
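One workaround, a sketch using Vista’s built-in netsh tool (the backup path here is made up; pick any writable location): export the firewall policy before running updates, then re-import it if an update wipes your settings.

```shell
rem Snapshot the current Windows Firewall policy before updating
netsh advfirewall export "C:\backup\firewall-policy.wfw"

rem If an update resets the firewall, restore the saved policy
netsh advfirewall import "C:\backup\firewall-policy.wfw"
```

Both commands need an elevated (administrator) command prompt, and they are Windows-only, so this is a stopgap rather than a fix.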

My recommendation is to just turn it off and get a hardware firewall. Securing Windows by trusting the firewall made by Microsoft is like paying your dog to guard your cheese fort.

Smoking and mirrors

The AP is reporting on a study showing that preventative medicine for obesity and smoking actually results in higher healthcare costs. For example, not smoking will increase your life expectancy by about 8%, but will also increase your lifetime healthcare costs by about 25%. This is the result of the disproportionate amount of money spent to keep people alive at the end of their lives. Studies have shown that one third of the lifetime cost of healthcare is incurred over the age of 85 (for those living that long). From the report:

Cancer incidence, except for lung cancer, was the same in all three groups. Obese people had the most diabetes, and healthy people had the most strokes. Ultimately, the thin and healthy group cost the most, about $417,000, from age 20 on.

The cost of care for obese people was $371,000, and for smokers, about $326,000.

The results counter the common perception that preventing obesity will save health systems worldwide millions of dollars.

“This throws a bucket of cold water onto the idea that obesity is going to cost trillions of dollars,” said Patrick Basham, a professor of health politics at Johns Hopkins University who was unconnected to the study. He said that government projections about obesity costs are frequently based on guesswork, political agendas, and changing science.
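The quoted figures are easy to sanity-check. A quick calculation, using only the numbers above, shows the thin-and-healthy group costing roughly 28% more over a lifetime than the smokers, and roughly 12% more than the obese:

```python
# Lifetime healthcare costs from age 20, per the figures quoted above
costs = {"healthy": 417_000, "obese": 371_000, "smokers": 326_000}

def pct_more(a, b):
    # How much more (in percent) group a costs than group b
    return round(100 * (costs[a] - costs[b]) / costs[b], 1)

print(pct_more("healthy", "smokers"))  # healthy cost ~27.9% more than smokers
print(pct_more("healthy", "obese"))    # and ~12.4% more than the obese
```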

What’s especially interesting and relevant about this is that both Obama and Clinton insist that much of the tremendous cost of their healthcare proposals will be paid for by better preventative healthcare, especially for obesity. It’s a given that whatever they claim the cost will be, you can pretty much double it. Anyone who doesn’t believe me can look at Massachusetts’ universal coverage initiative (or anything else the government does, for that matter). But with this new study, maybe even that is an underestimate.

Instead of trying to placate us by “lowering” healthcare costs by simply shifting them onto our tax bills (plus overhead), it would be nice if one candidate would come out for actually lowering the cost of healthcare in a meaningful way. Tort reform and more intelligent allocation of research funding would do a lot. There’s also the radical notion that maybe we could make some sacrifices and just say ‘no’ to some of the incredibly expensive yet only marginally more effective medical technology that we’re always paying for. A good example is 3D ultrasound. Do we really need to pay billions of dollars as a nation just to make really creepy renderings of babies? Especially when it’s those babies who are going to inherit the debt (and precedent) for the frivolity.

It seems to me the main problem being addressed by these proposals is that there are a lot of people who can’t afford healthcare. Why not just buy them healthcare as part of our existing welfare infrastructure? Why the need for yet another bureaucracy? We should agree to more government only with reluctance, not relish.

Vintage technology: 757 flightdeck

A picture of the flight deck of a 757 we got to play with (on the ground) after a class I took on cockpit automation. (Click on the picture for a larger version.) The 757 was developed in the late 70s, and its first delivery, in 1982, went to launch customer Eastern Airlines. (Remember them?)

I’m not certain, but I believe this was the last Boeing flightdeck to have CRT displays, as opposed to LCDs. The reason this is worth mentioning is that it’s one of those examples of technology going backwards. On these CRT displays, the symbols (e.g. the engine arcs in the middle top display) are drawn not as rows and columns of dots, but the same way a person might draw them. For example, a circle is made by scanning the electron beam in a circle, creating a gorgeous, bright, perfect circle. Each letter is written by tracing the beam along the outline of the letter, as if writing it longhand. No pixels! It takes a rather large computer, located deep in the belly of the airplane, to handle all of this. Despite being “antiquated” technology, the displays are utterly striking and unlike anything you see today. LCDs may be cheaper, but there’s something about CRTs, especially vector-based ones, that it’s a pity to see go.
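The vector-drawing idea is simple enough to sketch. Instead of filling in a grid of pixels, the display is fed a stream of beam-deflection coordinates that trace the shape itself; a hypothetical circle trace (purely illustrative, not any real avionics interface) might look like:

```python
import math

def trace_circle(cx, cy, r, steps=360):
    # A vector CRT doesn't rasterize: the electron beam is swept along
    # the shape itself. The "drawing" is just a sequence of deflection
    # coordinates, here generated by rotating the beam through 2*pi.
    return [(cx + r * math.cos(2 * math.pi * i / steps),
             cy + r * math.sin(2 * math.pi * i / steps))
            for i in range(steps)]
```

Every coordinate lies exactly on the circle, which is why the result is a perfect curve at any size, rather than a staircase of pixels.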

Similarly, in some ways nothing can replace the efficacy of analog gauges. On the 757 there are still “steam gauges” showing speed and altitude to the left and right of the attitude indicator CRT (the blue and brown ball). They are easy to read, sharp, and wholly independent. I found that when flying the simulator, I would tend to use them over the more central digital speed and altitude “tapes” on either side of the attitude indicator. Integrated LCD panels are cheaper, yes, but I don’t think anybody will ever make one that improves upon the immediate readability of a steam-gauge instrument. You can see the approximate angle and rate of change of a dial out of the corner of your eye, but reading a digital number requires eye movement and a bit of mental processing, especially if the numbers are changing rapidly or you’re in turbulence. Two steps forward…