Showing posts with label sendmail. Show all posts

31 December 2007

Open Source Unoriginal? - How Unoriginal

Here's a tired old meme that I've dealt with before, but, zombie-like, it keeps on coming back:

The open-source software community is simply too turbulent to focus its tests and maintain its criteria over an extended duration, and that is a prerequisite to evolving highly original things. There is only one iPhone, but there are hundreds of Linux releases. A closed-software team is a human construction that can tie down enough variables so that software becomes just a little more like a hardware chip—and note that chips, the most encapsulated objects made by humans, get better and better following an exponential pattern of improvement known as Moore’s law.

So let's just look at those statements for a start, shall we?

There is only one iPhone, but there are hundreds of Linux releases.


There's only one iPhone because the business of negotiating with the oligopolistic wireless companies is something that requires huge resources and deep, feral cunning possessed only by unpleasantly aggressive business executives. It has nothing to do with being closed. There are hundreds of GNU/Linux distributions because there are even more different kinds of individuals, who want to do things their way, not Steve's way. But the main, highly-focussed development takes place in the one kernel, with two desktop environments - the rest is just presentation, and has nothing to do with dissipation of effort, as implied by the above juxtaposition.

chips, the most encapsulated objects made by humans, get better and better following an exponential pattern of improvement known as Moore’s law

Chips do not get better because they are closed, they get better because the basic manufacturing processes get better, and those could just as easily be applied to open source chips - the design is irrelevant.

The iPhone is just one of three exhibits that are meant to demonstrate the clear superiority of the closed-source approach. Another is Adobe Flash - no, seriously: what most sensible people would regard as a virus is cited as one of "the more sophisticated examples of code". And what does Flash do for us? Correct: it destroys the very fabric of the Web by turning everything into opaque, URL-less streams of pixels.

The other example is "the page-rank algorithms in the top search engines", which presumably means Google, since it now has nearly two-thirds of the search market, and the page-rank algorithms of Microsoft's search engine are hardly being praised to the sky.

But what do we notice about Google? That it is built almost entirely on the foundation of open source; that its business model - its innovative business model - would not work without open source; that it simply would not exist without open source. And yes, Yahoo also uses huge amounts of open source. No, Microsoft doesn't, but maybe it's not exactly disinterested in its choice of software infrastructure.

Moreover, practically every single, innovative, Web 2.0-y start-up depends on open source. Open source - the LAMP stack, principally - is innovating by virtue of its economics, which make all these new applications possible.

And even if you argue that this is not "real" innovation - whatever that means - could I direct your attention to a certain technology known colloquially as the Internet? The basic TCP/IP protocols? All open. The Web's HTTP and HTML? All open. BIND? Open source. Sendmail? Open source. Apache? Open source. Firefox, initiated in part because Microsoft had not done anything innovative with Internet Explorer 6 for half a decade? Open source.

But there again, for some people maybe the Internet isn't innovative enough compared to Adobe's Flash technology.

24 July 2006

The Internet Goes...Open Source

There is a great irony at the heart of the Internet. Free software and its characteristic distributed development method were made possible by the Internet. Similarly, many of the earliest free software programs - Sendmail, BIND etc. - helped create the Internet. And yet today, the knots of the Net's interconnections - the routers - are generally proprietary (and usually from Cisco).

So here's an idea: how about creating an open source router? Enter Vyatta, which is doing precisely that. It's been working on the idea for a while, and, according to GigaOM, is close to launching its first product.

Assuming Vyatta gets it right, I don't see any reason why its product shouldn't steadily chip away at Cisco's dominant market share, just as other open alternatives to commoditised products have done. As it does, expect more open source solutions to enter this market soon.

27 March 2006

The Science of Open Source

The OpenScience Project is interesting. As its About page explains:

The OpenScience project is dedicated to writing and releasing free and Open Source scientific software. We are a group of scientists, mathematicians and engineers who want to encourage a collaborative environment in which science can be pursued by anyone who is inspired to discover something new about the natural world.

But beyond this canonical openness to all, there is another, very important reason why scientific software should be open source. With proprietary software, you simply have to take on trust that the output has been derived correctly from the inputs. But this black-box approach is really anathema to science, which is about examining and checking every assumption along the way from input to output. In some sense, proprietary scientific software is an oxymoron.

The project supports open source scientific software in two ways. It has a useful list of such programs, broken down by category (and it's striking how bioinformatics towers over them all); in addition, those behind the site also write applications themselves.

What caught my eye in particular was a posting asking an important question: "How can people make money from open source scientific software?" There have been two more postings so far, exploring various ways in which free applications can be used as the basis of a commercial offering: Sell Hardware and Sell Services. I don't know what the last one will say - it's looking at dual licensing as a way to resolve the dilemma - but the other two have not been able to offer much hope, and overall, I'm not optimistic.

The problem goes to the root of why open source works: it requires lots of users doing roughly the same thing, so that a single piece of free code can satisfy their needs and feed off their comments to get better (if you want the full half-hour argument, read Rebel Code).

That's why the most successful open source projects deliver core computing infrastructure: operating system, Web server, email server, DNS server, databases etc. The same is true on the client-side: the big winners have been Firefox, OpenOffice.org, The GIMP, Audacity etc. - each serving a very big end-user group. Niche projects do exist, but they don't have the vigour of the larger ones, and they certainly can't create an ecosystem big enough to allow companies to make money (as they do with GNU/Linux, Apache, Sendmail, MySQL etc.).

Against this background, I just can't see much hope for commercial scientific open source software. But I think there is an alternative. Because this open software is inherently better for science - thanks to its transparency - it could be argued that funding bodies should make it as much of a priority as more traditional areas.

The big benefit of this approach is that it is cumulative: once the software has been funded to a certain level by one body, there is no reason why another shouldn't pick up the baton and pay for further development. This would allow costs to be shared, along with the code.

Of course, this approach would take a major change of mindset in certain quarters; but since open source and the other opens are already doing that elsewhere, there's no reason why they shouldn't achieve it in this domain too.

18 March 2006

Economistical with the Truth

The Economist is a strange beast. It has a unique writing style, born of the motto "simplify, then exaggerate"; and it has an unusual editorial structure, whereby senior editors read every word written by those reporting to them - which means the editor reads every word in the magazine (at least, that's the way it used to work). Partly for this reason, nearly all the articles are anonymous: the idea is that they are in some sense a group effort.

One consequence of this anonymity is that I can't actually prove I've written for the title (which I have, although it was a long time ago). But on the basis of a recent showing, I don't think I want to write for it anymore.

The article in question, which is entitled "Open, but not as usual", is about open source, and about some of the other "opens" that are radiating out from it. Superficially, it is well written - as a feature that has had multiple layers of editing should be. But on closer examination, it is full of rather tired criticisms of the open world.

One of these in particular gets my goat:

...open source might already have reached a self-limiting state, says Steven Weber, a political scientist at the University of California at Berkeley, and author of “The Success of Open Source” (Harvard University Press, 2004). “Linux is good at doing what other things already have done, but more cheaply—but can it do anything new? Wikipedia is an assembly of already-known knowledge,” he says.

Well, hardly. After all, the same GNU/Linux can run globe-spanning grids and supercomputers; it can power back office servers (a market where it bids fair to overtake Microsoft soon); it can run on desktops without a single file being installed on your system; and it is increasingly appearing in embedded devices - mp3 players, mobile phones etc. No other operating system has ever achieved this portability or scalability. And then there are the more technical aspects: GNU/Linux is simply the most stable, most versatile and most powerful operating system out there. If that isn't innovative, I don't know what is.

But let's leave GNU/Linux aside, and consider what open source has achieved elsewhere. Well, how about the Web for a start, whose protocols and underlying software have been developed in a classic open source fashion? Or what about programs like BIND (which runs the Internet's name system), or Sendmail, the most popular email server software, or maybe Apache, which is used by two-thirds of the Internet's public Web sites?

And then there's MediaWiki, the software that powers Wikipedia (and a few other wikis): even if Wikipedia were merely "an assembly of already-known knowledge", MediaWiki (built on the open source technologies PHP and MySQL) supports an unprecedentedly large assembly, unmatched by any proprietary system. Enough innovation for you, Mr Weber?

But the saddest thing about this article is not so much these manifest inaccuracies as the reason why they are there. Groklaw's Pamela Jones (PJ) has a typically thorough commentary on the Economist piece. From corresponding with its author, she says "I noticed that he was laboring under some wrong ideas, and looking at the finished article, I notice that he never wavered from his theory, so I don't know why I even bothered to do the interview." In other words, the feature is not just wrong, but wilfully wrong, since others, like PJ, had carefully pointed out the truth. (There's an old saying among journalists that you should never let the facts get in the way of a good story, and it seems that The Economist has decided to adopt this as its latest motto.)

But there is a deeper irony in this sad tale, one carefully picked out by PJ:

There is a shocking lack of accuracy in the media. I'm not at all kidding. Wikipedia has its issues too, I've no doubt. But that is the point. It has no greater issues than mainstream articles, in my experience. And you don't have to write articles like this one either, to try to straighten out the facts. Just go to Wikipedia and input accurate information, with proof of its accuracy.

If you would like to learn about Open Source, here's Wikipedia's article. Read it and then compare it to the Economist article. I think then you'll have to agree that Wikipedia's is far more accurate. And it isn't pushing someone's quirky point of view, held despite overwhelming evidence to the contrary.

If Wikipedia gets something wrong, you can correct it by pointing to the facts; if The Economist gets it wrong - as in the piece under discussion - you are stuck with an article that is, at best, Economistical with the truth.