08 August 2006

The Restaurant at the End of the Universe

Matt Asay has an excellent riposte to a singularly wrong-headed post entitled "Open source won't doom traditional enterprise software". As he rightly says, the real question is not the one the above piece thinks to deal with - "Is Enterprise Software Doomed?" - but

"What will be the primary bases for competition once everything is more (or less) open source?"

I believe the answers are also an explanation of why open source does doom traditional enterprise software, because the key differentiators will be things like innovation and serving the customer. Whatever lip-service traditional software companies pay to these ideas, the closed nature of their code and the fact that customers are locked into their products mean that they simply don't deliver either in the way that open source companies will once they become the norm.

When Elephant Seals Collide

You can't beat a legal battle involving two overlapping pieces of legislation. The sight of lawyers having at each other, secure in the knowledge that the law is on their side, reminds me of nothing so much as two great elephant seals, thwacking each other vigorously, their proboscises all a-jiggle.

We could be in for another of these spectacles, according to this Techdirt article. It seems that the old End-User Licence Agreement (EULA) is being used to trump copyright fair use provisions, and that this might eventually go to the US Supreme Court to sort out (but don't hold your breath for EULAs getting spanked).

Of course, for those of us who use free software, EULAs are but dim memories from some strange, barbaric past, with no question of trumping anything.

Reasons Not to Use Closed Source: No. 471

I've written a couple of times about cases that demonstrate graphically why closed source software is a Bad Thing, but even they pale somewhat beside this story.

The robot that parks cars at the Garden Street Garage in Hoboken, New Jersey, trapped hundreds of its wards last week for several days. But it wasn't the technology car owners had to curse, it was the terms of a software license.

A dispute over the latter meant that the software simply stopped working. And since it was closed source, nothing could be done about it. The results were dramatic:

The Hoboken garage is one of a handful of fully automated parking structures that make more efficient use of space by eliminating ramps and driving lanes, lifting and sliding automobiles into slots and shuffling them as needed. If the robot shuts down, there is no practical way to manually remove parked vehicles.

I bet the garage owners wish they'd chosen open...

Nothing to Fear But Fear Itself

One of the tensions that emerges from time to time in this blog is that between openness and security. In the current climate of the so-called "war on terror", openness is typically characterised as dangerous, irresponsible even, because it gives succour to "them".

Terrorism is not to be trivialised, but it's a question of keeping things in perspective. Magnifying the threat unreasonably and acting disproportionately simply hands victory to those who wish to terrorise. This seems pretty obvious to me, but if you want a rigorously-argued version, you could hardly do better than this one, by John Mueller.

Here's a sample, on the issue of perspective:

[I]t would seem to be reasonable for those in charge of our safety to inform the public about how many airliners would have to crash before flying becomes as dangerous as driving the same distance in an automobile. It turns out that someone has made that calculation: University of Michigan transportation researchers Michael Sivak and Michael Flannagan, in an article last year in American Scientist, wrote that they determined there would have to be one set of September 11 crashes a month for the risks to balance out. More generally, they calculate that an American’s chance of being killed in one nonstop airline flight is about one in 13 million (even taking the September 11 crashes into account). To reach that same level of risk when driving on America’s safest roads — rural interstate highways — one would have to travel a mere 11.2 miles.
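Taking the quoted figures at face value - one in 13 million per nonstop flight, equivalent to 11.2 miles on a rural interstate - the implied per-mile driving risk is easy to back out. A quick sanity check, nothing more:

per_flight_risk = 1 / 13_000_000   # quoted risk of dying on one nonstop flight
equivalent_miles = 11.2            # quoted rural-interstate equivalent

per_mile_risk = per_flight_risk / equivalent_miles
print(f"implied fatality risk per mile driven: {per_mile_risk:.1e}")
# ~6.9e-09 - after a mere 11.2 miles of driving you have matched the flight's risk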

(Via Boing Boing.)

Microsoft's Gift to Firefox

Firefox has been incredibly lucky. It has taken Microsoft an extraordinary amount of time to face up to the challenge this free browser represents, during which Firefox has notched up a serious market share that won't be going away any time soon.

However, my great fear was that once Internet Explorer 7 came out, the appeal of Firefox to people who wanted a stable, standards-based browser would diminish considerably. After all, good enough is generally good enough, and surely, I thought, Microsoft will get this one right, and produce what's necessary?

If this report is anything to go by, it seems not.

Incredibly, Microsoft will not be fully supporting the Cascading Style Sheets 2 (CSS 2) standard. As the story explains:

The most critical point in Wilson's post, in my mind, is Microsoft's admission that it will fail the crucial Acid2 browser-compliance test, which the Web Standards Project (WaSP) designed to help browser vendors ensure that their products properly support Web standards. Microsoft apparently disagrees. "Acid2 ... is pointedly not a compliance check," Wilson noted, contradicting the description on the Acid2 Web site. "As a wish list, [Acid2] is really important and useful to my team, but it isn't even intended, in my understanding, as our priority list for IE 7.0." Meanwhile, other browser teams have made significant efforts to comply with Acid2.

If you look at the CSS 2 standard, you'll note that it became a recommendation over eight years ago. And yet Microsoft is still not close to implementing it fully, unlike other browsers. Even if you argue that CSS 2 is only of interest to advanced coders, or at best a standard for the future, it is nonetheless a key test of a browser development team's attitudes and priorities.

This is a tremendous opportunity for Firefox: provided it continues to support standards better than Microsoft - and this now looks likely - it will occupy the high ground with all that this implies in terms of continuing to attract users and designers. Thanks, Microsoft.

Capillary Growth

I see my old chums at OSS Watch have come out with a survey of open source use in higher and further education institutes in the UK, and it makes interesting reading.

The extent to which open source is creeping into higher education almost without anyone noticing is striking. From the summary:

Most institutions (69%) have deployed and will continue to deploy OSS on their servers. Generally, the software on servers is a mix of OSS and proprietary software (PS). The use of OSS is most common for database servers (used by 62% of institutions), web servers (59%) and operating systems (56%).

This is particularly true on the desktop. Although GNU/Linux is not much used there, free software apps are:

Microsoft Office and Internet Explorer are deployed by all institutions on most desktops. Other commonly deployed applications are Microsoft Outlook (82%) and Mozilla/Firefox (68%). The latter's use is now considerably higher than in 2003.

Not mentioned in this summary are the shares for OpenOffice.org (23%) and Thunderbird (22%), both of which are eminently respectable. It's also noteworthy that some 56% of further education establishments surveyed used Moodle.

07 August 2006

Turning Back Genomic Time

Bioinformatics allows all kinds of information to be gleaned about the gradual evolution of genomes. For example, it is clear that many genes have arisen from the duplication of an earlier gene, followed by a subsequent divergent specialisation of each duplicate under the pressure of natural selection.

New Scientist describes an interesting experiment to turn back genomic time, and to re-create the original gene that gave rise to two descendants. Moreover, that new "old" gene was shown to work perfectly well, even in today's organisms.

What's impressive about this is not just the way such information can be teased out of the raw genomic data, but that it effectively allows scientists to wind evolution backwards. Note that this is possible because the dynamics of natural selection are reasonably well understood.

Without the idea of natural selection, there would be no explanation for the observed divergent gene pairs - or for the experimental fact that their putative ancestor does, indeed, function in their stead, as predicted - other than the trivial one of saying that it is so because it was made so. Occam's razor always was the best argument against Intelligent Design.

There's No FUD Like an Old FUD

As readers of these posts may know, I am something of a connoisseur of Microsoft's FUD. So I was interested to come across what looked like a new specimen for my collection:

"One of the beauties of the open-source model is that you get a lot of flexibility and componentization. The big downside is complexity," Ryan Gavin, Microsoft's director of platform strategy, said on the sidelines of the company's worldwide partner conference in Boston last month.

Alas, digging deeper showed this is hardly vintage FUD. Take, for example, the prime witness for the prosecution:

IBS Synergy had started developing products for the Linux platform back in 1998 but gave Linux the boot in early 2004, and now builds its software on the Windows platform. Lim said this was because the company's developers were spending more time hunting for Linux technical support on the Web, and had less time to focus on actual development work.

Right, so these are problems a company had two and a half years ago: why is Microsoft raising them now? And is it not just possible that things have moved on somewhat in those 30 months?

So really this is the old "there are too many distributions, you can't get the support" FUD that was so unconvincing that I didn't even bother including it in my FUD timeline above. After all, businesses tend to use, well, Red Hat, SuSE and er, well, that's about it, really. (Via tuxmachines.org.)

Wales's World-Wide Wikia

I wrote about Wikia when it was launched a while back. Now we have WorldWiki, a fairly obvious application of wikis to travel guides - with plenty of advertising potential.

I mention it for two reasons. First, this will be a good test-case of the Wikia idea - if Wales can't get this one up and running, he may have problems with the whole scheme. Secondly, the home page currently has a rather fetching Canaletto-esque view of the Grand Canal, taken from the Rialto if I'm not much mistaken. (Via TechCrunch.)

Blogging the Bloggable

No one has a better bird's eye view of the blogosphere than Dave Sifry, which means that his quarterly report on the same is unmissable. One comment in particular is worth noting.

In the context of the 50 million blog mark being reached on 31 July, he writes:

Will I be posting about the 100 Millionth blog tracked in February of 2007? I can't imagine that things will continue at this blistering pace - it has got to slow down. After all, that would mean that there will be more bloggers around in 7 months than there are bloggers around in total today. I shake my head as I am writing this - the only thing still niggling at my brain is that I'd have been perfectly confident making the same statement 7 months ago when we had tracked our 25 Millionth blog, and I've just proven myself wrong.

For the sake of being wrong, I'll stick my neck out and say that I think he will be reporting 100 million blogs in February next year. The reason is simple - literally.

Blogs are so simple to write that I think practically everyone who has a Web site will convert unless they have very strong reasons - for example, commercial ones - to stick with the free-form Web page. Everyone else - and that's billions of us - just needs a suitable bucket for pouring our thoughts into. And the more basic the bucket, the easier it is to use, and the more people will use it. If this thinking is correct, another 50 million - or even 100 million - blogs is not so hard to achieve.
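Sifry's own milestones - roughly 25 million blogs around the turn of the year, 50 million at the end of July - imply a doubling time of about seven months. A back-of-envelope projection on that assumption (mine, not Technorati's):

doubling_months = 7          # implied by 25M -> 50M in roughly seven months
blogs_now = 50_000_000       # the 31 July 2006 figure

for months_ahead in (7, 14, 21):
    projected = blogs_now * 2 ** (months_ahead / doubling_months)
    print(f"{months_ahead:2d} months out: ~{projected / 1e6:.0f} million blogs")
# seven months out - February 2007 - the naive projection reaches ~100 million,
# which is exactly the figure predicted above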

Resolving the Free Content Licence Madness

Although the most famous example of free content is Wikipedia, it is unusual in that it uses the GNU Free Documentation Licence, rather than one of the better-known Creative Commons licences. And that's a problem, because it makes it hard to mix and match content from different projects.

One man well aware of this - not least because he is the cause of the problem, albeit unwittingly - is Larry Lessig. Heise Online have a good report covering what he said on the topic at the Wikimania conference:

"We need a layer like the TCP/IP layer which facilitates interoperability of content, allows content to move between ´equivalent´ licenses," Mr. Lessig declared, "where what we mean by equivalent is licenses where people mean the same thing. So the GNU Free Documentation License and the Creative Commons Attribution ShareAlike license is saying the same thing: Use my content however you want, to copy, to modify, as long as you give me attribution, as long as the modification is distributed under an equivalent license." The legal differences between the licenses should be bridged, he observed. The various types of licenses could compete with one another, thereby protecting against the weaknesses of any particular license, he stated.

As the two worlds of Wikipedia and CC content continue to grow, addressing this is becoming a matter of some urgency.

Crazy Like a Vixie

As I mentioned elsewhere, I did many interviews when I was writing Rebel Code. Very many. But one key person whom I just could not convince to talk to me was Paul Vixie, Mr BIND.

So when I saw that The Inquirer has instituted a new series of interviews called Internet Gods with none other than Mr V, my heart sank. And then rose.

Why ID Cards Are Stupid, Part 294

Because they can be cracked. So, in exactly how many ways does this scheme have to be found wanting before it is finally taken behind the shed and put out of its misery, Mr Blair?

06 August 2006

Who's Afraid of a Terabyte?

So Google Research is offering to send anyone a trillion words - for free. But what does the ready availability of this kind of data deluge mean for the world? Here's what I wrote in an essay called "Digital reality", which appeared in a strange little book entitled Glanglish, back in 1989:

A few hundred years from now the world currency will be the Tera. Short for Terabyte, it represents a million million bytes of digital information. Roughly speaking this corresponds to half a million printed books. The data contained in a Tera will be arbitrary but meaningful: it will be the equivalent of a random selection of a twentieth of the British Library's present holdings. Within such a deluge there will be countless useful facts as well as countless useless ones. The sheer volume will ensure that there are enough of the former in every Tera to provide near parity in value with all other Teras of random structured information.

Surprisingly, a Tera is a small unit. A human being processes about two Teras every hour; multiply that by a world population of tens or hundreds of billions and you have millions of billions of new Teras every year. Add in the billions of computers and their information, as well as the countless billions of Teras from the past, and the quantity becomes unimaginable.

Billions of computers because by this time they will be ubiquitous. The basic models will be as small, cheap, and easy to use as a pen or pencil. Like pencils, they will be thrown away after a couple of uses. But where writing implements can be said only in a metaphorical sense to offer half a million English words or the ability to perform operations in calculus, the pen-sized computers will possess these skills literally, as well as providing myriad other functions.

But nobody will bother using them, any more than people use slide rules or log tables today. The real, state-of-the-art computers will be invisible. They will be the chair you sit in, the wall you lean against, the ground you walk on. The chair will not have a computer as such: it will be one. Or rather, every aspect of it - its shape, its colour, its position - will be the output from one.

Such environmental computers will no longer model reality through simulations: instead, they will offer an infinitely detailed alternative version that merges seamlessly into the old, physical variety. Potentially, every aspect of our world will be formed by computers capable of creating every experience.

Most people will be hooked on this drug of artificial, digital reality. Unlike the already addictive arcade games and television serials of today - which are flint tools in comparison to this future technology - digital reality will not just be a temporary substitute for real life, it will replace it totally, until the latter has no independent meaning. Like all junkies, digital addicts will habituate and constantly demand fresh stimulation in the form of new, manufactured experiences. To provide them, the billions of environmental computers must feed ravenously and unceasingly off the only source of experience's raw material, the Teras of structured data held around the planet. Their competing demands for a limited resource will valorise it; information will become society's most sought-after commodity, its invisible gold, its weightless coinage. Those that control that information, the data lords, will rule the world.

Looks like I was out by several orders of magnitude for both size and time-scale. But then they do say that making predictions is hard, especially about the future.
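For what it's worth, the essay's internal arithmetic more or less stands up; here's a quick check of the "half a million printed books per Tera" figure, assuming around two megabytes of plain text per book (my assumption, not a measured value):

TERA = 10 ** 12                # one Terabyte, as defined in the essay
BYTES_PER_BOOK = 2 * 10 ** 6   # assume ~2 MB of plain text per printed book

print(f"{TERA / BYTES_PER_BOOK:,.0f} books per Tera")    # 500,000 - the essay's figure

# "two Teras every hour" per person implies a startling sensory data rate:
print(f"{2 * TERA / 3600 / 10 ** 9:.1f} GB per second")   # roughly 0.6 GB/s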

It's Top of the Wikimania Pops

And talking of Wikimania, if you're short of something to listen to on the beach, there's a bunch of fine MP3s just waiting to be downloaded on this wiki page, which brings together most of the speeches from the recent conference. There's also a few transcripts for those who prefer to mull over the words. (Via Open Access News.)

05 August 2006

Of Wikiversity and Diversity

The world of Wikipedia goes from strength to strength. At the Wikimania conference, Jimmy Wales made several interesting announcements, including the launch of Wikiversity, a collection of wikified online courses. This is great news, but I can't help wondering whether it might not help to talk to all the other open courseware projects out there. Diversity is all very well, but a little coordination never hurt anyone.

PLoS ONE: Plus and Minus

The good news: the innovative PLoS ONE is now live. This should be interesting.

The bad news: its designers have gone bonkers, adopting a Laura Ashley colour scheme and a horrible selection of typefaces. Could we try again, please?

Foul Trademarks

As I wrote recently, I'm not keen on the term "pirates" being bandied about indiscriminately. That applies to things like "bio-piracy" and even the neologism "lingo-piracy":

We’ve heard about biopiracy, the practice of multinational corporations claiming patent rights in the genetic resources of plants and crops in a developing country. Now we are seeing the rise of what might be called lingo-piracy. Brazil is fed up with foreign companies claiming trademarks in common Brazilian words for native fruit, foods and plants. The trademarks give the foreign companies exclusive marketing rights in the words, which in turn inhibits Brazilians from selling their own native foods and fruit in foreign markets.

But I do agree we need a term for the concept so that it can be named and shamed whenever it is encountered. The central issue here is essentially bad trademarks; since we have "fair trade", perhaps we can introduce the concept of "foul trademarks" to cover the situation.

04 August 2006

Open Access to Avian Flu Data: One Down

Terima kasih: Indonesia has agreed to provide open access to its avian flu data. Now all we need are for the other couple of dozen affected countries to do the same. (Via Open Access News.)

A Mathematician Writes

One of the first things that children learn in maths is to do a quick check of their answers. Not quite sure if your calculation of 6.9574635 times 4.085647 is correct? Well, 7 times 4 is 28, so your answer really ought to be pretty close to that.
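The same schoolroom check is trivial to automate; here's a minimal sketch (emphatically not Amazon's patented method, just the obvious version) that flags a product which strays too far from its rounded estimate:

def estimate_check(a, b, result, tolerance=0.25):
    """Sanity-check a multiplication against the rough estimate
    you get by rounding both operands first."""
    rough = round(a) * round(b)
    return abs(result - rough) <= tolerance * abs(rough)

a, b = 6.9574635, 4.085647
print(estimate_check(a, b, a * b))    # True: ~28.4 is close to 7 x 4 = 28
print(estimate_check(a, b, 284.26))   # False: a slipped decimal point gets caught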

Common sense, right?

Wrong: Amazon says it's a brilliantly-novel idea that no one has ever had before in the history of the universe - and they have a patent to prove it. Words - and numbers - fail me. (Via Techdirt.)

Open Source Citizenship

There's a bit of a public ding-dong being conducted in the pages of some of the IT titles over what constitutes good open source citizenship.

Matt Asay kicked things off:

Aren't Yahoo! and Google missing the point or, rather, conveniently looking past it? Open source isn't about beneficent companies giving code to the impoverished underclass. It's about working on code collaboratively within a community.

To which Yahoo's Jeremy Zawodny replied:

So let's suppose that we decided to release "what we can" into the open source world. Of course, there'd be a lot of legal vetting first. Code licensing is a real mine field, but let's suppose that we cleared that hurdle. It would look as if Yahoo was doing exactly what businesses looking to get into open source are told NOT to do: throwing some half-baked code "over the wall" and slapping a license on it.

But I think that both are being somewhat short-sighted.

Neither Google nor Yahoo is obliged to share its code, since neither distributes it. They are perfectly entitled to keep it snug within their respective corporate firewalls. In any case, it's unlikely to be widely useful to other projects, so the gift would be largely token. But the point is that they both benefit from open source, and it is therefore in their interest to support it as much as possible.

The solution is not to chuck code "over the wall", but rather to help open source in other ways. Google, to its credit, is already doing this, with its Summer of Code projects, its tie-up with Firefox and, most recently, its open source code repository.

As I've written before, Google's track record is not perfect, but it's certainly better than Yahoo's, which might try a little harder at being a good open source citizen in this respect. All it requires is a few high-profile grants to needy free software projects. How about it, Yahoo?

Why It's Called the 'Domesday Book'

The Domesday Book was William the Conqueror's list of swag that he won after the Battle of Hastings. You might, therefore, expect it to be called "Bill's Big Book of Booty" or some such Anglo-Norman equivalent. The actual name chosen is curious, but the explanation is straightforward.

"Domesday" - or "Doomsday" as we would write it - refers to the Last Judgement, the End of Time; and that, apparently, is when the British public is going to gain free access to one of the key documents in its history. The book has just gone online, but it costs an eye-popping £3.50 to see a page. Clearly the National Archives need to modify their tagline "Download your history..." to "Download your history...and pay through your nose for the privilege."

Open Source Oxymoron

For me Web TV is a contradiction in terms. The Web stands for intelligent interactivity, TV for dumb passivity. However, given that TV via the Internet is coming, whether I like it or not, better that it be open rather than closed source. And it looks like that's precisely what will happen. (Via LXer.)

Linus Does Not Scale

One of the darkest moments in the history of free software occurred in September 1998. For perhaps the first - and one hopes the last - time the Linux kernel came perilously close to forking.

The problem was simple: Linus had become a victim of Linux's success. He was unable to cope with the volume of patches that were being sent to him. In the memorable words of Larry McVoy at the time, "Linus does not scale."

That scaling problem was solved by working on a better version control system (what became BitKeeper, later replaced by the memorably-named Git), as well as by handing off some of Linus's work to others. In the case of the kernel, this could be achieved by mutual agreement, but more generally it is hard to divide up a task among many contributors.

There are now several sites that have sprung up to address this problem. One of them is Amazon's Mechanical Turk, which I wrote about some time back, although I rather missed the key point, which is the use of distributed human intelligence to carry out the kinds of tasks that computers presently struggle with. A more recent entrant is Mycroft, discussed in this C|net piece.

Also worth noting is the Crowdsourcing blog, which is a follow-up to the Wired article on the same (and doubtless a feeder to the inevitable book on the subject).

What's interesting about the crowdsourcing idea is that it represents a kind of open source without the openness: that is, participants are essentially computing drones with no way of knowing what the bigger picture is, unlike open source programmers, who can always look at the code. In a sense, then, crowdsourcing is a dilution of the idea at the heart of all the opens, but it's also a broadening in that it enfranchises more or less anybody with basic human processing abilities.

Update: And here's another crowdsourcing blog, called, aptly enough, Crowdsource.

Mashup Journalism

Open source journalism, also called citizen journalism, is nothing new, but I was intrigued to come across something called "SI journalism". This turns out to be re-using data gathered during the journalistic process to create mashups of one kind or another. The proposed name is "Structured Information Journalism", which has all the grace of a dodo in flight.

I'm not quite sure what it should be called - perhaps mashup journalism, which has a suitably tough, streetwise quality about it. Any other suggestions?

03 August 2006

Open Sourcing Nanotechnology

I came to this extensive paper on open source and nanotechnology rather circuitously, via LXer and a posting from the Foresight Nanotech Institute. This could hardly be more appropriate: it was Christine Peterson, president of the Institute, who actually coined the term "open source" on 3 February 1998.

The paper is almost as old - it dates back to 2000 - but it is a measure of how forward-thinking it was that it still seems very current, what with its talk of licensing, patent pools and anti-commons. I was particularly struck by this paragraph:

One of the somewhat counterintuitive arguments for open source is that it is safer than closed source. Reliability of complex systems, security against computer viruses and other attacks, and integrity of cryptographic secrecy in communications all benefit greatly from peer review and other key elements of open source development. These advantages may also apply to nanotechnology. Talking about open sourcing nanotechnology may evoke fears about giving easier access in the future to those who might abuse the technology. Both these issues make it important to discuss the relationship between open source and safety.

Which is a good point. Well worth reading if you're at all interested in this fascinating if rather over-hyped field.

Not Your Father's Netscape Navigator

Web 2.0 sites like Digg are by the people, for the people: so can this quintessential Digg-ness be bought? That's what Jason Calacanis is going to find out on the transmogrified Netscape Digg-alike site, now that he has apparently snagged some Digg boys and girls to submit stories:

The word is getting out about the first 10 Netscape Navigators (people who took "the offer" to become paid bookmarkers). You can see their photos on the right hand column at www.netscape.com.

Here are the basic details, we hired:

1. Three of the top 12 DIGG users
2. The #1 user on Newsvine
3. The #1 user on Reddit
4. We hired a bunch of folks from Weblogs, Inc. (since we know and love them :-)

But as he himself points out:

It is important to note that this is all an experiment. No one knows for sure if this model of "paying people for work" us gonna work. I mean, it's crazy to think that people could be paid to do a job and do it with integrity--that's just crazy talk. :-)

Well, it's not so much the idea of paying people, Jason, that's the experiment; it's whether Digg's USP lies in the people submitting the stories or the ones doing the Digging.

Personally I think it's the latter - the community that builds up around a site; after all, people often submit the same story multiple times, so removing a few of the top (=fastest) posters will only slow things down slightly. But that's not to say that encouraging some defections to Netscape wasn't a shrewd move. It will certainly give the pages some meatier stories; the big question, though, is whether there are enough of the right people visiting Netscape who will bite.

Grokking Groklaw

I love interviewing people - which is a good job, since I did about 60 interviews when I wrote Rebel Code. Even today, I spend a lot of my time interviewing interesting people; of course, it's the "interesting" bit that's the hook.

I also like reading interviews - provided they are with similarly interesting people. Somebody who certainly falls into this category is Groklaw's Pamela Jones, who has done more than anyone to mobilise hoi polloi in the fight against SCO. As far as I can tell, she is rarely (if ever) interviewed, so kudos to Matthew Aslett for his recent Q & A session with her.

This is a Public Service Announcement

Well, you live and learn.

I'd been asking myself recently why my dinky Google ads down the right-hand side of this page had turned into ugly slabs of public service announcements (PSAs). Thanks to this article in the East Bay Express, I know why:

[I]n 2003, Google developed "sensitivity filters" to periodically scan the Web sites of its partners in search of violence, mature content, or other unacceptable material. "They detect sensitive content that we probably don't want to be showing advertising beside, and show public service announcements instead," says Shuman Ghosemajumder, Google's business product manager for trust and safety.
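Google doesn't say how these sensitivity filters work, but the effect described could be approximated by something as crude as a keyword scan - a purely hypothetical sketch:

SENSITIVE_TERMS = {"violence", "war", "terror"}   # imagined trigger list

def choose_ad(page_text, paid_ad, psa):
    """Serve a public service announcement instead of a paid ad when the
    page trips the (hypothetical) sensitivity filter."""
    words = set(page_text.lower().split())
    return psa if words & SENSITIVE_TERMS else paid_ad

print(choose_ad("a post criticising the war on terror", "AdSense ad", "PSA"))   # PSA
print(choose_ad("a post about open source licences", "AdSense ad", "PSA"))      # AdSense ad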

The concomitant loss of revenue worries me not a jot: basically, I earn enough per week from my Googly ads to buy myself a cup of coffee, if I'm lucky. What does worry me - as it does the original East Bay Express piece and Techdirt - is that it will have a stultifying effect on journalism, as titles and reporters avoid subjects that might trigger this advertising limbo.

Since I don't write much about violence or mature content, I must be pressing the "other unacceptable material" button - wicked things like criticising governments, large companies, existing and proposed legislation, that kind of stuff, I presume. Which means that PSAs on these pages are a badge of honour, a sign that I've hit home.

Don't Burn, Baby, Don't Burn

I have this vague feeling that I really ought to get excited about Rollyo, but for the life of me I can't think why I want to search a maximum of 25 sites: me, I like roaming through the odd billion, because you never know what you're going to find.

Nonetheless, this story on TechCrunch about Rollyo caught my eye for the following comment at the end:

The founder, Dave Pell, is a well known angel investor in Silicon Valley and could easily raise money for the company. But instead of looking for a large venture round of financing, he’s self funded Rollyo and has only one full time employee. By keeping the burn rate super-low, Rollyo can stay the course.

Absolutely, and I bet I know why he can keep that burn rate super-low: because he's running open source software - practically a given when it comes to Web 2.0 start-ups.

02 August 2006

Meshing with Meshes

I don't know why, but I'm a bit of a sucker when it comes to wireless meshes. So my curiosity was naturally piqued by Meraki. Based on an open source project, and named after an untranslatable Greek concept: what's not to like? (Via GigaOM.)

Will the US PTO Ever Learn?

Blackboard has announced

it has been issued a U.S. patent for technology used for internet-based education support systems and methods. The patent covers core technology relating to certain systems and methods involved in offering online education, including course management systems and enterprise e-Learning systems.

That's putting it mildly. If you waste your life reading the summary, kindly placed online in a reader-friendly format by Michael Feldstein, you will find to your utter gob-smacked amazement that Blackboard has essentially been granted a patent on the idea of logging on to a Web server and accessing pages that contain educational materials:

The user is provided with a web page comprising a plurality of course hyperlinks, each of the course hyperlinks associated with each course that the user has been enrolled either as an instructor or as a student. Selection of a course hyperlink will provide the user with a web page associated with the selected course; the web page having content hyperlinks and buttons to various content areas associated with the course.

It's about as broad and utterly ridiculous as granting a patent for the idea of accessing a Web page with a "plurality" of links on any particular subject. (I know, I know - somebody has probably applied for this too.)
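To see just how broad the claim is, a hypothetical few lines are enough to produce the "invention" as summarised - a page with a plurality of course hyperlinks for an enrolled user (purely illustrative; not Blackboard's or anyone else's actual code):

def course_list_page(user, courses):
    """Render a web page with a plurality of course hyperlinks, one per
    course the user is enrolled in - i.e. the claimed 'invention'."""
    links = "".join(
        f'<li><a href="/course/{course_id}">{title}</a></li>'
        for course_id, title in courses.items()
    )
    return f"<html><body><h1>{user}'s courses</h1><ul>{links}</ul></body></html>"

print(course_list_page("student42", {"bio101": "Biology 101", "hist205": "History of Online Learning"}))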

Fortunately, the broader a patent, the easier it is to find prior art to drive a stake through its black(board) heart. And Moodle - an open source course management system, which is obviously seriously threatened by this idiotic US PTO decision - has compiled a wonderfully detailed history of online learning. It not only puts the boot into Blackboard's pathetic claims, but provides a useful resource in itself. It ends its long, long list of prior examples of online learning with the laconic:

2006, July - Blackboard announces Patent 6,988,138

With this patent Blackboard seem to be claiming they invented everything above.

How many of these stupid decisions will it take before somebody sorts out the US PTO?

Up to a Certain Point

Ian Murdock, the semi-eponymous creator of Debian, has a nicely provocative post that turns some conventional wisdom on its head. It's often said - sometimes by me - that the move towards Web-based apps makes the operating system on a user's PC increasingly irrelevant, which means that people might as well opt for GNU/Linux instead of Windows. But as Murdock points out:

Of course, there’s a flip side to this: if the operating system is just a set of device drivers, wouldn’t you want the most extensive set? As far as Linux on the desktop has come in the past few years, it still lags Windows significantly in plug-and-play value.

I think he's right - up to a certain point. And that point is when GNU/Linux is good enough. You don't really need to have the absolutely spiffiest device drivers if the price you pay is lack of security and, well, price. We're not there yet, though, so maybe it would be a good idea to go easy on the device drivers argument for the moment....

Damascene Code

There's nothing quite like a Road to Damascus conversion when it comes to generating passionate advocates. Just as Saul the arch-oppressor became Saint Paul the arch-propagator, so Wind River, once the most vocal of GNU/Linux's opponents in the embedded space, has become one of its biggest supporters. Its latest move is the most dramatic: a donation of 300,000 lines of code to the Eclipse Foundation.

What this shows is that the move to openness, however much born of desperation in the face of GNU/Linux's ineluctable rise in the embedded systems market, has clearly worked, and that Wind River is now a True Believer.

Commons versus Commons

An interesting reflection on the West's habit of stealing from one commons to create another - often with the best of intentions.

Wikipedia Cornucopia

You wait ages for a bus, and then three arrive at once. And so it seems for articles on Wikipedia. After I commended the piece in The New Yorker yesterday, here's an even better one in The Atlantic - home of the original "Memex" article by Vannevar Bush, which prefigured so much of the Web and Wikipedia.

The Atlantic's piece is particularly good on the origins and history of Wikipedia. Indeed, I had vaguely contemplated writing a book about Wikipedia and related open content projects to go alongside Rebel Code and Digital Code of Life, but there doesn't seem much point now with all this material available online.

And I liked this meditation on how Wikipedia functions:

Wikipedia suggests a different theory of truth. Just think about the way we learn what words mean. Generally speaking, we do so by listening to other people (our parents, first). Since we want to communicate with them (after all, they feed us), we use the words in the same way they do. Wikipedia says judgments of truth and falsehood work the same way. The community decides that two plus two equals four the same way it decides what an apple is: by consensus. Yes, that means that if the community changes its mind and decides that two plus two equals five, then two plus two does equal five. The community isn’t likely to do such an absurd or useless thing, but it has the ability.

It also quotes the following striking idea:

[I]n June 2001, only six months after Wikipedia was founded, a Polish Wikipedian named Krzysztof Jasiutowicz made an arresting and remarkably forward-looking observation. The Internet, he mused, was nothing but a "global Wikipedia without the end-user editing facility."

Now there's a thought.

Open Geodata Made Easy

If you've ever wondered what open geodata is and what it has to do with the other opens, try this introduction to the field. Along the way it mentions something called FLOSS Foundations, which I'd never heard of. Despite its name, it's not an organisation for dentists.

01 August 2006

Foxed by Foxmarks

Mitch Kapor, he of software archaeology fame, has started a project called Foxmarks. According to the FAQ:

Foxmarks is an extension for Firefox that allows you to synchronize your bookmarks across multiple computers. Install Foxmarks on each machine that you want to keep synchronized, and Foxmarks will automatically propagate bookmarks changes that you make on one machine to all the others.

Hm: isn't this what Google Browser Sync does? And then some:

Google Browser Sync for Firefox is an extension that continuously synchronizes your browser settings – including bookmarks, history, persistent cookies, and saved passwords – across your computers. It also allows you to restore open tabs and windows across different machines and browser sessions.
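Under the hood, both services boil down to the same idea: push each machine's local bookmark changes up to a central copy and pull everyone else's back down. A minimal last-write-wins sketch of that merge (my own illustration, not Foxmarks' or Google's actual protocol):

def sync(server, machine):
    """Merge one machine's bookmarks into the server copy, newest edit wins,
    and return the merged set for that machine to adopt."""
    for url, (title, edited_at) in machine.items():
        if url not in server or server[url][1] < edited_at:
            server[url] = (title, edited_at)
    return dict(server)

server = {}
laptop = {"http://example.com/blog": ("Example blog", 10)}
desktop = {"http://example.org/news": ("Example news", 12),
           "http://example.com/blog": ("Example blog (updated)", 15)}

for machine in (laptop, desktop):
    merged = sync(server, machine)
print(merged)   # both bookmarks, with the newer title for the blog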

But wait, Mitch says there's more:

I’m incubating a new startup, which is pretty exciting because we’re working on innovation at the intersection of search and social production. Think of new services which are a cross between Google and the Wikipedia.

BTW, Mitch, how is Chandler coming along? (Via C|net.)

Bio::Blogs #2...

...is now online. Not that I'm represented there or anything, oh my word, no. Well, only a bit. You might want to visit it anyway, since it comes all the way from sunny Brisbane.

The Quiet Revolution

One of the extraordinary things about Firefox is that its impact just keeps growing: the 200 million download mark has now been passed. Of course, this doesn't mean 200 million users, but the market share is also going up respectably.

This is all quite amazing - particularly because nobody seems to think it's amazing anymore. We expect it from Firefox, and that's good, because it helps seed the idea that open source in general should be aiming this high.

It's the Metric, Stupid

A great post by Stephen O'Grady pondering the likelihood or otherwise of billion-dollar open source companies appearing anytime soon. It contains a number of wise comments that make it well worth reading. For example:

So you look a little deeper and see that while open source might not (yet) create immense, monolithic wealth, it does benefit customers by lowering pricing and increasing choice. Further, it seems illogical to believe that even if open source can lower certain software acqusition and operating costs, those dollar savings are not invested elsewhere. How many CIOs will go their board and say "I invested in Linux, JBoss, & MySQL and saved us x dollars - please lower my budget accordingly"? You might also see that open source allows vendors to ammortize a number of traditional development, quality assurance and marketing costs, across a wide pool of volunteer resources, lowering the dollars they need to operate (you should hear Alfresco's Kevin Cochrane talk about the delta in saleperson costs - it's eye opening).

Quite. But I would go much further.

The reason we will probably never see a billion-dollar open source company is the fact that turnover is the wrong metric to focus on for such entities. Looking purely at income misses out on all the other kinds of value that are involved - for example, all the software that is downloaded and used by people who aren't paying customers. It excludes the value added to the open source ecosystem in terms of helping other free software projects, either directly through code re-use, or indirectly by promoting the overall concept.

These are all things that open source companies do routinely, and yet they receive next to no credit for it - financial or otherwise. It's part of a wider problem with current economic analysis, which also typically fails to take into account factors like environmental damage when estimating the costs of production.

And at a deeper level still, there is something that O'Grady himself touches on:

Open source in many respects seems to underpin a future in which more people will make less, rather than less people making more. I know which I'd pick.

Focussing only on the money involved completely overlooks other crucial elements of free software: the social and ethical aspects. It's good to see that O'Grady is one of the people who gets this.

Update: Apparently, Matt Asay disagrees with O'Grady (and hence me).

Lies, Damned Lies and Baltimore Sun Op-Eds

This op-ed on net neutrality in the Baltimore Sun is extraordinary:

The "neutral" proposal that companies like Google are touting will ensure that they never have to pay a dime no matter how much bandwidth they use, and consumers who may only use their computers to send e-mail and play Solitaire get to foot the bill.

Er, that is, companies like Google never have to pay a dime apart from the millions of dollars in connection fees that they cough up each year? Well, that's an interesting use of the word "never", as in "always".

I'd like to put this statement down to sheer stupidity, but, alas, I fear that it may be due to the fact that the authors are co-chairs of the "Hands Off the Internet" pressure group, which, by an amazing coincidence just happens to be funded by the big telco companies who are trying to kill net neutrality.

Pathetic. (Via TechDirt.)

We Are All Great Communicators Now

Tom Foremski has a thought-provoking post about the Internet's disruptive effects. More specifically, he asks: Where are they? His answer - that the real disruption is happening in the media sector - is a good one, but incomplete, I think.

He rightly observes that

every company is a media company to a greater or lesser degree. Because every company tells stories, it publishes to its customers, to its staff, to its new hires. We now have two-way media technologies and those that can adapt and master those technologies, and become technology-enabled media companies, will survive.

But this is not about publishing, which is essentially unidirectional (however much it may pay lip-service to the idea of listening to readers): it is about communication, which is truly two-way. And that is the key, disruptive effect of the Internet: it is forcing all companies to communicate with their customers - to speak and to listen - not just publish to them.

That is why so-called social networking lies at the heart of Web 2.0 technologies, and why integrating such egalitarian principles into their business is going to be so hard for most companies, given their natural penchant for a more seigneurial command and control approach.

The Politics of Knowledge and the Online Republic

Digital Universe's Larry Sanger has posted another of his thoughtful essays, this time on the central issue of the "politics of knowledge":

[T]he main arena of the new politics of knowledge is project governance. Wikipedia is famously unaccommodating of the usual privileges of experts; there is no special place [for] them in Wikipedia-land. You might arrive at Slashdot possessed of the finest-tuned understanding of tech news, but when you join in making and rating comments, you become just another rank-and-file member until, perhaps, you prove yourself by the lights of Slashdot (which might or might not correspond to anything deserving the name “expertise”). On Digg, your vote counts the same as everybody else’s. And so forth.

So radical egalitarianism is built into the governance models of many collaborative projects. When, therefore, the Creative Public votes with their feet for Web 2.0 resources that reject the need of editors, such as Wikipedia, Digg, and MySpace (again, the latter may not be collaborative, but it’s definitely editor-free), they are thereby denying epistemic authority to the people who otherwise would be their editors.

Along the way, he introduces the idea of the "online republic":

Let me explain rather better what I mean by an “online Republic,” and then why I think that it is the only system that will have desirable epistemic consequences, in the long run. Bear in mind first that Republics have a definite democratic aspect, since power and authority in the project in actual practice (not just in the PR material) must emanate from the participants–not from the website owners. But not just anyone can count as a participant for voting purposes. Insofar as we are talking about an online polity that is shaped not by just anyone’s arbitrary whim but by “law,” there must be a process whereby someone becomes a member of the community and thus subject to its “laws.” In practical terms, this means no doubt that “full citizenship” must be earned through participation and through a declaration not to undermine at least the fundamental laws of the polity (i.e., engage in “insurrection”). The “fundamental laws” are essentially a community charter, which is carefully written, carefully interpreted, and, once established, very hard to change. The rule of law arguably requires a robust, well-respected constitution: if laws are very easy to change, legislators and judges can, with a flick of the pen, change the entire system into something else entirely. Finally, a Republic requires the free election of representatives, the basic qualifications of which (if any) are described by the charter, who both make and enforce the rules of the project.

His essay also has some handy links to other relevant materials, including this New Yorker feature on Wikipedia, currently the best overall introduction to the project and its history.

31 July 2006

A Noteworthy Addition: Lotus Notes for GNU/Linux

For some, the words "Lotus Notes" are enough to strike fear into the heart. But for younger readers, that resonance is probably absent, and so the importance of the recent port of the Lotus Notes client to GNU/Linux is probably lost.

In a sense, Lotus Notes for GNU/Linux is noteworthy precisely because the program is the epitome of corporate computing, with all that this implies. Its appearance is further proof that GNU/Linux has arrived. It also removes yet another obstacle to adopting free software in a business context for some 120 million people currently using the program on other platforms - whether willingly or not.

UK PubMed Central: Good News, Bad News?

The US PubMed Central service has become one of the cornerstones of biomedical research, and a major milestone on the way towards full open access to all scientific knowledge.

Just as the world's central genomic database GenBank exists in three global zones - the US, Europe and Japan - so the natural step would be to roll out PubMed Central as an international service. The first move towards that has now been made with the announcement that a consortium of UK institutions has been chosen to set up UK PubMed Central (UKPMC). That's the good news. The bad news - maybe - is that one of them is the British Library.

Why is that bad news, since the British Library is one of the pre-eminent libraries in the world? Well, that may be so, but it is also deeply involved with Microsoft's Open XML, the rival to the OpenDocument Format; Microsoft is trying to push Open XML through a standardisation process to match ODF's full ISO status. It is particularly regrettable that the British Library is bolstering this pseudo-standard with its support, rather than wholeheartedly backing ODF, a totally open, vendor-independent standard, and this could be a real problem because of the British Library's role in the UKPMC consortium:

In the initial stages of the UKPMC programme, the British Library will lead on setting up the service, developing the process for handling author submissions and marketing the resource to the research community.

It's the "handling authors submissions" that could be bad news: if, for example, the British Library gave any preference for submissions be made in Microsoft's XML format formats, it would be a huge step back for openness. The US PubMed Central does the Right Thing, and takes submissions in either XML or SGML. Let's hope the UK PubMed Central follows suit and goes for a neutral submissions policy. (Via Open Access News.)

Moguls of New Media, Moguls of Old Media

The Wall Street Journal has a nice piece about what it calls the "moguls of new media":


As videos, blogs and Web pages created by amateurs remake the entertainment landscape, unknown directors, writers and producers are being catapulted into positions of enormous influence. Each week, about a half-million people download a comedic video podcast featuring a former paralegal. A video by a 30-year-old comedian from Cleveland has now been watched by almost 30 million people, roughly the audience for an average "American Idol" episode. The most popular contributor to the photo site Flickr.com just got a contract to shoot a Toyota ad campaign.

What I like about this WSJ feature is that it shows clearly the difference between the new media it celebrates and the old media it represents. The WSJ piece is well written, well edited and full of well-researched facts. Rather unlike new media, which tends to be scrappy and light on substance. But then, that's its charm, just as the reason the WSJ will always have a role, even when new media becomes even more pervasive and even more successful, is because it will never be any of these things. (Via Slashdot.)

CNN's Citizen Media is the Message

The news that CNN is now soliciting user-generated stories and content - rather as the BBC has been doing for a while - is important not so much for what will result, but for the message it sends. Even if the user-generated content turns out to be nugatory, the fact that CNN is jumping on this bandwagon gives the latter more impetus, which can't be a bad thing in terms of re-inventing media.

Brazil: Next to Go Nuts for ODF?

Judging by this article, Brazil's federal government may well be the next to adopt ODF as its official standard. As the news item notes, adopting open source is all very well, but if your documents are still locked into proprietary formats like Microsoft Office, you're only half-done.

The great thing about these announcements is that there's a positive feedback loop: the more that are made, the more other governments feel safe in following suit, which boosts the process even more. (Via Erwin's StarOffice Tango.)

Gold Digg-ing

The news that someone is offering their Digg profile on eBay is hardly a surprise in these days when people will try to sell anything there; but it's nonetheless significant. Digg is one of the leading Web 2.0 sites, and a leading exponent of the power of social networks. What can be done with Digg can be applied elsewhere.

This will lead to a de-coupling between the person who creates the online account in these networks and the account itself, which can be sold to and used by others. Which raises the question: wherein lies the value of that account? If the person who created it - and whose social "value" it reflects - moves on, what then of that value? In effect, the account becomes more of a brand, with certain assumed properties that can be lost as easily as they were gained if the new owner fails to maintain them.