29 August 2007

The Value of Free

Nothing new here for readers of this blog, but good to see others moving in the same direction:


I believe we should consider anything we publish on the web as an advertisement: promotion material, and only that. We can use this to sell the following:

* pretty or convenient copies (maybe we will see the reappearance of the artful music album!)
* signed copies
* limited edition high quality copies (things one can proudly display on a wall at home)
* time: live performances!

Blogging Open Stack Integration

One of the great but rather submerged stories in the open source world is stack integration. With the exception of the LAMP stack, free software solutions have been rather fragmented, with little inter-project coordination. One important development in this space is the creation of the Open Solutions Alliance, whose main task is ensuring better cooperation between disparate products.

I wrote about this recently, and I notice that the OSA blog is quite active at the moment. It's a good place to find out what exactly is happening in this important but neglected area.

I'm Back....

...be very afraid.

20 August 2007

Radio Silence

For anyone who cares - well, there might be someone - Radio Opendotdotdot is falling silent for a few days. Back soon.

Oh, Tell Me the Truth about OOXML

The ODF vs. OOXML battle is really hotting up - a sure sign that this is important. One of the key issues is whether OOXML can ever be fully implemented by anyone other than Microsoft: if it can't, then it can hardly be called a true open standard. Here's some analysis that suggests it can't. Not that that will stop it becoming one....

17 August 2007

Putting some (Source)Fire under ClamAV

The open source anti-virus software project ClamAV is one of my favourite pieces of free code. I've used it for years now, and recommended it to dozens of people. But I've always been a bit worried about its business model: could it continue to grow?

Well, now it looks like it can, since Sourcefire, creator of SNORT, has acquired the project:


With nearly 1 million unique IP addresses downloading ClamAV malware updates daily across more than 120 mirrors in 38 countries, ClamAV is one of the most broadly adopted open source security projects worldwide. ClamAV has also been recognized as comparable in quality and coverage to leading commercial anti-virus solutions. Most recently, at LinuxWorld this year, ClamAV was one of only three anti-virus technologies to provide a 100% detection rate in their live 'Fight Club' test featuring live submissions from the show audience.

Under terms of the transaction, Sourcefire has acquired the ClamAV project and related trademarks, as well as the copyrights held by the five principal members of the ClamAV team including project founder Tomasz Kojm. Sourcefire will also assume control of the open source ClamAV project including the ClamAV.org domain, web site and web site content and the ClamAV Sourceforge project page. In addition, the ClamAV team will remain dedicated to the project as Sourcefire employees, continuing their management of the project on a day-to-day basis.

As the above points out, ClamAV was one of only three anti-virus technologies to provide a 100% detection rate, and this only reinforces my confidence in using it day-in, day-out. If you don't know it, do take a look. (Via Matthew Aslett.)
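Incidentally, for anyone who has never scripted it, scanning with ClamAV is pleasantly simple. Here's a minimal sketch that shells out to clamscan and acts on its exit code (0 means clean, 1 means a virus was found); the path is illustrative, and it assumes clamscan is installed with an up-to-date signature database:

import subprocess

# Recursively scan a directory, reporting only infected files.
result = subprocess.run(
    ['clamscan', '-r', '--infected', '/home/glyn/mail'],
    capture_output=True, text=True,
)
if result.returncode == 1:
    print('Infected files found:')
    print(result.stdout)
elif result.returncode == 0:
    print('All clean.')
else:
    print('Scan error:', result.stderr)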

16 August 2007

Paying the Price of Intellectual Monopolies

Oh look, here's unnecessary, Draconian intellectual monopoly regulation that has major negative consequences:


An unexpected implication in the legislating procedure of the proposed EU Directive on criminal measures aimed at ensuring the enforcement of intellectual property rights (IPRED2) puts legitimate businesses under clear threat of criminal sanctions.

Now, why am I not surprised by this?

The Triumph of Free (as in Beer)

With The New York Times and The Wall Street Journal said to be looking at removing the “pay wall” around their online content, and others – including CNN, Google and AOL – having already done so, one question springs to mind: Are we seeing the death of paid content online, and the return of free as a business model?

Yup - at least, free as in beer: now we need to work on the free as in freedom part.

Google Health...

...is coming. And you thought the privacy issues of using Google were bad now. (Via John Battelle.)

Not So Au Courant

This piece from The Courant is like the coelacanth: not very pretty, but fascinating for its atavistic traits:

Unlike copyright-protected software, such as Microsoft's Windows, open source software is available either as a free public-domain offering or under a nominal licensing fee.

Well, no. To be strictly open source, software must have an OSI-approved licence. Such licences generally (always?) depend on copyright law for their enforcement. So, by definition, open source software uses copyright as much as Microsoft's Windows, just for different ends.

This was a common confusion when free software started appearing in the mainstream, but it's quite surprising to see it popping up nowadays.

Of Open and Closed Geography

Talking of the price we pay for idiotically closed geographical data:

The United States has benefitted in many ways from having public data sets that are freely used by scholars, commercial firms, consultants, and the public. An example of this is the TIGER system (Topologically Integrated Geographic Encoding and Referencing system, http://www.census.gov/geo/www/tiger/). Many countries do not, and one British geospatial expert estimated that the closed nature of their system has cost them one billion pounds in lost business.

(Via Open Access News.)

The Idiots of OS (Ordnance Survey)

This really makes my blood boil.

After a year of negotiations, academic geographers have conceded defeat in their attempt to find a way to make a pioneering 3D representation of the capital, Virtual London, available to all comers via the Google Earth online map.

Followers of Technology Guardian's Free Our Data campaign will have guessed the reason: Virtual London is partly derived from proprietary data owned by the government through its state-owned mapping agency, Ordnance Survey (OS). What makes the situation bizarre is that Virtual London's development was funded by another arm of the government, the office of the mayor of London.

In other words, I helped pay for this information, twice - as taxpayer, and as London ratepayer - and yet I am not allowed to access it.

The Ordnance Survey's excuse is pathetic:

OS said granting Google special terms for Virtual London would be unfair on other licensees. "We provide an open, fair and transparent set of terms for providers seeking to operate in the same commercial space as each other. We cannot therefore license Google in a different way to other providers. We are completely supportive of anyone putting our data on the web as long as they have a licence to do so." Google would not comment.

Commercial space - and what about the public space? You know, those tiresome little people who pay your salary?

Thanks for nothing.

Open Source's Best-Kept Secret Redux

About 18 months ago, I wrote a post called "Open Source's Best-Kept Secret" about Eclipse, how wonderful it was, and yet how few knew about it. Now what do I find?

Eclipse may be the most important open-source "project" that people outside the industry, and even some within it, have never heard of.

Yup, Matt and I agree again. His piece is an excellent interview with the head of Eclipse, Mike Milinkovich. I also interviewed him recently, for my feature about the open source ecosystem in Redmond Magazine. Matt's ranges more widely, and is probably the best intro to what Eclipse is up to, how it functions, and why it is so important.

Indeed, I wonder whether it will actually prove to be the most important open source project of all in the long term. As Matt points out:

In late June, Eclipse made available the largest-ever simultaneous release of open-source software, called Europa: 17 million lines of code, representing the contributions of 310 open-source developers in 19 countries. Twenty-one new tools were included in the "Europa" release, all free to download.

Think about that. The Linux kernel has around 6 million lines of code.... The Java Development Kit that Sun open sourced has 6.5 million.... Sun's StarOffice release in 2000 (which was believed to be the largest open-source release to that point) had 9 million.... Firefox has 2.5 million.

Yeah, think about it....

15 August 2007

They Gave Me of the Fruit...

...and I did eat:

But now an international scientific counterculture is emerging. Often referred to as “open science,” this growing movement proposes that we err on the side of collaboration and sharing. That’s especially true when it comes to creating and using the basic scientific tools needed both for downstream innovation and for solving broader human problems.

Open science proposes changing the culture without destroying the creative tension between the two ends of the science-for-innovation rope. And it predicts that the payoff – to human knowledge and to the economies of knowledge-intensive countries like Canada – will be much greater than any loss, by leveraging knowledge to everyone’s benefit.

"Sharing the fruits of science", it's called. Nothing new, but interesting for the outsider's viewpoint. (Via Open Access News.)

Welcome to the Era of Personal Genomics

I've been wittering on about personal genomics for some time: well, it's here, people. If you don't believe me, take a look at this site (note, it's one of those old-fashioned FTP thingies, but Firefox should cope just fine).

Not much to see, you say? Just a couple of boring old directories - one called "Venter", the other "Watson". And inside those directories, lots of pretty massive files - some 35 Mbytes, some double that. And inside those files? Oh, just some boring letters; you know the kind of thing - AAGTGGTACCATTGACGCACAGGACACAGTG etc.

Nothing much: just the essence of the first two people to have their entire genomes (or nearly) sequenced - and all made freely available.... (Via Discovering Biology in a Digital World.)
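If you do grab one of those files, there really is no magic inside. Here's a minimal sketch that tallies the four bases in one of them, assuming the file is in the usual FASTA format; the local filename is hypothetical:

from collections import Counter

# Count the DNA bases in a FASTA-style sequence file.
counts = Counter()
with open('watson.fa') as f:
    for line in f:
        if not line.startswith('>'):        # '>' lines are FASTA headers
            counts.update(line.strip().upper())

total = sum(counts[b] for b in 'ACGT') or 1
for base in 'ACGT':
    print(f'{base}: {counts[base]:>12,} ({counts[base] / total:.1%})')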

O'Reilly? I Think Not

Once again, Matt gets it, and Tim doesn't:

"I will predict that virtually every open source company (including Red Hat) will eventually be acquired by a big proprietary software company."

Thus spake Tim O'Reilly in the comments to one of his other posts. Tim believes that open source, at least as defined by open-source licensing, has a short shelf-life that will be consumed by Web 2.0 (i.e., web companies hijacking open-source software to deliver proprietary web services) or by traditional proprietary software vendors.

In other words, why don't I just give up, sell out, and go home? I guess I would if I thought that Tim were right. He's not, not in this instance.

There's something more fundamental going on here than "Proprietary software meets open source. Proprietary software decides to commandeer open source. Open source proves to be a nice lapdog to proprietary software." I actually believe that open source, not proprietary software, is the natural state of the industry, and that Tim's proprietary world is anomalous.

I particularly liked this distinction between the service aspects of software, and the attempts to view it as an instantiation of various intellectual monopolies:

Suddenly, the license matters more, not less, because it is the license that ensures the conversation focuses on the right topic - service - rather than on inane jabberings that only vendors care about. You know, like intellectual property.

And there's another crucial reason why proprietary software companies can't just open their chequebooks and acquire those pesky open source upstarts. Unlike companies who seem to think that they are co-extensive with the intellectual monopolies they foist on customers, open source outfits know they are defined by the high-quality people - both employees and those out in the community - that code for the customers.

For example, one reason people take out subscriptions to Red Hat's offerings is that they get to stand in line for the use of Alan Cox's brain. Imagine, now, that proprietary company X "buys" Red Hat: well, what exactly does it buy? Certainly not Alan Cox's brain, which will leave with him (one hopes) when he moves immediately to another open source company (or just hacks away in Wales for pleasure). Sure, the purchaser will have all kinds of impressive legal documents spelling out what it "owns" - but precious little to offer customers anymore, who are likely to follow wherever Alan Cox and his ilk go.

The (Uncommon) Fedora Commons

When I first heard about Fedora Commons I naively assumed it had something to do with the Linux distro Fedora, but I was wrong:

Fedora Commons is a non-profit organization providing sustainable technologies to create, manage, publish, share and preserve digital content as a basis for intellectual, organizational, scientific and cultural heritage by bringing two communities together.

Communities of practice that include scholars, artists, educators, Web innovators, publishers, scientists, librarians, archivists, records managers, museum curators or anyone who presents, accesses, or preserves digital content.

Software developers who work on the cutting edge of open source Web and enterprise content technologies to ensure that collaboratively created knowledge is available now and in the future.

Fedora Commons is the home of the unique Fedora open source software, a robust integrated repository-centered platform that enables the storage, access and management of virtually any kind of digital content.

So not only is Fedora an organisation - recently funded to the tune of $4.9 million by the Gordon and Betty Moore Foundation - aiming to create a commons of "intellectual, organizational, scientific and cultural heritage", but it is also a piece of code:

Institutions and organizations face increasing demands to deliver rich digital content. A scan of the web reveals complex multi-media content that combines text, images, audio, and video. Much of this content is produced dynamically through the use of servlet technology and distributed web services.

Delivery of rich content is possible through a variety of technologies. But, delivery is only one aspect of a suite of content management tasks. Content needs to be created, ingested, and stored. It needs to be aggregated and organized in collections. It must be described with metadata. It must be available for reuse and refactoring. And, finally, it must be preserved.

Without some form of standardization, the costs of such management tasks become prohibitive. Content managers find themselves jury-rigging tasks onto each new form of digital content. In the end, they are faced with a maze of specialized tools, repositories, formats, and services that must be upgraded and integrated over time.

Content managers need a flexible content repository system that allows them to uniformly store, manage, and deliver all their existing content and that will accommodate new forms that will inevitably arise in the future.

Fedora is an open source digital repository system that meets these challenges.

In fact, Fedora is nothing less than "Flexible Extensible Digital Object Repository Architecture". So the name is logical - pity it's so confusing in the context of open source.

Linux Weather Forecast

Get your umbrellas out for the Linux Weather Forecast:

The need for a Linux Weather Forecast arises out of Linux’s unique development model. With proprietary software, product managers define a “roadmap” they deliver to engineers to implement, based on their assessments of what users want, generally gleaned from interactions with a few customers. While these roadmaps are publicly available, they are frequently not what actually gets technically implemented and are often delivered far later than the optimistic timeframes promised by proprietary companies.

Conversely, in Linux and open source software, users contribute directly to the software, setting the direction with their contributions. These changes can quickly get added to the mainline kernel and other critical packages, depending on quality and usefulness. This quick feedback and development cycle results in fast software iterations and rapid feature innovation. A new kernel version is generally released every three months, new desktop distributions every six months, and new enterprise distributions every 18 months.

While the forecast is not a roadmap or centralized planning tool, the Linux Weather Forecast gives users, ISVs, partners and developers a chance to track major developments in Linux and adjust their business accordingly, without having to comb through mailing lists of the thousands of developers currently contributing to Linux. Through the Linux Weather Forecast, users and ecosystem members can track the amazing innovation occurring in the Linux community. This pace of software innovation is unmatched in the history of operating systems. The Linux Weather Forecast will help disseminate the right information to the ever growing audience of Linux developers and users in the server, desktop and mobile areas of computing, and will complement existing information available from distributions in those areas.

Good to see Jonathan Corbet, editor of LWN.net, for whom I write occasionally, spreading some of his deep kernelly knowledge in this way.

14 August 2007

RSS as the Lubricant of Openness

Facebook creaks open a little more - and RSS is the lubricant. (Via TechCrunch.)

GiveMeaning? - Give Me a Break

I wrote recently about the plight of the Tibetan people. One of the problems is that it is hard for an average non-Tibetan to do much to help the situation. So I was pleased that Boing Boing pointed me to what sounded a worthy cause that might, even if in a small way, help preserve Tibetan culture:

The Tibetan Endangered Music Project has so far recorded about 400 endangered traditional Tibetan songs. We now have the opportunity to make these songs available online, at a leading Tibetan language website (www.tibettl.com). However, this volunteer run website is unable to fund hosting for our material. The cost of hosting space is 1.5 RMB (less than 20 US cents) for every MB. One song in mp3 format is approximately 1.5 MB. 1900 USD would allow us to buy 10 GB of hosting space, which will take care of all our needs for the foreseeable future (allowing 6700 1.5 MB songs to be uploaded). It would also allow us to expand to video hosting in the future, or to provide high quality (.wav) formats instead of only compressed mp3 format.
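The arithmetic checks out, near enough; here's a quick sanity check, with the 2007 exchange rate of roughly 8 RMB to the US dollar as my assumption:

mb = 10 * 1024          # 10 GB of hosting, in MB
cost_rmb = mb * 1.5     # at 1.5 RMB per MB
print(cost_rmb / 8)     # ~1,920 USD -- the "1900 USD" being asked for
print(mb / 1.5)         # ~6,827 songs of 1.5 MB each -- the "6700"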

Wow - preserving the Tibetan musical commons for the Tibetans: sign me up, I thought.

So I did sign up. But that's where the problem began.

Despite being signed up and in, I could not - cannot - find anywhere to give money to this lot. Now, naively, I would have thought that a site called GiveMeaning, expressly designed to help people give money to worthy causes, would, er, you know, help people give money, maybe with a big button saying "GIVE NOW". But what do I know? I've only been using the Web for about 14 years, so maybe I'm still a little wet behind the ears.

On the other hand, it could just be that this is one of the most stupid sites in the known universe, designed to drive altruists mad as a punishment for wanting to help others. Either way, it looks like the Tibetan musical commons is going to have to do without my support, which is a pity.

Why Openness Matters - Doubly

Here's a great demonstration of why openness is so important.

Wikipedia is famously open, so in general anyone can edit stuff. But this editing is also done in the open, in that all changes are tracked. Now, some people edit anonymously, but their IP addresses are logged. This information too is freely available, so here's an idea that some bright chap had:

Griffith thus downloaded the entire encyclopedia, isolating the XML-based records of anonymous changes and IP addresses. He then correlated those IP addresses with public net-address lookup services such as ARIN, as well as private domain-name data provided by IP2Location.com.

The result: A database of 5.3 million edits, performed by 2.6 million organizations or individuals ranging from the CIA to Microsoft to Congressional offices, now linked to the edits they or someone at their organization's net address has made.
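The technique is simple enough to sketch. Something like the following - a minimal, illustrative version, assuming the MediaWiki XML export format and ARIN's plain whois service (RFC 3912); the dump filename and the OrgName field are assumptions, and a real run over the full dump needs far more care:

import socket
import xml.etree.ElementTree as ET

NS = '{http://www.mediawiki.org/xml/export-0.3/}'   # assumed schema version

def whois(ip, server='whois.arin.net'):
    """Raw RFC 3912 whois: send the IP, read back the registry record."""
    with socket.create_connection((server, 43), timeout=10) as s:
        s.sendall((ip + '\r\n').encode())
        chunks = []
        while chunk := s.recv(4096):
            chunks.append(chunk)
    return b''.join(chunks).decode(errors='replace')

def anonymous_edits(dump_path):
    """Yield (page title, IP) for every revision made by a bare IP address."""
    title = None
    for _, elem in ET.iterparse(dump_path):
        if elem.tag == NS + 'title':
            title = elem.text
        elif elem.tag == NS + 'ip':
            yield title, elem.text
        elem.clear()                # keep memory flat on multi-gigabyte dumps

for title, ip in anonymous_edits('pages-meta-history.xml'):
    record = whois(ip)
    org = next((l for l in record.splitlines() if l.startswith('OrgName')), '?')
    print(f'{ip} edited "{title}" -> {org}')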

As a result, dedicated crowd-sourcers are poring over Wikipedia, digging out those embarrassing self-edits. For example:

On Christmas Eve 2004, a Disney user deleted a citation on the "digital rights management" page to DRM critic Cory Doctorow along with a link to a speech he gave to Microsoft's Research Group on the subject. Later, a Disney user altered the "opponents" discussion of the entry, arguing that consumers embrace DRM: "In general, consumers knowingly enter into the arrangement where they are granted limited use of the content."

or:

"Removed ECHELON link, irrelevant to article," reads the comment explaining this cut. The contributor's IP address belongs to the National Security Agency.

or even:

Microsoft's MSN Search is now "a major competitor to Google". Take it from this anonymous contributor, whose IP address belongs to Waggener Edstrom, Microsoft's PR firm.

Now that's what I call openness.

Amazon Goes Lulu

I'm a big fan of Lulu.com, the self-publishing company, not least because the man behind it, Bob Young, also co-founded Red Hat, and is one of the most passionate defenders of the open source way I have come across.

So the news that CreateSpace is going into the on-demand publishing business is interesting - especially since the company is a subsidiary of Amazon, which means that self-published authors will be able to hitch a ride on the Amazon behemoth. As far as I can tell, though, Lulu still offers a more thorough vision, with its global reach and finer-grained publishing options. If nothing else, Amazon's entry into this space will serve to validate the whole idea in the eyes of doubters.

Google Books: A Cautionary Tale

Google Books is important:

the Google Project has, however unintentionally, made not only conventional libraries themselves, but other projects digitizing cultural artifacts appear inept or inadequate. Project Gutenberg and its 17,000 books in ascii appear insignificant and superfluous beside the millions of books that Google is contemplating. So do most scanning projects by conventional libraries. As a consequence of the assumed superiority of Google’s approach, therefore, it is highly unlikely that either the funds or the energies for an alternative project of similar magnitude will become available, nor are the libraries who are lending their books (at significant costs to their funds, their books, and their users) likely to undertake such an effort a second time. With each scanned page, Google Books’ Library Project, by its quantity if not necessarily by its quality, makes the possibility of a better alternative unlikely. The Project may then become the library of the future, whatever its quality, by default. So it does seem important to probe what kind of quality Google Book Project might present to an ordinary user that Google envisages wanting to find a book.

But also unsatisfactory:

The Google Books Project is no doubt an important, in many ways invaluable, project. It is also, on the brief evidence given here, a highly problematic one. Relying on the power of its search tools, Google has ignored elemental metadata, such as volume numbers. The quality of its scanning (and so we may presume its searching) is at times completely inadequate. The editions offered (by search or by sale) are, at best, regrettable.

Rather worrying. (Via O'Reilly Radar.)

Microsoft Bends its Knee to the OSI

So, Microsoft has finally done it, and submitted two of its licences to the OSI for approval. Here's my earlier analysis of what's going on here.

A Public Enquiry into the Public Domain

The public domain is a vastly underappreciated resource - which probably explains why there have been so many successful assaults on it in recent years through copyright, patent and trademark extensions. But now, it seems, people are starting to wake up to its central importance for the digital world:

The new tools of the information society mean that public domain material has considerable potential for re-use - by citizens or for new creative expressions (e.g. documentaries, services for tourism, learning material). It contains published works, such as literary or artistic works, music and audiovisual material for which copyright has expired, material that has been assigned to the public domain by the right holders or by law, mathematical methods, algorithms, methods of presenting information and raw data, such as facts and numbers. A rich public domain has, logically, the potential to stimulate the further development of the information society. It would provide creators – e.g. documentary makers, musicians, multimedia producers, but also schoolchildren doing a Web project – with raw material that they can build on and experiment with, without high transaction or other costs. This is particularly important in the digital context, where the integration of existing material has become much easier.

Although there is some evidence of its importance, there has been no systematic attempt to map or measure its social and economic impact. This is a problem when addressing policy issues that build on public domain material (e.g. digital libraries) or that have an impact on the public domain (e.g. discussions on intellectual property instruments) in the digital age.

The European Union aims to remedy this lack with a study:

Call for tender: "Assessment of the Economic and Social impact of the Public Domain in the Information Society" was published today in the Supplement to the Official Journal of the European Union 2007/S 151-187363. The envisaged purpose of the assessment is to analyse the economic and social impact of the public domain and to gauge its potential to contribute for the benefit of the citizens and the economy.

Portuguese Ministry of Education Goes Free

The Portuguese Ministry of Education is doing the sensible thing and giving away a CD full of free (Windows) software to 1.6 million students, saving itself (and the taxpayers) around 300 million Euros. Nothing amazing about that, perhaps, since it's a sensible thing to do (not that everyone does it).

What's more interesting, for me, at least, is the set of software included on the CD:

* OpenOffice.org
* Firefox
* Thunderbird
* NVU
* Inkscape
* GIMP

These are pretty much the cream of the free software world, and show the increasing depth of free software on the desktop. Also interesting are the specifically educational programs included:

* Freemind and CmapTools
* Celestia
* Geogebra
* JMOL
* Modellus

Some of these were new to me, notably Geogebra:

GeoGebra is a free and multi-platform dynamic mathematics software for schools that joins geometry, algebra and calculus.

and Modellus (which isn't actually free software, just free of charge):

Modellus enables students and teachers (high school and college) to use mathematics to create or explore models interactively.

It's always surprised me that more use isn't made of free software in education, since the benefits are obvious: by pooling efforts, duplication is eliminated, and the quality of tools improved. (Via Erwin Tenhumberg.)

13 August 2007

Red Hat Meets Eclipse

Here's an interesting example of major open source projects meeting to produce a highly-targeted commercial product:

Red Hat Developer Studio is a set of eclipse-based development tools that are pre-configured for JBoss Enterprise Middleware Platforms and Red Hat Enterprise Linux. Developers are not required to use Red Hat Developer Studio to develop on JBoss Enterprise Middleware and/or Red Hat Linux. But, many find these pre-configured tools offer significant time-savings and value, making them more productive and speeding time to deployment.

Google's Gift of Taking

Absolutely:

It's not often that Google kills off one of its services, especially one which was announced with much fanfare at a big mainstream event like CES 2006. Yet Google Video's commercial aspirations have indeed been terminated: the company has announced that it will no longer be selling video content on the site. The news isn't all that surprising, given that Google's commercial video efforts were launched in rather poor shape and never managed to take off. The service seemed to only make the news when embarrassing things happened.

Yet now Google Video has given us a gift—a "proof of concept" in the form of yet another argument against DRM—and an argument for more reasonable laws governing copyright controls. How could Google's failure be our gain? Simple. By picking up its marbles and going home, Google just demonstrated how completely bizarre and anti-consumer DRM technology can be. Most importantly, by pulling the plug on the service, Google proved why consumers have to be allowed to circumvent copy controls.

12 August 2007

The Real Spectrum Commons

I have referred to radio spectrum as a commons several times in this blog. But there's a problem: since spectrum seems to be rivalrous - if I have it, you can't - this means that the threat of a tragedy of the commons has to be met by regulation. And that, as we see, is often unsatisfactory, not least because powerful companies usually get the lion's share.

But it seems - luckily - I was wrong about spectrum necessarily being rivalrous:

Software defined radio that is beginning to emerge from the labs into actual tests has the ability to render all spectrum management moot. Small wonder that the legal mandarins there have begun to sneer that open source SDR cannot be trusted.

In other words, when you make radio truly digital, it can be intelligent, and simply avoid the problem of commons over-use.

11 August 2007

Irony in the Blood

Well spotted:


To recap:

1. In all likelihood, fossil fuel emissions are one of the primary causes of global warming;

2. global warming has melted the Arctic ice cap faster than any time on record; so

3. Russia, Denmark, Canada, and the United States are racing to make a no-more-land grab in the Arctic; in order to

4. claim fossil fuel drilling rights for the Arctic seabed.

Middle Kingdom Patently on the Way to the Top

This could have interesting repercussions:

China has seen a sharp increase in requests for patents, according to the UN's intellectual property agency.

The number of requests for patents in China grew by 33% in 2005 compared with the previous year.

That gives it the world's third highest number behind Japan and the United States, the agency said.

Why is this important? Well, currently, patents are being pushed largely by the US as a way of asserting itself economically, notably against that naughty China, which, it is frequently claimed, just rips off the West's ideas. But as China becomes one of the world's leading holders of patents, we can expect to see it start asserting those against everyone else - including the US. Which might suddenly find that it is not quite so keen on those unfair intellectual monopolies after all....

SCO KO'd, Novell Renewed

Well, we all knew it would happen, and, finally, it has:

Judge Dale Kimball has issued a 102-page ruling [PDF] on the numerous summary judgment motions in SCO v. Novell. Here it is as text. Here is what matters most:

[T]he court concludes that Novell is the owner of the UNIX and UnixWare Copyrights.

That's Aaaaall, Folks! The court also ruled that "SCO is obligated to recognize Novell's waiver of SCO's claims against IBM and Sequent". That's the ball game. There are a couple of loose ends, but the big picture is, SCO lost. Oh, and it owes Novell a lot of money from the Microsoft and Sun licenses.

But there's another interesting aspect to this: SCO lost, and Novell won:

But we must say thank you to Novell and especially to its legal team for the incredible work they have done. I know it's not technically over and there will be more to slog through, but they won what matters most, and it's been a plum pleasin' pleasure watching you work. The entire FOSS community thanks you for your skill and all the hard work and thanks go to Novell for being willing to see this through.

As I've written elsewhere, we really can't let Novell fail, whatever silliness it gets up to with Microsoft: it is simply too important for these kinds of historical reasons.

Update: Here's some nice analysis of the implications.

10 August 2007

The Liability of Closed Source Software

It's a pity that reports from the House of Lords' Science and Technology Committee are so long, because they contain buckets of good stuff - not least because they draw on top experts. A case in point is the most recent, looking at personal Internet security, which includes luminaries such as Bruce Schneier and Alan Cox.

The recommendations are a bit of a mixed bag, but one thing that caught my eye was in the context of making suppliers liable for their software. As Bruce puts it:

“We are paying, as individuals, as corporations, for bad security of products”—by which payment he meant not only the cost of losing data, but the costs of additional security products such as firewalls, anti-virus software and so on, which have to be purchased because of the likely insecurity of the original product. For the vendors, he said, software insecurity was an “externality … the cost is borne by us users.” Only if liability were to be placed upon vendors would they have “a bigger impetus to fix their products”

Of course, product liability might be a bit problematic for free software, but again Schneier has a solution:

Any imposition of liability upon vendors would also have to take account of the diversity of the market for software, in particular of the importance of the open source community. As open source software is both supplied free to customers, and can be analysed and tested for flaws by the entire IT community, it is both difficult and, arguably, inappropriate, to establish contractual obligations or to identify a single “vendor”. Bruce Schneier drew an analogy with “Good Samaritan” laws, which, in the United States and Canada, protect those attempting to help people who are sick or injured from possible litigation. On the other hand, he saw no reason why companies which took open source software, aggregated it and sold it along with support packages—he gave the example of Red Hat, which markets a version of the open source Linux operating system—should not be liable like other vendors.

Mr Dell Does the *In*decent Thing

I was wrong:

UK users will have to pay a premium for Dell's Linux PCs, despite Dell's claim to the contrary.

Customers who live in the UK will have to pay over one-third more than customers in the US for exactly the same machine, according to detailed analysis by ZDNet.co.uk.

The Linux PCs — the Inspiron 530n desktop and the Inspiron 6400n notebook — were launched on Wednesday. The 530n is available in both the UK and the US, but the price differs considerably.

Comparing identical specifications, US customers pay $619 (£305.10) for the 530n, while UK customers are forced to pay £416.61 — a premium of £111, or 36 percent. The comparison is based on a machine with a dual-core processor, 19" monitor, 1GB of RAM and a 160GB hard drive. The same options for peripherals were chosen.
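The percentage is easy to verify; here's a quick check, using the sterling conversion given in the article:

us_gbp = 305.10                    # $619 at the article's implied exchange rate
uk_gbp = 416.61
premium = uk_gbp - us_gbp
print(premium, premium / us_gbp)   # ~£111.51 and ~36.6%, as reported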

Why?

Of Maths, Shares and Horoscopes

I have been a mathematician since the age of eight. As such, I tend to look at the world through the optics of mathematics. For this reason, I have never understood why people believe that they can model financial markets: they're clearly far too complex/chaotic to be reduced to any equation, and trying to extrapolate with computers - no matter how powerful - is just doomed to failure.

And so it seems:

I hear many Risk Arb players at big shops are getting creamed. It seemed like you make money for 3 years, then give it all back in a couple weeks. Classic mode-mean trade: mode is positive, mean is zero.

In fact, what is most surprising - nay, shocking - is the apparently unshakeable belief that some formula or method will one day allow such markets to be tracked accurately enough to make dosh consistently. That belief is equivalent to a belief in horoscopes. After all, horoscopes are all about "deep" correlations - between the stars and your life. Maybe financial markets should try casting a few - they'd be just as likely to succeed as the current methods. (Via TechDirt.)
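To see how seductive a mode-mean trade is, consider a toy simulation; the payoff numbers here are pure invention, chosen only so that the long-run expectation is roughly zero:

import random

random.seed(1)

def one_month():
    # 97% chance of gaining 1 unit; 3% chance of losing 32.33 units,
    # sized so the expected value is ~zero: 0.97*1 - 0.03*32.33 ≈ 0.
    return 1.0 if random.random() < 0.97 else -32.33

runs = 10_000
winners = sum(
    sum(one_month() for _ in range(36)) > 0     # three years of months
    for _ in range(runs)
)
print(f'{winners / runs:.0%} of three-year runs look like pure genius')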

09 August 2007

Quotation of the Day

Ha!

To mess up a Linux box, you need to work at it; to mess up your Windows box, you just have to work on it.

Pecunia non Olet

Doncha just love the sweet smell of business?

Medical firm Johnson & Johnson (J&J) is suing the American Red Cross, alleging the charity has misused the famous red cross symbol for commercial purposes.

J&J said a deal with the charity's founder in 1895 gave it the "exclusive use" of the symbol as a trademark for drug, chemical and surgical products.

It said American Red Cross had violated this agreement by licensing the symbol to other firms to sell certain goods.

The charity described the lawsuit as "obscene".

Code is Law is Code

Here's an interesting case:

When Dale Lee Underdahl was arrested on February 18, 2006, on suspicion of drunk driving, he submitted to a breath test that was conducted using a product called the Intoxilyzer 5000EN.

During a subsequent court hearing on charges of third-degree DUI, Underdahl asked for a copy of the "complete computer source code for the (Intoxilyzer) currently in use in the State of Minnesota."

An article in the Pioneer Press quoted his attorney, Jeffrey Sheridan, as saying the source code was necessary because otherwise "for all we know, it's a random number generator."

What's significant is that this shows a growing awareness that if you don't have the source code, you don't really have any idea how something works. And if you don't know that, you can hardly use it to make important decisions - or even unimportant ones, come to that. This has obvious implications for e-voting, and underlines the need for complete source code transparency.

Firefox as Commons

Interesting post here from Mozilla's Mitchell Baker, which shows that she's beginning to regard Firefox as a commons:


Firefox generates an emotional response that is hard to imagine until you experience it. People trust Firefox. They love it. Many feel -- and rightly so -- that Firefox is part "theirs." That they are involved in creating Firefox and the Firefox phenomena, and in creating a better Internet. People who don't know that Firefox is open source love the results of open source -- the multiple languages, the extensions, the many ways people use the openness to enhance Firefox. People who don't know that Firefox is a public asset feel the results through the excitement of those who do know.

Firefox is created by a public process as a public asset. Participants are correct to feel that Firefox belongs to them.

Absolutely spot-on. But I had to smile at the following:

To start with, we want to create a part of online life that is explicitly NOT about someone getting rich. We want to promote all the other things in life that matter -- personal, social, educational and civic enrichment for massive numbers of people. Individual ability to participate and to control our own lives whether or not someone else gets rich through what we do. We all need a voice for this part of the Internet experience. The people involved with Mozilla are choosing to be this voice rather than to try to get rich.

I know that this may sound naive. But neither I nor the Mozilla project is that naive, and we are not stupid. We recognize that many of us are setting aside chances to make as much money as possible. We are choosing to do this because we want the Internet to be robust and useful even for activities that aren't making us rich.

Only in America do you need to explain why you prefer to make the world a better place rather than making yourself rich....

Welcome Back, HTML

Younger readers of this blog probably don't remember the golden cyber-age known as Dotcom 1.0, but one of its characteristics was the constant upgrading of the basic HTML specification. And then, in 1999, with HTML 4, it stopped, as everyone got excited about XML (remember XML?).

It's been a long time coming, but at last we have HTML5, AKA Web Applications 1.0. Here's a good intro to the subject:

Development of Hypertext Markup Language (HTML) stopped in 1999 with HTML 4. The World Wide Web Consortium (W3C) focused its efforts on changing the underlying syntax of HTML from Standard Generalized Markup Language (SGML) to Extensible Markup Language (XML), as well as completely new markup languages like Scalable Vector Graphics (SVG), XForms, and MathML. Browser vendors focused on browser features like tabs and Rich Site Summary (RSS) readers. Web designers started learning Cascading Style Sheets (CSS) and the JavaScript™ language to build their own applications on top of the existing frameworks using Asynchronous JavaScript + XML (Ajax). But HTML itself grew hardly at all in the next eight years.

Recently, the beast came back to life. Three major browser vendors—Apple, Opera, and the Mozilla Foundation—came together as the Web Hypertext Application Technology Working Group (WhatWG) to develop an updated and upgraded version of classic HTML. More recently, the W3C took note of these developments and started its own next-generation HTML effort with many of the same members. Eventually, the two efforts will likely be merged. Although many details remain to be argued over, the outlines of the next version of HTML are becoming clear.

This new version of HTML—usually called HTML 5, although it also goes under the name Web Applications 1.0—would be instantly recognizable to a Web designer frozen in ice in 1999 and thawed today.

Welcome back, HTML, we've missed you.

Academics Waking Up to Wikipedia

Many people have a strangely ambivalent attitude to Wikipedia. On the one hand, they recognise that it's a tremendous resource; but on the other, they point out it's uneven and flawed in places. Academics in particular seem afflicted with this ambivalence.

So I think that this move by a group of academics to roll up their digital sleeves and get stuck into Wikipedia is important:

Some of our colleagues have determined to improve it with their own contributions. Here are some instances in which they have assumed significant responsibility for their fields:

# History of Science: Sage Ross and 80 other specialists in the field are contributing.
# Military History: Over 600 amateur and professional specialists in many sub-fields are contributing.
# Russian History: Marshall Poe and over 50 other specialists in the field are contributing.

Clearly, the more people who take part in such schemes, the better Wikipedia will get - and the more people will improve it further. (Via Open Access News.)

08 August 2007

Firefox....for Cubs

There's a new Firefox support site around that's aimed at absolute beginners. Smart move, now that Firefox is beginning to bleed beyond the world of geeks and their immediate family.... (Via Linux.com.)

The (Female) RMS of Tibet?

As a big fan of both freedom and Tibet, it seems only right that I should point to the Students for a Free Tibet site. Against a background of increasing repression and cultural genocide by the Chinese authorities in Tibet, it will be interesting to see what happens during the run-up to the 2008 Olympics and the games themselves. On the one hand, China would clearly love to portray itself as one big happy multi-ethnic family; on the other, it is unlikely to brook public reminders about its shameful invasion and occupation of Tibet.

I can only admire those Tibetans who speak up about this, even daring to challenge the Chinese authorities publicly, within China itself. One of the highest-profile - and hence most courageous - of these is Lhadon Tethong:

A Tibetan woman born and raised in Canada, Lhadon Tethong has traveled the world, working to build a powerful youth movement for Tibetan independence. She has spoken to countless groups about the situation in Tibet, most notably to a crowd of 66,000 at the 1998 Tibetan Freedom Concert in Washington, D.C. She first became involved with Students for a Free Tibet (SFT) in 1996, when she founded a chapter at University of King’s College in Halifax, Nova Scotia. Since then, Lhadon has been a leading force in many strategic campaigns, including the unprecedented victory against China’s World Bank project in 2000.

Lhadon is a frequent spokesperson for the Tibetan independence movement, and serves as co-chair of the Olympics Campaign Working Group of the International Tibet Support Network. She has worked for SFT since March 1999 and currently serves as the Executive Director of Students for a Free Tibet International.

She has a blog, called Beijing Wide Open, stuffed full of Tibetan Web 2.0 goodness. I'm sure RMS would approve. (Via Boing Boing.)

Update: Sigh: bad news already....

On the Necessity of Open Access and Open Data

One of the great things about open source is its transparency: you can't easily hide viruses or trojans, nor can you simply filch code from other people, as you can with closed source. Indeed, the accusations made from time to time that open source contains "stolen" code from other programs is deeply ironic, since it's almost certainly proprietary, closed software that has bits of thievery hidden deep within its digital bowels.

The same is true of open access and open data: when everything is out in the open, it is much easier to detect plagiarism or outright fraud. Equally, making it hard for people to access online, searchable text, or the underlying data by placing restrictions on its distribution reduces the number of people checking it and hence the likelihood that anyone will notice if something is amiss.

A nicely-researched piece on Ars Technica provides a clear demonstration of this:

Despite the danger represented by research fraud, instances of manufactured data and other unethical behavior have produced a steady stream of scandal and retractions within the scientific community. This point has been driven home by the recent retraction of a paper published in the journal Science and the recognition of a few individuals engaged in dozens of acts of plagiarism in physics journals.

By contrast, in the case of arXiv's preprint holdings, catching this stuff is relatively easy thanks to its open, online nature:

Computer algorithms to detect duplications of text have already proven successful at detecting plagiarism in papers in the physical sciences. The arXiv now uses similar software to scan all submissions for signs of plagiarized text. As this report was being prepared, the publishing service Crossref announced that it would begin a pilot program to index the contents of the journals produced by a number of academic publishers in order to expose them for the verification of originality. Thus, catching plagiarism early should be getting increasingly easy for the academic world.

Note, though, that open access allows *anyone* to check for plagiarism, not just the "authorised" keepers of the copyrighted academic flame.
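Indeed, the core of such duplicate-detection is simple enough that anyone could run their own. Here's a minimal sketch using word n-gram "shingles" and Jaccard overlap - a generic illustration, not arXiv's actual algorithm, with hypothetical filenames and a threshold picked out of the air:

import re

def shingles(text, n=5):
    """Break text into overlapping n-word tuples."""
    words = re.findall(r"[a-z']+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    """Fraction of shingles the two documents share."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

submission = shingles(open('submission.txt').read())
for name in ['prior1.txt', 'prior2.txt']:
    score = jaccard(submission, shingles(open(name).read()))
    if score > 0.2:                      # tunable threshold
        print(f'{name}: {score:.0%} shingle overlap -- worth a human look')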

Similarly, open data means anyone can take a peek, poke around and pick out problems:

How did Dr. Deb manage to create the impression that he had generated a solid data set? Roberts suggests that a number of factors were at play. Several aspects of the experiments allowed Deb to work largely alone. The mouse facility was in a separate building, and "catching a mouse embryo at the three-cell stage had him in from midnight until dawn," Dr. Roberts noted. Deb was also on his second post-doc position, a time where it was essential for him to develop the ability to work independently. The nature of the data itself lent it to manipulation. The raw data for these experiments consisted of a number of independent grayscale images that are normally assigned colors and merged (typically in Photoshop) prior to analysis.

Again, if the "raw data" were available to all, as good open notebook science dictates that they should be, any manipulation could be detected more readily.

Interestingly, this is not something that traditional "closed source" publishing can ever match using half-hearted fudges or temporary fixes, just as closed source programs can never match open ones for transparency. There is simply no substitute for openness.

OpenProj

For many years, the only decent free end-user app was GIMP, and the history of open source on the desktop has been one of gradually filling major holes - office suite, browser, email etc. - to bring it up to the level of proprietary offerings.

Happily, things have moved on, and it's now possible to use free software for practically any desktop activity. One major gap has been project planning, traditionally the (expensive) realm of Microsoft Project. No longer, it seems. With the launch of OpenProj, the open source world now has a free alternative, for a variety of platforms.

It's still too early to say how capable the program is, but it's certainly a welcome addition. The only other concern is the licence, which seems not to have been chosen yet, although an OSI-approved variant is promised.

Update: Apparently, if I'd taken the trouble to install it, I would have seen that the licence is the Common Public Attribution Licence. (Thanks to Randy Metcalfe.)

07 August 2007

Patent Joke of the Month

It is, of course, hard to choose from the rather crowded field of contenders, but this one certainly takes the biscuit:

An Information and Application Distribution System (IADS) is disclosed. The IADS operates, in one embodiment, to distribute, initiate and allow interaction and communication within like-minded communities. Application distribution occurs through the transmission and receipt of an "invitation application" which contains both a message component and an executable component to enable multiple users to connect within a specific community. The application object includes functionality which allows the user's local computer to automatically set up a user interface to connect with a central controller which facilitates interaction and introduction between and among users.

A system to create an online community - including, of course, that brilliant stroke of utterly unique genius, the "invitation application": why couldn't I have thought of that? (Via TechCrunch.)

Mr. Dell Does the Decent Thing

Hooray:

today, it's official: Dell announced that consumers in the United Kingdom, France and Germany can order an Inspiron E1505N notebook or an Inspiron 530N desktop with Ubuntu 7.04 pre-installed.

(Via The Open Sourcerer.)

In Denial

This is an important story - not so much for what it says, but for the fact that it is being said by a major US title like Newsweek:

Since the late 1980s, this well-coordinated, well-funded campaign by contrarian scientists, free-market think tanks and industry has created a paralyzing fog of doubt around climate change. Through advertisements, op-eds, lobbying and media attention, greenhouse doubters (they hate being called deniers) argued first that the world is not warming; measurements indicating otherwise are flawed, they said. Then they claimed that any warming is natural, not caused by human activities. Now they contend that the looming warming will be minuscule and harmless. "They patterned what they did after the tobacco industry," says former senator Tim Wirth, who spearheaded environmental issues as an under secretary of State in the Clinton administration. "Both figured, sow enough doubt, call the science uncertain and in dispute. That's had a huge impact on both the public and Congress."

Even though the feature has little that's new, the detail in which it reports the cynical efforts of powerful industries to stymie attempts to mitigate the damage that climate change will cause is truly sickening. It is cold (sic) comfort that the people behind this intellectual travesty will rightly be judged extremely harshly by future generations - assuming we're lucky enough to have a future. (Via Open the Future.)

Why ICANN Is Evil, Part 58697

I've been tracking the goings-on at ICANN, which oversees domain names and many other crucial aspects of the Internet, for many years now, and I've yet to see anything good come out of the organisation. Here's someone else who has problems with them:

In this Article, I challenge the prevailing idea that ICANN's governance of the Internet's infrastructure does not threaten free speech and that ICANN's governance of the Internet therefore need not embody special protections for free speech. I argue that ICANN's authority over the Internet's infrastructure empowers it to enact regulations affecting speech within the most powerful forum for expression ever developed. ICANN cannot remain true to the democratic norms it was designed to embody unless it adopts policies to protect freedom of expression. While ICANN's recent self-evaluation and proposed reforms are intended to ensure compliance with its obligations under its governance agreement, these proposed reforms will render it less able to embody the norms of liberal democracy and less capable of protecting individuals' fundamental rights. Unless ICANN reforms its governance structure to render it consistent with the procedural and substantive norms of democracy articulated herein, ICANN should be stripped of its decision-making authority over the Internet's infrastructure.

Strip, strip, strip. (Via IGP blog.)

06 August 2007

Lenovo Today, Tomorrow the World

A small step, but one of an increasing number towards wider availability of open source on the desktop/laptop:

Lenovo and Novell today announced an agreement to provide preloaded Linux on Lenovo ThinkPad notebook PCs and to provide support from Lenovo for the operating system. The companies will offer SUSE Linux Enterprise Desktop 10 from Novell to commercial customers on Lenovo notebooks including those in the popular ThinkPad T Series, a class of notebooks aimed at typical business users, beginning in the fourth quarter of 2007. The ThinkPad notebooks with the Linux-preload will also be available for purchase by individual customers.