15 August 2007

O'Reilly? I Think Not

Once again, Matt gets it, and Tim doesn't:

"I will predict that virtually every open source company (including Red Hat) will eventually be acquired by a big proprietary software company."

Thus spake Tim O'Reilly in the comments to one of his other posts. Tim believes that open source, at least as defined by open-source licensing, has a short shelf-life that will be consumed by Web 2.0 (i.e., web companies hijacking open-source software to deliver proprietary web services) or by traditional proprietary software vendors.

In other words, why don't I just give up, sell out, and go home? I guess I would if I thought that Tim were right. He's not, not in this instance.

There's something more fundamental going on here than "Proprietary software meets open source. Proprietary software decides to commandeer open source. Open source proves to be a nice lapdog to proprietary software." I actually believe that open source, not proprietary software, is the natural state of the industry, and that Tim's proprietary world is anomalous.

I particularly liked this distinction between the service aspects of software and the attempts to view it as an instantiation of various intellectual monopolies:

Suddenly, the license matters more, not less, because it is the license that ensures the conversation focuses on the right topic - service - rather than on inane jabberings that only vendors care about. You know, like intellectual property.

And there's another crucial reason why proprietary software companies can't just open their chequebooks and acquire those pesky open source upstarts. Unlike companies who seem to think that they are co-extensive with the intellectual monopolies they foist on customers, open source outfits know they are defined by the high-quality people - both employees and those out in the community - that code for the customers.

For example, one reason people take out subscriptions to Red Hat's offerings is that they get to stand in line for the use of Alan Cox's brain. Imagine, now, that proprietary company X "buys" Red Hat: well, what exactly does it buy? Certainly not Alan Cox's brain, which will leave with him (one hopes) when he moves immediately to another open source company (or just hacks away in Wales for pleasure). Sure, the purchaser will have all kinds of impressive legal documents spelling out what it "owns" - but precious little to offer customers anymore, who are likely to follow wherever Alan Cox and his ilk go.

The (Uncommon) Fedora Commons

When I first heard about Fedora Commons I naively assumed it had something to do with the Linux distro Fedora, but I was wrong:

Fedora Commons is a non-profit organization providing sustainable technologies to create, manage, publish, share and preserve digital content as a basis for intellectual, organizational, scientific and cultural heritage by bringing two communities together.

Communities of practice that include scholars, artists, educators, Web innovators, publishers, scientists, librarians, archivists, records managers, museum curators or anyone who presents, accesses, or preserves digital content.

Software developers who work on the cutting edge of open source Web and enterprise content technologies to ensure that collaboratively created knowledge is available now and in the future.

Fedora Commons is the home of the unique Fedora open source software, a robust integrated repository-centered platform that enables the storage, access and management of virtually any kind of digital content.

So not only is Fedora an organisation - recently funded to the tune of $4.9 million by the Gordon and Betty Moore Foundation - aiming to create a commons of "intellectual, organizational, scientific and cultural heritage", but it is also a piece of code:

Institutions and organizations face increasing demands to deliver rich digital content. A scan of the web reveals complex multi-media content that combines text, images, audio, and video. Much of this content is produced dynamically through the use of servlet technology and distributed web services.

Delivery of rich content is possible through a variety of technologies. But, delivery is only one aspect of a suite of content management tasks. Content needs to be created, ingested, and stored. It needs to be aggregated and organized in collections. It must be described with metadata. It must be available for reuse and refactoring. And, finally, it must be preserved.

Without some form of standardization, the costs of such management tasks become prohibitive. Content managers find themselves jury-rigging tasks onto each new form of digital content. In the end, they are faced with a maze of specialized tools, repositories, formats, and services that must be upgraded and integrated over time.

Content managers need a flexible content repository system that allows them to uniformly store, manage, and deliver all their existing content and that will accommodate new forms that will inevitably arise in the future.

Fedora is an open source digital repository system that meets these challenges.

In fact, Fedora is nothing less than "Flexible Extensible Digital Object Repository Architecture". So the name is logical - pity it's so confusing in the context of open source.

Linux Weather Forecast

Get your umbrellas out for the Linux Weather Forecast:

The need for a Linux Weather Forecast arises out of Linux’s unique development model. With proprietary software, product managers define a “roadmap” they deliver to engineers to implement, based on their assessments of what users want, generally gleaned from interactions with a few customers. While these roadmaps are publicly available, they are frequently not what actually gets technically implemented and are often delivered far later than the optimistic timeframes promised by proprietary companies.

Conversely, in Linux and open source software, users contribute directly to the software, setting the direction with their contributions. These changes can quickly get added to the mainline kernel and other critical packages, depending on quality and usefulness. This quick feedback and development cycle results in fast software iterations and rapid feature innovation. A new kernel version is generally released every three months, new desktop distributions every six months, and new enterprise distributions every 18 months.

While the forecast is not a roadmap or centralized planning tool, the Linux Weather Forecast gives users, ISVs, partners and developers a chance to track major developments in Linux and adjust their business accordingly, without having to comb through mailing lists of the thousands of developers currently contributing to Linux. Through the Linux Weather Forecast, users and ecosystem members can track the amazing innovation occurring in the Linux community. This pace of software innovation is unmatched in the history of operating systems. The Linux Weather Forecast will help disseminate the right information to the ever growing audience of Linux developers and users in the server, desktop and mobile areas of computing, and will complement existing information available from distributions in those areas.

Good to see Jonathan Corbet, editor of LWN.net, for whom I write occasionally, spreading some of his deep kernelly knowledge in this way.

14 August 2007

RSS as the Lubricant of Openness

Facebook creaks open a little more - and RSS is the lubricant. (Via TechCrunch.)

GiveMeaning? - Give Me a Break

I wrote recently about the plight of the Tibetan people. One of the problems is that it is hard for an average non-Tibetan to do much to help the situation. So I was pleased that Boing Boing pointed me to what sounded like a worthy cause that might, even if in a small way, help preserve Tibetan culture:

The Tibetan Endangered Music Project has so far recorded about 400 endangered traditional Tibetan songs. We now have the opportunity to make these songs available online, at a leading Tibetan language website (www.tibettl.com). However, this volunteer-run website is unable to fund hosting for our material. The cost of hosting space is 1.5 RMB (less than 20 US cents) for every MB. One song in mp3 format is approximately 1.5 MB. 1900 USD would allow us to buy 10 GB of hosting space, which will take care of all our needs for the foreseeable future (allowing 6700 1.5 MB songs to be uploaded). It would also allow us to expand to video hosting in the future, or to provide high quality (.wav) formats instead of only compressed mp3 format.

Wow - preserving the Tibetan musical commons for the Tibetans: sign me up, I thought.

So I did sign up. But that's where the problem began.

Despite being signed up and in, I could not - can not - find anywhere to give money to this lot. Now, naively, I would have thought that a site called GiveMeaning, expressly designed to help people give money to worthy causes, would, er, you know, help people give money, maybe with a big button saying "GIVE NOW". But what do I know? I've only been using the Web for about 14 years, so maybe I'm still a little wet behind the ears.

On the other hand, it could just be that this is one of the most stupid sites in the known universe, designed to drive altruists mad as a punishment for wanting to help others. Either way, it looks like the Tibetan musical commons is going to have to do without my support, which is a pity.

Why Openness Matters - Doubly

Here's a great demonstration of why openness is so important.

Wikipedia is famously open, so in general anyone can edit stuff. But this editing is also done in the open, in that all changes are tracked. Now, some people edit anonymously, but their IP addresses are logged. This information too is freely available, so here's an idea that some bright chap had:

Griffith thus downloaded the entire encyclopedia, isolating the XML-based records of anonymous changes and IP addresses. He then correlated those IP addresses with public net-address lookup services such as ARIN, as well as private domain-name data provided by IP2Location.com.

The result: A database of 5.3 million edits, performed by 2.6 million organizations or individuals ranging from the CIA to Microsoft to Congressional offices, now linked to the edits they or someone at their organization's net address has made.
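Just to show how little magic is involved, here's a toy version of that pipeline in Python. It is only a sketch: it assumes a local MediaWiki XML dump called dump.xml (the export format records anonymous contributors as <ip> elements), and it substitutes crude reverse DNS for the proper net-block registries - ARIN, IP2Location and friends - that Griffith actually used:

    import socket
    import xml.etree.ElementTree as ET
    from collections import Counter

    def tag_name(elem):
        # Strip the MediaWiki export namespace, if any: '{ns}ip' -> 'ip'
        return elem.tag.rsplit('}', 1)[-1]

    def anonymous_ips(dump_path):
        # Stream the dump so the whole file never sits in memory.
        for _, elem in ET.iterparse(dump_path):
            if tag_name(elem) == 'ip' and elem.text:
                yield elem.text.strip()
            elem.clear()  # discard each parsed subtree as we go

    def organisation(ip):
        # Reverse DNS is the crudest possible "who owns this address?"
        # lookup; keep just the parent domain of whatever comes back.
        try:
            host, _, _ = socket.gethostbyaddr(ip)
            return '.'.join(host.split('.')[-2:])
        except OSError:
            return 'unknown'

    counts = Counter(organisation(ip) for ip in anonymous_ips('dump.xml'))
    for org, n in counts.most_common(20):
        print(n, org)

Point it at a full dump and the output is, in embryo, exactly the database of edits-by-organisation described above.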

As a result, dedicated crowd-sourcers are poring over Wikipedia, digging out those embarrassing self-edits. For example:

On Christmas Eve 2004, a Disney user deleted a citation on the "digital rights management" page to DRM critic Cory Doctorow along with a link to a speech he gave to Microsoft's Research Group on the subject. Later, a Disney user altered the "opponents" discussion of the entry, arguing that consumers embrace DRM: "In general, consumers knowingly enter into the arrangement where they are granted limited use of the content."

or:

"Removed ECHELON link, irrelevant to article," reads the comment explaining this cut. The contributor's IP address belongs to the National Security Agency.

or even:

Microsoft's MSN Search is now "a major competitor to Google". Take it from this anonymous contributor, whose IP address belongs to Waggener Edstrom, Microsoft's PR firm.

Now that's what I call openness.

Amazon Goes Lulu

I'm a big fan of Lulu.com, the self-publishing company, not least because the man behind it, Bob Young, also co-founded Red Hat, and is one of the most passionate defenders of the open source way I have come across.

So the news that CreateSpace is going into the on-demand publishing business is interesting - especially since the company is a subsidiary of Amazon, which means that self-published authors will be able to hitch a ride on the Amazon behemoth. As far as I can tell, though, Lulu still offers a more thorough vision, with its global reach and finer-grained publishing options. But if nothing else, Amazon's entry into this space will serve to validate the whole idea in the eyes of doubters.

Google Books: A Cautionary Tale

Google Books is important:

the Google Project has, however unintentionally, made not only conventional libraries themselves, but other projects digitizing cultural artifacts appear inept or inadequate. Project Gutenberg and its 17,000 books in ascii appear insignificant and superfluous beside the millions of books that Google is contemplating. So do most scanning projects by conventional libraries. As a consequence of the assumed superiority of Google’s approach, therefore, it is highly unlikely that either the funds or the energies for an alternative project of similar magnitude will become available, nor are the libraries who are lending their books (at significant costs to their funds, their books, and their users) likely to undertake such an effort a second time. With each scanned page, Google Books’ Library Project, by its quantity if not necessarily by its quality, makes the possibility of a better alternative unlikely. The Project may then become the library of the future, whatever its quality, by default. So it does seem important to probe what kind of quality the Google Book Project might present to an ordinary user that Google envisages wanting to find a book.

But also unsatisfactory:

The Google Books Project is no doubt an important, in many ways invaluable, project. It is also, on the brief evidence given here, a highly problematic one. Relying on the power of its search tools, Google has ignored elemental metadata, such as volume numbers. The quality of its scanning (and so we may presume its searching) is at times completely inadequate. The editions offered (by search or by sale) are, at best, regrettable.

Rather worrying. (Via O'Reilly Radar.)

Microsoft Bends its Knee to the OSI

So, Microsoft has finally done it, and submitted two of its licences to the OSI for approval. Here's my earlier analysis of what's going on.

A Public Enquiry into the Public Domain

The public domain is a vastly underappreciated resource - which probably explains why there have been so many successful assaults on it in recent years through copyright, patent and trademark extensions. But now, it seems, people are starting to wake up to its central importance for the digital world:

The new tools of the information society mean that public domain material has considerable potential for re-use - by citizens or for new creative expressions (e.g. documentaries, services for tourism, learning material). It contains published works, such as literary or artistic works, music and audiovisual material for which copyright has expired, material that has been assigned to the public domain by the right holders or by law, mathematical methods, algorithms, methods of presenting information and raw data, such as facts and numbers. A rich public domain has, logically, the potential to stimulate the further development of the information society. It would provide creators – e.g. documentary makers, musicians, multimedia producers, but also schoolchildren doing a Web project – with raw material that they can build on and experiment with, without high transaction or other costs. This is particularly important in the digital context, where the integration of existing material has become much easier.

Although there is some evidence of its importance, there has been no systematic attempt to map or measure its social and economic impact. This is a problem when addressing policy issues that build on public domain material (e.g. digital libraries) or that have an impact on the public domain (e.g. discussions on intellectual property instruments) in the digital age.

The European Union aims to remedy this lack with a study:

Call for tender: "Assessment of the Economic and Social impact of the Public Domain in the Information Society" was published today in the Supplement to the Official Journal of the European Union 2007/S 151-187363. The envisaged purpose of the assessment is to analyse the economic and social impact of the public domain and to gauge its potential to contribute for the benefit of the citizens and the economy.

Portuguese Ministry of Education Goes Free

The Portuguese Ministry of Education is doing the sensible thing and giving away a CD full of free (Windows) software to 1.6 million students, saving itself (and the taxpayers) around 300 million Euros. Nothing amazing about that, perhaps, since it's the obvious move (not that everyone makes it).

What's more interesting, for me, at least, is the set of software included on the CD:

* OpenOffice.org
* Firefox
* Thunderbird
* NVU
* Inkscape
* GIMP

These are pretty much the cream of the free software world, and show the increasing depth of desktop apps. Also interesting are the specifically educational programs included:

* Freemind and CmapTools
* Celestia
* Geogebra
* JMOL
* Modellus

Some of these were new to me, notably Geogebra:

GeoGebra is a free and multi-platform dynamic mathematics software for schools that joins geometry, algebra and calculus.

and Modellus (which isn't actually free software, just free of charge):

Modellus enables students and teachers (high school and college) to use mathematics to create or explore models interactively.

It's always surprised me that more use isn't made of free software in education, since the benefits are obvious: by pooling efforts, duplication is eliminated, and the quality of tools improved. (Via Erwin Tenhumberg.)

13 August 2007

Red Hat Meets Eclipse

Here's an interesting example of major open source projects meeting to produce a highly-targeted commercial product:

Red Hat Developer Studio is a set of Eclipse-based development tools that are pre-configured for JBoss Enterprise Middleware Platforms and Red Hat Enterprise Linux. Developers are not required to use Red Hat Developer Studio to develop on JBoss Enterprise Middleware and/or Red Hat Linux. But many find these pre-configured tools offer significant time-savings and value, making them more productive and speeding time to deployment.

Google's Gift of Taking

Absolutely:

It's not often that Google kills off one of its services, especially one which was announced with much fanfare at a big mainstream event like CES 2006. Yet Google Video's commercial aspirations have indeed been terminated: the company has announced that it will no longer be selling video content on the site. The news isn't all that surprising, given that Google's commercial video efforts were launched in rather poor shape and never managed to take off. The service seemed to only make the news when embarrassing things happened.

Yet now Google Video has given us a gift—a "proof of concept" in the form of yet another argument against DRM—and an argument for more reasonable laws governing copyright controls. How could Google's failure be our gain? Simple. By picking up its marbles and going home, Google just demonstrated how completely bizarre and anti-consumer DRM technology can be. Most importantly, by pulling the plug on the service, Google proved why consumers have to be allowed to circumvent copy controls.

12 August 2007

The Real Spectrum Commons

I have referred to radio spectrum as a commons several times in this blog. But there's a problem: spectrum seems to be rivalrous - if I have it, you can't - so the threat of a tragedy of the commons has to be met by regulation. And that, as we see, is often unsatisfactory, not least because powerful companies usually get the lion's share.

But it seems - luckily - I was wrong about spectrum necessarily being rivalrous:

Software defined radio that is beginning to emerge from the labs into actual tests has the ability to render all spectrum management moot. Small wonder that the legal mandarins there have begun to sneer that open source SDR cannot be trusted.

In other words, when you make radio truly digital, it can be intelligent, and simply avoid the problem of commons over-use.
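To make that concrete, here's a toy sketch in Python of the "listen before talk" logic at the heart of such intelligent radios. Everything in it - the channel count, the noise floor, the randomly generated band - is invented for illustration; a real SDR stack (GNU Radio, say) makes the same decision against actual RF samples:

    import random

    CHANNELS = 16
    NOISE_FLOOR = 0.1  # arbitrary power units

    def sense_band():
        # Stand-in for an energy scan: a few channels are busy,
        # the rest show only noise.
        busy = set(random.sample(range(CHANNELS), 5))
        return [1.0 if ch in busy else random.uniform(0, NOISE_FLOOR)
                for ch in range(CHANNELS)]

    def pick_free_channel(power):
        # Listen before talk: take the quietest channel, and refuse
        # to transmit at all if everything is occupied.
        quietest = min(range(CHANNELS), key=power.__getitem__)
        return quietest if power[quietest] <= NOISE_FLOOR else None

    channel = pick_free_channel(sense_band())
    print('clear to transmit on channel', channel)

A radio that behaves like this only uses spectrum where and when nobody else is - which is precisely why the resource stops being rivalrous.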

11 August 2007

Irony in the Blood

Well spotted:

To recap:

1. In all likelihood, fossil fuel emissions are one of the primary causes of global warming;

2. global warming has melted the Arctic ice cap faster than any time on record; so

3. Russia, Denmark, Canada, and the United States are racing to make a no-more-land grab in the Arctic; in order to

4. claim fossil fuel drilling rights for the Arctic seabed.

Middle Kingdom Patently on the Way to the Top

This could have interesting repercussions:

China has seen a sharp increase in requests for patents, according to the UN's intellectual property agency.

The number of requests for patents in China grew by 33% in 2005 compared with the previous year.

That gives it the world's third highest number behind Japan and the United States, the agency said.

Why is this important? Well, currently, patents are being pushed largely by the US as a way of asserting itself economically, notably against that naughty China, which, it is frequently claimed, just rips off the West's ideas. But as China becomes one of the world's leading holders of patents, we can expect to see it start asserting those against everyone else - including the US. Which might suddenly find that it is not quite so keen on those unfair intellectual monopolies after all....

SCO KO'd, Novell Renewed

Well, we all knew it would happen, and, finally, it has:

Judge Dale Kimball has issued a 102-page ruling [PDF] on the numerous summary judgment motions in SCO v. Novell. Here it is as text. Here is what matters most:

[T]he court concludes that Novell is the owner of the UNIX and UnixWare Copyrights.

That's Aaaaall, Folks! The court also ruled that "SCO is obligated to recognize Novell's waiver of SCO's claims against IBM and Sequent". That's the ball game. There are a couple of loose ends, but the big picture is, SCO lost. Oh, and it owes Novell a lot of money from the Microsoft and Sun licenses.

But there's another interesting aspect to this: SCO lost, and Novell won:

But we must say thank you to Novell and especially to its legal team for the incredible work they have done. I know it's not technically over and there will be more to slog through, but they won what matters most, and it's been a plum pleasin' pleasure watching you work. The entire FOSS community thanks you for your skill and all the hard work and thanks go to Novell for being willing to see this through.

As I've written elsewhere, we really can't let Novell fail, whatever silliness it gets up to with Microsoft: it is simply too important for these kinds of historical reasons.

Update: Here's some nice analysis of the implications.

10 August 2007

The Liability of Closed Source Software

It's a pity that reports from the House of Lords' Science and Technology Committee are so long, because they contain buckets of good stuff - not least because they draw on top experts. A case in point is the most recent, looking at personal Internet security, which includes evidence from luminaries such as Bruce Schneier and Alan Cox.

The recommendations are a bit of a mixed bag, but one thing that caught my eye was in the context of making suppliers liable for their software. As Bruce puts it:

“We are paying, as individuals, as corporations, for bad security of products”—by which payment he meant not only the cost of losing data, but the costs of additional security products such as firewalls, anti-virus software and so on, which have to be purchased because of the likely insecurity of the original product. For the vendors, he said, software insecurity was an “externality … the cost is borne by us users.” Only if liability were to be placed upon vendors would they have “a bigger impetus to fix their products”

Of course, product liability might be a bit problematic for free software, but again Schneier has a solution:

Any imposition of liability upon vendors would also have to take account of the diversity of the market for software, in particular of the importance of the open source community. As open source software is both supplied free to customers, and can be analysed and tested for flaws by the entire IT community, it is both difficult and, arguably, inappropriate, to establish contractual obligations or to identify a single “vendor”. Bruce Schneier drew an analogy with “Good Samaritan” laws, which, in the United States and Canada, protect those attempting to help people who are sick or injured from possible litigation. On the other hand, he saw no reason why companies which took open source software, aggregated it and sold it along with support packages—he gave the example of Red Hat, which markets a version of the open source Linux operating system—should not be liable like other vendors.

Mr Dell Does the *In*decent Thing

I was wrong:

UK users will have to pay a premium for Dell's Linux PCs, despite Dell's claim to the contrary.

Customers who live in the UK will have to pay over one-third more than customers in the US for exactly the same machine, according to detailed analysis by ZDNet.co.uk.

The Linux PCs — the Inspiron 530n desktop and the Inspiron 6400n notebook — were launched on Wednesday. The 530n is available in both the UK and the US, but the price differs considerably.

Comparing identical specifications, US customers pay $619 (£305.10) for the 530n, while UK customers are forced to pay £416.61 — a premium of £111, or 36 percent. The comparison is based on a machine with a dual-core processor, 19" monitor, 1GB of RAM and a 160GB hard drive. The same options for peripherals were chosen.

Why?

Of Maths, Shares and Horoscopes

I have been a mathematician since the age of eight. As such, I tend to look at the world through the optics of mathematics. For this reason, I have never understood why people believe that they can model financial markets: they're clearly far too complex/chaotic to be reduced to any equation, and trying to extrapolate with computers - no matter how powerful - is just doomed to failure.

And so it seems:

I hear many Risk Arb players at big shops are getting creamed. It seemed like you make money for 3 years, then give it all back in a couple weeks. Classic mode-mean trade: mode is positive, mean is zero.
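That "mode-mean" jargon is worth unpacking, and a few lines of Python make the point better than any formula could. The 95/5 odds and the payoffs below are invented for illustration: a trade that wins a little almost every time (so the mode is positive) but occasionally gives nineteen wins back at once (so the mean is zero):

    import random
    import statistics

    def risk_arb_trade():
        # 95% of the time: pocket the spread. 5% of the time: the deal
        # breaks and the loss wipes out nineteen wins.
        return 1.0 if random.random() < 0.95 else -19.0

    outcomes = [risk_arb_trade() for _ in range(100000)]
    print('typical (modal) outcome:', statistics.mode(outcomes))   # 1.0
    print('average (mean) outcome:', round(statistics.mean(outcomes), 3))  # ~0.0

Three years of steady gains followed by a couple of catastrophic weeks is just what sampling from such a distribution looks like.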

In fact, what is most surprising - nay, shocking - is that this apparently unshakeable belief in some formula or method that will one day track such markets accurately enough to make dosh consistently is equivalent to a belief in horoscopes. After all, horoscopes are all about "deep" correlations - between the stars and your life. Maybe financial markets should try casting a few - they'd be just as likely to succeed as the current methods. (Via TechDirt.)

09 August 2007

Quotation of the Day

Ha!

To mess up a Linux box, you need to work at it; to mess up your Windows box, you just have to work on it.

Pecunia non Olet

Doncha just love the sweet smell of business?

Medical firm Johnson & Johnson (J&J) is suing the American Red Cross, alleging the charity has misused the famous red cross symbol for commercial purposes.

J&J said a deal with the charity's founder in 1895 gave it the "exclusive use" of the symbol as a trademark for drug, chemical and surgical products.

It said American Red Cross had violated this agreement by licensing the symbol to other firms to sell certain goods.

The charity described the lawsuit as "obscene".

Code is Law is Code

Here's an interesting case:

When Dale Lee Underdahl was arrested on February 18, 2006, on suspicion of drunk driving, he submitted to a breath test that was conducted using a product called the Intoxilyzer 5000EN.

During a subsequent court hearing on charges of third-degree DUI, Underdahl asked for a copy of the "complete computer source code for the (Intoxilyzer) currently in use in the State of Minnesota."

An article in the Pioneer Press quoted his attorney, Jeffrey Sheridan, as saying the source code was necessary because otherwise "for all we know, it's a random number generator."

What's significant is that this shows a growing awareness that if you don't have the source code, you don't really have any idea how something works. And if you don't know that, you can hardly use it to make important decisions - or even unimportant ones, come to that. This has clear implications for e-voting, and the need for complete source code transparency.

Firefox as Commons

Interesting post here from Mozilla's Mitchell Baker, which shows that she's beginning to regard Firefox as a commons:

Firefox generates an emotional response that is hard to imagine until you experience it. People trust Firefox. They love it. Many feel -- and rightly so -- that Firefox is part "theirs." That they are involved in creating Firefox and the Firefox phenomena, and in creating a better Internet. People who don't know that Firefox is open source love the results of open source -- the multiple languages, the extensions, the many ways people use the openness to enhance Firefox. People who don't know that Firefox is a public asset feel the results through the excitement of those who do know.

Firefox is created by a public process as a public asset. Participants are correct to feel that Firefox belongs to them.

Absolutely spot-on. But I had to smile at the following:

To start with, we want to create a part of online life that is explicitly NOT about someone getting rich. We want to promote all the other things in life that matter -- personal, social, educational and civic enrichment for massive numbers of people. Individual ability to participate and to control our own lives whether or not someone else gets rich through what we do. We all need a voice for this part of the Internet experience. The people involved with Mozilla are choosing to be this voice rather than to try to get rich.

I know that this may sound naive. But neither I nor the Mozilla project is that naive, and we are not stupid. We recognize that many of us are setting aside chances to make as much money as possible. We are choosing to do this because we want the Internet to be robust and useful even for activities that aren't making us rich.

Only in America do you need to explain why you prefer to make the world a better place rather than make yourself rich....

Welcome Back, HTML

Younger readers of this blog probably don't remember the golden cyber-age known as Dotcom 1.0, but one of its characteristics was the constant upgrading of the basic HTML specification. And then, in 1999, with HTML 4, it stopped, as everyone got excited about XML (remember XML?).

It's been a long time coming, but at last we have HTML5, AKA Web Applications 1.0. Here's a good intro to the subject:

Development of Hypertext Markup Language (HTML) stopped in 1999 with HTML 4. The World Wide Web Consortium (W3C) focused its efforts on changing the underlying syntax of HTML from Standard Generalized Markup Language (SGML) to Extensible Markup Language (XML), as well as completely new markup languages like Scalable Vector Graphics (SVG), XForms, and MathML. Browser vendors focused on browser features like tabs and Rich Site Summary (RSS) readers. Web designers started learning Cascading Style Sheets (CSS) and the JavaScript™ language to build their own applications on top of the existing frameworks using Asynchronous JavaScript + XML (Ajax). But HTML itself grew hardly at all in the next eight years.

Recently, the beast came back to life. Three major browser vendors—Apple, Opera, and the Mozilla Foundation—came together as the Web Hypertext Application Technology Working Group (WhatWG) to develop an updated and upgraded version of classic HTML. More recently, the W3C took note of these developments and started its own next-generation HTML effort with many of the same members. Eventually, the two efforts will likely be merged. Although many details remain to be argued over, the outlines of the next version of HTML are becoming clear.

This new version of HTML—usually called HTML 5, although it also goes under the name Web Applications 1.0—would be instantly recognizable to a Web designer frozen in ice in 1999 and thawed today.

Welcome back, HTML, we've missed you.