21 June 2010

Copyright Ratchet, Copyright Racket

I can't believe this.

A few days ago I wrote about the extraordinary extra monopolies the German newspaper industry wanted - including an exemption from anti-cartel laws. I also noted:


And make no mistake: if Germany adopts this approach, there will be squeals from publishers around the world demanding "parity", just as there have been with the term of copyright. And so the ratchet will be turned once more.

And what do we find? Why, exactly the same proposals *already* in an FTC "Staff Discussion Draft" [.pdf], which is trying to come up with ways to solve the newspaper industry's "problem" without actually addressing the key issue, which is that people are accessing information online in new ways these days. The document looks at some of the proposed "solutions", which come from the industry, which wants - of course - more monopoly powers:

Internet search engines and online news aggregators often use content from news organizations without paying for that use. Some news organizations have argued that existing intellectual property (IP) law does not sufficiently protect their news stories from free riding by news aggregators. They have suggested that expanded IP rights for news stories would better enable news organizations to obtain revenue from aggregators and search engines.

And:

Advocates argue “the copyright act allows parasitic aggregators to ‘free ride’ on others’ substantial journalistic investments,” by protecting only expression and not the underlying facts, which are often gathered at great expense.

...

They suggest that federal hot news legislation could help address revenue problems facing newspapers by preventing this free-riding.

Moreover, like the German publishers, they also want a Get Out of Jail Free card as far as anti-trust is concerned:

Some in the news industry have suggested that an antitrust exemption is necessary to the survival of news organizations and point to the NPA as precedent for Congress to enact additional protections from the antitrust laws for newspapers. For example, one public comment recommended “the passage of a temporary antitrust exemption to permit media companies to collaborate in the public interest”

Got that? An anti-trust exemption that would allow newspapers to operate as a cartel *in the public interest*. George Orwell would have loved it.

Follow me @glynmoody on Twitter or identi.ca.

Globish, Glanglish and Google Translate

There's a new book out about the rise and use of a globalised English, dubbed "Globish":

Globish is a privatised lingua franca, a commercially driven “world language” unencumbered by the utopian programme of Esperanto. As taught by Nerrière’s enterprise, it combines the coarseness of a distended phrase book and the formulaic optimism of self-help texts – themselves a genre characterised by linguistic paucity, catchphrases and religiose simplicity.

I won't be buying it, mostly because I wrote about the rise and use of a globalised English, dubbed "Glanglish", over 20 years ago. It formed the title essay of a book called, with stunning originality, "Glanglish." This is what I wrote:

English has never existed as a unitary language. For the Angles and the Saxons it was a family of siblings; today it is a vast clan in diaspora. At the head of that clan is the grand old matriarch, British English. Rather quaint now, like all aristocrats left behind by a confusing modern world, she nonetheless has many points of historical interest. Indeed, thousands come to Britain to admire her venerable and famous monuments, preserved in the verbal museums of language schools. Unlike other parts of our national heritage, British English is a treasure we may sell again and again; already the invisible earnings from this industry are substantial, and they are likely to grow as more and more foreigners wish at least to brush their lips across the Grande Dame's ring.

One group unlikely to do so are the natural speakers of the tongue from other continents. Led by the Americans, and followed by the Australians, the New Zealanders and the rest, these republicans are quite content to speak English - provided it is their English. In fact it is likely to be the American's English, since this particular branch of the family tree is proving to be the most feisty in its extension and transformation of the language. Even British English is falling in behind - belatedly, and with a rueful air; but compared to its own slim list of neologisms - mostly upper-class twittish words like 'yomping' - Americanese has proved so fecund in devising new concepts, that its sway over English-thinking minds is assured.

An interesting sub-species of non-English English is provided by one of the dialects of modern India. Indian English is not a truly native tongue, if only for historical reasons; and yet it is no makeshift second language. Reading the 'Hindu Times', it is hard to pin down the provenance of the style: with its orotundities and its 'chaps' it is part London 'Times' circa 1930; with its 'lakhs' it is part pure India.

Whatever it is, it is not to be compared with the halting attempts at English made by millions - perhaps billions soon - whose main interest is communication. Although a disheartening experience to hear for the true-blue Britisher, this mangled, garbled and bungled English is perhaps the most exciting. For from its bleeding hunks and quivering gobbets will be constructed the first and probably last world language. Chinese may have more natural speakers, and Spanish may be gaining both stature and influence, but neither will supersede this mighty mongrel in the making.

English is so universally used as the medium of international linguistic exchange, so embedded in supranational activities like travel - all pilots use English - and, even more crucially, so integral to the world of business, science and technology - money may talk, but it does so in English, and all computer programs are written in that language - that no amount of political or economic change or pressure will prise it loose. Perhaps not even nuclear Armageddon: Latin survived the barbarians. So important is this latest scion of the English stock, that it deserves its own name; and if the bastard brew of Anglicised French is Franglais, what better word to celebrate the marriage of all humanity and English to produce tomorrow's global language than the rich mouthful of 'Glanglish'?

Twenty years on, I now think that the reign of Glanglish/Globish will soon draw to a close, but not because something else will take its place.

The obvious candidate, Chinese, suffers from a huge problem: linguistic degeneracy. By which I mean that a single word - "shi", say - corresponds to over 70 different concepts if you ignore the tones. Even if you can distinguish clearly between the four tones - which few beginners can manage with much consistency - saying the word "shi" will still be much harder to interpret than a similarly-mangled English word, especially for non-native speakers. This makes it pretty useless as a lingua franca, which needs to be both easy to acquire, and easy to understand even by novices.
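To make that degeneracy concrete, here's a toy Python sketch; the handful of characters listed is illustrative only (dictionaries record far more "shi" homophones than this), but it shows how little information a toneless syllable carries.

```python
# Toy illustration: a few common Mandarin characters all romanised as "shi".
# Deliberately incomplete - the full homophone list runs to dozens of entries.
SHI = {
    "shi1": [("师", "teacher"), ("诗", "poem"), ("失", "to lose"), ("狮", "lion")],
    "shi2": [("十", "ten"), ("时", "time"), ("石", "stone"), ("食", "to eat"), ("实", "real")],
    "shi3": [("史", "history"), ("始", "to begin"), ("使", "to cause")],
    "shi4": [("是", "to be"), ("事", "matter"), ("市", "market"), ("世", "world"), ("试", "to try")],
}

def candidates(toneless_syllable):
    """Every meaning a listener must choose between once tones are lost."""
    return [meaning for tone, entries in SHI.items()
            if tone.startswith(toneless_syllable)
            for _character, meaning in entries]

print(len(candidates("shi")), "candidate meanings for a toneless 'shi':")
print(", ".join(candidates("shi")))
```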

But something is happening that I hadn't allowed for two decades ago: machine translation. Just look at Google Translate, which I use quite a lot to provide rough translations of interesting stuff that I find on non-English language sites. It's pretty good, getting better - and free. I'm sure that Google is working on producing something similar for spoken language: imagine what a winner Google Voice Translate for Android would be.

So instead of Globish or Glanglish, I think that increasingly people will simply speak their own language, and let Google et al. handle the rest. In a way, that's great, because it will allow people to communicate directly with more or less anyone anywhere. But paradoxically it will probably also lead to people becoming more parochial and less aware of cultural differences around the globe, since few will feel the need to undergo that mind-expanding yet humbling experience of trying to learn a foreign language - not even Glanglish.

Follow me @glynmoody on Twitter or identi.ca.

Something in the Air: the Open Source Way

One of the most vexed questions in climate science is modelling. Much of the time the crucial thing is trying to predict what will happen based on what has happened. But that clearly depends critically on your model. Improving the robustness of that model is an important aspect, and the coding practices employed obviously feed into that.

Here's a slightly old but useful paper [.pdf] that deals with just this topic:

Climate scientists build large, complex simulations with little or no software engineering training, and do not readily adopt the latest software engineering tools and techniques. In this paper, we describe an ethnographic study of the culture and practices of climate scientists at the Met Office Hadley Centre. The study examined how the scientists think about software correctness, how they prioritize requirements, and how they develop a shared understanding of their models. The findings show that climate scientists have developed customized techniques for verification and validation that are tightly integrated into their approach to scientific research. Their software practices share many features of both agile and open source projects, in that they rely on self-organisation of the teams, extensive use of informal communication channels, and developers who are also users and domain experts. These comparisons offer insights into why such practices work.

It would be interesting to know whether the adoption of elements of the open source approach was a conscious decision, or just evolved.

Follow me @glynmoody on Twitter or identi.ca.

20 June 2010

Should Retractions be Behind a Paywall?

"UN climate panel shamed by bogus rainforest claim", so proclaimed an article in The Times earlier this year. It began [.pdf]:

A STARTLING report by the United Nations climate watchdog that global warming might wipe out 40% of the Amazon rainforest was based on an unsubstantiated claim by green campaigners who had little scientific expertise.

Well, not so unsubstantiated, it turns out: The Times has just issued a pretty complete retraction - you can read the whole thing here. But what interests me in this particular case is not the science, but the journalistic aspect.

Because if you went to The Times site to read that retraction, you would, of course, be met by the stony stare of its paywall (assuming you haven't subscribed). Which means that I - and I imagine many people who read the first, inaccurate Times story - can't read the retraction there. Had it not been for the fact that among the many climate change sites (on both sides) that I read, there was this one with a copy, I might never have known.

So here's the thing: if a story has appeared on the open Web, and needs to be retracted, do newspapers like The Times have a duty to post that retraction in the open, or is it acceptable to leave it behind the paywall?

Answers on the back of yesterday's newspaper...

Follow me @glynmoody on Twitter or identi.ca.

Open Source Scientific Publishing

Since one of the key ideas behind this blog is to explore the application of the open source approach to other fields, I was naturally rather pleased to come across the following:

As a software engineer who works on open source scientific applications and frameworks, when I look at this, I scratch my head and wonder "why don't they just do the equivalent of a code review"? And that's really, where the germ of the idea behind this blog posting started. What if the scientific publishing process were more like an open source project? How would the need for peer-review be balanced with the need to publish? Who should bear the costs? Can a publishing model be created that minimizes bias and allows good ideas to emerge in the face of scientific groupthink?

It's a great question, and the post goes some way to sketching out how that might work in practice. It also dovetails nicely with my earlier post about whether we need traditional peer review anymore. Well worth reading.

Follow me @glynmoody on Twitter or identi.ca.

19 June 2010

Open Source: A Question of Evolution

I met Matt Ridley once, when he was at The Economist, and I wrote a piece for him (I didn't repeat the experience because their fees at the time were extraordinarily ungenerous). He was certainly a pleasant chap in person, but I have rather mixed feelings about his work.

His early book "Genome" is brilliant - a clever promenade through our chromosomes, using the DNA and its features as a framework on which to hang various fascinating facts and figures. His latest work, alas, seems to have gone completely off the rails, as this take-down by George Monbiot indicates.

Despite that, Ridley is still capable of some valuable insights. Here's a section from a recent essay in the Wall Street Journal, called "Humans: Why They Triumphed":

the sophistication of the modern world lies not in individual intelligence or imagination. It is a collective enterprise. Nobody—literally nobody—knows how to make the pencil on my desk (as the economist Leonard Read once pointed out), let alone the computer on which I am writing. The knowledge of how to design, mine, fell, extract, synthesize, combine, manufacture and market these things is fragmented among thousands, sometimes millions of heads. Once human progress started, it was no longer limited by the size of human brains. Intelligence became collective and cumulative.

In the modern world, innovation is a collective enterprise that relies on exchange. As Brian Arthur argues in his book "The Nature of Technology," nearly all technologies are combinations of other technologies and new ideas come from swapping things and thoughts.

This is, of course, a perfect description of the open source methodology: re-using and building on what has gone before, combining the collective intelligence of thousands of hackers around the world through a culture of sharing. Ridley's comment is another indication of why anything else just hasn't made the evolutionary jump.

Follow me @glynmoody on Twitter or identi.ca.

18 June 2010

German Publishers Want More Monopoly Rights

Here's an almost unbelievable piece about what's happening in Germany right now:

It looks as if publishers might really be lobbying for obtaining a new exclusive right conferring the power to monopolise speech e.g. by assigning a right to re-use a particular wording in the headline of a news article anywhere else without the permission of the rights holder. According to the drafts circulating in the internet, permission shall be obtainable exclusively by closing an agreement with a new collecting society which will be founded after the drafts have matured into law. Depending on the particulars, new levies might come up for each and every user of a PC, at least if the computer is used in a company for commercial purposes.

Well, obtaining monopoly protection for sentences and even parts of sentences in a natural language appears to be some kind of very strong meat. This would mean that publishers can control the wording of news messages. This comes crucially close to private control on the dissemination of facts.

But guess what? Someone thinks that German publishers aren't asking for *enough*, as the same article explains:

Mr Castendyk concludes that even if the envisaged auxiliary copyright protection for newspaper language enters into law, the resulting additional revenue streams probably would be insufficient to rescue the publishing companies. He then goes a step further and postulates that publishing companies enjoy a quasi-constitutional guarantee due to their role in the society insofar the state has the obligation to maintain the conditions for their existence forever.

...

Utilising the leveraging effect of this postulated quasi-constitutional guarantee, Castendyk demands to amend cartel law in order to enable a global 'pooling' of all exclusive rights of all newspaper publishers in Germany in order to block any attempt to defect from the paywall cartell by single competitor as discussed above.

This is a beautiful demonstration of a flaw at the heart of copyright: whenever an existing business model based around a monopoly starts to fail, the reflexive approach is to demand yet more monopolies in an attempt to shore it up. And the faster people point out why that won't solve the problem, the faster the demands come for even more oppressive and unreasonable legislation to try to head off those issues.

And make no mistake: if Germany adopts this approach, there will be squeals from publishers around the world demanding "parity", just as there have been with the term of copyright. And so the ratchet will be turned once more.

Follow me @glynmoody on Twitter or identi.ca.

EU's Standard Failure on Standards

Let's be frank: standards are pretty dull; but they are also important as technological gatekeepers. As the shameful OOXML saga showed, gaining the stamp of approval can be so important that some are prepared to adopt practically any means to achieve it; similarly, permitting the use of technologies that companies claim are patented in supposedly open standards can shut out open source implementations completely.

Against that background, the new EU report “Standardization for a competitive and innovative Europe: a vision for 2020” [.pdf] is a real disappointment. For something that purports to be looking forward a decade, not even to mention “open source” (as far as I can tell) is an indication of just how old-fashioned and reactionary it is. Of course that omission is all of a piece with this attitude to intellectual monopolies:

The objective is to ensure licences for any essential IPRs contained in standards are provided on fair, reasonable and non-discriminatory conditions (FRAND). In practice, in the large majority of cases, patented technology has been successfully integrated into standards under this approach. On this basis, standards bodies are encouraged to strive for improvements to the FRAND system taking into consideration issues that occur over time. Some fora and consortia, for instance in the area of internet, web, and business process standards development have implemented royalty-free policies (but permitting other FRAND terms) agreed by all members of the respective organisation in order to promote the broad implementation of the standards.

This is clearly heavily biased towards FRAND, and clearly hints that royalty-free regimes are only used by those long-haired, sandal-wearing hippies out on the Well-Weird Web.

But as readers of this blog well know, FRAND is simply incompatible with free software; and any standard that adopts FRAND locks out open source implementations. That this is contemplated in the report is bad enough; that it is not even acknowledged as a potential problem is a disgrace. (Via No OOXML.)

Follow me @glynmoody on Twitter or identi.ca.

Can You Make Money from Free Stuff?

Well, of course you can – free software is the primary demonstration of that. But that doesn't mean it's trivial to turn free into fee. Here's an interesting move that demonstrates that quite nicely.

On Open Enterprise blog.

17 June 2010

Red Letter Day for ACTA in EU: Let's Use It

This week is one of the magic "plenary" ones in the European Parliament:

Only during the plenary weeks of June 14-17 and July 5-8 the MEPs will have an occasion to pass by the written declarations table, on their way to the plenary, to sign WD12 (at 12:00 on Tuesday, Wednesday and Thursday are the vote sessions, where every MEP should be present). It is therefore crucial that they are properly informed about the importance of signing it, right before moving to the plenary.

That WD12, to remind you:

Written declaration on the lack of a transparent process for the Anti-Counterfeiting Trade Agreement (ACTA) and potentially objectionable content

The European Parliament,

– having regard to Rule 123 of its Rules of Procedure,

A. whereas negotiations concerning the Anti-Counterfeiting Trade Agreement (ACTA) are ongoing,

B. whereas Parliament’s co-decision role in commercial matters and its access to negotiation documents are guaranteed by the Lisbon Treaty,

1. Takes the view that the proposed agreement should not indirectly impose harmonisation of EU copyright, patent or trademark law, and that the principle of subsidiarity should be respected;

2. Declares that the Commission should immediately make all documents related to the ongoing negotiations publicly available;

3. Takes the view that the proposed agreement should not force limitations upon judicial due process or weaken fundamental rights such as freedom of expression and the right to privacy;

4. Stresses that economic and innovation risks must be evaluated prior to introducing criminal sanctions where civil measures are already in place;

5. Takes the view that internet service providers should not bear liability for the data they transmit or host through their services to an extent that would necessitate prior surveillance or filtering of such data;

6. Points out that any measure aimed at strengthening powers of cross-border inspection and seizure of goods should not harm global access to legal, affordable and safe medicines;

7. Instructs its President to forward this declaration, together with the names of the signatories, to the Commission, the Council and the parliaments of the Member States.

So now would be a good time to contact your MEPs. If you want to find out who is still sitting on the naughty step as far as WD12 and ACTA are concerned, there's a good list from La Quadrature, complete with email addresses and telephone numbers.

Follow me @glynmoody on Twitter or identi.ca.

15 June 2010

Are Software Patents Patently Dangerous Enough?

Earlier this week I wrote about a useful study of the economics of copyright, pointing out that we need more such analyses in order to adopt a more rational, evidence-based approach to drafting laws in this area. Of course, precisely the same can be said of patents, and software patents in particular, so it's always good to come across work such as this newly-published doctoral dissertation [.pdf]: “The effects of software patent policy on the motivation and innovation of free and open source developers.”

On Open Enterprise blog.

14 June 2010

Shame on Ofcom, Double Shame on the BBC

Readers with good memories may recall a little kerfuffle over an Ofcom consultation to slap DRM on the BBC's HD service:

if this scheme is adopted it is highly unlikely free software projects will be able to obtain the appropriate keys, for the simple reason that they are not structured in a way that allows them to enter into the appropriate legal agreements (not least because they couldn't keep them). Of course, it will probably be pretty trivial for people to crack the encryption scheme, thus ensuring that the law-abiding free software users are penalised, while those prepared to break the law are hardly bothered at all.

On Open Enterprise blog.

Abundance Obsoletes Peer Review, so Drop It

Recently, I had the pleasure of finally meeting Cameron Neylon, probably the leading - and certainly most articulate - exponent of open science. Talking with him about the formal peer review process typically employed by academic journals helped crystallise something that I have been trying to articulate: why peer review should go.

A recent blog post has drawn some attention to the cost - to academics - of running the peer review process:

So that's over £200million a year that academics are donating of their time to the peer review process. This isn't a large sum when set against things like the budget deficit, but it's not inconsiderable. And it's fine if one views it as generating public good - this is what researchers need to do in order to conduct proper research. But an alternative view is that academics (and ultimately taxpayers) are subsidising the academic publishing to the tune of £200 million a year. That's a lot of unpaid labour.

Indeed, an earlier estimate put the figure even higher:

a new report has attempted to quantify in cash terms exactly what peer reviewers are missing out on. It puts the worldwide unpaid cost of peer review at £1.9 billion a year, and estimates that the UK is among the most altruistic of nations, racking up the equivalent in unpaid time of £165 million a year.
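Figures like these come from back-of-the-envelope arithmetic: reviews per year, times referees per paper, times hours per report, times the cost of an academic's hour. Here's a minimal sketch of that calculation, with every input number assumed purely for illustration (the reports quoted above derive their own):

```python
# Back-of-the-envelope estimate of the unpaid cost of peer review.
# All inputs are assumptions for illustration, not the reports' figures.
reviews_per_year = 1_500_000   # assumed: manuscripts refereed worldwide
referees_per_paper = 2.5       # assumed: average reviewers per manuscript
hours_per_review = 5           # assumed: time spent on a single report
hourly_cost_gbp = 40           # assumed: value of an academic's hour

total_hours = reviews_per_year * referees_per_paper * hours_per_review
total_cost_gbp = total_hours * hourly_cost_gbp

print(f"Unpaid reviewing time: {total_hours:,.0f} hours per year")
print(f"Implied cost: £{total_cost_gbp:,.0f} per year")
```

Change any of those assumed inputs and the total swings by hundreds of millions, which is exactly why the published estimates range so widely.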

Whatever the figure, it is significant, which brings us on to the inevitable questions: why are researchers making this donation to publishers, and do they need to?

The thought I had listening to Neylon talk about peer review is that it is yet another case of a system that was originally founded to cope with scarcity - in this case of outlets for academic papers. Peer review was worth the cost of people's time because opportunities to publish were rare and valuable and needed husbanding carefully.

Today, of course, that's not the case. There is little danger that important papers won't see the light of day: the nearly costless publishing medium of the Internet has seen to that. Now the problem is dealing with the fruits of that publishing abundance - making sure that people can find the really important and interesting results among the many out there.

But that doesn't require peer review of the kind currently employed: there are all kinds of systems that allow any scientist - or even the general public - to rate content and to vote it up towards a wider audience. It's not perfect, but by and large it works - and spreads the cost widely to the point of being negligible for individual contributors.
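For the sceptical, the mechanics of such a rating system are almost trivially simple. Here's a rough sketch of one possible ranking rule - upvotes discounted by age, in the style popularised by link-aggregation sites - with the paper records and parameters invented for the example:

```python
# Hypothetical records: (title, upvotes, hours since the preprint appeared).
papers = [
    ("Result A", 120, 240),
    ("Result B", 40, 12),
    ("Result C", 300, 2000),
]

def score(upvotes, age_hours, gravity=1.8):
    """Fresh, well-voted work floats up; old items sink despite many votes.
    The gravity exponent is an arbitrary tuning knob."""
    return upvotes / ((age_hours + 2) ** gravity)

for title, votes, age in sorted(papers, key=lambda p: -score(p[1], p[2])):
    print(f"{title}: score={score(votes, age):.5f} ({votes} votes, {age}h old)")
```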

For me what's particularly interesting is the fact that peer review is unnecessary for the same reason that copyright and patents are unnecessary nowadays: because the Internet liberates creativity massively and provides a means for bringing that flood to a wider audience without the need for official gatekeepers to bless and control it.

Follow me @glynmoody on Twitter or identi.ca.

The Economics of Copyright

One of the problems with the debate around copyright is that it is often fuelled more by feelings than facts. What is sorely lacking is a hard-nosed look at key areas like the economics of copyright. Enter "The Economics of Copyright and Digitisation: A Report on the Literature and the Need for Further Research” [.pdf].

On Open Enterprise blog.

11 June 2010

Why GNU/Linux is Unmatched – and Unmatchable

Users of free software are nothing if not passionate. Most of them care deeply about the code they use, and will happily plunge into the flamewars that flare up regularly across the Web. The core focus of those arguments is well established by now: against Mac fans, it's about the virtues of true openness and freedom; against Windows fans (do they still exist?) it's about those, as well as security, speed, stability, etc. But there's another aspect that rarely gets discussed, and yet it represents one of GNU/Linux's greatest strengths: the breadth of hardware platforms supported.

On The H Open.

Why No Billion-Dollar Open Source Companies?

Last week, I met up with Jim Whitehurst, Red Hat's CEO. He gave a very fluent presentation to a group of journalists that ran through Red Hat's business model, and explained why – unsurprisingly – he was optimistic about his company's future growth.

On Open Enterprise blog.

07 June 2010

Why the iPhone Cannot Keep up with Android

Although I have never owned an iPhone, nor even desired one, I do recognise that it has redefined the world of smartphones. In that sense, it is the leader, and will always be historically important. However, as my title suggests, I don't think that's enough to keep it ahead of Android, however great you may judge the feature gap to be currently. Here's a good explanation of why that is:

Through a bevy of handset makers, Android can offer a variety of phones that will make it difficult for Apple to beat with just one hardware release a year. While it is hard to ever go wrong with an iPhone, Android offers a ton of alternative form factors, price points and carriers: Sprint (NYSE: S) has released the first 4G phone on Android; T-Mobile has a new competitive Android phone with a slide-out keyboard; the HTC Incredible sold by Verizon has been flying off store shelves; and even Google’s Nexus One still boasts some of the latest hardware. Not to mention new Android phones from Samsung and LG (SEO: 066570) coming later this summer.

The thing is, no matter how amazing any given feature of the iPhone, in any iteration, sooner or later (and probably sooner) there will be an Android smartphone that matches it. And alongside that handset will be dozens of others offering other features that the iPhone hasn't yet implemented - and may never do.

It's an unfair race: iPhone iterations, even blessed by Steve Jobs' magic pixie dust, can only occur so fast; Android innovations, by contrast, are limited only by the number of players in the market. Want a new Android handset every week? Easy, just wait until the ecosystem grows a little more.

And don't even get me started on the fact that the Android code is already starting to appear in totally new segments, bringing yet more innovation, yet more players....

Follow me @glynmoody on Twitter or identi.ca.

Grokking Green IT - and why Open Source Helps

One of the paradoxes at the heart of computing is that for all its power to improve the world, in one respect it is doing the opposite, thanks to its apparently insatiable appetite for electricity. As we are becoming increasingly aware, most electricity produced today has serious negative consequences for the environment, and so the more we use and depend on computers for our daily lives, the more we damage our planet.

On Open Enterprise blog.

06 June 2010

Why Sharing Will Be Big Business

As you may have noticed, one of the central themes of this blog is the power of sharing. Mostly, I talk about non-rivalrous goods like software or music: here, sharing is a no-brainer, because copies can be made for almost zero cost, allowing everyone to share a digital resource. But what about the world of analogue *rivalrous* goods - the traditional kind of stuff we are most used to in everyday life?

Here, sharing is harder to arrange, since you need someone to lend something to another party, which requires organisation in the physical world. And where there is friction, there is a business opportunity in reducing that friction. Here's a perfect example of that:

Chegg may very well be the fastest-growing, most successful, second-generation e-commerce startup that you hardly ever hear about, except maybe for the fact that it’s raised more than $140 million. Chegg is the “Netflix for textbooks.” It lets students across 6,400 college campuses rent from a virtual bookstore containing 4.2 million books. Based on my analysis (which I get into more detail below), the company is on track to generate $130 million in revenues in 2010, up from $25 million in 2009, and $10 million in 2008. During the January, 2010 semester, I estimate the company made close to $1 million in revenue a day, up fivefold from $200,000/day the previous January, and it should double that this coming September. My analysis suggests Chegg will do close to $50 million in revenue this September alone. It is underappreciated, to say the least.

The article goes on to point out the larger implications of Chegg's success:

Chegg is disintermediating the $5B+ college textbook market by providing a low-cost, short-term, nationwide rental alternative to the high-priced university bookstore. This disruptive model will likely shrink industry revenues by half in the coming years, with Chegg in a leadership position to command 80%+ market share. The key questions, of course, are: 1) Is this a winner-take-all market, 2) What can Chegg do to fend off the likes of the major bookstore owners, Barnes & Noble and Follet, as well as Amazon and Apple, and 3) Is Chegg a harbinger of a new age of startup rental services?

In answer to that last question, no and yes: I don't think we should regard this as old-style rental over the Internet, but a new kind of sharing where people spread the cost of rivalrous goods. However you look at it, though, it is going to be big.

Follow me @glynmoody on Twitter or identi.ca.

05 June 2010

What's the Point of Hacktivism?

Thanks to the Internet, it's easy to engage in big issues - environmental crises, oppression, injustice. Too easy: all it takes is a click and that email is winging its way to who knows where, or that tasteful twibbon has been added to your avatar. If you still think this helps much, try reading Evgeny Morozov's blog Net Effect, and you will soon be disabused (actually, read it anyway - it's very well written).

So what's the point? Well, there are various things that such hacktivism can achieve, nicely laid out in this piece by Ethan Zuckerman called "Overcoming apathy through participation? – (not) my talk at Personal Democracy Forum". But there was one idea that I particularly liked - not least because I hadn't come across it before:


If we assume that activism, as with almost everything else online, has a Pareto distribution, we might assume that for every 1000 relatively passive supporters, we might find 10 deeply engaged activists and one emerging movement leader. And if the contention that participation begets passion, this particular long tail might be a slippery slope upwards, yielding more leaders than the average movement.

Astute readers will have noted that this is one of the reasons why the open source methodology is so successful: it allows natural leaders to emerge from participants. We've seen how amazingly powerful that is, not least in empowering people who in the past would never have been given opportunities to show what they can do. And that, for me, is reason enough to carry on with this hacktivism lark, in the hope that something similar can happen in other spheres.
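To see roughly what that Pareto assumption implies in numbers, here's a toy simulation; the distribution parameter and the engagement thresholds are invented for illustration and aren't taken from Zuckerman's piece:

```python
import random

random.seed(1)

# Toy model: each supporter's "engagement" is drawn from a heavy-tailed
# (Pareto) distribution. Alpha and the two cut-offs are illustrative only.
N = 100_000
alpha = 2.0
engagement = [random.paretovariate(alpha) for _ in range(N)]

engaged = sum(e > 10 for e in engagement)   # "deeply engaged activists"
leaders = sum(e > 30 for e in engagement)   # "emerging movement leaders"

print(f"Out of {N:,} supporters: {engaged:,} engaged, {leaders:,} leaders")
print(f"Per 1,000 supporters: ~{engaged * 1000 / N:.0f} engaged, "
      f"~{leaders * 1000 / N:.1f} leaders")
```

With those (arbitrary) numbers the tiers come out at roughly ten deeply engaged activists and one emerging leader per thousand supporters - the same shape Zuckerman describes.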

Follow me @glynmoody on Twitter or identi.ca.

04 June 2010

Please Help Fight EU Search Engine Surveillance

Just the other day I wrote about the foolishness that was the Gallo Report – which, alas, seems to have gone through with all its excesses. And as if that weren't enough, here's yet another problem that we need to address:

On Open Enterprise blog.

Does the Bill & Melinda Gates Foundation Get the Web?

Bill Gates's decision to move away from day-to-day running of Microsoft was doubly shrewd. First, because it allowed him to leave when his company was at its apogee, and to avoid association with its current - inevitable - decline (notice how the meme that Microsoft is irrelevant is becoming widespread?). And secondly, it enabled him to help Microsoft extend its reach - especially in developing countries - by other means, while earning plaudits for his charitable work.

Unpicking the complex weft and weave of philanthropy and self-interest at the Bill & Melinda Gates Foundation would require an entire book (and no, don't worry, I won't be writing it). Rather than plunging into that maelstrom, I wanted to pick up an extraordinary aspect of the Foundation's site, spotted by Thierry Stoehr.

It's rather telling that the Terms of Use for the Bill & Melinda Gates Foundation run to no less than *seven* pages when printed out (who knew that using the Web was such a complicated and risky operation?). But even more extraordinary is the following clause:

Our Links to Other Sites: Our Site may contain links to Web sites of third parties. We provide these links as a convenience, but do not endorse the linked site or anything on it. While their information, products, services and information may be helpful to you, they are independent entities and we do not control or endorse them. You agree that any visits to linked sites are at your own risk and governed by their privacy policies (if any).

Your Links to Our Site: You are not permitted to link or shortcut to our Site from your Web site, blog or similar application, without obtaining prior written permission from us.

Which is worse: the hypocrisy or the cluelessness? It's a tough call....

Follow me @glynmoody on Twitter or identi.ca.

03 June 2010

Why Patents are Like Black Holes

When a big enough star dies, it generally implodes, and forms a voracious black hole capable of swallowing anything that comes too close. When a big enough company dies, all that remains is a bunch of patents that can have a similarly negative effect on companies whose business models are too close.

Here's Mike Masnick's commentary on the area:

It looks like just about all that's left of former telco equipment giant Nortel is a whole bunch of patents, that are now expected to sell for somewhere in the range of $1.1 billion. The big question, of course, is who ends up with those patents, and what they do with them. Generally speaking, you don't see companies spend $1.1 billion on a bunch of patents, unless they're planning something big. It's entirely possible someone will buy them for defensive purposes, but equally likely that they're used to sue lots of other companies (or, perhaps by the likes of Intellectual Ventures, to scare people into paying up to avoid the possibility of being sued).

And of course, in the field of open source, the really worrying dying star is Novell, as Matt Asay points out:

As reported, as many as 20 organizations have registered bids for Novell, most (or all) of them private equity firms. While an Oracle or a Cisco might acquire Novell for its maintenance streams and product portfolio, it's unclear that private equity firms will have the same motivation. For at least some of these, there will be serious pressure to sell Novell's assets to the highest bidder, regardless of the consequences to Novell's existing customers or to the wider industry.

This wouldn't be so bad if it weren't for the fact that Novell has a treasure trove of patents, with at least 450 patents related to networking, office productivity applications, identity management, and more.

Worth noting is that among those patents are some relating to Unix...

These cases show yet again why patents just don't do what they are supposed to - encourage innovation - but act as very serious threats to other companies that *are* innovating. As more and more of these software stars die, so the number of patent black holes will increase, and with them the unworkability of the patent system. Time to reboot that particular universe...

Follow me @glynmoody on Twitter or identi.ca.

Why "Naked Transparency" Has No Clothes

Although I have a great deal of time (and respect) for Lawrence Lessig, I think his article "Against Transparency" is fundamentally misguided. And for the same reason I think these concerns are overblown, too:

The coming wave of transparency could transform this in a hugely positive way, using open data on costs, opportunities and performance to become a much more creative, cost-effective and agile institution, mindful of the money it spends and the results it achieves, and ensuring individuals are accountable for their work.

But it might make things worse, frightening senior managers into becoming more guarded, taking fewer ‘risks’ with even small amounts of money, and focusing on the process to the detriment of the outcome. It may also make public service less attractive not only for those with something to hide, but for effective people who don’t want to spend their time fending off misinterpretations of their decisions and personal value for money in the media. And to mirror Lessig’s point, it may push confidence in public administration over a cliff, in revealing evidence of wrongdoing which in fact is nothing of the sort.

First of all, I think we already have a data point on such radical transparency. Open source is conducted totally in the open, with all decisions being subject to challenge and justification. That manifestly works, for all its "naked transparency".

Now, politics is plainly different in certain key respects, not least because hackers are different from politicians, and there has been a culture of *anti*-openness among the latter. But I think that is already changing, as David Cameron's latest billet doux to opening up indicates:

the release of the datasets specified in the Coalition Programme is just the beginning of the transparency process. In advance of introducing any necessary legislation to effect our Right to Data proposals, public requests to departments for the release of government datasets should be handled in line with the principles underpinning those proposals: a presumption in favour of transparency, with all published data licensed for free reuse.

Now, I am not so naive as to believe that all will be sweetness and light when it comes to opening up government; nor do I think that open government is "done": this is the beginning of the journey, not the end. But it is undeniable that a sea change has occurred: openness is (almost) the presumption. And the closer we move to that state, the more readily politicians will work within that context, and the more natural transparency - even of the naked kind - will become.

Moreover, shying away from such full-throated openness because of concerns that it might frighten the horses is a sure way to ensure that we *don't* complete this journey. Which is why I think concerns about "naked transparency" are not just wrong, but dangerous, since they threaten to scupper the whole project by starting to carve out dangerous exceptions right at its heart.

Follow me @glynmoody on Twitter or identi.ca.

02 June 2010

Open Sourcing Politics

“Linux is subversive”: so begins “The Cathedral and the Bazaar,” Eric Raymond's analysis of the open source way. The subversion there was mainly applied to the world of software, but how much more subversive are the ideas that lie behind open source when applied to politics.

On Open Enterprise blog.

01 June 2010

GNU/Linux *Does* Scale – and How

As everyone knows, GNU/Linux grew up as a project to create a completely free alternative to Unix. Key parts were written by Richard Stallman while living the archetypal hacker's life at and around MIT, and by Linus Torvalds – in his bedroom. Against that background, it's no wonder that one of Microsoft's approaches to attacking GNU/Linux has been to dismiss it on technical grounds: after all, such a rag-bag of code written by long-haired hippies and near-teenagers could hardly be compared with the product of decades of serious, top-down planning by some of the best coding professionals money can buy, could it?

On Open Enterprise blog.

31 May 2010

Transparency is in WikiLeaks' DNA

It is somewhat ironic that the man behind WikiLeaks, Julian Assange, is not a fan of being in the spotlight; and therefore perhaps poetic justice that he is increasingly the focus of in-depth profiles. The best one so far has just appeared in The New Yorker, and includes this memorable description:

WikiLeaks receives about thirty submissions a day, and typically posts the ones it deems credible in their raw, unedited state, with commentary alongside. Assange told me, “I want to set up a new standard: ‘scientific journalism.’ If you publish a paper on DNA, you are required, by all the good biological journals, to submit the data that has informed your research—the idea being that people will replicate it, check it, verify it. So this is something that needs to be done for journalism as well. There is an immediate power imbalance, in that readers are unable to verify what they are being told, and that leads to abuse.” Because Assange publishes his source material, he believes that WikiLeaks is free to offer its analysis, no matter how speculative.

I'm sure Sir John Sulston had no idea how far his idea of openness would be taken when he drew up the Bermuda Principles....

Urgent: Contact MEPs on the EU's Unbalanced Copyright Report

You would have thought that what with local initiatives like the Digital Economy Act and global ones like ACTA, the copyright maximalists would be satisfied with the range and number of attacks on the Internet and people's free use of it; but apparently not. For here comes the Gallo Report, an attempt to commit the European Union to criminalisation of copyright infringement and a generally more repressive approach to online activities.

A key vote on the Gallo Report takes place tomorrow, so we need to act today and (early) tomorrow if we want to stand a chance of making it more fair and balanced. The best site for information about this is La Quadrature du Net, which summarises the Gallo Report as follows:

On Open Enterprise blog.

27 May 2010

Let's Make the Visually Impaired Full Digital Citizens

As I wrote recently in my Open... blog, copyright is about making a fair deal: in return for a government-supported, time-limited monopoly, creators agree to place their works in the public domain after that period has expired. But that monopoly also allows exceptions, granted for various purposes like the ability to quote limited extracts, or the ability to make parodies (details depend on jurisdiction.)

On Open Enterprise blog.

26 May 2010

Dual-Screen Tablets: the Next Hot Form-Factor?

As new technologies arrive, and the cost of hardware components falls, innovative designs become possible. Here's one that looks promising: a dual-screen Android tablet.


The two screens of the enTourage eDGe interact so that users can open hyperlinks that are included in an e-book text and view the content on the LCD screen, or ‘attach’ Web pages to passages in an e-book to be referenced at a later point. Additionally, as the enTourage eDGe uses E-Ink technology for easy digital reading, images will appear in gray-scale on the e-paper side of the device; however, users can load these in color on the LCD side, ideal for viewing colored charts and graphs from course materials.

Is this really useful, or am I just easily impressed by shiny?

Follow me @glynmoody on Twitter or identi.ca.

How They Stole the Public Domain

Part of the quid pro quo of copyright is that works are supposed to enter the public domain after a limited period of monopoly protection. Trouble is, the copyright maximalists and their friends in power have managed to keep jacking up that period, meaning that more and more of our cultural heritage is locked away for decades, released only long after the death of the author.

Rufus Pollock has now quantified how much we are losing:


if copyright had stayed at its Statute of Anne level, 52% of the books available today would be in the public domain compared to an actual level of 19%. That’s around 600,000 additional items that would be in the public domain including works like Virginia Woolf’s (d. 1941) the Waves, Salinger’s Catcher in the Rye (pub. 1951) and Marquez’s Chronicle of a Death Foretold (pub. 1981).

For comparison, in 1795 78% of all extant works were in the public domain. A figure which we’d be close to having if copyright was a simple 15 years (in that case the public domain would be a substantial 75%).
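The headline figures are easy to sanity-check with a little arithmetic of my own (not from Pollock's post): if the gap between 52% and 19% corresponds to roughly 600,000 books, the implied size of the catalogue follows directly.

```python
# Checking what the quoted percentages imply about the underlying catalogue.
statute_of_anne_share = 0.52   # share public domain under 1710-style terms
actual_share = 0.19            # share actually in the public domain today
extra_items = 600_000          # the "around 600,000 additional items" quoted above

catalogue = extra_items / (statute_of_anne_share - actual_share)
print(f"Implied catalogue size: ~{catalogue:,.0f} books")
print(f"In the public domain now:    ~{catalogue * actual_share:,.0f}")
print(f"Under Statute of Anne terms: ~{catalogue * statute_of_anne_share:,.0f}")
```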

Imagine what today's artists could have done with free access to all those works: it's not just the past's creativity that's been stolen, but the present's too.

Follow me @glynmoody on Twitter or identi.ca.

25 May 2010

Goodbye Becta – and Good Riddance

Not quite on the scale of cancelling the ID cards project, the news that Becta would be shut down was nonetheless further evidence of the coalition government's new broom whooshing into action. Although there seems to be a wide range of views on whether this is a good or bad thing – see this post and its comments for a representative selection – for me Becta was pretty much an unmitigated disaster for free software in this country, and I'm glad to see it go.

On Open Enterprise blog.

24 May 2010

Hacking through the Software Patent Thickets

Most people in the hacking community are well aware that patents represent one of the most serious threats to free software. But the situation is actually even worse than it seems, thanks to the proliferation of what are called patent thickets. To understand why these are so bad, and why they represent a particular problem for software, it is necessary to go back to the beginning of patent law.

On The H Open.

Spreading the Word about Open Government Data

One of the most amazing - and heartening - developments in the world of openness recently has been the emergence of the open government movement. Although still in its early stages, this will potentially have important ramifications for business, since one of the ideas at its heart is the opening up of government datasets for anyone to use and build on - including for commercial purposes (depending on the particular licences). The UK and US are leading the way in this sphere, and an important question is to what extent the experiences of these two countries can be generalised.

On Open Enterprise blog.

21 May 2010

Are Trade Secrets and Trademarks the Future?

Last week I wrote a piece about analogue copying. Specifically, it centred on the 3D scanning and copying of an Aston Martin – because that was how somebody framed the question to me. This provoked plenty of thoughtful comment, which I encouraged people to post over on my other blog, since a slightly longer format was needed than this blog could accommodate. However, because the original piece was posted here, I've decided to reply to them here (sorry if this bloggy to-ing and fro-ing causes digital travel sickness.)

On Open Enterprise blog.

19 May 2010

Should *Mozilla* Fork Firefox?

Apparently, there's an interesting thread over on a site called Quora about the future of Firefox. I say apparently, since I can't seem to join the site (“we'll e-mail when we're ready for you to try out the service” - thanks a bunch: obviously it's only for the Chosen Few). Anyway, according to TechCrunch, the meat of the argument is this:

On Open Enterprise blog.

18 May 2010

Spot(ify) the Trend

One of the reasons that digital music will be free - whether the recording companies want it or not - is basic economics: the marginal cost is practically zero, which means that the price will tend to that point, too. And now we have this:

Spotify is slashing the cost of its advert-free music streaming in the UK and Europe, in a bid to win more paying customers besides just mobile users. It comes in two new tariffs Spotify’s introducing…

—Spotify Unlimited: £4.99pm/ for no-ads music, but no mobile access, no offline or MP3 play and no higher-bitrate quality.

—Spotify Open: Free, with ads, no invite required, but no mobile, no offline or MP3 play, no higher-quality and limited to 20 hours a month.

What's interesting here is that Spotify has already been accused of not paying artists much for each play: this new pricing scheme is likely to mean their fees won't be going up anytime soon. The sooner artists use free digital music to enable them to make money from analogue scarcity, the better.

Follow me @glynmoody on Twitter or identi.ca.

17 May 2010

Diaspora: The Future of Free Software Funding?

A couple of weeks ago I wrote about Diaspora, a free software project to create a distributed version of Facebook that gives control back to users. Since then, of course, Facebook-bashing and Diaspora-boosting have become somewhat trendy. Indeed, Diaspora has now soared past its initial $10,000 fund-raising target: at the time of writing, it has raised over $170,000, with 15 days to go. That's amazing, but what's more interesting is the way in which Diaspora has done it.

On Open Enterprise blog.

16 May 2010

Collateral Murder, Collateral Damage

If you haven't seen the shocking but important video "Collateral Murder", which shows the callous gunning-down of Iraqi citizens (and the subsequent rocket attack on a civilian van with children inside), don't miss it on WikiLeaks, its original source. Unfortunately, you may not find it on YouTube or other obvious video sites, since copies there have been taken down (although the YouTube copy is back now, apparently).

A clear example of censorship, you might think, but in some respects it's even worse:


Collateral Murder, with over 6M views, removed from YouTube after unknown US copyright claim http://bit.ly/aS3bMk

That's right, it was taken down on the basis of alleged copyright infringement, not because somebody thought it too shocking to be displayed. The idea that such an action would be taken because of an alleged infringement on somebody's monopoly, while the underlying cold-blooded massacre of Iraqi civilians is swept under the carpet, is of course, repulsive. But it's just another effect of the outdated law that is copyright - collateral damage, so to speak.

After all, copyright grew up in England for the purpose of controlling the flow of information, by allowing people to become its "owners" - and hence a convenient throttle point:

The first copyright law was a censorship law. It had nothing to do with protecting the rights of authors, or encouraging them to produce new works. Authors' rights were in no danger in sixteenth-century England, and the recent arrival of the printing press (the world's first copying machine) was if anything energizing to writers. So energizing, in fact, that the English government grew concerned about too many works being produced, not too few. The new technology was making seditious reading material widely available for the first time, and the government urgently needed to control the flood of printed matter, censorship being as legitimate an administrative function then as building roads.

It should come as no surprise that copyright is still being used for the purposes of censorship - although often dressed up as if it were somehow "merely" a commercial issue (how that can be the case when we're talking about battleground footage is hard to see.)

This kind of abuse is one more reason why we need to abolish copyright completely: it is not only irrelevant to true creativity (artists don't need an "incentive" to create - they *have* to because of an inner compulsion), it is increasingly a threat to liberty in the online world.

Anyone who doubts that should look at the kind of clauses included in anti-piracy legislation like the Digital Economy Act, which allows websites to be blocked if they are alleged to hold material that infringes on someone's copyright. That will effectively allow the UK government to take down any leak of its documents, since there is no public interest defence in the Act. Had this been introduced as a law explicitly to block such leaks, there would have been an uproar over the censorship it implied; disguised as something to "protect" the poor creative artists, it passes with only protests from the usual troublemakers (like me). The stronger the copyright enforcement, the greater the scope for censorship: it's as simple as that.

Follow me @glynmoody on Twitter or identi.ca.

14 May 2010

Digital Economy Act: Some Unfinished Business

Remember the Digital Economy Act? Yes, I thought you might. It's still there, hanging like a proverbial sword of Damocles over our digital heads. But a funny thing happened on the way to the forum, er, Houses of Parliament: that nice Mr Clegg found himself catapulted to a position of some power. Now, what was it he said a month ago?

On Open Enterprise blog.

Should We Allow Copies of Analogue Objects?

I write a lot about copyright, and the right to share stuff. In particular, I think that for digital artefacts – text, music, video etc. - free software has shown us that there is no contradiction between allowing these to be copied freely and creating profitable businesses that are powered by that abundance. What has to change, though, is the nature of the business models that underlie them.

The parallel between digital content and software is obvious enough, which makes it relatively easy to see how media companies might function against a background of unrestricted sharing. But we are fast approaching the point where it is possible to make copies of *analogue* objects, using 3D printers like the open source RepRap system. This raises some interesting questions about what might be permitted in that situation if businesses are still to thrive.

On Open Enterprise blog.

13 May 2010

How to Become Linus Torvalds

Most people in the free software world know about the famous “LINUX is obsolete” thread that began on the comp.os.minix newsgroups in January 1992, where Andrew Tanenbaum, creator of the MINIX system that Linus used to learn about operating system design, posted the following rather incendiary comment:

On The H Open.

European Commission Betrays Open Standards

Just over a month ago I wrote about a leaked version of the imminent Digital Agenda for Europe. I noted that the text had some eminently sensible recommendations about implementing open standards, but that, probably for precisely that reason, it was under attack by enemies of openness, who wanted the references to open standards watered down or removed. Judging by the latest leak [.pdf] obtained by the French site PC Inpact, those forces have prevailed: what seems to be the final version of the Digital Agenda for Europe is an utter travesty of the original intent.

On Open Enterprise blog.

10 May 2010

Read What Simon Says

It's a red letter day here on Computerworld UK, for the open source section just gained an important new strand in the form of Simon Says, a blog from Simon Phipps:

On Open Enterprise blog.

British Sense of Humour? Not So Much

What a sad, sad day for this country:


A trainee accountant who posted a message on Twitter threatening to blow an airport "sky high" has been found guilty of sending a menacing electronic communication.

Now, the judge may not know this, but there's a technical term for this kind of tweet: it's what we Internet johnnies call a "joke"...

The truly sickening part of this judgement is the following:

a district judge at Doncaster Magistrates Court ruled that the Tweet was "of a menacing nature in the context of the times in which we live".

In other words, our society has become so corrupted by the cynical abuse of the idea of "terror" that we have lost all sense of proportion, not to mention humour. Tragic - and dangerous, since it is bound to have a chilling effect on Twitter in this country.

Follow me @glynmoody on Twitter or identi.ca.

06 May 2010

Copyright: a Conditional Intellectual Monopoly

Here's a nice move from the Internet Archive:


More than doubling the number of books available to print disabled people of all ages, today the Internet Archive launched a new service that brings free access to more than 1 million books — from classic 19th century fiction and current novels to technical guides and research materials — now available in the specially designed format to support those who are blind, dyslexic or are otherwise visually impaired.

And here's a nice analysis of that move:

The new service demonstrates the principle behind the Chafee Amendment: that copyright is a conditional monopoly, not a property right, and that when we decide the monopoly is hampering an important public purpose, we can change it. The Chafee Amendment is an open acknowledgement that monopoly-based distribution was not serving the needs of the blind, the visually impaired, or dyslexic people very well, and that fixing that situation is simply a policy decision. It reminds us that copyright itself is a policy decision, and that if it is not serving the public well, we can change the policy.

A double win, then: for the visually impaired, and in terms of reminding us about the true nature of copyright as a conditional intellectual monopoly.

Follow me @glynmoody on Twitter or identi.ca.

Diaspora: Freedom in the Cloud?

One of the key thinkers in the free software world is Eben Moglen. He's been the legal brains behind the most recent iterations of the GNU GPL, but more than that, he's somebody who has consistently been able to pinpoint and articulate the key issues facing free software for two decades. Recently, he did it again, noting that cloud computing is a huge threat to freedom.

On Open Enterprise blog.

05 May 2010

The GNU/Linux Code of Life

After I published Rebel Code in 2001, there was a natural instinct to think about writing another book (a natural masochistic instinct, I suppose, given the work involved). I decided to write about bioinformatics – the use of computers to store, search through, and analyse the billions of DNA letters that started pouring out of the genomics projects of the 1990s, culminating in the sequencing of the human genome in 2001.

One reason I chose this area was the amazing congruence between free software's battle against closed source and the fight to place genomic data in the public domain, for all to use, rather than having it locked up in proprietary databases and enclosed by gene patents. As I like to say, Digital Code of Life is really the same story as Rebel Code, with just a few words changed.

Another reason for the similarity between the stories is the fact that genomes can be considered as a kind of program – the “digital code” of my title. As I wrote in the book:

In 1953, computers were so new that the idea of DNA as not just a huge digital store but a fully-fledged digital program of instructions was not immediately obvious. But this was one of the many profound implications of Watson and Crick's work. For if DNA was a digital store of genetic information that guided the construction of an entire organism from the fertilised egg, then it followed that it did indeed contain a preprogrammed sequence of events that created that organism – a program that ran in the fertilised cell, albeit one that might be affected by external signals. Moreover, since a copy of DNA existed within practically every cell in the body, this meant that the program was not only running in the original cell but in all cells, determining their unique characteristics.
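To make that "program" metaphor a little more concrete, here is a toy sketch of my own (nothing from the book, and a drastic simplification of the real genetic code) showing how a stretch of DNA can be read like a list of instructions, three letters at a time, until a "stop" instruction halts execution:

    # Toy illustration of "DNA as a digital program": a tiny subset of the
    # standard genetic code, "executed" by translating codons into amino acids.
    CODON_TABLE = {
        "ATG": "Met",   # start codon
        "TTT": "Phe",
        "GGC": "Gly",
        "TAA": "STOP",  # stop codon
    }

    def translate(dna):
        """Read the DNA string three letters at a time, like fetching instructions."""
        protein = []
        for i in range(0, len(dna) - 2, 3):
            amino = CODON_TABLE.get(dna[i:i + 3], "?")
            if amino == "STOP":   # halt, as a program does when it exits
                break
            protein.append(amino)
        return protein

    print(translate("ATGTTTGGCTAA"))   # ['Met', 'Phe', 'Gly']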

That characterisation of the genome is something of a cliché these days, but back in 2003, when I wrote Digital Code of Life, it was less common. Of course, the interesting question is: to what extent is the genome *really* like an operating system? What are the similarities and differences? That's what a bunch of researchers wanted to find out by comparing the Linux kernel's control structure to that of the bacterium Escherichia coli:

The genome has often been called the operating system (OS) for a living organism. A computer OS is described by a regulatory control network termed the call graph, which is analogous to the transcriptional regulatory network in a cell. To apply our firsthand knowledge of the architecture of software systems to understand cellular design principles, we present a comparison between the transcriptional regulatory network of a well-studied bacterium (Escherichia coli) and the call graph of a canonical OS (Linux) in terms of topology and evolution.

We show that both networks have a fundamentally hierarchical layout, but there is a key difference: The transcriptional regulatory network possesses a few global regulators at the top and many targets at the bottom; conversely, the call graph has many regulators controlling a small set of generic functions. This top-heavy organization leads to highly overlapping functional modules in the call graph, in contrast to the relatively independent modules in the regulatory network.

We further develop a way to measure evolutionary rates comparably between the two networks and explain this difference in terms of network evolution. The process of biological evolution via random mutation and subsequent selection tightly constrains the evolution of regulatory network hubs. The call graph, however, exhibits rapid evolution of its highly connected generic components, made possible by designers' continual fine-tuning. These findings stem from the design principles of the two systems: robustness for biological systems and cost effectiveness (reuse) for software systems.
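To get an intuitive feel for what that asymmetry means, here is a minimal sketch of my own (not the researchers' code or data) that builds two tiny directed networks and counts how many nodes act as regulators (callers) versus targets (callees); the real analysis works on networks with thousands of nodes, but the top-heavy versus bottom-heavy contrast is the same idea:

    # Toy comparison of network "shape": a regulatory network has a few
    # regulators controlling many targets; a call graph has many callers
    # all reusing a handful of generic functions.
    import networkx as nx

    # Miniature transcriptional regulatory network: two master regulators
    # each controlling many target genes.
    regulatory = nx.DiGraph()
    for gene in ["geneA", "geneB", "geneC", "geneD", "geneE", "geneF"]:
        regulatory.add_edge("master_regulator1", gene)
        regulatory.add_edge("master_regulator2", gene)

    # Miniature call graph: many callers all invoking two generic helpers.
    callgraph = nx.DiGraph()
    for caller in ["parse", "render", "sync", "cache", "log", "net"]:
        callgraph.add_edge(caller, "memcpy")
        callgraph.add_edge(caller, "kmalloc")

    def summarise(name, g):
        regulators = [n for n in g if g.out_degree(n) > 0]
        targets = [n for n in g if g.in_degree(n) > 0]
        print(f"{name}: {len(regulators)} regulators -> {len(targets)} targets")

    summarise("regulatory network", regulatory)   # few regulators, many targets
    summarise("call graph", callgraph)            # many regulators, few targets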

The paper's well worth reading, but if you find it heavy going (it's really designed for bioinformaticians and their ilk), there's an excellent, easy-to-read summary and analysis by Carl Zimmer in Discover magazine. Alternatively, you could just buy a copy of Digital Code of Life...

Follow me @glynmoody on Twitter or identi.ca.

How Do You Make a Pentaho?

Where do open source companies come from? That's not a trivial question, for free software startups can arise in all sorts of ways. You might create a company around someone else's software (as Red Hat, say, did); build one on software you've written yourself (like JBoss); pay people to write something from scratch (Alfresco); hire the creator of a program and use their software (Jaspersoft); or put together pre-existing projects to create something new.

On Open Enterprise blog.

04 May 2010

Patents, Patents, Everywhere...

...nor any stop to think.

Software patents are an issue that crops up fairly often on this blog, since they represent one of the principal threats to free software. But recently something seems to have got into the water, for the entire world, apparently, has gone software patent mad.

On Open Enterprise blog.