07 July 2010

Exploring Entitlement Economics

Bradley M. Kuhn has a thought-provoking post with the title "Proprietary Software Licensing Produces No New Value In Society". Here's a key section:

I've often been paid for programming, but I've been paid directly for the hours I spent programming. I never even considered it reasonable to be paid again for programming I did in the past. How is that fair, just, or quite frankly, even necessary? If I get a job building a house, I can't get paid every day someone uses that house. Indeed, even if I built the house, I shouldn't get a royalty paid every time the house is resold to a new owner. Why should software work any differently? Indeed, there's even an argument that software, since it's so much more trivial to copy than a house, should be available gratis to everyone once it's written the first time.

He then goes on to point out:

Thus, this line of reasoning gives me yet another reason to oppose proprietary software: proprietary licensing is simply a valueless transaction. It creates a burden on society and gives no benefit, other than a financial one to those granted the monopoly over that particular software program. Unfortunately, there nevertheless remain many who want that level of control, because one fact cannot be denied: the profits are larger.

For example, Mårten Mickos recently argued in favor of these sorts of large profits. He claims that to "benefit massively from Open Source" (i.e., to get really rich), business models like “Open Core” are necessary. Mårten's argument, and indeed most pro-Open-Core arguments, rely on this following fundamental assumption: for FLOSS to be legitimate, it must allow for the same level of profits as proprietary software. This assumption, in my view, is faulty. It's always true that you can make bigger profits by ignoring morality. Factories can easily make more money by completely ignoring environmental issues; strip mining is always very profitable, after all. However, as a society, we've decided that the environment is worth protecting, so we have rules that do limit profit maximization because a more important goal is served.

This analysis is cognate with my recent post about the absence of billion-dollar turnover open source companies: the fact is, as a pure-play free software outfit, you just can't make so much money as you can with proprietary software, because you generally have to sell scarce things like people's time, and that doesn't scale.

But the implications of this point are much wider, I think.

As Kuhn emphasises:

I'll just never be fully comfortable with the idea that workers should get money for work they already did. Work is only valuable if it produces something new that didn't exist in the world before the work started, or solves a problem that had yet to be solved. Proprietary licensing and financial bets on market derivatives have something troubling in common: they can make a profit for someone without requiring that someone to do any new work. Any time a business moves away from actually producing something new of value for a real human being, I'll always question whether the business remains legitimate.

This idea of getting money for work already done is precisely how copyright is regarded these days. It's not enough for creators to be paid once for their work: they want to be paid every time it is performed, or every time copies are made of those performances.

So ingrained is this idea that anyone suggesting the contrary - like that doughty young Eleanor - is regarded as some kind of alien from another planet, and is mocked by those whose livelihoods depend upon this kind of entitlement economics.

But just as open source has cut down the fat profits of proprietary software companies, so eventually will the exorbitant profits of the media industry be cut back to reasonable levels based on how much work people do - because, as Kuhn notes, there really is no justification for anything more.

Follow me @glynmoody on Twitter or identi.ca.

06 July 2010

Open Source: It's all LinkedIn

As I noted in my post “Why No Billion-Dollar Open Source Companies?”, one of the reasons there are no large pure-play open source companies is that their business model is based on giving back to customers most of the costs the latter have traditionally paid to software houses.

On Open Enterprise blog.

05 July 2010

Jim Whitehurst is CEO and Chief Plumber at Red Hat

Jim Whitehurst, president and CEO of Red Hat, the oldest and by far the most successful company whose business is based purely around open source, makes no bones about it: “Selling free software is hard,” he says. In fact, he goes further: “Open source is not a business model; it's a way to develop software.”

On CIO.co.uk.

WWW: World Wide Wikipedia

I love Wikipedia. I love using it, frequently spending many a spare minute (that I don't actually have) simply wandering from one entry to another, learning things I never knew I never knew. I love it, too, as an amazing example of why sharing and openness work. For those who aren't programmers, and who therefore don't grok the evident rightness of the open source methodology, Wikipedia is a great way of explaining how it's done and why it's so good.

On Open Enterprise blog.

Welcome to Open Source Law

Since, as Larry Lessig famously pointed out, "code is law" (and vice versa), it's natural to try to apply open source methodologies in the legal world. Indeed, a site called Openlaw existed ten years ago:

Openlaw is an experiment in crafting legal argument in an open forum. With your assistance, we will develop arguments, draft pleadings, and edit briefs in public, online. Non-lawyers and lawyers alike are invited to join the process by adding thoughts to the "brainstorm" outlines, drafting and commenting on drafts in progress, and suggesting reference sources.

Building on the model of open source software, we are working from the hypothesis that an open development process best harnesses the distributed resources of the Internet community. By using the Internet, we hope to enable the public interest to speak as loudly as the interests of corporations. Openlaw is therefore a large project built through the coordinated effort of many small (and not so small) contributions.

Despite this long pedigree, open source law never really took off - until now. As this important post points out:

The case of British Chiropractic Association v Simon Singh was perhaps the first major English case to be litigated under the full glare of the internet. This did not just mean that people merely followed the case’s progress on blogs and messageboards: the role of the internet was more far-reaching than this

Crucially:

The technical evidence of a claimant in a controversial case had simply been demolished - and seen to be demolished - but not by the conventional means of contrary expert evidence and expensive forensic cross-examination, but by specialist bloggers. And there is no reason why such specialist bloggers would not do the same in a similar case.

The key thing is that those bloggers need to be engaged by the case - this isn't going to happen for run-of-the-mill litigation. But that's OK: it means that when something important is at stake - as in the Singh case - and their help is most needed, they *will* be engaged, and that wonderful digital kraken will stir again.

Follow me @glynmoody on Twitter or identi.ca.

02 July 2010

An (Analogue) Artist's Reply to Just Criticism

There's a new meme in town these days: “rights of the artists”. The copyright industries have worked out that cries for more copyright and more money don't go down too well when they come from fat-cat monopolists sitting in their plush offices, and so have now redefined their fight in terms of struggling artists (who rarely get to see much benefit from constantly extended copyright).

Here's a nice example courtesy of the Copyright Alliance – an organisation that very much pushes that line:

Songwriter, Jason Robert Brown, recently posted on his blog a story about his experience dealing with copyright infringement. Knowing for a long time that many websites exist for the sole purpose of “trading” sheet music, Jason decided to log on himself and politely ask many of the users to stop “trading” his work. While many quickly wrote back apologizing and then removing his work, one girl in particular gave Jason a hard time.

First of all, I must commend Mr Brown for the way he has gone about addressing this issue. As he explains on his blog, this is the message he sent to those who were offering sheet music of his compositions on a site:

Hey there! Can I get you to stop trading my stuff? It's totally not cool with me. Write me if you have any questions, I'm happy to talk to you about this. jason@jasonrobertbrown.com

Thanks,
J.

Now, that seems to me an eminently calm and polite request. Given that he obviously feels strongly about this matter, Mr Brown deserves kudos for that. As he explains:

The broad majority of people I wrote to actually wrote back fairly quickly, apologized sincerely, and then marked their music "Not for trade."

However, he adds:

there were some people who fought back. And I'm now going to reproduce, entirely unexpurgated, the exchange I had with one of them.

Her email comes in to my computer as "Brenna," though as you'll see, she hates being called Brenna; her name is Eleanor. I don't know anything about her other than that, and the fact that she had an account on this website and was using it to trade my music. And I know she is a teenager somewhere in the United States, but I figured that out from context, not from anything she wrote.

After some initial distrust, the conversation starts to get interesting, and it turns out that Eleanor, although just a teenager, has a pretty good grasp of how digital abundance can help artists make money:

Let's say Person A has never heard of "The Great Jason Robert Brown." Let's name Person A "Bill." Let's say I find the sheet music to "Stars and the Moon" online and, since I was able to find that music, I was able to perform that song for a talent show. I slate saying "Hi, I'm Eleanor and I will be performing 'Stars and the Moon' from Songs for a New World by Jason Robert Brown." Bill, having never heard of this composer, doesn't know the song or the show. He listens and decides that he really likes the song. Bill goes home that night and downloads the entire Songs for a New World album off iTunes. He also tells his friend Sally about it and they decide to go and see the show together the next time it comes around. Now, if I hadn't been able to get the sheet music for free, I would have probably done a different song. But, since I was able to get it, how much more money was made? This isn't just a fluke thing. It happens. I've heard songs at talent shows or in theatre final exams and decided to see the show because of the one song. And who knows how they got the music? It may have been the same for them and if they hadn't been able to get it free, they would have done something else.

Which is, of course, absolutely spot on.

Mr Brown tries to explain why he disagrees using three stories. The first is about lending a screwdriver to a friend, who then refuses to give it back:

He insists that he has the right to take my screwdriver, build his house, then keep that screwdriver forever so he can build other people's houses with it. This seems unfair to me.

And he's right of course: it *is* unfair, because he has lost his screwdriver, which is an analogue, and therefore rivalrous, object. His sheet music, by contrast, in its digital form, is non-rivalrous: I can have a copy without taking his copy. Yes, there's the issue of whether he loses out, but as Eleanor pointed out, sharing sheet music is a good way to drive sales – it's marketing.

The second story concerns lending another friend a first edition copy of a book by Thornton Wilder; once again, the friend refuses to give it back:

Two months go by; there's a big hole on my bookshelf where "The Bridge of San Luis Rey" is supposed to go. I call my friend, ask him for my book back. He comes over and says, "I love this book, yo. Make me a copy!"

Again, we have the analogue element: this is a rivalrous object, and when the friend has it, poor Mr Brown doesn't have it. But there's another idea here: making copies:

the publishing company won't be able to survive if people just make copies of the book, I say, and the Thornton Wilder estate certainly deserves its share of the income it earns when people buy the book.

Here, the important thing to note is that people *can't* “just make copies of the book”. Yes, they can photocopy it, but that's certainly not the same as a first edition, which is not only rare, but comes with a very particular history. Even if you photocopied the text in order to get to know it, it wouldn't detract from the value of the first edition, which is a rare, rivalrous analogue object. And the Thornton Wilder estate has *already* been paid for the first edition, so there's no reason why they should expect to be paid again if a photocopy is made. And once more, sharing photocopies is likely to drive *more* sales of new editions – which will produce income for the estate.

The third story is even more revealing:

I bought a fantastic new CD by my friend Michael Lowenstern. I then ripped that CD on to my hard drive so I can listen to it on my iPod in my car. Well, that's not FAIR, right? I should have to buy two copies?

No. There is in fact a part of the copyright law that allows exactly this; it's called the doctrine of fair use. If you've purchased or otherwise legally obtained a piece of copyrighted material and you want to make a copy of it for your own use, that's perfectly legal and allowed.

And Mr Brown is absolutely correct – in the *US*. But here in the UK, I have no such right. So what seems self-evidently right to Mr Brown in the US is in fact wrong in the UK. The reason for that is absolutely central to the whole argument here: the balance between the rights of creators and the rights of users is actually arbitrary: different jurisdictions place it at different points, as Mr Brown's example shows.

In fact, Eleanor touched on this in another amazingly perceptive comment:

I assume that because something that good comes from something so insignificantly negative, it's therefore mitigated.

The “something good” that she's talking about includes things like this:

Would it be wrong for me to make a copy of some sheet music and give it to a close friend of mine for an audition? Of course not.

What she is saying is that in weighing up the creator's rights and the user's rights, things have changed in the transition from analogue to digital. Making a copy of a digital object is a minimal infraction of the creator's rights – because nothing is stolen, just created – but brings huge collective benefits for users. And so we need to recalibrate the balance that lies at the heart of copyright to reflect that fact.

As Mr Brown's examples consistently show, he is still thinking along the old, analogue lines with rivalrous goods that can't be shared. We are entering an exciting new digital world where objects are non-rivalrous, and can be copied infinitely. Not surprisingly, the benefits to society that accrue as a result easily outweigh any nominal loss on the creator's part. That's why we need to ignore these calls to our conscience to think about the poor creator – even one as pleasant and sympathetic as Mr Brown – because they omit the other side of the equation: the other six billion people who form the rest of the world.

Follow me @glynmoody on Twitter or identi.ca.

Time for some Digital Economy Act Economy

Here's a hopeful sign:

We're working to create a more open and less intrusive society. We want to restore Britain’s traditions of freedom and fairness, and free our society of unnecessary laws and regulations – both for individuals and businesses.

On Open Enterprise blog.

01 July 2010

Moving Firefox Fourwards

I last interviewed Mozilla Europe's Tristan Nitot a couple of years ago. Yesterday, I met up with him again, and caught up with the latest goings-on in the world of Firefox.

On Open Enterprise blog.

29 June 2010

Botching Bilski

So, the long-awaited US Supreme Court ruling on Bilski v. Kappos has appeared – and it's a mess. Where many hoped fervently for some clarity to be brought to the ill-defined rules for patenting business methods and software in the US, the court instead was timid in the extreme. It confirmed the lower court's decision that the original Bilski business method was not patentable, but did little to limit business method patents in general. And that, by implication, meant that there was no major impact on software patents in the US.

On Open Enterprise blog.

28 June 2010

Has Oracle Been a Disaster for Sun's Open Source?

Companies based around open source are still comparatively young. So it remains an open question what happens to them in the long term. As open source becomes more widely accepted, an obvious growth path for them is to be bought by a bigger, traditional software company. The concern then becomes: how does the underlying open source code fare in those circumstances?

On The H Open.

Microsoft Attacks, By and With the Numbers

There's a nice piece of work by Charles Arthur in The Guardian today that puts a fascinating post from one of Microsoft's top PR people under the microscope. It's all well worth reading, but naturally the following numbers from the memo and Arthur's analysis were of particular interest:

On Open Enterprise blog.

25 June 2010

Let's Make "The Open University" Truly Open

Interesting:

The Open University (OU) is now a certified Microsoft IT Academy adding to its fast-growing suite of IT vendor certifications.

The first course in the OU's Microsoft IT Academy programme TM128 Microsoft server technologies launches in October 2010. The course, purpose-designed by the OU, covers both the fundamentals of computer networks and the specifics of how Windows server technologies can be used practically. Registration is now open for the 30-credit Level 1 module.

Microsoft server technologies will form part of the requirement for both Microsoft Certified System Engineer (MCSE) and Microsoft Certified System Administrator (MCSA) programmes, and forms part of the pathway to MCITP (Microsoft Certified IT Professional) certification. The course can also be counted towards an Open University modular degree.

Naturally, offering such courses about closed-source software is an important part of providing a wide range of information and training. And I'm sure there will be similar courses and qualifications for open source programs.

After all, free software not only already totally dominates areas like supercomputers, the Internet and embedded systems, but is also rapidly gaining market share in key sectors like mobile, so it would obviously make sense to offer plenty of opportunities for students to study and work with the operating system of the future, as well as that of the past.

That's true for all academic establishments offering courses in computing, but in the case of the Open University, even-handedness assumes a particular importance because of the context:

The Open University has appointed a Microsoft boss to be its fifth vice-chancellor.

Martin Bean is currently general manager of product management, marketing and business development for Microsoft's worldwide education products group.

I look forward to hearing about all the exciting new courses and certifications - Red Hat and Ubuntu, maybe? (Via @deburca.)

Follow me @glynmoody on Twitter or identi.ca.

Say "No" to Net Neutrality Nuttiness

I'll admit it: watching the debates about net neutrality in the US, I've always felt rather smug. Not for us sensible UK chappies, I thought, the destruction of what is one of the key properties of the Internet. No daft suggestions that big sites like Google should pay ISPs *again* for the traffic that they send out – that is, in addition to the money they and we fork over for the Internet connections we use. And now we have this:

On Open Enterprise blog.

Those that Live by the DMCA....

This was a pleasant surprise, a *summary* judgment against Viacom in favour of Google:

Today, the court granted our motion for summary judgment in Viacom’s lawsuit with YouTube. This means that the court has decided that YouTube is protected by the safe harbor of the Digital Millennium Copyright Act (DMCA) against claims of copyright infringement. The decision follows established judicial consensus that online services like YouTube are protected when they work cooperatively with copyright holders to help them manage their rights online.

On Open Enterprise blog.

24 June 2010

The Copyright Debate's Missing Element

There is certainly no lack of debate about copyright, and whether it promotes or hinders creativity. But in one important respect, that debate has been badly skewed, since it has largely discussed creativity in terms of pre-digital technologies. And even when digital methods are mentioned, there is precious little independent research to draw upon.

That makes the following particularly significant:

Doctoral research into media education and media literacy at the University of Leicester has highlighted how increased legislative control on use of digital content could stifle future creativity.

The Digital Economy Act 2010 alongside further domestic and global legislation, not least the ongoing ‘Anti-Counterfeiting Trade Agreement (ACTA)’, combines to constitute a very hard line against any form of perceived copyright infringement.

Research implies that these pieces of legislation could stifle the creative opportunities for youngsters with tough regulation on digital media restricting young peoples’ ability to transform copyrighted material for their own personal and, more importantly, educational uses.

The key phrase here is "young people", because they are using content, including copyrighted materials, in quite different ways from traditional creators. As the researcher commented:

“There is a growing risk that creativity in the form of mash-ups, remixes and parodies will be stifled by content producers. With no clear ‘fair use’ policy, even when it comes to educational media production we are in danger of tainting many young people’s initial encounters with the law."

The current approach, embodied in the Digital Economy Act and elsewhere, risks not only stifling the younger generation's creativity, but alienating them completely from any legislation that touches on it. (Via @Coadec.)

Follow me @glynmoody on Twitter or identi.ca.

Can the CodePlex Foundation Free itself from Microsoft?

One of the most fascinating strands in the free software story has been Microsoft's interactions with it. To begin with, the company simply tried to dismiss it, but when it became clear that free software was not going away, and that more companies were switching to it, Microsoft was forced to take it more seriously.

On Open Enterprise blog.

21 June 2010

Copyright Ratchet, Copyright Racket

I can't believe this.

A few days ago I wrote about the extraordinary extra monopolies the German newspaper industry wanted - including an exemption from anti-cartel laws. I also noted:


And make no mistake: if Germany adopts this approach, there will be squeals from publishers around the world demanding "parity", just as there have been with the term of copyright. And so the ratchet will be turned once more.

And what do we find? Why, exactly the same proposals *already* in an FTC "Staff Discussion Draft" [.pdf], which is trying to come up with ways to solve the newspaper industry's "problem" without actually addressing the key issue, which is that people are accessing information online in new ways these days. The document looks at some of the proposed "solutions", which come from the industry, which wants - of course - more monopoly powers:

Internet search engines and online news aggregators often use content from news organizations without paying for that use. Some news organizations have argued that existing intellectual property (IP) law does not sufficiently protect their news stories from free riding by news aggregators. They have suggested that expanded IP rights for news stories would better enable news organizations to obtain revenue from aggregators and search engines.

And:

Advocates argue “the copyright act allows parasitic aggregators to ‘free ride’ on others’ substantial journalistic investments,” by protecting only expression and not the underlying facts, which are often gathered at great expense.

...

They suggest that federal hot news legislation could help address revenue problems facing newspapers by preventing this free-riding.

Moreover, like the German publishers, they also want a Get Out of Jail Free card as far as anti-trust is concerned:

Some in the news industry have suggested that an antitrust exemption is necessary to the survival of news organizations and point to the NPA as precedent for Congress to enact additional protections from the antitrust laws for newspapers. For example, one public comment recommended “the passage of a temporary antitrust exemption to permit media companies to collaborate in the public interest”

Got that? An anti-trust exemption that would allow newspapers to operate as a cartel *in the public interest*. George Orwell would have loved it.

Follow me @glynmoody on Twitter or identi.ca.

Globish, Glanglish and Google Translate

There's a new book out about the rise and use of a globalised English, dubbed "Globish":

Globish is a privatised lingua franca, a commercially driven “world language” unencumbered by the utopian programme of Esperanto. As taught by Nerrière’s enterprise, it combines the coarseness of a distended phrase book and the formulaic optimism of self-help texts – themselves a genre characterised by linguistic paucity, catchphrases and religiose simplicity.

I won't be buying it, mostly because I wrote about the rise and use of a globalised English, dubbed "Glanglish", over 20 years ago. It formed the title essay of a book called, with stunning originality, "Glanglish." This is what I wrote:

English has never existed as a unitary language. For the Angles and the Saxons it was a family of siblings; today it is a vast clan in diaspora. At the head of that clan is the grand old matriarch, British English. Rather quaint now, like all aristocrats left behind by a confusing modern world, she nonetheless has many points of historical interest. Indeed, thousands come to Britain to admire her venerable and famous monuments, preserved in the verbal museums of language schools. Unlike other parts of our national heritage, British English is a treasure we may sell again and again; already the invisible earnings from this industry are substantial, and they are likely to grow as more and more foreigners wish at least to brush their lips across the Grande Dame's ring.

One group unlikely to do so are the natural speakers of the tongue from other continents. Led by the Americans, and followed by the Australians, the New Zealanders and the rest, these republicans are quite content to speak English - provided it is their English. In fact it is likely to be the American's English, since this particular branch of the family tree is proving to be the most feisty in its extension and transformation of the language. Even British English is falling in behind - belatedly, and with a rueful air; but compared to its own slim list of neologisms - mostly upper-class twittish words like 'yomping' - Americanese has proved so fecund in devising new concepts, that its sway over English-thinking minds is assured.

An interesting sub-species of non-English English is provided by one of the dialects of modern India. Indian English is not a truly native tongue, if only for historical reasons; and yet it is no makeshift second language. Reading the 'Hindu Times', it is hard to pin down the provenance of the style: with its orotundities and its 'chaps' it is part London 'Times' circa 1930; with its 'lakhs' it is part pure India.

Whatever it is, it is not to be compared with the halting attempts at English made by millions - perhaps billions soon - whose main interest is communication. Although a disheartening experience to hear for the true-blue Britisher, this mangled, garbled and bungled English is perhaps the most exciting. For from its bleeding hunks and quivering gobbets will be constructed the first and probably last world language. Chinese may have more natural speakers, and Spanish may be gaining both stature and influence, but neither will supersede this mighty mongrel in the making.

English is so universally used as the medium of international linguistic exchange, so embedded in supranational activities like travel - all pilots use English - and, even more crucially, so integral to the world of business, science and technology - money may talk, but it does so in English, and all computer programs are written in that language - that no amount of political or economic change or pressure will prise it loose. Perhaps not even nuclear Armageddon: Latin survived the barbarians. So important is this latest scion of the English stock, that it deserves its own name; and if the bastard brew of Anglicised French is Franglais, what better word to celebrate the marriage of all humanity and English to produce tomorrow's global language than the rich mouthful of 'Glanglish'?

Twenty years on, I now think that the reign of Glanglish/Globish will soon draw to a close, but not because something else will take its place.

The obvious candidate, Chinese, suffers from a huge problem: linguistic degeneracy. By which I mean that a single word - "shi", say - corresponds to over 70 different concepts if you ignore the tones. Even if you can distinguish clearly between the four tones - which few beginners can manage with much consistency - saying the word "shi" will still be much harder to interpret than a similarly-mangled English word, especially for non-native speakers. This makes it pretty useless as a lingua franca, which needs to be both easy to acquire, and easy to understand even by novices.

But something is happening that I hadn't allowed for two decades ago: machine translation. Just look at Google Translate, which I use quite a lot to provide rough translations of interesting stuff that I find on non-English language sites. It's pretty good, getting better - and free. I'm sure that Google is working on producing something similar for spoken language: imagine what a winner Google Voice Translate for Android would be.

So instead of Globish or Glanglish, I think that increasingly people will simply speak their own language, and let Google et al. handle the rest. In a way, that's great, because it will allow people to communicate directly with more or less anyone anywhere. But paradoxically it will probably also lead to people becoming more parochial and less aware of cultural differences around the globe, since few will feel the need to undergo that mind-expanding yet humbling experience of trying to learn a foreign language - not even Glanglish.

Follow me @glynmoody on Twitter or identi.ca.

Something in the Air: the Open Source Way

One of the most vexed questions in climate science is modelling. Much of the time the crucial thing is trying to predict what will happen based on what has happened. But that clearly depends critically on your model. Improving the robustness of that model is an important aspect, and the coding practices employed obviously feed into that.

Here's a slightly old but useful paper [.pdf] that deals with just this topic:

Climate scientists build large, complex simulations with little or no software engineering training, and do not readily adopt the latest software engineering tools and techniques. In this paper, we describe an ethnographic study of the culture and practices of climate scientists at the Met Office Hadley Centre. The study examined how the scientists think about software correctness, how they prioritize requirements, and how they develop a shared understanding of their models. The findings show that climate scientists have developed customized techniques for verification and validation that are tightly integrated into their approach to scientific research. Their software practices share many features of both agile and open source projects, in that they rely on self-organisation of the teams, extensive use of informal communication channels, and developers who are also users and domain experts. These comparisons offer insights into why such practices work.

It would be interesting to know whether the adoption of elements of the open source approach was a conscious decision, or just evolved.

20 June 2010

Should Retractions be Behind a Paywall?

"UN climate panel shamed by bogus rainforest claim", so proclaimed an article in The Times earlier this year. It began [.pdf]:

A STARTLING report by the United Nations climate watchdog that global warming might wipe out 40% of the Amazon rainforest was based on an unsubstantiated claim by green campaigners who had little scientific expertise.

Well, not so unsubstantiated, it turns out: The Times has just issued a pretty complete retraction - you can read the whole thing here. But what interests me in this particular case is not the science, but the journalistic aspect.

Because if you went to The Times site to read that retraction, you would, of course, be met by the stony stare of its paywall (assuming you haven't subscribed). Which means that I - and, I imagine, many people who read the first, inaccurate Times story - can't read the retraction there. Had it not been for the fact that, among the many climate change sites (on both sides) that I read, there was this one with a copy, I might never have known.

So here's the thing: if a story has appeared on the open Web and needs to be retracted, do newspapers like The Times have a duty to post that retraction in the open, or is it acceptable to leave it behind the paywall?

Answers on the back of yesterday's newspaper...

Open Source Scientific Publishing

Since one of the key ideas behind this blog is to explore the application of the open source approach to other fields, I was naturally rather pleased to come across the following:

As a software engineer who works on open source scientific applications and frameworks, when I look at this, I scratch my head and wonder "why don't they just do the equivalent of a code review"? And that's really, where the germ of the idea behind this blog posting started. What if the scientific publishing process were more like an open source project? How would the need for peer-review be balanced with the need to publish? Who should bear the costs? Can a publishing model be created that minimizes bias and allows good ideas to emerge in the face of scientific groupthink?

It's a great question, and the post goes some way to sketching out how that might work in practice. It also dovetails nicely with my earlier post about whether we need traditional peer review anymore. Well worth reading.

19 June 2010

Open Source: A Question of Evolution

I met Matt Ridley once, when he was at The Economist, and I wrote a piece for him (I didn't repeat the experience because their fees at the time were extraordinarily ungenerous). He was certainly a pleasant chap in person, but I have rather mixed feelings about his work.

His early book "Genome" is brilliant - a clever promenade through our chromosomes, using the DNA and its features as a framework on which to hang various fascinating facts and figures. His latest work, alas, seems to have gone completely off the rails, as this take-down by George Monbiot indicates.

Despite that, Ridley is still capable of some valuable insights. Here's a section from a recent essay in the Wall Street Journal, called "Humans: Why They Triumphed":

the sophistication of the modern world lies not in individual intelligence or imagination. It is a collective enterprise. Nobody—literally nobody—knows how to make the pencil on my desk (as the economist Leonard Read once pointed out), let alone the computer on which I am writing. The knowledge of how to design, mine, fell, extract, synthesize, combine, manufacture and market these things is fragmented among thousands, sometimes millions of heads. Once human progress started, it was no longer limited by the size of human brains. Intelligence became collective and cumulative.

In the modern world, innovation is a collective enterprise that relies on exchange. As Brian Arthur argues in his book "The Nature of Technology," nearly all technologies are combinations of other technologies and new ideas come from swapping things and thoughts.

This is, of course, a perfect description of the open source methodology: re-using and building on what has gone before, combining the collective intelligence of thousands of hackers around the world through a culture of sharing. Ridley's comment is another indication of why anything else just hasn't made the evolutionary jump.

18 June 2010

German Publishers Want More Monopoly Rights

Here's an almost unbelievable piece about what's happening in Germany right now:

It looks as if publishers might really be lobbying for obtaining a new exclusive right conferring the power to monopolise speech e.g. by assigning a right to re-use a particular wording in the headline of a news article anywhere else without the permission of the rights holder. According to the drafts circulating in the internet, permission shall be obtainable exclusively by closing an agreement with a new collecting society which will be founded after the drafts have matured into law. Depending on the particulars, new levies might come up for each and every user of a PC, at least if the computer is used in a company for commercial purposes.

Well, obtaining monopoly protection for sentences and even parts of sentences in a natural language appears to be some kind of very strong meat. This would mean that publishers can control the wording of news messages. This comes crucially close to private control on the dissemination of facts.

But guess what? Someone thinks that German publishers aren't asking for *enough*, as the same article explains:

Mr Castendyk concludes that even if the envisaged auxiliary copyright protection for newspaper language enters into law, the resulting additional revenue streams probably would be insufficient to rescue the publishing companies. He then goes a step further and postulates that publishing companies enjoy a quasi-constitutional guarantee due to their role in the society insofar the state has the obligation to maintain the conditions for their existence forever.

...

Utilising the leveraging effect of this postulated quasi-constitutional guarantee, Castendyk demands to amend cartel law in order to enable a global 'pooling' of all exclusive rights of all newspaper publishers in Germany in order to block any attempt to defect from the paywall cartel by a single competitor as discussed above.

This is a beautiful demonstration of a flaw at the heart of copyright: whenever an existing business model based around a monopoly starts to fail, the reflexive approach is to demand yet more monopolies in an attempt to shore it up. And the faster people point out why that won't solve the problem, the faster the demands come for even more oppressive and unreasonable legislation to try to head off those issues.

And make no mistake: if Germany adopts this approach, there will be squeals from publishers around the world demanding "parity", just as there have been with the term of copyright. And so the ratchet will be turned once more.

EU's Standard Failure on Standards

Let's be frank: standards are pretty dull; but they are also important as technological gatekeepers. As the shameful OOXML saga showed, gaining the stamp of approval can be so important that some are prepared to adopt practically any means to achieve it; similarly, permitting the use of technologies that companies claim are patented in supposedly open standards can shut out open source implementations completely.

Against that background, the new EU report “Standardization for a competitive and innovative Europe: a vision for 2020” [.pdf] is a real disappointment. For something that purports to look forward a decade, failing even to mention “open source” (as far as I can tell) is an indication of just how old-fashioned and reactionary it is. Of course, that omission is all of a piece with this attitude to intellectual monopolies:

The objective is to ensure licences for any essential IPRs contained in standards are provided on fair, reasonable and non-discriminatory conditions (FRAND). In practice, in the large majority of cases, patented technology has been successfully integrated into standards under this approach. On this basis, standards bodies are encouraged to strive for improvements to the FRAND system taking into consideration issues that occur over time. Some fora and consortia, for instance in the area of internet, web, and business process standards development have implemented royalty-free policies (but permitting other FRAND terms) agreed by all members of the respective organisation in order to promote the broad implementation of the standards.

This is clearly heavily biased towards FRAND, and clearly hints that royalty-free regimes are only used by those long-haired, sandal-wearing hippies out on the Well-Weird Web.

But as readers of this blog well know, FRAND is simply incompatible with free software; and any standard that adopts FRAND locks out open source implementations. That this is contemplated in the report is bad enough; that it is not even acknowledged as a potential problem is a disgrace. (Via No OOXML.)

Can You Make Money from Free Stuff?

Well, of course you can – free software is the primary demonstration of that. But that doesn't mean it's trivial to turn free into fee. Here's an interesting move that demonstrates that quite nicely.

On Open Enterprise blog.