27 July 2010

If Oracle Bought Every Open Source Company...

Recently, there was an interesting rumour circulating that Oracle had a war chest of some $70 billion, and was going on an acquisition spree. Despite the huge figure, it had a certain plausibility, because Oracle is a highly successful company with deep pockets and an aggressive management. The rumour was soon denied, but it got me wondering: supposing Oracle decided to spend, if not $70 billion, say $10 billion in an efficient way: how might it do that? And it occurred to me that one rather dramatic use of that money would be to buy up the leading open source companies – all of them.

On Open Enterprise blog.

23 July 2010

Move Commons: Moving Beyond Creative Commons

Talking of commons, I was reading David Bollier's Viral Spiral recently, probably the best book about the rise of the commons as a new force (and I want to emphasise that I am not at all bitter about the fact that he didn't mention Rebel Code once in his description of the early days of free software - nope, not bitter in the slightest.)

I bought a dead tree version, but it's freely available online under a CC licence (sadly not an option when Rebel Code came out...for the simple reason that Creative Commons was being formulated at the same time I was writing it.) That's appropriate, since the book is largely about the evolution of the CC licences - and a fascinating tale it is, too.

One particularity of those licences is the way that they try to give users different flavours (in fact there were originally more than there are now - some were later dropped). In many ways the ability to specify exactly which freedoms you are passing on is the most revolutionary - and contentious - part of the CC project.

Against that background, I was therefore delighted to come across Move Commons (MC), "a simple tool for initiatives, collectives and NGOs to declare the core principles they are committed to." It works in almost the same way as the CC licences, allowing you to specify exactly what your "core principles" are:


MC helps these initiatives to declare the core principles they are committed to, allowing others to understand the initiative’s insight with the first glance. The idea is to choose the MC that fits your initiative, and place the generated logo (a combination of four icons) in your webpage.

Once that is done, when the next websurfer reaches the initiative’s webpage, it’ll be very easy to understand your initiative’s approach and immediately answer several questions (Is this a Non-Profit? Are they transparent? Can I use part of their content for my blog? How are they organized internally? Do they expand the Commons with their actions?), before even clicking here and there.

But not only that. By choosing your MC you are connecting with other collectives using MC. Thus, anyone can come to movecommons.org and search for “non-profits that are sharing their contents, and are interested in environmentalism and education”, and if your initiative fits that description, it would appear there. You can thus link with other similar initiatives, regardless of their geographical location. Besides, volunteers could easily find you when they are searching for initiatives like yours… independently of how much you have invested in marketing.

The page of options gives an idea of how this works, complete with dinky little logos representing things like profit/non-profit and hierarchical/non-hierarchical.

It's a clever idea, although I'm not sure they've got the key categories worked out yet - for example, it's not clear what the "Reproducible" option really means in terms of content licensing. Still, it's great to see people building on the CC ideas, just as Creative Commons built on the GNU GPL's original breakthrough.

Follow me @glynmoody on Twitter or identi.ca.

An Uncommon Commons in Linz

As its name suggests, a commons is an outgrowth of things held in common, like common land. This has been extended to the digital sphere with great success - notably in the world of free software. But here's an interesting move that takes the commons back to its common-land roots: the Austrian city of Linz is creating an "open commons region":

Die Leitlinien für die Realisierung der »Open-Commons-Region Linz« fordern unter anderem die Einrichtung eines Open-Commons-Beirates, den Aufbau einer Koordinationsstelle, Initiativen für Angebote in den Bereichen Bildung (Open Courseware) und öffentliche Datenbestände, wie zum Beispiel Stadtinformationen oder Stadtkarten (Open Data), Überarbeitung des magistratsweiten Intranets mit Einsatz von Open-Source-Software für das Betriebs-, Redaktions- und Datenbanksystem und Prüfung des Einsatzes von weiteren freien Softwareprodukten in Teilen der Unternehmensgruppe Stadt Linz.


[Translation: The guidelines for realising the "Open Commons Region Linz" call for, among other things, the establishment of an Open Commons advisory board, the setting up of a coordination office, initiatives for offerings in the areas of education (open courseware) and public data holdings such as city information or city maps (open data), a reworking of the council-wide intranet using open source software for its operating, editorial and database systems, and an assessment of the use of further free software products in parts of the City of Linz group of companies.]

which ticks most of the open boxes. The expected benefits are also wide-ranging:

Die Initiative soll Kosten reduzieren, Abhängigkeiten vermeiden, Eigeninitiative fördern, die Wirtschaft stärken, Wertschöpfung erzeugen, Transparenz herstellen und Rechtssicherheit schaffen.


[Translation: The initiative is intended to reduce costs, avoid dependencies, encourage personal initiative, strengthen the economy, generate added value, establish transparency and create legal certainty.]

Sadly, it seems that it won't cure the common cold, despite the affinities of name.

Follow me @glynmoody on Twitter or identi.ca.

Welcome to the Troll Economy

It began, perhaps, with SCO's insane attempt to obtain money from IBM and others for alleged infringements of its code. It proceeded with the music recording industry's increasingly vicious but fruitless threats to ordinary users, expanding more recently into the film business. Now the Troll Economy has come to the world of words:


Borrowing a page from patent trolls, the CEO of fledgling Las Vegas-based Righthaven has begun buying out the copyrights to newspaper content for the sole purpose of suing blogs and websites that re-post those articles without permission.

Strangely, perhaps, I think this is a great development. As the world of music shows, once rights-holders start making unreasonable demands, the implicit compact with the public is broken, and people no longer respect a copyright system that does not even attempt to treat them fairly.

The Troll Economy will simply lead to more people rejecting intellectual monopolies altogether, sowing the seeds of its own destruction. Troll away, chaps....

Follow me @glynmoody on Twitter or identi.ca.

Why Free Software is a Matter of Life and Death

As regular readers of this blog will know, free software has an importance that extends way beyond the world of software. But for most people, it's hard to understand why software freedom is really that important. So this new report “Killed by Code: Software Transparency in Implantable Medical Devices” from the Software Freedom Law Center (SFLC) provides a handy opportunity to get the message across:

On Open Enterprise blog.

22 July 2010

Openness: Just What the Doctoral Student Ordered

In 2007 the British Library (BL) and the JISC funded the ‘Google Generation’ study, Information Behaviour of the Researcher of the Future (CIBER, 2008), which focused on how researchers of the future, ‘digital natives’ born after 1993, are likely to access and interact with digital resources in five to ten years’ time. The research reported overall that the information literacy of young people has not improved with wider access to technology.

To complement the findings of the Google Generation research, the BL and the JISC commissioned this three‐year research study, Researchers of Tomorrow, focusing on the information‐seeking and research behaviour of doctoral students born between 1982 and 1994, dubbed ‘Generation Y’.

There's lots of interesting stuff in the first report, but what really caught my attention was the following:

The principles behind open access publishing and self‐archiving speak to the students’ desire for an all‐embracing, seamlessly accessible research information network in which restrictions on access do not constrain them. Similarly, many of the students favour open source technology applications (e.g. Linux, Mozilla) to support the way they want to work and organise their research, and are critical of the lack of technical support to open source applications in their own institutions.

However, as the report emphasises, students remain somewhat confused about what open access really is. This suggests fertile ground for a little more explanation by open access practitioners - the benefits of doing so could be considerable.

It's also rather ironic that one of those behind the report should be the British Library: as I've noted with sadness before, the BL is one of the leading opponents of openness in the academic world, choosing instead to push DRM and patent-encumbered Microsoft technologies for its holdings. It's probably too much to expect it to read the above sections and to understand that it is going in exactly the wrong direction as far as future researchers - its customers - are concerned...

Follow me @glynmoody on Twitter or identi.ca.

21 July 2010

Could You Adopt a Hacking Business Model?

Once more there is a lot of heated discussion about what constitutes a “real” open source business model – that is, one that remains true to the spirit of open source, and doesn't just use it as a trendy badge to attract customers. But such business models address only a tiny part of running a company – how it generates money. What about the many other aspects of a firm?

That's precisely what The Hacking Business Model seeks to spell out.

On Open Enterprise blog.

China and the Year of the GNU/Linux Desktop

It's an old joke by now that this year will be the year of the GNU/Linux desktop – just like last year, and the year before that. But now there's a new twist: that this year will be the year of the GNU/Linux smartphone – with the difference that it's really happening.

On Open Enterprise blog.

19 July 2010

The Real Open Source Hardware Revolution

I recently wrote about the latest iteration of the Open Source Hardware Definition, which provides a framework for crafting open hardware licences. It's a necessary and important step on the road towards creating a vibrant open source hardware movement. But the kind of open hardware that is commonly being made today – things like the hugely popular Arduino – is only the beginning.

On Open Enterprise blog.

15 July 2010

Free Access to the Sum of all Human Tarkovsky

One of the many things I love about Wikipedia is the underlying vision, as articulated by Jimmy Wales:

Imagine a world in which every single person on the planet is given free access to the sum of all human knowledge. That's what we're doing.

I love this because it really goes beyond just entries in Wikipedia; it's about making everything that *can* be made universally available - non-rivalrous, digital content, in other words - freely accessible for all.

It's one of the key reasons why I think copyright (and patents) need to go: they are predicated on stopping this happening - on *not* sharing what can be shared so easily.

In terms of how we might go beyond Wikipedia, here's the kind of thing I mean:

Andrei Tarkovsky (1932-1986) firmly positioned himself as the finest Soviet director of the post-War period. But his influence extended well beyond the Soviet Union. The Cahiers du cinéma consistently ranked his films on their top ten annual lists. Ingmar Bergman went so far as to say, “Tarkovsky for me is the greatest [director], the one who invented a new language, true to the nature of film, as it captures life as a reflection, life as a dream.” And Akira Kurosawa acknowledged his influence too, adding, “I love all of Tarkovsky’s films. I love his personality and all his works. Every cut from his films is a marvelous image in itself.”

Shot between 1962 and 1986, Tarkovsky’s seven feature films often grapple with metaphysical and spiritual themes, using a distinctive cinematic style. Long takes, slow pacing and metaphorical imagery – they all figure into the archetypical Tarkovsky film.

Thanks to the Film Annex, you can now watch Tarkovsky’s films online – for free.

Since Tarkovsky is one of my two favourite directors (Mizoguchi, since you ask), you can imagine how my heart leapt when I went to the main site and found not only those seven main films but various shorts and documentaries as well.

Imagine now, *every* film being freely available in this way, and every piece of music - of every genre - every picture, every book, every kind of knowledge, from every time and culture. Just imagine the possibilities for enriching people's lives (once they have the capability of accessing it, of course - a non-trivial pre-requisite.) Imagine the impact that would have on them, their families, their nations, and on the world. Now tell me why we should let copyright stop that happening.

Update: oh, what a surprise: some of the films have *already* disappeared because of "copyright issues". Because copyright is so much more important than letting everyone enjoy an artist's work. (Via Open Education News.)

Follow me @glynmoody on Twitter or identi.ca.

Realising the Dream of Open Source Hardware

The growing success of open source software has naturally spurred on others to apply its lessons elsewhere. Open content is perhaps the most famous translation, notably through the widely-used Creative Commons licences. But one of the most challenging domains to come up with something equivalent to the Open Source Definition (OSD) is hardware – not surprisingly, perhaps, since hardware is analogue, not digital, and hence very different in nature.

On Open Enterprise blog.

14 July 2010

Should the Music Industry Pay ISPs for Piracy?

In the wake of its “success” in pushing through the Digital Economy Act, the British music industry is hoping to move on to the next stage: using it as a lever to get more money out of the system (even though the music industry is currently thriving).

The UK royalties collector PRS For Music has just published a rough blueprint [.pdf] for how this might be done, entitled: “Moving Digital Britain Forward, without leaving Creative Britain behind”. It's a fascinating document, and merits close reading.

As the title suggests, there are essentially just two players in this analysis: the music industry, and the ISPs (the public are obviously irrelevant here). The ISPs are no longer lowly bit-mules, mindlessly obeying Net neutrality by conveying digital files hither and thither without a thought as to their content, but are to be regarded as “Next Generation Broadcasters”:

operators of networks that connect supply with demand in a market for media.

That's important, of course, because it reframes the debate about file-sharing in terms of old technology: radio and TV. It permits the argument to be made that such “broadcasters” have to pay for the privilege of broadcasting all that content – just like the radio and TV broadcasters do.

The paper makes a very good point about the increased capacity networks that are being built:

One of the few studies to be published comes from MoneySupermarket, who found that more than a third of consumers surveyed believe the advent of high-speed, next-generation broadband services would encourage greater piracy and make it easier to illegally download content. The report concluded that: ‘Illegal downloading is already a big problem for the likes of the music and film industries ... with superfast broadband packages set to become commonplace, the problem seems likely to get worse.’

I think that's true, but the analysis dismisses too easily the main reason for this:

Perhaps, like iTunes, these legal venues could increase the range of content on offer, but this increase comes at a high cost when already at a significant disadvantage to “free”.

That's a vicious circle: music companies won't offer more content to compete with free, unauthorised sites because it would cost too much, which means that there won't be as much authorised content as unauthorised, which means that people will continue to be forced to opt for unauthorised downloads - which music companies aren't willing to compete with.

The report even mentions iTunes, which backs up this view: once iTunes made available most of the content previously found only on unauthorised sites, it started raking the money in. And yet the report chooses to ignore this rare data point, and stick with its circularity – the reason being, it has a Cunning Plan. The ISPs – sorry, Next Generation Broadcasters – must pay:

If changes in the scale of unlicensed media can be measured, we can put a price on this spillover to bridge the value gap. Simply stated, at some date a price would be placed on the indexed measure of unlicensed media on ISP networks. If at a later date the measure of infringement increases, the value transferred (from ISP to rightsholders) would increase accordingly.

Conversely, were the measure of infringement to decrease, the amount transferred would decrease accordingly. The options for pricing such spillovers should be the subject of further research.

They should indeed: I think this is a splendid idea – if we could make just one tiny tweak.

For this to be fair, we must of course make sure that we capture all the effects of unauthorised file sharing so that its true economic effect is measured. That is, we shouldn't be measuring anything so crude and vague as the flow of allegedly unauthorised copyright materials across a network. After all, it's impossible to say whether some of that flow might represent permissible uses, and then there's the question of whether people would have bought the equivalent content, etc.

Instead, what needs to be ascertained are the knock-on economic effects of that file-sharing in the *real world*. And of course, one very important aspect that has to be included in that is the fact that those who share files buy more, not less, music. As Mike Masnick explains through a splendid series of links:

Study after study after study after study after study after study has shown the exact opposite -- noting that people who file share tend to be bigger music fans, and are more likely to spend on music.

So I think we should try out this report's suggestion that ISPs should pay for the consequences of their users' actions – provided the recording industry pays the ISPs if it should turn out (as those six reports linked to by Masnick might suggest) that file sharing actually *increases* the sales of recorded music. What could be fairer than that?
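Just to spell out the arithmetic of that tweak: the PRS mechanism amounts to a transfer of price × (change in the measured infringement index), and all my modification does is let that figure go negative. Here's a toy sketch in Python (purely illustrative - the function name and the figures are invented, not taken from the PRS paper):

    def spillover_transfer(price_per_point, index_before, index_after):
        # Value transferred from ISPs to rightsholders, as the paper proposes -
        # except that here a *fall* in the measured index sends money the
        # other way, from rightsholders back to the ISPs.
        return price_per_point * (index_after - index_before)

    # Invented numbers, purely for illustration:
    print(spillover_transfer(1000000, 100, 110))  # index rises 10 points: ISPs pay 10,000,000
    print(spillover_transfer(1000000, 100, 85))   # index falls 15 points: rightsholders repay 15,000,000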

Follow me @glynmoody on Twitter or identi.ca.

Richard Stallman on .NET, Mono and DotGNU

Last week I published a short correspondence I had with Richard Stallman on the subject of the GNU GPL and copyright. As I mentioned, that was from a couple of years ago, but I thought it might be worth posting now given the lively interest in the issues it raises.

On Open Enterprise.

12 July 2010

Time for Free Software to Square up to Foursquare

I've never been one to follow the latest digital fashions immediately. I didn't start blogging until November 2005; I only joined Twitter in January 2009, and identi.ca in May 2009. And so it is that I haven't joined Foursquare, or any of the other location-based social networks. That's partly because I like to wait, to see whether it's just a passing fad or something more enduring, and partly because I frankly haven't seen the point. Maybe it's about this:

On Open Enterprise blog.

Why Android's Victory is Inevitable

Arguably the most important development in the world of open source in the last year or two has been the rise and rise of Google's Linux-based Android operating system. It's true that the mobiles out there employing it are not 100% free, but they are considerably more free than the main alternatives. More importantly, they are turning Linux into a global, mass-market platform in a way never before seen.

On Open Enterprise blog.

11 July 2010

The Peculiar World of Private Label Rights

Here's a variety of "sharing" I'd not come across before: private label rights. This is what Wikipedia has to say on the subject:

Private label rights is a concept similar to reselling, but the merchant is permitted to modify the product to fit his or her needs. Typical PLR products are articles, reports, eBooks, and autoresponders. This kind of content is used for the purpose of allowing multiple buyers to invest in the content with free rein to alter and use it by claiming authorship of it. It is typically used in online affiliate marketing systems.

As far as I can make out, this is a kind of cross between spamblog content and pyramid selling.

One question that comes to mind is how much CC-licensed stuff ends up being passed around in this way. Of course, if the licence allows it, that's fine, but has anyone had any experience of their content being "repackaged" like this?

Follow me @glynmoody on Twitter or identi.ca.

09 July 2010

South Korea: Super Fast, and Finally Free

Imagine a country that has one of the best Internet infrastructures in the world, and yet its government effectively forbids the use of GNU/Linux through a requirement that everyone employ a decade-old Windows-only technology for many key online transactions. That country is South Korea, where 1 Gbit/second Internet connections are planned for 2012; and that Windows-only technology is ActiveX.

On The H Open.

Could Free Software Exist Without Copyright?

A couple of days ago, I was writing about how Richard Stallman's GNU GPL uses copyright as a way of ensuring that licensees share code that they distribute – because if they don't, they are breaching the GPL, and therefore lose their protection against claims of copyright infringement.

On Open Enterprise blog.

08 July 2010

Free Software Coder Bullied over *Algorithm*

As long-suffering readers of this blog will know, one of the many reasons I am against software patents is that software consists of algorithms, and algorithms are just maths, so a software patent is a patent on knowledge - the purest knowledge there is (a mathematician writes).

Sometimes defenders of software patents deny that software is just algorithms (don't ask me how, but some do). So I was particularly interested to read about this poor hacker being contacted over - you guessed it - algorithms, pure and simple:

Landmark Digital Services owns the patents that cover the algorithm used as the basis for your recently posted “Creating Shazam In Java”. While it is not Landmark’s intention to alienate those in the Open Source and Music Information Retrieval community, Landmark must request that you do not ship, deploy or post the code presented in your post. Landmark also requests that in the future you do not ship, deploy or post any portions or versions of this code in its current state or in any modified state.

As you can see, there is no way of disguising the fact that this claims to be a patent on an *algorithm* - that is, on maths, which is knowledge and therefore unpatentable.

But it gets worse. As the poor chap points out:

I've written some code (100% my own) and implemented my own methods for matching music. There are some key differences with the algorithm Shazam uses.

That is, he didn't copy the code, and it's not even the same approach.

But wait, there's more.

As he notes:

Why does Landmark Digital Services think they hold a patent for the concepts used in my code? Even if my code works pretty different from the Shazam code (from which the patents came).

What they describe in the patent is a system which:
1. Make a series of fingerprints of a media file and/or media sample
(such as audio, but could also be text, video, multimedia, etc)
2. Have a database/hashtable of fingerprints as lookup
3. Compare the set of hashtable hits using the moment in time at which each happened

This is very vague; basically the only innovative idea is matching the found fingerprints linearly in time, because the first two steps just describe how creating a hash and looking it up in a hashtable work. These concepts are neither new nor innovative.

Moreover:

I've also had contact with other people who have implemented this kind of algorithm. Most notable is Dan Ellis. His implementation can be found here: http://labrosa.ee.columbia.edu/~dpwe/resources/matlab/fingerprint/

He hasn't been contacted (yet), but he isn't planning on taking his MatLab implementation down anyway and has agreed for me to place the link here. This raises another interesting question: why are they targeting me, somebody who hasn't even published the code yet, and not the already-published implementation of Dan?!

And if they think it's illegal to explain the algorithm, why aren't they going after this guy? http://laplacian.wordpress.com/2009/01/10/how-shazam-works/

This is where I got the idea to implement the algorithm and it is mentioned in my own first post about the Java Shazam.

So, moving to that last site, we find a detailed analysis of the algorithm - which is all pretty obvious. How did he do that?

So I was curious how it worked, and luckily there is a paper [.pdf] written by one of the developers explaining just that. Of course they leave out some of the details, but the basic idea is exactly what you would expect: it relies on fingerprinting music based on the spectrogram.

In other words, the description of the algorithm by the company's programmers shows that it "is exactly what you would expect".
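To make concrete just how generic those ingredients are, here is a minimal sketch of the idea in Python (illustrative only: the peak-picking, the pair-hashing and all the names are my own simplifications, taken neither from the patent nor from either implementation). It fingerprints audio by hashing pairs of spectrogram peaks, stores the hashes in a hashtable, and declares a match when many hits line up at a single time offset:

    import numpy as np
    from collections import Counter, defaultdict

    FRAME = 1024  # samples per spectrogram frame

    def peaks(signal):
        # Step 1: crude fingerprint material - the loudest frequency bin per frame
        n = len(signal) // FRAME
        frames = signal[:n * FRAME].reshape(n, FRAME)
        spectra = np.abs(np.fft.rfft(frames, axis=1))   # magnitude spectrogram
        return np.argmax(spectra, axis=1)               # one peak per frame

    def fingerprints(signal):
        # Hash pairs of nearby peaks: (bin1, bin2, frames apart) -> time of pair
        p = peaks(signal)
        for t in range(len(p) - 1):
            for dt in range(1, min(10, len(p) - t)):
                yield (p[t], p[t + dt], dt), t

    def build_index(tracks):
        # Step 2: a hashtable mapping each fingerprint to its (track, time) hits
        table = defaultdict(list)
        for name, signal in tracks.items():
            for h, t in fingerprints(signal):
                table[h].append((name, t))
        return table

    def match(table, sample):
        # Step 3: compare hits by their moment in time - a match is many hits
        # sharing one offset, i.e. fingerprints aligned linearly in time
        votes = Counter()
        for h, t in fingerprints(sample):
            for name, t_track in table.get(h, []):
                votes[(name, t_track - t)] += 1
        return votes.most_common(1)[0][0][0] if votes else None

Index a few tracks with build_index, feed match a clip cut from the middle of one of them, and the true source wins, because only it produces a large cluster of hits at one consistent time offset - which is essentially all that the patent's three steps describe.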

At every level, then, this is an obvious, algorithmic, mathematical approach. And yet someone in Holland - a country that doesn't recognise software patents at all - finds himself under pressure in this manner for some code he wrote independently implementing that general, algorithmic mathematical idea.

Now explain to me how patents promote innovation, please...

Update: Re-reading the post I realise that things are even more ridiculous. Here's what the company wants:

we would like you to refrain from releasing the code at all and to remove the blogpost explaining the algorithm.

Now, you recall that the algorithm is the thing that the company claims to have a patent on. The original idea behind a patent was that in return for its grant, the inventor would *reveal* all the details of his or her invention so that others could use it once the patent had expired, as a quid pro quo. So if the company claims a patent on its invention, it must *by definition* reveal the algorithm.

Against that background, this demand to remove an explanation of the algorithm is simply absurd, and contradicts the very nature of a patent - it's like asking the USPTO not to reveal the patents it grants.

Follow me @glynmoody on Twitter or identi.ca.

Act Now on ACTA (No, Really)

Today is the last day for your MEPs to sign Written Declaration 12/2010 on ACTA (full background available from La Quadrature du Net). To be precise, we have until 11am UK time to convince them to add their names to the list.

On Open Enterprise blog.

07 July 2010

Are the Creative Commons Licences Valid?

As readers of this blog will doubtless know, Richard Stallman's great stroke of genius at the founding of the GNU project was to use copyright when crafting the GNU GPL licence, but in such a way that it undermined the restrictive monopoly copyright usually imposes on users, and required people to share instead.

On Open Enterprise blog.

Exploring Entitlement Economics

Bradley M. Kuhn has a thought-provoking post with the title "Proprietary Software Licensing Produces No New Value In Society". Here's a key section:

I've often been paid for programming, but I've been paid directly for the hours I spent programming. I never even considered it reasonable to be paid again for programming I did in the past. How is that fair, just, or quite frankly, even necessary? If I get a job building a house, I can't get paid every day someone uses that house. Indeed, even if I built the house, I shouldn't get a royalty paid every time the house is resold to a new owner. Why should software work any differently? Indeed, there's even an argument that software, since it's so much more trivial to copy than a house, should be available gratis to everyone once it's written the first time.

He then goes on to point out:

Thus, this line of reasoning gives me yet another reason to oppose proprietary software: proprietary licensing is simply a valueless transaction. It creates a burden on society and gives no benefit, other than a financial one to those granted the monopoly over that particular software program. Unfortunately, there nevertheless remain many who want that level of control, because one fact cannot be denied: the profits are larger.

For example, Mårten Mickos recently argued in favor of these sorts of large profits. He claims that to "benefit massively from Open Source" (i.e., to get really rich), business models like “Open Core” are necessary. Mårten's argument, and indeed most pro-Open-Core arguments, rely on this following fundamental assumption: for FLOSS to be legitimate, it must allow for the same level of profits as proprietary software. This assumption, in my view, is faulty. It's always true that you can make bigger profits by ignoring morality. Factories can easily make more money by completely ignoring environmental issues; strip mining is always very profitable, after all. However, as a society, we've decided that the environment is worth protecting, so we have rules that do limit profit maximization because a more important goal is served.

This analysis is cognate with my recent post about the absence of billion-dollar turnover open source companies: the fact is, as a pure-play free software outfit, you just can't make as much money as you can with proprietary software, because you generally have to sell scarce things like people's time, and that doesn't scale.

But the implications of this point are much wider, I think.

As Kuhn emphasises:

I'll just never be fully comfortable with the idea that workers should get money for work they already did. Work is only valuable if it produces something new that didn't exist in the world before the work started, or solves a problem that had yet to be solved. Proprietary licensing and financial bets on market derivatives have something troubling in common: they can make a profit for someone without requiring that someone to do any new work. Any time a business moves away from actually producing something new of value for a real human being, I'll always question whether the business remains legitimate.

This idea of getting money for work already done is precisely how copyright is regarded these days. It's not enough for a creator to be paid once for his or her work: they want to be paid every time it is performed, or copies are made of performances.

So ingrained is this idea that anyone suggesting the contrary - like that doughty young Eleanor - is regarded as some kind of alien from another planet, and is mocked by those whose livelihoods depend upon this kind of entitlement economics.

But just as open source has cut down the fat profits of proprietary software companies, so eventually will the exorbitant profits of the media industry be cut back to reasonable levels based on how much work people do - because, as Kuhn notes, there really is no justification for anything more.

Follow me @glynmoody on Twitter or identi.ca.

06 July 2010

Open Source: It's all LinkedIn

As I noted in my post “Why No Billion-Dollar Open Source Companies?”, one of the reasons there are no large pure-play open source companies is that their business model is based on giving back to customers most of the costs the latter have traditionally paid to software houses.

On Open Enterprise blog.

05 July 2010

Jim Whitehurst is CEO and Chief Plumber at Red Hat

Jim Whitehurst, president and CEO of Red Hat, the oldest and by far the most successful company whose business is based purely around open source, makes no bones about it: “Selling free software is hard,” he says. In fact, he goes further: “Open source is not a business model; it's a way to develop software.”

On CIO.co.uk.

WWW: World Wide Wikipedia

I love Wikipedia. I love using it, frequently spending many a spare minute (that I don't actually have) simply wandering from one entry to another, learning things I never knew I never knew. I love it, too, as an amazing example of why sharing and openness work. For those who aren't programmers, and who therefore don't grok the evident rightness of the open source methodology, Wikipedia is a great way of explaining how it's done and why it's so good.

On Open Enterprise blog.

Welcome to Open Source Law

Since, as Larry Lessig famously pointed out, "code is law" (and vice versa), it's natural to try to apply open source methodologies in the legal world. Indeed, a site called Openlaw existed ten years ago:


Openlaw is an experiment in crafting legal argument in an open forum. With your assistance, we will develop arguments, draft pleadings, and edit briefs in public, online. Non-lawyers and lawyers alike are invited to join the process by adding thoughts to the "brainstorm" outlines, drafting and commenting on drafts in progress, and suggesting reference sources.

Building on the model of open source software, we are working from the hypothesis that an open development process best harnesses the distributed resources of the Internet community. By using the Internet, we hope to enable the public interest to speak as loudly as the interests of corporations. Openlaw is therefore a large project built through the coordinated effort of many small (and not so small) contributions.

Despite this long pedigree, open source law never really took off - until now. As this important post points out:

The case of British Chiropractic Association v Simon Singh was perhaps the first major English case to be litigated under the full glare of the internet. This did not just mean that people merely followed the case’s progress on blogs and messageboards: the role of the internet was more far-reaching than this.


Crucially:

The technical evidence of a claimant in a controversial case had simply been demolished - and seen to be demolished - not by the conventional means of contrary expert evidence and expensive forensic cross-examination, but by specialist bloggers. And there is no reason why such specialist bloggers would not do the same in a similar case.

The key thing is that those bloggers need to be engaged by the case - this isn't going to happen for run-of-the-mill litigation. But that's OK: it means that when something important is at stake - as in the Singh case - and their help is most needed, they *will* be engaged, and that wonderful digital kraken will stir again.

Follow me @glynmoody on Twitter or identi.ca.

02 July 2010

An (Analogue) Artist's Reply to Just Criticism

There's a new meme in town these days: “rights of the artists”. The copyright industries have worked out that cries for more copyright and more money don't go down too well when they come from fat-cat monopolists sitting in their plush offices, and so have now redefined their fight in terms of struggling artists (who rarely get to see much benefit from constantly extended copyright).

Here's a nice example courtesy of the Copyright Alliance – an organisation that very much pushes that line:

Songwriter Jason Robert Brown recently posted on his blog a story about his experience dealing with copyright infringement. Knowing for a long time that many websites exist for the sole purpose of “trading” sheet music, Jason decided to log on himself and politely ask many of the users to stop “trading” his work. While many quickly wrote back apologizing and then removing his work, one girl in particular gave Jason a hard time.

First of all, I must commend Mr Brown for the way he has gone about addressing this issue. As he explains on his blog, this is the message he sent to those who were offering sheet music of his compositions on a site:

Hey there! Can I get you to stop trading my stuff? It's totally not cool with me. Write me if you have any questions, I'm happy to talk to you about this. jason@jasonrobertbrown.com

Thanks,
J.

Now, that seems to me an eminently calm and polite request. Given that he obviously feels strongly about this matter, Mr Brown deserves kudos for that. As he explains:

The broad majority of people I wrote to actually wrote back fairly quickly, apologized sincerely, and then marked their music "Not for trade."

However, he adds:

there were some people who fought back. And I'm now going to reproduce, entirely unexpurgated, the exchange I had with one of them.

Her email comes in to my computer as "Brenna," though as you'll see, she hates being called Brenna; her name is Eleanor. I don't know anything about her other than that, and the fact that she had an account on this website and was using it to trade my music. And I know she is a teenager somewhere in the United States, but I figured that out from context, not from anything she wrote.

After some initial distrust, the conversation starts to get interesting, and it turns out that Eleanor, although just a teenager, has a pretty good grasp of how digital abundance can help artists make money:

Let's say Person A has never heard of "The Great Jason Robert Brown." Let's name Person A "Bill." Let's say I find the sheet music to "Stars and the Moon" online and, since I was able to find that music, I was able to perform that song for a talent show. I slate saying "Hi, I'm Eleanor and I will be performing 'Stars and the Moon' from Songs for a New World by Jason Robert Brown." Bill, having never heard of this composer, doesn't know the song or the show. He listens and decides that he really likes the song. Bill goes home that night and downloads the entire Songs for a New World album off iTunes. He also tells his friend Sally about it and they decide to go and see the show together the next time it comes around. Now, if I hadn't been able to get the sheet music for free, I would have probably done a different song. But, since I was able to get it, how much more money was made? This isn't just a fluke thing. It happens. I've heard songs at talent shows or in theatre final exams and decided to see the show because of the one song. And who knows how they got the music? It may have been the same for them and if they hadn't been able to get it free, they would have done something else.

Which is, of course, absolutely spot on.

Mr Brown tries to explain why he disagrees using three stories. The first is about lending a screwdriver to a friend, who then refuses to give it back:

He insists that he has the right to take my screwdriver, build his house, then keep that screwdriver forever so he can build other people's houses with it. This seems unfair to me.

And he's right of course: it *is* unfair, because he has lost his screwdriver, which is an analogue, and therefore rivalrous, object. His sheet music, by contrast, in its digital form, is non-rivalrous: I can have a copy without taking his copy. Yes, there's the issue of whether he loses out, but as Eleanor pointed out, sharing sheet music is a good way to drive sales – it's marketing.

The second story concerns lending another friend a first edition copy of a book by Thornton Wilder; once again, the friend refuses to give it back:

Two months go by; there's a big hole on my bookshelf where "The Bridge of San Luis Rey" is supposed to go. I call my friend, ask him for my book back. He comes over and says, "I love this book, yo. Make me a copy!"

Again, we have the analogue element: this is a rivalrous object, and when the friend has it, poor Mr Brown doesn't have it. But there's another idea here: making copies:

the publishing company won't be able to survive if people just make copies of the book, I say, and the Thornton Wilder estate certainly deserves its share of the income it earns when people buy the book.

Here, the important thing to note is that people *can't* “just make copies of the book”. Yes, they can photocopy it, but that's certainly not the same as a first edition, which is not only rare, but comes with a very particular history. Even if you photocopied the text in order to get to know it, it wouldn't detract from the value of the first edition, which is a rare, rivalrous analogue object. And the Thornton Wilder estate has *already* been paid for the first edition, so there's no reason why they should expect to be paid again if a photocopy is made. And once more, sharing photocopies is likely to drive *more* sales of new editions – which will produce income for the estate.

The third story is even more revealing:

I bought a fantastic new CD by my friend Michael Lowenstern. I then ripped that CD on to my hard drive so I can listen to it on my iPod in my car. Well, that's not FAIR, right? I should have to buy two copies?

No. There is in fact a part of the copyright law that allows exactly this; it's called the doctrine of fair use. If you've purchased or otherwise legally obtained a piece of copyrighted material and you want to make a copy of it for your own use, that's perfectly legal and allowed.

And Mr Brown is absolutely correct – in the *US*. But here in the UK, I have no such right. So what seems self-evidently right to Mr Brown in the US, is in fact wrong in the UK. The reason for that is absolutely central to the whole argument here: the balance between the rights of the creators and the rights of the users is actually arbitrary: different jurisdictions place it at different points, as Mr Brown's example shows.

In fact, Eleanor touched on this in another amazingly perceptive comment:

I assume that because something that good comes from something so insignificantly negative, it's therefore mitigated.

The “something good” that she's talking about includes things like this:

Would it be wrong for me to make a copy of some sheet music and give it to a close friend of mine for an audition? Of course not.

What she is saying is that in weighing up the creator's rights and the user's rights, things have changed in the transition from analogue to digital. Making a copy of a digital object is a minimal infraction of the creator's rights – because nothing is stolen, just created – but brings huge collective benefits for users. And so we need to recalibrate the balance that lies at the heart of copyright to reflect that fact.

As Mr Brown's examples consistently show, he is still thinking along the old, analogue lines with rivalrous goods that can't be shared. We are entering an exciting new digital world where objects are non-rivalrous, and can be copied infinitely. Not surprisingly, the benefits to society that accrue as a result easily outweigh any nominal loss on the creator's part. That's why we need to ignore these calls to our conscience to think about the poor creator – even one as pleasant and sympathetic as Mr Brown – because they omit the other side of the equation: the other six billion people who form the rest of the world.

Follow me @glynmoody on Twitter or identi.ca.

Time for some Digital Economy Act Economy

Here's a hopeful sign:

We're working to create a more open and less intrusive society. We want to restore Britain’s traditions of freedom and fairness, and free our society of unnecessary laws and regulations – both for individuals and businesses.

On Open Enterprise blog.

01 July 2010

Moving Firefox Fourwards

I last interviewed Mozilla Europe's Tristan Nitot a couple of years ago. Yesterday, I met up with him again, and caught up with the latest goings-on in the world of Firefox.

On Open Enterprise blog.

29 June 2010

Botching Bilski

So, the long-awaited US Supreme Court ruling on Bilski v. Kappos has appeared – and it's a mess. Where many hoped fervently for some clarity to be brought to the ill-defined rules for patenting business methods and software in the US, the court instead was timid in the extreme. It confirmed the lower court's decision that the original Bilski business method was not patentable, but did little to limit business method patents in general. And that, by extension, meant there were no major implications for software patents in the US.

On Open Enterprise blog.

28 June 2010

Has Oracle Been a Disaster for Sun's Open Source?

Companies based around open source are still comparatively young. So it remains an open question what happens to them in the long term. As open source becomes more widely accepted, an obvious growth path for them is to be bought by a bigger, traditional software company. The concern then becomes: how does the underlying open source code fare in those circumstances?

On The H Open.

Microsoft Attacks, By and With the Numbers

There's a nice piece of work by Charles Arthur in The Guardian today that puts a fascinating post from one of Microsoft's top PR people under the microscope. It's all well worth reading, but naturally the following numbers from the memo and Arthur's analysis were of particular interest:

On Open Enterprise blog.

25 June 2010

Let's Make "The Open University" Truly Open

Interesting:

The Open University (OU) is now a certified Microsoft IT Academy adding to its fast-growing suite of IT vendor certifications.

The first course in the OU's Microsoft IT Academy programme, TM128 Microsoft server technologies, launches in October 2010. The course, purpose-designed by the OU, covers both the fundamentals of computer networks and the specifics of how Windows server technologies can be used practically. Registration is now open for the 30-credit Level 1 module.

Microsoft server technologies will form part of the requirement for both Microsoft Certified System Engineer (MCSE) and Microsoft Certified System Administrator (MCSA) programmes, and forms part of the pathway to MCITP (Microsoft Certified IT Professional) certification. The course can also be counted towards an Open University modular degree.

Naturally, offering such courses about closed-source software is an important part of providing a wide range of information and training. And I'm sure there will be similar courses and qualifications for open source programs.

After all, free software not only already totally dominates areas like supercomputers, the Internet and embedded systems, but is also rapidly gaining market share in key sectors like mobile, so it would obviously make sense to offer plenty of opportunities for students to study and work with the operating system of the future, as well as that of the past.

That's true for all academic establishments offering courses in computing, but in the case of the Open University, even-handedness assumes a particular importance because of the context:

The Open University has appointed a Microsoft boss to be its fifth vice-chancellor.

Martin Bean is currently general manager of product management, marketing and business development for Microsoft's worldwide education products group.

I look forward to hearing about all the exciting new courses and certifications - Red Hat and Ubuntu, maybe? (Via @deburca.)

Follow me @glynmoody on Twitter or identi.ca.

Say "No" to Net Neutrality Nuttiness

I'll admit it: watching the debates about net neutrality in the US, I've always felt rather smug. Not for us sensible UK chappies, I thought, the destruction of what is one of the key properties of the Internet. No daft suggestions that big sites like Google should pay ISPs *again* for the traffic that they send out – that is, in addition to the money they and we fork over for the Internet connections we use. And now we have this:

On Open Enterprise blog.

Those that Live by the DMCA....

This was a pleasant surprise, a *summary* judgment against Viacom in favour of Google:

Today, the court granted our motion for summary judgment in Viacom’s lawsuit with YouTube. This means that the court has decided that YouTube is protected by the safe harbor of the Digital Millennium Copyright Act (DMCA) against claims of copyright infringement. The decision follows established judicial consensus that online services like YouTube are protected when they work cooperatively with copyright holders to help them manage their rights online.

On Open Enterprise blog.

24 June 2010

The Copyright Debate's Missing Element

There is certainly no lack of debate about copyright, and whether it promotes or hinders creativity. But in one important respect, that debate has been badly skewed, since it has largely discussed creativity in terms of pre-digital technologies. And even when digital methods are mentioned, there is precious little independent research to draw upon.

That makes the following particularly significant:

Doctoral research into media education and media literacy at the University of Leicester has highlighted how increased legislative control on use of digital content could stifle future creativity.

The Digital Economy Act 2010 alongside further domestic and global legislation, not least the ongoing ‘Anti-Counterfeiting Trade Agreement (ACTA)’, combines to constitute a very hard line against any form of perceived copyright infringement.

Research implies that these pieces of legislation could stifle the creative opportunities for youngsters, with tough regulation on digital media restricting young people’s ability to transform copyrighted material for their own personal and, more importantly, educational uses.

The key phrase here is "young people", because they are using content, including copyrighted materials, in quite different ways from traditional creators. As the researcher commented:

“There is a growing risk that creativity in the form of mash-ups, remixes and parodies will be stifled by content producers. With no clear ‘fair use’ policy, even when it comes to educational media production we are in danger of tainting many young people’s initial encounters with the law."

The current approach, embodied in the Digital Economy Act and elsewhere, risks not only stifling the younger generation's creativity, but alienating them completely from any legislation that touches on it. (Via @Coadec.)

Follow me @glynmoody on Twitter or identi.ca.

Can the CodePlex Foundation Free itself from Microsoft?

One of the most fascinating strands in the free software story has been Microsoft's interactions with it. To begin with, the company simply tried to dismiss it, but when it became clear that free software was not going away, and that more companies were switching to it, Microsoft was forced to take it more seriously.

On Open Enterprise blog.

21 June 2010

Copyright Ratchet, Copyright Racket

I can't believe this.

A few days ago I wrote about the extraordinary extra monopolies the German newspaper industry wanted - including an exemption from anti-cartel laws. I also noted:


And make no mistake: if Germany adopts this approach, there will be squeals from publishers around the world demanding "parity", just as there have been with the term of copyright. And so the ratchet will be turned once more.

And what do we find? Why, exactly the same proposals turn up *already* in an FTC "Staff Discussion Draft" [.pdf], which tries to come up with ways to solve the newspaper industry's "problem" without actually addressing the key issue: that people are accessing information online in new ways these days. The document looks at some of the proposed "solutions", which come from the industry, which wants - of course - more monopoly powers:

Internet search engines and online news aggregators often use content from news organizations without paying for that use. Some news organizations have argued that existing intellectual property (IP) law does not sufficiently protect their news stories from free riding by news aggregators. They have suggested that expanded IP rights for news stories would better enable news organizations to obtain revenue from aggregators and search engines.

And:

Advocates argue “the copyright act allows parasitic aggregators to ‘free ride’ on others’ substantial journalistic investments,” by protecting only expression and not the underlying facts, which are often gathered at great expense.

...

They suggest that federal hot news legislation could help address revenue problems facing newspapers by preventing this free-riding.

Moreover, like the German publishers, they also want a Get Out of Jail Free card as far as anti-trust is concerned:

Some in the news industry have suggested that an antitrust exemption is necessary to the survival of news organizations and point to the NPA as precedent for Congress to enact additional protections from the antitrust laws for newspapers. For example, one public comment recommended “the passage of a temporary antitrust exemption to permit media companies to collaborate in the public interest”.

Got that? An anti-trust exemption that would allow newspapers to operate as a cartel *in the public interest*. George Orwell would have loved it.

Follow me @glynmoody on Twitter or identi.ca.

Globish, Glanglish and Google Translate

There's a new book out about the rise and use of a globalised English, dubbed "Globish":

Globish is a privatised lingua franca, a commercially driven “world language” unencumbered by the utopian programme of Esperanto. As taught by Nerrière’s enterprise, it combines the coarseness of a distended phrase book and the formulaic optimism of self-help texts – themselves a genre characterised by linguistic paucity, catchphrases and religiose simplicity.

I won't be buying it, mostly because I wrote about the rise and use of a globalised English, dubbed "Glanglish", over 20 years ago. It formed the title essay of a book called, with stunning originality, "Glanglish." This is what I wrote:

English has never existed as a unitary language. For the Angles and the Saxons it was a family of siblings; today it is a vast clan in diaspora. At the head of that clan is the grand old matriarch, British English. Rather quaint now, like all aristocrats left behind by a confusing modern world, she nonetheless has many points of historical interest. Indeed, thousands come to Britain to admire her venerable and famous monuments, preserved in the verbal museums of language schools. Unlike other parts of our national heritage, British English is a treasure we may sell again and again; already the invisible earnings from this industry are substantial, and they are likely to grow as more and more foreigners wish at least to brush their lips across the Grande Dame's ring.

One group unlikely to do so are the natural speakers of the tongue from other continents. Led by the Americans, and followed by the Australians, the New Zealanders and the rest, these republicans are quite content to speak English - provided it is their English. In fact it is likely to be the Americans' English, since this particular branch of the family tree is proving to be the most feisty in its extension and transformation of the language. Even British English is falling in behind - belatedly, and with a rueful air; but compared to its own slim list of neologisms - mostly upper-class twittish words like 'yomping' - Americanese has proved so fecund in devising new concepts that its sway over English-thinking minds is assured.

An interesting sub-species of non-English English is provided by one of the dialects of modern India. Indian English is not a truly native tongue, if only for historical reasons; and yet it is no makeshift second language. Reading the 'Hindu Times', it is hard to pin down the provenance of the style: with its orotundities and its 'chaps' it is part London 'Times' circa 1930; with its 'lakhs' it is part pure India.

Whatever it is, it is not to be compared with the halting attempts at English made by millions - perhaps billions soon - whose main interest is communication. Although a disheartening experience to hear for the true-blue Britisher, this mangled, garbled and bungled English is perhaps the most exciting. For from its bleeding hunks and quivering gobbets will be constructed the first and probably last world language. Chinese may have more natural speakers, and Spanish may be gaining both stature and influence, but neither will supersede this mighty mongrel in the making.

English is so universally used as the medium of international linguistic exchange, so embedded in supranational activities like travel - all pilots use English - and, even more crucially, so integral to the world of business, science and technology - money may talk, but it does so in English, and all computer programs are written in that language - that no amount of political or economic change or pressure will prise it loose. Perhaps not even nuclear Armageddon: Latin survived the barbarians. So important is this latest scion of the English stock, that it deserves its own name; and if the bastard brew of Anglicised French is Franglais, what better word to celebrate the marriage of all humanity and English to produce tomorrow's global language than the rich mouthful of 'Glanglish'?

Twenty years on, I now think that the reign of Glanglish/Globish will soon draw to a close, but not because something else will take its place.

The obvious candidate, Chinese, suffers from a huge problem: linguistic degeneracy. By which I mean that a single word - "shi", say - corresponds to over 70 different concepts if you ignore the tones. Even if you can distinguish clearly between the four tones - which few beginners can manage with much consistency - a spoken "shi" will still be much harder to interpret than a similarly mangled English word, especially for non-native speakers. This makes it pretty useless as a lingua franca, which needs to be both easy to acquire and easy to understand, even by novices.
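To see what that degeneracy looks like in practice, here's a toy sketch; the handful of words is my own hand-picked sample, purely for illustration:

    # Toy illustration of the "shi" problem: strip the tones from a few
    # real Mandarin words and count how many distinct meanings collide
    # on the same bare syllable. The word list is a tiny hand-picked
    # sample, not a real dictionary, which collapses dozens more.
    from collections import defaultdict

    words = [
        ("shì", "to be"), ("shí", "ten"), ("shí", "time"),
        ("shī", "teacher"), ("shí", "stone"), ("shī", "poem"),
        ("shì", "matter"), ("shì", "market"), ("shǐ", "history"),
    ]

    def strip_tones(syllable):
        # Map each tone-marked vowel back to its bare letter.
        tone_map = str.maketrans("āáǎàēéěèīíǐìōóǒòūúǔù",
                                 "aaaaeeeeiiiioooouuuu")
        return syllable.translate(tone_map)

    by_toneless = defaultdict(list)
    for syllable, meaning in words:
        by_toneless[strip_tones(syllable)].append(meaning)

    for syllable, meanings in by_toneless.items():
        print(syllable, len(meanings), meanings)
    # Even this nine-word sample collapses onto the single syllable
    # "shi"; without the tones, only context can disambiguate.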

But something is happening that I hadn't allowed for two decades ago: machine translation. Just look at Google Translate, which I use quite a lot to provide rough translations of interesting stuff that I find on non-English-language sites. It's pretty good, getting better - and free. I'm sure that Google is working on producing something similar for spoken language: imagine what a winner Google Voice Translate for Android would be.
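For what it's worth, scripting such rough translations is easy. Here's a minimal sketch against Google's Translate v2 REST endpoint - the key is a placeholder, and the endpoint and response shape are my assumptions about the service, so treat it as illustrative rather than definitive:

    # Fetch a rough machine translation, auto-detecting the source
    # language. Assumes the Translate v2 REST API and a valid API key;
    # error handling and batching are omitted for brevity.
    import requests

    API_KEY = "YOUR_API_KEY"  # hypothetical placeholder
    ENDPOINT = "https://translation.googleapis.com/language/translate/v2"

    def rough_translation(text, target="en"):
        response = requests.get(ENDPOINT, params={
            "key": API_KEY,
            "q": text,
            "target": target,
        })
        response.raise_for_status()
        return response.json()["data"]["translations"][0]["translatedText"]

    print(rough_translation("Les standards ouverts comptent."))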

So instead of Globish or Glanglish, I think that increasingly people will simply speak their own language, and let Google et al. handle the rest. In a way, that's great, because it will allow people to communicate directly with more or less anyone anywhere. But paradoxically it will probably also lead to people becoming more parochial and less aware of cultural differences around the globe, since few will feel the need to undergo that mind-expanding yet humbling experience of trying to learn a foreign language - not even Glanglish.

Follow me @glynmoody on Twitter or identi.ca.

Something in the Air: the Open Source Way

One of the most vexed questions in climate science is modelling. Much of the time the crucial task is predicting what will happen based on what has happened, and that depends critically on your model. Improving the robustness of such models is therefore important, and the coding practices employed obviously feed into that.
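To make the "predict from the past" pattern concrete, here's a deliberately toy sketch - a world away from a real climate model, which is a vast physical simulation, but it shows how the code is the model, so a bug in the code is a bug in the prediction:

    # Toy "model": fit a linear trend to past observations and
    # extrapolate. The synthetic data and the method are illustrative
    # only - nothing like the physics-based models discussed below.
    import numpy as np

    years = np.arange(1980, 2010)
    rng = np.random.default_rng(42)
    # Synthetic observations: a gentle warming trend plus noise.
    temps = 14.0 + 0.02 * (years - 1980) + rng.normal(0, 0.1, years.size)

    slope, intercept = np.polyfit(years, temps, 1)  # least-squares fit
    for future in (2020, 2030):
        print(future, round(slope * future + intercept, 2), "C (extrapolated)")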

Here's a slightly old but useful paper [.pdf] that deals with just this topic:

Climate scientists build large, complex simulations with little or no software engineering training, and do not readily adopt the latest software engineering tools and techniques. In this paper, we describe an ethnographic study of the culture and practices of climate scientists at the Met Office Hadley Centre. The study examined how the scientists think about software correctness, how they prioritize requirements, and how they develop a shared understanding of their models. The findings show that climate scientists have developed customized techniques for verification and validation that are tightly integrated into their approach to scientific research. Their software practices share many features of both agile and open source projects, in that they rely on self-organisation of the teams, extensive use of informal communication channels, and developers who are also users and domain experts. These comparisons offer insights into why such practices work.

It would be interesting to know whether the adoption of elements of the open source approach was a conscious decision, or just evolved.

Follow me @glynmoody on Twitter or identi.ca.

20 June 2010

Should Retractions be Behind a Paywall?

"UN climate panel shamed by bogus rainforest claim", so proclaimed an article in The Times earlier this year. It began [.pdf]:

A STARTLING report by the United Nations climate watchdog that global warming might wipe out 40% of the Amazon rainforest was based on an unsubstantiated claim by green campaigners who had little scientific expertise.

Well, not so unsubstantiated, it turns out: The Times has just issued a pretty complete retraction - you can read the whole thing here. But what interests me in this particular case is not the science, but the journalistic aspect.

Because if you went to The Times site to read that retraction, you would, of course, be met by the stony stare of its paywall (assuming you haven't subscribed). Which means that I - and, I imagine, many people who read the first, inaccurate Times story - can't read the retraction there. Had it not been for the fact that, among the many climate change sites (on both sides) that I read, there was this one with a copy, I might never have known.

So here's the thing: if a story has appeared on the open Web and needs to be retracted, do newspapers like The Times have a duty to post that retraction in the open, or is it acceptable to leave it behind the paywall?

Answers on the back of yesterday's newspaper...

Follow me @glynmoody on Twitter or identi.ca.

Open Source Scientific Publishing

Since one of the key ideas behind this blog is to explore the application of the open source approach to other fields, I was naturally rather pleased to come across the following:

As a software engineer who works on open source scientific applications and frameworks, when I look at this, I scratch my head and wonder "why don't they just do the equivalent of a code review"? And that's really where the germ of the idea behind this blog posting started. What if the scientific publishing process were more like an open source project? How would the need for peer-review be balanced with the need to publish? Who should bear the costs? Can a publishing model be created that minimizes bias and allows good ideas to emerge in the face of scientific groupthink?

It's a great question, and the post goes some way to sketching out how that might work in practice. It also dovetails nicely with my earlier post about whether we need traditional peer review anymore. Well worth reading.

Follow me @glynmoody on Twitter or identi.ca.

19 June 2010

Open Source: A Question of Evolution

I met Matt Ridley once, when he was at The Economist, and I wrote a piece for him (I didn't repeat the experience because their fees at the time were extraordinarily ungenerous). He was certainly a pleasant chap in person, but I have rather mixed feelings about his work.

His early book "Genome" is brilliant - a clever promenade through our chromosomes, using the DNA and its features as a framework on which to hang various fascinating facts and figures. His latest work, alas, seems to have gone completely off the rails, as this take-down by George Monbiot indicates.

Despite that, Ridley is still capable of some valuable insights. Here's a section from a recent essay in the Wall Street Journal, called "Humans: Why They Triumphed":

the sophistication of the modern world lies not in individual intelligence or imagination. It is a collective enterprise. Nobody—literally nobody—knows how to make the pencil on my desk (as the economist Leonard Read once pointed out), let alone the computer on which I am writing. The knowledge of how to design, mine, fell, extract, synthesize, combine, manufacture and market these things is fragmented among thousands, sometimes millions of heads. Once human progress started, it was no longer limited by the size of human brains. Intelligence became collective and cumulative.

In the modern world, innovation is a collective enterprise that relies on exchange. As Brian Arthur argues in his book "The Nature of Technology," nearly all technologies are combinations of other technologies and new ideas come from swapping things and thoughts.

This is, of course, a perfect description of the open source methodology: re-using and building on what has gone before, combining the collective intelligence of thousands of hackers around the world through a culture of sharing. Ridley's comment is another indication of why anything else just hasn't made the evolutionary jump.

Follow me @glynmoody on Twitter or identi.ca.

18 June 2010

German Publishers Want More Monopoly Rights

Here's an almost unbelievable piece about what's happening in Germany right now:

It looks as if publishers might really be lobbying for obtaining a new exclusive right conferring the power to monopolise speech e.g. by assigning a right to re-use a particular wording in the headline of a news article anywhere else without the permission of the rights holder. According to the drafts circulating in the internet, permission shall be obtainable exclusively by closing an agreement with a new collecting society which will be founded after the drafts have matured into law. Depending on the particulars, new levies might come up for each and every user of a PC, at least if the computer is used in a company for commercial purposes.

Well, obtaining monopoly protection for sentences and even parts of sentences in a natural language appears to be some kind of very strong meat. This would mean that publishers can control the wording of news messages. This comes crucially close to private control on the dissemination of facts.

But guess what? Someone thinks that German publishers aren't asking for *enough*, as the same article explains:

Mr Castendyk concludes that even if the envisaged auxiliary copyright protection for newspaper language enters into law, the resulting additional revenue streams probably would be insufficient to rescue the publishing companies. He then goes a step further and postulates that publishing companies enjoy a quasi-constitutional guarantee due to their role in society, insofar as the state has the obligation to maintain the conditions for their existence forever.

...

Utilising the leveraging effect of this postulated quasi-constitutional guarantee, Castendyk demands amendments to cartel law in order to enable a global 'pooling' of all exclusive rights of all newspaper publishers in Germany, so as to block any attempt by a single competitor to defect from the paywall cartel, as discussed above.

This is a beautiful demonstration of a flaw at the heart of copyright: whenever an existing business model based around a monopoly starts to fail, the reflexive approach is to demand yet more monopolies in an attempt to shore it up. And the faster people point out why that won't solve the problem, the faster the demands come for even more oppressive and unreasonable legislation to try to head off those issues.

And make no mistake: if Germany adopts this approach, there will be squeals from publishers around the world demanding "parity", just as there have been with the term of copyright. And so the ratchet will be turned once more.

Follow me @glynmoody on Twitter or identi.ca.

EU's Standard Failure on Standards

Let's be frank: standards are pretty dull; but they are also important as technological gatekeepers. As the shameful OOXML saga showed, gaining the stamp of approval can be so important that some are prepared to adopt practically any means to achieve it; similarly, permitting supposedly open standards to include technologies that companies claim are patented can shut out open source implementations completely.

Against that background, the new EU report “Standardization for a competitive and innovative Europe: a vision for 2020” [.pdf] is a real disappointment. For a document that purports to look forward a decade, failing even to mention “open source” (as far as I can tell) is an indication of just how old-fashioned and reactionary it is. Of course, that omission is all of a piece with this attitude to intellectual monopolies:

The objective is to ensure licences for any essential IPRs contained in standards are provided on fair, reasonable and non-discriminatory conditions (FRAND). In practice, in the large majority of cases, patented technology has been successfully integrated into standards under this approach. On this basis, standards bodies are encouraged to strive for improvements to the FRAND system taking into consideration issues that occur over time. Some fora and consortia, for instance in the area of internet, web, and business process standards development have implemented royalty-free policies (but permitting other FRAND terms) agreed by all members of the respective organisation in order to promote the broad implementation of the standards.

This is heavily biased towards FRAND, and clearly hints that royalty-free regimes are only used by those long-haired, sandal-wearing hippies out on the Well-Weird Web.

But as readers of this blog well know, FRAND is simply incompatible with free software; and any standard that adopts FRAND locks out open source implementations. That this is contemplated in the report is bad enough; that it is not even acknowledged as a potential problem is a disgrace. (Via No OOXML.)

Follow me @glynmoody on Twitter or identi.ca.

Can You Make Money from Free Stuff?

Well, of course you can – free software is the primary demonstration of that. But that doesn't mean it's trivial to turn free into fee. Here's an interesting move that illustrates the point quite nicely.

On Open Enterprise blog.

17 June 2010

Red Letter Day for ACTA in EU: Let's Use It

This week is one of the magic "plenary" ones in the European Parliament:

Only during the plenary weeks of June 14-17 and July 5-8 will the MEPs have an occasion to pass by the written declarations table, on their way to the plenary, to sign WD12 (the vote sessions, at which every MEP should be present, are at 12:00 on Tuesday, Wednesday and Thursday). It is therefore crucial that they are properly informed about the importance of signing it, right before moving to the plenary.

That WD12, to remind you:

Written declaration on the lack of a transparent process for the Anti-Counterfeiting Trade Agreement (ACTA) and potentially objectionable content

The European Parliament,

– having regard to Rule 123 of its Rules of Procedure,

A. whereas negotiations concerning the Anti-Counterfeiting Trade Agreement (ACTA) are ongoing,

B. whereas Parliament’s co-decision role in commercial matters and its access to negotiation documents are guaranteed by the Lisbon Treaty,

1. Takes the view that the proposed agreement should not indirectly impose harmonisation of EU copyright, patent or trademark law, and that the principle of subsidiarity should be respected;

2. Declares that the Commission should immediately make all documents related to the ongoing negotiations publicly available;

3. Takes the view that the proposed agreement should not force limitations upon judicial due process or weaken fundamental rights such as freedom of expression and the right to privacy;

4. Stresses that economic and innovation risks must be evaluated prior to introducing criminal sanctions where civil measures are already in place;

5. Takes the view that internet service providers should not bear liability for the data they transmit or host through their services to an extent that would necessitate prior surveillance or filtering of such data;

6. Points out that any measure aimed at strengthening powers of cross-border inspection and seizure of goods should not harm global access to legal, affordable and safe medicines;

7. Instructs its President to forward this declaration, together with the names of the signatories, to the Commission, the Council and the parliaments of the Member States.

So now would be a good time to contact your MEPs. If you want to find out who is still sitting on the naughty step as far as WD12 and ACTA are concerned, there's a good list from La Quadrature, complete with email addresses and telephone numbers.

Follow me @glynmoody on Twitter or identi.ca.

15 June 2010

Are Software Patents Patently Dangerous Enough?

Earlier this week I wrote about a useful study of the economics of copyright, pointing out that we need more such analyses in order to adopt a more rational, evidence-based approach to drafting laws in this area. Of course, precisely the same can be said of patents, and software patents in particular, so it's always good to come across work such as this newly-published doctoral dissertation [.pdf]: “The effects of software patent policy on the motivation and innovation of free and open source developers.”

On Open Enterprise blog.

14 June 2010

Shame on Ofcom, Double Shame on the BBC

Readers with good memories may recall a little kerfuffle over an Ofcom consultation to slap DRM on the BBC's HD service:

if this scheme is adopted it is highly unlikely free software projects will be able to obtain the appropriate keys, for the simple reason that they are not structured in a way that allows them to enter into the appropriate legal agreements (not least because they couldn't keep them). Of course, it will probably be pretty trivial for people to crack the encryption scheme, thus ensuring that the law-abiding free software users are penalised, while those prepared to break the law are hardly bothered at all.

On Open Enterprise blog.

Abundance Obsoletes Peer Review, so Drop It

Recently, I had the pleasure of finally meeting Cameron Neylon, probably the leading - and certainly most articulate - exponent of open science. Talking with him about the formal peer review process typically employed by academic journals helped crystallise something that I have been trying to articulate: why peer review should go.

A recent blog post has drawn some attention to the cost - to academics - of running the peer review process:

So that's over £200 million a year that academics are donating of their time to the peer review process. This isn't a large sum when set against things like the budget deficit, but it's not inconsiderable. And it's fine if one views it as generating public good - this is what researchers need to do in order to conduct proper research. But an alternative view is that academics (and ultimately taxpayers) are subsidising the academic publishing industry to the tune of £200 million a year. That's a lot of unpaid labour.

Indeed, an earlier estimate put the figure even higher:

a new report has attempted to quantify in cash terms exactly what peer reviewers are missing out on. It puts the worldwide unpaid cost of peer review at £1.9 billion a year, and estimates that the UK is among the most altruistic of nations, racking up the equivalent in unpaid time of £165 million a year.

Whatever the figure, it is significant, which brings us on to the inevitable questions: why are researchers making this donation to publishers, and do they need to?

The thought I had listening to Neylon talk about peer review is that it is yet another case of a system that was originally founded to cope with scarcity - in this case of outlets for academic papers. Peer review was worth the cost of people's time because opportunities to publish were rare and valuable and needed husbanding carefully.

Today, of course, that's not the case. There is little danger that important papers won't see the light of day: the nearly costless publishing medium of the Internet has seen to that. Now the problem is dealing with the fruits of that publishing abundance - making sure that people can find the really important and interesting results among the many out there.

But that doesn't require peer review of the kind currently employed: there are all kinds of systems that allow any scientist - or even the general public - to rate content and to vote it up towards a wider audience. It's not perfect, but by and large it works - and spreads the cost widely to the point of being negligible for individual contributors.
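To give one concrete (and purely illustrative) example of such a system - my own choice of scheme, not any particular site's algorithm - you could rank papers by the lower bound of the Wilson confidence interval on their up-votes, so that sustained approval beats a handful of enthusiastic early votes:

    # Rank papers by the 95% Wilson lower bound on their approval rate,
    # an illustrative community-rating scheme: a paper needs many votes,
    # not just two friendly ones, to climb towards a wider audience.
    from math import sqrt

    def wilson_lower_bound(upvotes, total, z=1.96):
        if total == 0:
            return 0.0
        phat = upvotes / total
        return ((phat + z * z / (2 * total)
                 - z * sqrt((phat * (1 - phat) + z * z / (4 * total)) / total))
                / (1 + z * z / total))

    papers = [
        ("Paper A", 2, 2),      # 100% approval, but only two votes
        ("Paper B", 180, 220),  # 82% approval over many votes
    ]

    for title, up, total in sorted(
            papers, key=lambda p: wilson_lower_bound(p[1], p[2]), reverse=True):
        print(title, round(wilson_lower_bound(up, total), 3))
    # Paper B (score ~0.76) outranks Paper A (~0.34) despite its lower
    # raw percentage, because its approval rests on far more votes.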

For me what's particularly interesting is the fact that peer review is unnecessary for the same reason that copyright and patents are unnecessary nowadays: because the Internet liberates creativity massively and provides a means for bringing that flood to a wider audience without the need for official gatekeepers to bless and control it.

Follow me @glynmoody on Twitter or identi.ca.