09 June 2009

Microsoft's Pyrrhic Victory in the Netbook War

The rise of the netbook has been an extraordinary saga. When the Asus Eee PC was first launched at the end of 2007, it seemed to come from nowhere: there was no real precedent for such a low-cost, small machine, using solid state storage and running GNU/Linux. The brilliance of Asus's move was shown not just by the rapid uptake of this new form-factor, but also by the high level of satisfaction – the only element viewed less positively was the small size of the keyboard, an inevitable consequence of the design....

On Open Enterprise blog.

08 June 2009

China's Censorware: What about GNU/Linux?

News is breaking that the Chinese government will insist on censorware being shipped with all PCs:

China plans to require that all personal computers sold in the country as of July 1 be shipped with software that blocks access to certain Web sites, a move that could give government censors unprecedented control over how Chinese users access the Internet.

The government, which has told global PC makers of the requirement but has yet to announce it to the public, says the effort is aimed at protecting young people from "harmful" content. The primary target is pornography, says the main developer of the software, a company that has ties to China's security ministry and military.

There's more background information and discussion about the quaintly-named "Green Dam Youth Escort" here, including a link to the software itself.

This turns out - surprise, surprise - to be a Windows executable, which raises a question: what will the Chinese government do about GNU/Linux? Will they simply ignore that platform, or insist that a GNU/Linux version be developed?

And what happens if one day the use of that software becomes mandatory (it seems voluntary at the moment - but we all know how these things are the thin end of the wedge)?

How will the authorities in China - and, ultimately, elsewhere - cope with the freedom built into GNU/Linux? Will GNU/Linux one day become illegal in those parts of the world benighted enough to mandate online censorship?

Follow me @glynmoody on Twitter or identi.ca.

07 June 2009

Creative Commons, We Have a Problem

I'm a big fan of the Creative Commons movement. But it has a big problem: few people have heard of it, according to a survey conducted on behalf of the UK's Office of Public Sector Information (OPSI).

In the survey, people were shown one of the standard CC logos (like the one at the foot of this page). Here's what they found:


75% of respondents did not recognise this image.

Lack of recognition was highest amongst the “general public” – 87%. And lowest amongst respondents from the OPSI website – 55% did not recognise the image.

The majority did not understand the meaning of the image. Understanding was highest amongst the OPSI website respondents – 35%.

This is not surprising as this group was also the group in which the most had heard of Creative Commons licences before – 47% (vs 10% of the “general public” and 29% of the OPSI database). Only those likely to be more familiar with copyright (inferred from their route to the survey) are likely to have a previous understanding of Creative Commons terminology and imagery. One might argue that if these are used moving forward, more people will become more familiar with these, however, the benefits at this stage of shared/added meaning would only really apply to a minority – a minority who are likely to have a strong understanding of Crown copyright already.

It looks like much more work needs to be done to get the message out about Creative Commons and its licences.

06 June 2009

Fashion Industry Repeats Software's Mistakes

Software patents are stupid on both theoretical and practical grounds. Since software is just algorithms - that is, maths - software patents are intellectual monopolies on pure knowledge. Practically, they make coding almost impossible, since software patents have been given for so many trivial and common programming techniques.

Unbelievably, it looks like the fashion industry is going to allow big business to impose something similar there:


Let’s say we help you produce this line, you sell it and make your pile crumbs. Then -thanks to the influence of the Council of Fashion Designers of America (CFDA, membership by invitation only) and Congress- somebody can come out of the woodwork and claim it is their design, they own it and now you owe them. If they registered the design and you didn’t know it, this could be perfectly legal. Of course you didn’t copy them but it won’t matter. The fact that society designers have been copying nameless unknown independent designers for years doesn’t even register. Even Diane Von Furstenberg, the leading champion of this bill recently got caught doing it. Because you don’t have any money, this party will sue everyone in your production and retail chain. That means pattern makers, contractors and the stores who bought your stuff. So in the interests of avoiding law suits, any service provider is going to require you prove you own it. It’s even worse for retail buyers who face potential criminal prosecution for dealing in pirated goods. Everybody who helps you or buys from you is going to require you to prove ownership of your concept before they’ll have anything to do with it. If wealthy society designers like Diane Von Furstenberg have their way, this could become an unfortunate reality. Paradoxically, CFDA is telling Congress they’re protecting you.

The parallels with software are clear: the use of lawyers to bully smaller companies who employ software coding techniques that are obvious but have been wrongly granted patents in some jurisdictions.

The only consolation is that if this legislation is passed, and the fashion industry goes into meltdown, the obvious difficulties there will help legislators understand why software patents are such a stupid idea at all levels.

Follow me @glynmoody on Twitter or identi.ca.

05 June 2009

Open Source Sensing Initiative

Here's another interesting initiative: open source sensing.

Pervasive sensing is arriving soon — we have a short window of opportunity for guiding this technology to protect both our security *and* our privacy.

This is an open source-style project with the goal of bringing the benefits of a bottom-up, decentralized approach to sensing for security and environmental purposes.

The intent of the project is to take advantage of advances in sensing to improve both security and the environment, while preserving — even strengthening — privacy, freedom, and civil liberties.

We have a unique opportunity to steer today's emerging sensing/surveillance technologies in positive directions, before they become widespread.

What's particularly noteworthy is that open source sensing is seen as a way of offering security while dealing with the various threats to privacy and freedom that sensor technologies obviously present. The hope is that openness may help square the circle here.

Keep the Libel Laws out of Science

UK libel laws are famously unbalanced, and allow the rich and powerful to bully challengers who have truth on their side. That's bad enough, but when it crimps the practice of science itself, as here, it's even worse:

The use of the English libel laws to silence critical discussion of medical practice and scientific evidence discourages debate, denies the public access to the full picture and encourages use of the courts to silence critics. The British Chiropractic Association has sued Simon Singh for libel. The scientific community would have preferred that it had defended its position about chiropractic through an open discussion in the medical literature or mainstream media.

On 4th June 2009 Simon Singh announces that he is applying to appeal the judge's recent pre-trial ruling in this case, in conjunction with the launch of this support campaign to defend the right of the public to read the views of scientists and writers.

He needs our help:

Join the campaign! In a statement published on 4th June 2009, over 100 people from the worlds of science, journalism, publishing, comedy, literature and law have joined together to express support for Simon and call for an urgent review of English law of libel. Please help us with this campaign, sign the statement and tell everyone you know to sign it. With every additional 1000 names we will be sending the statement again to Government until there is a commitment and a timetable from the parties for the necessary legislation.

Please help fight for the right to conduct science freely.

Happy Birthday, Mozilla - and Thanks for Being Here

Seven years ago, Mozilla 1.0 was launched:

Mozilla.org, the organization that coordinates Mozilla open-source development and provides services to assist the Mozilla community, today announced the release of Mozilla 1.0, the first major-version public release of the Mozilla software. A full-fledged browser suite based on the latest Internet standards as well as a cross-platform toolkit, Mozilla 1.0 is targeted at the developer community and enables the creation of Internet-based applications. Mozilla 1.0 was developed in an open source environment and built by harnessing the creative power of thousands of programmers and tens of thousands of testers on the Internet, incorporating their best enhancements.

On Open Enterprise blog.

04 June 2009

Intel buys Wind River: the End of the Wintel Duopoly?

This is big:

Intel Corporation has entered into a definitive agreement to acquire Wind River Systems Inc, under which Intel will acquire all outstanding Wind River common stock for $11.50 per share in cash, or approximately $884 million in the aggregate. Wind River is a leading software vendor in embedded devices, and will become part of Intel's strategy to grow its processor and software presence outside the traditional PC and server market segments into embedded systems and mobile handheld devices. Wind River will become a wholly owned subsidiary of Intel and continue with its current business model of supplying leading-edge products and services to its customers worldwide.

On Open Enterprise blog.

Knuth: Every Algorithm is Sacred

One of my computer heroes, Donald Knuth, has sent a message to the head of the EPO, hoping to convince her that every algorithm is sacred, and should not be delivered up to become the personal, exclusive, proprietary possession of any one person or company:

Basically I remain convinced that the patent policy most fair and most suitable for the world will regard mathematical ideas (such as algorithms) to be not subject to proprietary patent rights. For example, it would be terrible if somebody were to have a patent on an integer, like say 1009, so that nobody would be able to use that number "with further technical effect" without paying for a license. Although many software patents have unfortunately already been granted in the past, I hope that this practice will not continue in future. If Europe leads the way in this, I expect many Americans would want to emigrate so that they could continue to innovate in peace!

Follow me @glynmoody on Twitter or identi.ca.

This is the Future: the Grid Meets the Grid

Wow, this is cool:


At first glance it’s hard to see how the open-source software framework Hadoop, which was developed for analyzing large data sets generated by web sites, would be useful for the power grid — open-source tools and utilities don’t often mix. But that was before the smart grid and its IT tools started to squeeze their way into the energy industry. Hadoop is in fact now being used by the Tennessee Valley Authority (TVA) and the North American Electric Reliability Corp. (NERC) to aggregate and process data about the health of the power grid, according to this blog post from Cloudera, a startup that’s commercializing Hadoop.

The TVA is collecting data about the reliability of electricity on the power grid using phasor measurement unit (PMU) devices. NERC has designated the TVA system as the national repository of such electrical data; it subsequently aggregates info from more than 100 PMU devices, including voltage, current, frequency and location, using GPS, several thousand times a second. Talk about information overload.

But TVA says Hadoop is a low-cost way to manage this massive amount of data so that it can be accessed all the time. Why? Because Hadoop has been designed to run on a lot of cheap commodity computers and uses two distributed features that make the system more reliable and easier to use to run processes on large sets of data.

What's interesting about this - aside from seeing yet more open source deployed in novel ways - is that it presages a day when the physical grid of electricity and its users are plugged into the digital grid, allowing massive real-time analysis of vast swathes of the modern world, and equally real-time control of it across the grid. Let's hope they get the security sorted out before then...
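The pattern Cloudera describes - spreading the aggregation of sensor readings across many cheap machines - is the MapReduce model at the heart of Hadoop. The sketch below illustrates the idea in miniature, in plain Python rather than Hadoop itself: a map phase emits key/value pairs from raw samples, and a reduce phase folds each key's values into a summary. The device IDs and frequency values are invented for illustration; real PMU streams arrive thousands of times a second, which is exactly why the work gets distributed.

```python
from collections import defaultdict

# Hypothetical PMU samples: (device_id, measured frequency in Hz).
readings = [
    ("pmu-001", 59.98), ("pmu-002", 60.02),
    ("pmu-001", 60.01), ("pmu-002", 59.97),
    ("pmu-001", 60.00),
]

def map_phase(records):
    """Emit (key, value) pairs - here, a device id and one frequency sample.
    In Hadoop, many mappers would run this over separate chunks of the data."""
    for device_id, freq in records:
        yield device_id, freq

def reduce_phase(pairs):
    """Group values by key and fold each group - here, into an average.
    In Hadoop, reducers each handle a disjoint subset of the keys."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return {key: sum(vals) / len(vals) for key, vals in groups.items()}

averages = reduce_phase(map_phase(readings))
print(averages)
```

Because the map and reduce steps only communicate through key/value pairs, each can be parallelised across commodity machines - which is what makes the approach cheap enough for a utility to keep all its data accessible all the time.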

Follow me @glynmoody on Twitter or identi.ca.

Of Open Standards, Interoperability and Open Source

One of the key moments in the rise of open source was when Massachusetts announced that it was adopting an open standards policy for documents. Since this was a gauntlet flung down for the dominant supplier in this space, Microsoft, it was inevitable that a battle of epic proportions would result. In fact, it turned out to be a very dirty fight, degenerating into ad hominem attacks on the person behind this move to open standards. In some ways, it was a prelude to the equally ugly struggle that took place over Microsoft's attempts to ram its OOXML standard through the ISO process – another important moment in the rise of open standards....

On Open Enterprise blog.

DNA Database Breached in New Zealand

Yesterday, I wrote about how the UK ID database has been breached even before it formally exists; now here's another tale that shows what the problem with all such super-duper databases is:


Police are investigating a claim an Environmental Science and Research worker made an "inappropriate disclosure" from the DNA databank.

ESR said yesterday a criminal investigation had started. "A staff member has been suspended pending the outcome of the police and internal investigations," a spokeswoman said.

Which means that *every* database, ultimately, has a weak link: people. So all these assurances of cast-iron, unbreakable security are worthless, for the simple reason that these databases are designed to be used by people, not all of whom are trustworthy or unblackmailable....

03 June 2009

ID Database Breached Even Before It Exists

Well, I was expecting this, but not so soon:

A Glasgow council worker was sacked and another resigned after they were caught snooping into the core database of the Government's Identity Card scheme.

The two Glasgow staff were caught snooping on people in the Department for Work and Pensions (DWP) Customer Information Systems (CIS) database, which includes among its 85 million records the personal details about everyone in the UK, and which the Identity and Passport Service plans to use as the foundation of the national ID scheme.

"A member of staff tried to access stuff about famous figures," said a spokesman for Glasgow City Council. He said the DWP alerted the council about the breach. He refused to name the celebrity or say how the council dealt with the matter.

The INQ has learned, however, that the staffer caught looking up personal data belonging to celebrities was sacked.

Whether they resigned or were sacked is neither here nor there: it represents no deterrent whatsoever.

As if that's not bad enough, try this:

"The small number of incidents shows that the CIS security system is working," he added.

Er, no: it just means that you've only *caught* two of them, and that the other n, where n may be a large and growing number, have got away with it so far....

Let's just hope Labour continues its entertaining meltdown before it can bring its insane ID card/database plans to total "fruition" - for the identity thieves and blackmailers.

Follow me @glynmoody on Twitter or identi.ca.

Standing up to the Playground Bully

The EU is contemplating some further action against Microsoft:

Frustrated with past efforts to change Microsoft Corp.'s behavior, European Union regulators are pursuing a new round of sanctions against the software giant that go well beyond fines.

The regulatory push is focused on a longstanding complaint against Microsoft: that it improperly bundles its Web browser with its Windows software. Rather than forcing Microsoft to strip its Internet Explorer from Windows, people close to the case say, the EU is now ready to try the opposite measure: Forcing a bunch of browsers into Windows, thus diluting Microsoft's advantage.

The sanctions would come from an EU investigation that began last year. In a sign of how rapidly the case is progressing, these people say, the possible penalty has emerged as a key focus in discussions between the parties.

Inevitably, this suggestion has led to whining about how nasty those eurocrats are, and how unfair to pick on little old Microsoft, and what a crimp on innovation all this is.

How utterly pathetic.

What we are seeing is teacher starting to get heavy with the playground bully - one who, despite a decade of warnings, continues to abuse its monopoly position. What we are seeing is an institution that finally has both the will and means to place limits on what are acceptable business practices.

Of course, forcing Microsoft to give people a choice of browsers when they start up Windows will make little difference to that market, but that's not the point. The point is the punishment - a further reminder that Microsoft is under scrutiny, and that further serious financial sanctions are always an option. It's absolutely the *right* thing to do, because Microsoft's behaviour for the last two decades has been absolutely the *wrong* thing to do, and it is finally being called to account.

Follow me @glynmoody on Twitter or identi.ca.

Why Chemical Software Will be Open Source

Here's an important post from Mr Open Chemistry, Peter Murray-Rust:


“Chemical software will be Open Source”

This statement expresses both a simple truth (Simple Future, see WP) and an aspiration (Coloured Future – Software shall be free). The latter is what I have been advocating on this blog – the moral, pragmatic, utilitarian value of Open Source. The former simply states that it will happen. IOW a betting person could lay a wager.

The heart of Peter's argument is this:

there is a particular aspect to “Chemoinformatics” - the software that supports the management of chemical compounds, reactions and their measured and computed properties:

There have been no new developments in the last decade

What I mean by this is that there have been no new algorithms or information management strategy to have come out of commercial chemoinformatics manufacturers. Chemical search, heuristic properties and fingerprints, molecule docking are “solved” problems. And advance comes from packaging, integration and parameter_tweaking/machine_learning. Only the last adds to science and since the commercial manufacturers are secretive then we can’t measure this (and I believe this to be mainly pseudoscience in its practice – you can make extravagant plans without independent assessment). So the advances from the manufacturers have been engineering – ease of use, deployability, interoperation with third-party software – but not functionality.

So the Open Source community – the Blue Obelisk – is catching up. I believe that OSCAR is already the best chemical language processing tool, that OPSIN will soon be as good as any commercial name2structure parser and that OSRA will do the same for chemical images.

What this essentially means is that chemoinformatics has become commoditised; and as history has shown us time and again, once that happens, the advantages of open source in terms of aggregated, distributed development kick in. It is proprietary software that does not scale - ironically, given the prevailing wisdom to the contrary - and which therefore always falls behind open source projects once a particular domain has matured.
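One of the "solved" techniques the quote alludes to is fingerprint comparison: molecules are reduced to bit vectors and compared with the Tanimoto (Jaccard) coefficient. The sketch below, in plain Python, shows the measure itself; the molecule names and bit indices are invented for illustration (real fingerprints hash substructures into fixed-length bit vectors with thousands of positions).

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto similarity between two fingerprints, represented here as
    sets of set-bit indices: |intersection| / |union|."""
    union = len(fp_a | fp_b)
    if union == 0:
        return 1.0  # two empty fingerprints are trivially identical
    return len(fp_a & fp_b) / union

# Toy fingerprints: each set holds the indices of the bits set for a molecule.
aspirin_like = {3, 17, 42, 77, 100}
salicylate_like = {3, 17, 42, 99}

print(tanimoto(aspirin_like, salicylate_like))  # 3 shared bits of 6 total
```

The algorithm is a one-liner, which rather underlines the post's point: when the core methods are this well understood, the remaining differentiators are engineering and integration, precisely where distributed open source development excels.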

This is not to say that free software never innovates, as I've discussed elsewhere; simply that in new sectors open source's advantages are less clear than they are in mature ones. Peter's point is that chemoinformatics in particular is ripe for open source to produce better versions of existing tools; and the implication is that as successive areas of science software become similarly mature, so free software offerings will move in and ultimately take over.

Follow me @glynmoody on Twitter or identi.ca.

The Internet May Not Be a Right, but It Is Certainly Essential

Last month, Viviane Reding ruffled a few feathers when she stated that “Internet access is a fundamental right”. As it happens, I'd written the same thing, albeit with rather less authority, last year. Obviously, that's a very strong statement, because it implies that taking away an Internet connection is an infringement of that right – which means, in turn, that “three strikes and you're out” is a grossly disproportionate punishment for copyright infringement....

On Open Enterprise blog.

Big Open Access Win in UK

Great news:

University College London is set to become the first of the top tier of elite European universities to make all its research available for free at the click of a mouse, in a model it hopes will spread across the academic world.

UCL’s move to “open access” for all research, subject to copyright law, could boost the opportunities for rapid intellectual breakthroughs if taken up by other universities, thereby increasing economic growth.

Paul Ayris, head of the UCL library and an architect of the plan to put all its research on a freely accessible UCL website, said he had backed open access because the existing system of having to visit a library or pay a subscription fee to see research in journals erected “barriers” to the use of research. “This is not good for society if you’re looking for a cure for cancer,” he said.

What's pathetic is that some people are *still* spreading the FUD:

Martin Weale, director of the National Institute of Economic and Social Research, said: “If you read something in the American Economic Review, there’s a presumption that its quality has been examined with great care, and the article isn’t rubbish. But if you have open access, people who are looking for things ... will find it very difficult to sort out the wheat from the chaff.”

Hey, Martin, as you should know, open access and peer review are completely different things. The open access material at UCL can still be published in peer reviewed journals - including those that are also open access - in order "to sort the wheat from the chaff". The point is that *anyone* can access all the materials at any time - not just when publishers allow it upon payment of exorbitant fees.

Moreover, I seem to recall that there's this cute little company called Google that's pretty good at pointing people to content on the Web. And that's partly the point: once stuff is open access, all sorts of clever ways of finding it and using it are possible - and that's rarely true for traditional scientific publishing. (Via Mike Simons.)

Follow me @glynmoody on Twitter or identi.ca.

02 June 2009

Why Open Source isn't Tiddly for BT

I'd come across TiddlyWiki before, but never really got what it was about....

On Open Enterprise blog.

Anathematising Abject, Apologetic Asus

I've always praised Asus for coming up with their innovative Eee PC form factor, and for really building on the strengths of GNU/Linux; no more, after a pusillanimous display of abjection before Microsoft.

A day after an Asustek Eee PC running Google's Android operating system was shown at Computex Taipei, top executives from the company said the project will be put on the backburner for now.

That, on its own, would be fair enough - after all, Android clearly is still somewhat rough at the edges. No, the problem is this:

Moments after sharing a news conference stage with Intel executive vice president Sean Maloney and Microsoft corporate vice president, OEM Division, Steven Guggenheimer, the chairman of Asustek, Jonney Shih, demurred when asked about the Android Eee PC.

"Frankly speaking, the first question, I would like to apologize that, if you look at Asus booth we've decided not to display this product," he said. "I think you may have seen the devices on Qualcomm's booth but actually, I think this is a company decision so far we would not like to show this device. That's what I can tell you so far. I would like to apologize for that."

He apologised? For daring to show an Android Eee PC, when one of the main functions of shows is to stake out the high ground for future projects?

And just to insult our intelligence a little further:

When asked about rumors that Asustek faced pressure from Microsoft and Intel over the use of Android and Snapdragon in the Eee PC, Tsang said "no, pressure, none."

Riiiiiiiiight: no, pressure, none - perhaps he should have read his Hamlet (Act III, Scene II) a little more closely. If there was no pressure, why on earth did he apologise, making himself and his company look awkward? It just doesn't make sense.

Anyway, that's it, I hereby anathematise Asus, and cast it into the nethermost abyss. I shan't be buying any more Asus machines (we have two, and I was about to buy another), and I shall be strongly recommending that others avoid them too since the company is clearly not in control of its own destiny....

Follow me @glynmoody on Twitter or identi.ca.

01 June 2009

Why Scientific Software Wants To Be Free

Not sure if I missed this earlier, but it strikes me as a hugely important issue that deserves a wider audience whether or not it is brand new:

Astronomical software is now a fact of daily life for all hands-on members of our community. Purpose-built software for data reduction and modeling tasks becomes ever more critical as we handle larger amounts of data and simulations. However, the writing of astronomical software is unglamorous, the rewards are not always clear, and there are structural disincentives to releasing software publicly and to embedding it in the scientific literature, which can lead to significant duplication of effort and an incomplete scientific record.

We identify some of these structural disincentives and suggest a variety of approaches to address them, with the goals of raising the quality of astronomical software, improving the lot of scientist-authors, and providing benefits to the entire community, analogous to the benefits provided by open access to large survey and simulation datasets. Our aim is to open a conversation on how to move forward.

We advocate that: (1) the astronomical community consider software as an integral and fundable part of facility construction and science programs; (2) that software release be considered as integral to the open and reproducible scientific process as are publication and data release; (3) that we adopt technologies and repositories for releasing and collaboration on software that have worked for open-source software; (4) that we seek structural incentives to make the release of software and related publications easier for scientist-authors; (5) that we consider new ways of funding the development of grass-roots software; (6) and that we rethink our values to acknowledge that astronomical software development is not just a technical endeavor, but a fundamental part of our scientific practice.

Leaving aside the obvious and welcome element of calling for an open source approach (and, presumably, open source release if possible), there is a deeper issue here: the fact that astronomy - and, by extension, all science - is increasingly bound up with software, and that software is no longer an incidental factor in its practice.

A consequence of this is that as software moves ever closer to the heart of the scientific process, so the need to release that code under free software licences increases. First, so that others can examine it for flaws and/or reproduce the results it produces. And secondly, so that other scientists can build on that code, just as they build on its results. In other words, it is becoming evident that open source is indispensable for *all* science, and not just the kind that proudly proclaims itself open.

Women in Open Source: the Definitive Resource

A couple of months ago, I was asking "Where are the alpha *female* hackers?" I received various helpful answers, albeit rather few of them. Here's a rather fuller answer to my question: the June 2009 edition of Open Source Business Resource, devoted entirely to Women in Open Source:

Whether you look at industry studies, online articles, or perhaps even around your own company, you'll see that women make up a small percent of the people working in free/libre and open source software (F/LOSS). Over the years there's been a growing interest in why so few women participate in this rapidly growing community and, more importantly, what can be done to help encourage more participation. Fortunately, members of the community - both male and female - are actively ramping up their efforts to attract more women to the F/LOSS community.

Resources such as LinuxChix.org, the Geek Feminism Wiki, as well as publications, blogs, and articles written by and about women, draw attention to this growing, influential group of F/LOSS participants. Events, such as the Women in Open Source track at the Southern California Linux Expo, help women network and connect with other members of the F/LOSS community, while also increasing their visibility.

In this issue of the Open Source Business Resource, innovative, energetic women discuss their specific projects, what other women in the field are doing, and their efforts to promote F/LOSS to people within their communities and internationally.

Without doubt, this is now the best place to begin exploring this area: great work.

Why Security by Obscurity Fails, Part 674

Great story in Wired about a master lock-picker, opening what are supposedly the most secure locks in the world:

These were the same Medeco locks protecting tens of thousands of doors across the planet

...

One by one, brand-new Medeco locks were unsealed. And, as the camera rolled, one by one these locks were picked open. None of the Medeco3 locks lasted the minimum 10 to 15 minutes necessary to qualify for the "high security" rating. One was cracked in just seven seconds. By Roberson's standards, Tobias and Bluzmanis had done the impossible.

Although these are physical rather than software locks, the lesson is the same: there is no such thing as an unpickable lock, and there is no such thing as unhackable software, even if it's closed and encrypted. Since *someone* will be able to find the flaws in your software, you may as well open it up so that they can be found and fixed. Go open source.

31 May 2009

Open Government: the Latest Member of the Open Family

One of the most exciting developments in the last few years has been the application of some of the core ideas of free software and open source to completely different domains. Examples include open content, open access, open data and open science. More recently, those principles are starting to appear in a rather surprising field: that of government, as various transparency initiatives around the world start to gain traction....

On Linux Journal.

30 May 2009

How Open Source Will Save the World (Really)

Apparently, I'm not the only one to think that open source will save the world - literally:


Could open source software save the planet? Steven Chu, the US energy secretary, says it can certainly help, by making it easier for all countries to access tools to design and build more energy-efficient buildings.

More specifically:

[It] should be open source software, so companies can add to that and put whatever they want on it but then you get this body of programmes, computer aided design programmes, as a way of helping design much more efficient buildings.

Now if we develop this with China together so that this intellectual property is co-owned, co-developed, free to be used by each country, you get away from this internal discussion of ‘we’re not going to do anything until we get free IP’.

Fantastic to see someone with considerable power making the connection between intellectual monopolies and the problem of mitigating climate change - and seeing that open source is a practical way to get around the problem.

29 May 2009

Why the “Copycats?” Report has a Copycat Problem

Along with death and taxes, one of the other certainties in life is the constant flow of reports from the media industries claiming that copyright infringement is causing them to lose billions of pounds of revenue each year, and that they will inevitably go to the wall if even harsher legal sanctions against infringement are not brought in (although, strangely, they have been saying this for about 10 years now, and they seem not to have gone bust yet....)

Of course, you might expect industries to paint the situation as bleak as possible – that's why they spend large chunks of their considerable revenues on expensive PR companies and lobbyists to “sex” things up a bit. But there are other kinds of reports, typically sponsored by national government departments, that claim to provide more objective information about what is happening in this field.

Sadly, those expectations of objectivity are not always fulfilled. The most blatant example occurred just this week, when some fine digging by Michael Geist showed that a report from The Conference Board of Canada, which purports to be an independent research institute, not only copied text verbatim from the International Intellectual Property Alliance (the primary film, music, and software lobby in the U.S.), but also used figures from an old Canadian Recording Industry Association press release to justify dramatic statements like the following:

As a result of lax regulation and enforcement, internet piracy appears to be on the increase in Canada. The estimated number of illicit downloads (1.3 billion) is 65 times higher than the number of legal downloads (20 million), mirroring the Organisation for Economic Cooperation and Development’s conclusion that Canada has the highest per capita incidence of unauthorized file-swapping in the world.

As Geist points out:

While the release succeeded in generating attention, the report does not come close to supporting these claims. The headline-grabbing claim of 1.3 billion unauthorized downloads relies on a January 2008 Canadian Recording Industry Association press release. That release cites a 2006 Pollara survey as the basis for the statement. In other words, the Conference Board relies on a survey of 1200 people conducted more than three years ago to extrapolate to a claim of 1.3 billion unauthorized downloads (the survey itself actually ran counter to many of CRIA's claims).

After stupidly trying to defend this indefensible position, The Conference Board of Canada has now backed down, admitted that the report plagiarised material, and withdrawn it, along with two others.

Against that background, the appearance of the report “Copycats? Digital consumers in the online age”, produced by University College London's CIBER for the UK government's Strategic Advisory Board for Intellectual Property Policy (what a name) takes on an added significance. Among other questions, one issue is to what extent the report manages to look objectively at the facts, rather than blithely accepting the highly-partial views of the media industry itself.

The 85-page report is detailed, and as you might expect from an academic outfit, is fully referenced, which is excellent. I strongly recommend that anyone interested in this important field read the whole thing. But for those of you slightly more pressed for time, the executive summary gives a good flavour of its approach:

The backdrop to our research on online consumer behaviour – and the impacts and implications this has on legal practice, the content industries, and governmental policy – is one of vast economic losses brought about by widespread unauthorised downloading and a huge confusion about (or denial of) the definition of what is and is not legal and copyright protected. Industry reports suggest that at least seven million British citizens have downloaded unauthorised content, many on a regular basis, and many also without ethical consideration. Estimates as to the overall lost revenues if we include all creative industries whose products can be copied digitally, or counterfeited, reach £10 billion (IP Rights, 2004), conservatively, as our figure is from 2004, and a loss of 4,000 jobs. This is in the context of the “Creative Industries” providing around 8% of British GDP. And the situation is not solely a British problem, but a global one. Downloading culture (Altschuller, 2009: unpaginated) “has forced society into a muddle of uncertainty with how to incorporate it into existing social and legal structures.” Altschuller adds that: “...music downloading has become part and parcel of the social fabric of our society despite its illegal status,” (emphasis added).

Just to make it clear - this is not simply an issue of music and film downloads alone. Software losses due to what is often described as “piracy” were, for example, $48 billion worldwide in 2007 (BSA, 2007); and in the UK the figure was $1,837,000 or approximately £1.25 billion. An exploratory CIBER investigation found vast quantities of films, music, software, e-books, games and television content available to download and share without cost. On one peer-to-peer network we found that at midday on a weekday there were 1.3 million users, sharing content. If each “peer” from this network (not the largest) downloaded one file per day the resulting number of downloads (music, film, television, e-books, software and games were all available) would be 473 million items per year. If the figure for each individual is closer to five or more items per day, the lowest estimate of downloaded material (remembering that the entire season of the Fox television series “24”, or the “complete” works of the rock group Led Zeppelin can be one file) is just under 2.4 billion files. And if the average value of each file is £5 – that is a rough low average of the price of a DVD or CD, rather than the higher prices of software or E-books – we have the online members of one file sharing network consuming approximately £12 billion in content annually – for free. These figures are staggering.

Staggering indeed – and complete piffle.

Sadly, the basic problem with the whole report is made clear in the first line of the above extract: “The backdrop to our research on online consumer behaviour – and the impacts and implications this has on legal practice, the content industries, and governmental policy – is one of vast economic losses brought about by widespread unauthorised downloading”. That is, it starts from an *assumption* that unauthorised downloading is causing economic damage, rather than examining whether that is the case: clearly, this is going to skew all the results of the research, because it is seen through a particular optic – that of the media industries.

But wait, you may say: maybe that is simply a statement of what the report found out during its objective research, which is a fair point. So let's just look at the figures cited above, and their source.

Estimates as to the overall lost revenues if we include all creative industries whose products can be copied digitally, or counterfeited, reach £10 billion (IP Rights, 2004), conservatively, as our figure is from 2004, and a loss of 4,000 jobs.

A quick check in the references gives the following:

IP Rights (2004) UK Tackles counterfeiting and piracy – launch of national IP Crime Strategy, Alert, Press Release, August 2004, issue 156. Available online at: www.iprights.com/publications/Alert_156.pdf (accessed 02.03.09)

So that £10 billion figure actually comes from a press release from the UK government itself – looking good so far. But why don't we check up on what exactly that reference says? Here's the relevant paragraph:

there has been a growing recognition of the economic impact of IP crime. Rights owners have estimated that last year alone counterfeiting and piracy cost the UK economy £10 billion and 4,000 jobs.

There we have it: that cast-iron, irreproachable, UK government-guaranteed £10 billion figure is merely – you guessed it – an estimate from the media industries themselves. In other words, there is no fundamental difference here from the situation with the Canadian report, except – importantly – there is no attempt to hide the connection (provided you are prepared to follow the links). But the figure is essentially worthless: it was produced by the media industries to justify their perennial wails of anguish.

That's one pretty obvious way in which the whole context for the report is biased by the media industries' agenda. But the problems go deeper, and operate in a more subtle way. Consider, for example, this passage from the paragraphs above:

On one peer-to-peer network we found that at midday on a weekday there were 1.3 million users, sharing content. If each “peer” from this network (not the largest) downloaded one file per day the resulting number of downloads (music, film, television, e-books, software and games were all available) would be 473 million items per year. If the figure for each individual is closer to five or more items per day, the lowest estimate of downloaded material (remembering that the entire season of the Fox television series “24”, or the “complete” works of the rock group Led Zeppelin can be one file) is just under 2.4 billion files. And if the average value of each file is £5 – that is a rough low average of the price of a DVD or CD, rather than the higher prices of software or E-books – we have the online members of one file sharing network consuming approximately £12 billion in content annually – for free.
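The report's back-of-envelope extrapolation is easy to reproduce, and just as easy to knock down, because every step is simply a multiplication of assumptions. A minimal sketch, using only the figures from the quoted passage:

```python
# Reproducing the CIBER report's extrapolation, using only the
# figures given in the quoted passage.
USERS = 1_300_000   # peers observed at midday on one network
DAYS = 365

def annual_downloads(files_per_day):
    """Assume every observed peer downloads this many files per day."""
    return USERS * files_per_day * DAYS

low = annual_downloads(1)    # the report's "473 million items per year"
high = annual_downloads(5)   # the report's "just under 2.4 billion files"

# The headline figure then multiplies the high estimate by an
# assumed £5 "value" per file.
headline = high * 5          # the report's "approximately £12 billion"

# Halve the assumed price, or assume most downloads would never have
# been purchases, and the "staggering" total collapses with it.
print(low, high, headline)
```

Note that nothing in the calculation is measured beyond the single midday head-count; the files-per-day rate and the £5 price are both assumptions, which is exactly where the critique below takes aim.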

Again, this is riddled with unexamined assumptions that are taken straight from the media industries' framing of the situation. For example:

“if the average value of each file is £5 – that is a rough low average of the price of a DVD or CD, rather than the higher prices of software or E-books”

This assumes many things. For example, that all these downloaded files represent real losses for the media industries – that people would have bought this stuff had it not been available online. This is a huge jump to make: it's just as likely that people are trying out stuff they would never have looked at if they needed to pay for it. In this case, it's closer to marketing materials, providing powerful free advertising. As other research indicates, people who download unauthorised material from the Internet are *more* likely to buy stuff afterwards.

Then there is the pricing - “average value of each file is £5”. This simply accepts that the pricing the media industries are trying to impose on online customers is reasonable. And yet economics teaches us that the price of goods tends to fall to their marginal cost – zero in this case. The media industries are basically trying to defy the laws of economic gravity: that prices of digital goods will inevitably fall to earth (reference is made to this later in the report, but the authors don't seem to have understood the implications).

It's the same with software: what illegal downloading has shown is that the prices Microsoft and other proprietary companies have tried to obtain for software are, in fact, unjustifiable, and essentially unenforceable in an online world. Paying several hundred pounds for something that costs effectively zero to manufacture and distribute is rightly perceived as unfair by consumers – which is why they feel little compunction in downloading it. Costs of development need to be recouped, but that doesn't justify the kind of price-gouging that has been going on in the software industry for the last few decades (just think of the profit margins that companies like Microsoft have achieved historically).

The difference is, people now have a way of manifesting their discontent by using open source, or through unauthorised downloads (which Bill Gates has admitted he prefers to the former option). And, of course, free software, as produced by open source companies, shows that there are other ways of developing code – be it drawing upon a wider community of volunteer programmers, or selling support etc. - without charging high prices for goods with zero marginal cost.

In fact, there is a larger point here, completely missed in the report, as far as I can see. One reason why people have few qualms about downloading copyrighted material – that lack of “ethical consideration” the report refers to above - is that there is growing realisation that copyright law as currently construed is totally tilted in favour of businesses. Copyright term – originally *fourteen* years – is now effectively forever, thanks to constant extensions. The basic social compact that a creative work was granted a short-term, government-backed monopoly in return for placing that work in the public domain soon afterwards has been betrayed: people effectively give lots and get nothing. Illegal downloads are a way of striving for a fairer balance in a world where the laws are completely skewed against ordinary citizens, who have no other way of obtaining a more equitable approach.

Again, the report does not reflect this, but silently assumes that copyright is good in itself, and is working as it should. In fact, the massive flouting of the law is proof that this is not the case: when “[i]ndustry reports suggest that at least seven million British citizens have downloaded unauthorised content, many on a regular basis, and many also without ethical consideration”, then clearly today's copyright system is badly – perhaps irremediably - broken.

So while “Copycats” is to be welcomed as an honest effort to report on the true situation of unauthorised downloads in the UK, it fails at a profound level by overlooking its own deep, ingrained biases, fruits of years of relentless propaganda campaigns waged by the media industries and their lobbyists. Ironically, then, it would seem that the authors of the “Copycats” report, which delineates a wired-up Britain permeated by the “copycat” tendency in the realm of digital artefacts, are themselves unconscious copycats, albeit of a different, more rarefied kind, in the realm of ideas.

Follow me @glynmoody on Twitter or identi.ca.