15 October 2009

Open Sourcing America's Operating System

And how do you do that? By making all of the laws freely available - and, presumably, searchable and mashable:

Public.Resource.Org is very pleased to announce that we're going to be working with a distinguished group of colleagues from across the country to create a solid business plan, technical specs, and enabling legislation for the federal government to create Law.Gov. We envision Law.Gov as a distributed, open source, authenticated registry and repository of all primary legal materials in the United States.

This is great news, because Carl Malamud - the force behind this initiative - has been urging it for years: now it looks like it's beginning to take a more concrete form:

The process we're going through to create the case for Law.Gov is a series of workshops hosted by our co-conveners. At the end of the process, we're submitting a report to policy makers in Washington. The process will be an open one, so that in addition to the main report which I'll be authoring, anybody who wishes to submit their own materials may do so. There is no one answer as to how the raw materials of our democracy should be provided on the Internet, but we're hopeful we're going to be able to bring together a group from both the legal and the open source worlds to help crack this nut.

I particularly liked the following comment:

Law.Gov is a big challenge for the legal world, and some of the best thinkers in that world have joined us as co-conveners. But, this is also a challenge for the open source world. We'd like to submit such a convincing set of technical specs that there is no doubt in anybody's mind that it is possible to do this. There are some technical challenges and missing pieces as well, such as the pressing need for an open source redaction toolkit to sit on top of OCR packages such as Tesseract. There are challenges for librarians as well, such as compiling a full listing of all materials that should be in the repository.

What's interesting is that this recognises that open source is not just an inspiration, but a key part of the solution, because - like the open maths movement I wrote about below - it needs new kinds of tools, and free software is the best way to provide them.
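To make that concrete, here is a minimal sketch of how such a redaction step might sit on top of Tesseract - in Python, using the pytesseract and PIL libraries, and purely as an illustration. The "sensitive pattern" (a US Social Security number), the file names and the single-rule design are all my own assumptions; a real toolkit would need far richer rules and human review.

    import re
    import pytesseract
    from PIL import Image, ImageDraw

    # Hypothetical redaction rule: anything that looks like a US Social
    # Security number gets blacked out
    SENSITIVE = re.compile(r"\d{3}-\d{2}-\d{4}")

    def redact_page(path_in, path_out):
        image = Image.open(path_in)
        # Tesseract returns one entry per recognised word, together with
        # its bounding box on the page
        words = pytesseract.image_to_data(image, output_type=pytesseract.Output.DICT)
        draw = ImageDraw.Draw(image)
        for i, text in enumerate(words["text"]):
            if SENSITIVE.search(text):
                # Paint a black box over the offending word
                x, y = words["left"][i], words["top"][i]
                w, h = words["width"][i], words["height"][i]
                draw.rectangle([x, y, x + w, y + h], fill="black")
        image.save(path_out)

    redact_page("scan.png", "scan-redacted.png")

The point is how little glue is needed once the OCR engine exposes word-level bounding boxes: the hard, unsolved parts are the redaction rules and the review workflow - exactly where an open source community effort could help.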

Now, if only someone could do something similar in the UK....

Open Source Mathematics

This is incredibly important:

On 27 January 2009, one of us — Gowers — used his blog to announce an unusual experiment. The Polymath Project had a conventional scientific goal: to attack an unsolved problem in mathematics. But it also had the more ambitious goal of doing mathematical research in a new way. Inspired by open-source enterprises such as Linux and Wikipedia, it used blogs and a wiki to mediate a fully open collaboration. Anyone in the world could follow along and, if they wished, make a contribution. The blogs and wiki functioned as a collective short-term working memory, a conversational commons for the rapid-fire exchange and improvement of ideas.

The collaboration achieved far more than Gowers expected, and showcases what we think will be a powerful force in scientific discovery — the collaboration of many minds through the Internet.

You can read the details of what happened - and it's inspiring stuff - in the article. But as well as flagging up this important achievement, I wanted to point to some interesting points it makes:

The process raises questions about authorship: it is difficult to set a hard-and-fast bar for authorship without causing contention or discouraging participation. What credit should be given to contributors with just a single insightful contribution, or to a contributor who is prolific but not insightful? As a provisional solution, the project is signing papers with a group pseudonym, 'DHJ Polymath', and a link to the full working record. One advantage of Polymath-style collaborations is that because all contributions are out in the open, it is transparent what any given person contributed. If it is necessary to assess the achievements of a Polymath contributor, then this may be done primarily through letters of recommendation, as is done already in particle physics, where papers can have hundreds of authors.

The project also raises questions about preservation. The main working record of the Polymath Project is spread across two blogs and a wiki, leaving it vulnerable should any of those sites disappear. In 2007, the US Library of Congress implemented a programme to preserve blogs by people in the legal profession; a similar but broader programme is needed to preserve research blogs and wikis.

These two points are also relevant to free software and other open endeavours. So far, attribution hasn't really been a problem, since everyone who contributes is acknowledged - for example through the discussions around the code. Similarly, preservation is dealt with through the tools for source code management and the discussion lists. But there are crucial questions of long-term preservation - not least for historical purposes - which are not really being addressed, even by the longest-established open projects like GNU.

For example, when I wrote Rebel Code, I often found it hard to track down the original sources for early discussions. Some of them have probably gone for ever, which is tragic. Maybe more thought needs to be given - not least by central repositories and libraries - to how important intellectual moments that have been achieved collaboratively are preserved for posterity to look at and learn from.

Talking of which, the article quoted above has this to say on that subject:

The Polymath process could potentially be applied to even the biggest open problems, such as the million-dollar prize problems of the Clay Mathematics Institute in Cambridge, Massachusetts. Although the collaborative model might deter some people who hope to keep all the credit for themselves, others could see it as their best chance of being involved in the solution of a famous problem.

Outside mathematics, open-source approaches have only slowly been adopted by scientists. One area in which they are being used is synthetic biology. DNA for the design of living organisms is specified digitally and uploaded to an online repository such as the Massachusetts Institute of Technology Registry of Standard Biological Parts. Other groups may use those designs in their laboratories and, if they wish, contribute improved designs back to the registry. The registry contains more than 3,200 parts, deposited by more than 100 groups. Discoveries have led to many scientific papers, including a 2008 study showing that most parts are not primitive but rather build on simpler parts (J. Peccoud et al. PLoS ONE 3, e2671; 2008). Open-source biology and open-source mathematics thus both show how science can be done using a gradual aggregation of insights from people with diverse expertise.

Similar open-source techniques could be applied in fields such as theoretical physics and computer science, where the raw materials are informational and can be freely shared online. The application of open-source techniques to experimental work is more constrained, because control of experimental equipment is often difficult to share. But open sharing of experimental data does at least allow open data analysis. The widespread adoption of such open-source techniques will require significant cultural changes in science, as well as the development of new online tools. We believe that this will lead to the widespread use of mass collaboration in many fields of science, and that mass collaboration will extend the limits of human problem-solving ability.

What's exciting about this - aside from the prospect of openness spreading to all these other areas - is that there's a huge opportunity for the open source community to start, er, collaborating with the scientific one in producing these new kinds of tools that currently don't exist and are unlikely to be produced by conventional software houses (since spontaneously collaborative communities can't actually pay for anything). I can't wait.

Follow me @glynmoody on Twitter or identi.ca.

Gates Gives $300 million - but with a Catch

It's becoming increasingly evident that Bill Gates' philanthropy is not simple and disinterested, but has woven into it a complex agenda that has to do with his love of intellectual monopolies - and power. Here's the latest instalment:


The Bill and Melinda Gates Foundation, which is donating another $120 million to boosting agriculture in the developing world, will focus on self-help aid for poor farmers to sustain and grow production, a top adviser to the world's leading charitable foundation said.

Sounds good, no? Here are more details:

The Gates Foundation, with a $30 billion endowment to improve health and reduce poverty in developing countries, began investing in agricultural projects three years ago. The latest grants bring its farm sector awards to $1.4 billion.

One of its first investments was in African seeds through the Alliance for a Green Revolution in Africa (AGRA). The group is expected to introduce more than 1,000 new seed varieties of at least 10 crops to improve African production by 2016.

"Alliance for a Green Revolution in Africa" also sounds good; here's a little background on that organisation:

It has not gone unnoticed that AGRA falls under the direct supervision of the Global Development Program, whose senior programme officer is Dr. Robert Horsch, who worked for Monsanto for 25 years before he joined the Gates Foundation. Horsch was part of the scientific team in the company that developed Monsanto’s YieldGard, BollGard and RoundUp Ready technologies. Horsch’s task at the Gates Foundation is to apply biotechnology toward improving crop yields in regions including sub-Saharan Africa. Lutz Goedde, another senior program officer of the Global Development Program, is also a recruit from the biotech industry, as he used to head Alta Genetics, the world's largest privately owned cattle genetics improvement and artificial insemination company, worth US$100 million.

That is, AGRA not only has close links with the Gates Foundation, but also with Monsanto - the Microsoft of the seed world.

If you read the rest of the document from which the above information was taken, you'll see that the AGRA programme is essentially promoting approaches using seeds that are genetically modified and patented. Here's the conclusion:

Sub-Saharan Africa represents an extremely lucrative market for seed companies. The development interventions by AGRA appear, on the face of it, to be benevolent. However, not only will AGRA facilitate the change to a market-based agricultural sector in Africa replacing traditional agriculture, but it will also go a long way towards laying the groundwork for the entry of private fertilizer and agrochemical companies and seed companies, and more particularly, GM seed companies.

So Gates' donations are ultimately promoting an agriculture based on intellectual monopolies - just as Microsoft does in the software field. The latest $300 million doesn't sound quite so generous now, does it?

Follow me @glynmoody on Twitter or identi.ca.

14 October 2009

Who is La Rochefoucauld of Twitter?

Mozilla's Tristan Nitot has come up with a rather fine aphorism:

Twitter, c'est la version XXI°S des salons mondains, mais limitée à 140 caractères, et à l'échelle du globe.

That is: Twitter is the 21st-century version of the fashionable salons - but limited to 140 characters, and on a global scale.

So come on people, start polishing those tweets: somewhere out there is La Rochefoucauld of Twitter....

Follow me @glynmoody on Twitter or identi.ca.

12 October 2009

Windows Does Not Scale

Who's afraid of the data deluge?


Researchers and workers in fields as diverse as bio-technology, astronomy and computer science will soon find themselves overwhelmed with information. Better telescopes and genome sequencers are as much to blame for this data glut as are faster computers and bigger hard drives.

While consumers are just starting to comprehend the idea of buying external hard drives for the home capable of storing a terabyte of data, computer scientists need to grapple with data sets thousands of times as large and growing ever larger. (A single terabyte equals 1,000 gigabytes and could store about 1,000 copies of the Encyclopedia Britannica.)

The next generation of computer scientists has to think in terms of what could be described as Internet scale. Facebook, for example, uses more than 1 petabyte of storage space to manage its users’ 40 billion photos. (A petabyte is about 1,000 times as large as a terabyte, and could store about 500 billion pages of text.)

Certainly not GNU/Linux: the latest Top500 supercomputer rankings show that the GNU/Linux family has 88.60%. Windows? Glad you asked: 1%.

So, forget about whether there will ever be a Year of the GNU/Linux Desktop: the future is about massive data-crunchers, and there GNU/Linux already reigns supreme, and has done for years. It's Windows that's got problems....
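As a back-of-the-envelope aside, the scale figures in the quotation above are easy to sanity-check. This is my own arithmetic, not the article's, using decimal units throughout:

    # Rough decimal-unit arithmetic behind the quoted figures
    gigabyte = 1000**3
    terabyte = 1000 * gigabyte   # "a single terabyte equals 1,000 gigabytes"
    petabyte = 1000 * terabyte   # "about 1,000 times as large as a terabyte"

    photos = 40 * 10**9          # Facebook's 40 billion photos
    print(petabyte / photos)     # ~25,000 bytes - so "more than 1 petabyte"
                                 # implies roughly 25 KB stored per photo

In other words, even a modest compressed average size per photo already pushes a site like Facebook past the petabyte mark.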

Follow me @glynmoody on Twitter or identi.ca.

09 October 2009

Why Creativity Needs Shorter Copyright Terms

In response to a tweet of mine about shortening copyright to stimulate creativity, someone questioned the logic. It's an important point, so it seems useful to do some thinking out loud on the subject.

First, I should probably address the question of whether *longer* copyright stimulates creativity. The basic argument seems to be that longer copyright terms mean greater incentives, which means greater creativity. But does anyone seriously think about the fact that their creations will still be in copyright 69 years after their death? It won't do them any good, and probably won't do their descendants much good either, since the income at this point is generally close to zero.

Indeed, speaking as an author, I know that practically all my income from writing comes within the first couple of years; after that, it's dribs and drabs. If my copyright were cut down to even five years, it would make only a marginal difference to my total remuneration.

Now, clearly I'm not JK Rowling, but the point is, neither are 99.99999% of authors: I know from talking to other run-of-the-mill writers that the same holds for them, too. So in practical terms, reducing the copyright term would have little effect on the money that most creators earn as a result.

But let's look at the main part of my claim: that reducing copyright's term would encourage creativity. This is based on the rough syllogism that all artists draw on their predecessors in some way; making more prior creativity available would allow more artists to draw on it in more ways; and so this would increase overall creativity.

For the first assertion, look at history. Painters once began by mixing paints in another artist's studio, then drawing unimportant bits in his or her (usually his) works, learning how to emulate his style. Then they gradually painted more important bits in the style of that artist, often doing the low-cost jobs or rush jobs that he didn't have time or inclination to execute. Then, one day, that apprentice would set up on his or her (usually his) own, building on all the tricks and techniques he had learned from his master, but gradually evolving his own style.

Today, would-be artists tend not to become apprentices in the same way. Instead, they typically go to art school, where they learn to *copy* the masters in order to learn their techniques. Often you see them doing this in art galleries, as they strive to reproduce the exact same effect in their own copy. It teaches them the basics of painting that they can then build on in their own work.

In music, something very similar happens: journeyman composers write pieces in the style of the acknowledged masters, often copying their themes and structure very closely. This is true even for extreme geniuses. For example, in order to learn how to write in the new early classical style, the eight-year-old Mozart arranged three piano sonatas from J C Bach's Op. 5 as keyboard concertos.

Mozart also "borrowed" entire themes - most famously in the overture to The Magic Flute, where he takes a simple tune from a piano sonata by Clementi, and transforms it. Some composers did this on a regular basis. Handel, in particular, was quite unscrupulous in taking themes from fellow composers, and turning them into other, rather better, works. Moreover, the widely-used form of musical variations is based generally on taking a well-known theme and subjecting it to various transformations.

That was in the past, when art was an analogue artefact. Copying took place through trying to reproduce an artistic effect, or by borrowing musical themes etc. Today, in the digital age, copying is not such an incidental act, but central to how we use computers. When we access something online, we copy it to our computers (even audio streaming has to be assembled into copies of small chunks of sound before we can hear it).
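That is quite literal: a minimal sketch in Python of what any streaming player has to do makes the point. The URL is just a placeholder, and real players use proper audio decoding and buffering, but the shape is the same - bytes are copied locally before anything is heard:

    import urllib.request

    CHUNK = 4096                  # bytes per read; an arbitrary buffer size
    buffered = bytearray()

    # Placeholder URL - any audio stream behaves the same way
    with urllib.request.urlopen("http://example.com/stream.mp3") as stream:
        while len(buffered) < 10 * CHUNK:   # fill a small playback buffer
            chunk = stream.read(CHUNK)      # each read copies bytes locally
            if not chunk:
                break
            buffered.extend(chunk)

    # Only now could playback begin - from the local copies in `buffered`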

Digital plasticity - the ability to compute with any content - makes the clumsy copying and learning processes of the past trivially easy. A child can take a digital image of a work of art and cut and paste elements of it into his or her own work; anyone can sample music, distort it and mix it with their own; texts can be excerpted and juxtaposed with others drawn from very diverse backgrounds to create mosaics of meaning.

All these amazingly rich and innovative things are now very easy to do practically, but the possibilities of doing so are stymied by laws that were drawn up for an analogue age. Those laws were not designed to prevent artists from learning from existing creations, but to stop booksellers producing unauthorised copies - a totally different issue. The idea of using just part of a work was not really a concern. But it is today, when the cut and paste metaphor is central to the digital world. That is why we need to reduce copyright to the bare minimum, so that the legal obstacles to creating in this new, inherently digital way are removed.

If we don't, one of two things will happen. Either we will fail to realise the full creative potential of computing, or else the younger generation of artists will simply ignore the law. Either is clearly unsatisfactory. What is needed is a copyright regime that is balanced. That is far from being the case today. As the media industry (sic) ratchets up copyright terms again and again, creation has become subservient to the corporation, and the creators are cut off from their past - and hence future.

Follow me @glynmoody on Twitter or identi.ca.

07 October 2009

EU Consultation on Post-i2010 - Please Do It

Stupidly, I thought this EU consultation would be the usual clueless nonsense, unredeemable even by witty comments from people like me. I was wrong. It's actually an incredibly wide-ranging questionnaire about very important topics. Indeed, it's not even obvious to me what my "correct" answers should be - it actually makes you think.

Here's a small sample of the deep questions it wants us to consider:

The future of sustained internet services growth - internet to drive innovation

Challenges and issues here include:

- Design and development of the future internet - semantic web, Internet of Things, scalability, mobility, security etc.

- Keeping the internet open to competition, innovation and user choice - issues here include: interoperability, keeping the internet and internet-based services open and a level playing field for innovation (end-to-end connectivity, service level agreements, cross-platform services, net neutrality and open business models), open standards, low barriers to entry, etc.

...

Promoting access to creativity at all levels

In terms of expectations, Internet users and the creative content providing sector have never been as at odds as they are today. Creative industry players are struggling to find new viable business models that are able to ensure sufficient revenues for creators and to meet consumer expectations. The market for digital content is still fragmented, and broadcasters and other content providers, together with end-users, are prevented from benefiting from a true digital Single Market.

Participative platforms have grown as passive users (readers, viewers, consumers etc.) have become active producers (or "prosumers"). These users tend to ignore their statutory rights and their obligations towards rights holders for the content they transform and/or simply share in web 2.0 communities. Moreover, intermediaries generally impose take-it-or-leave-it complex standard terms of use on their users. Against this background, users currently do not enjoy a clear set of rights balancing the conditions set by rights holders (with DRMs [Digital Rights Management] and/or license agreements) and internet services or platforms imposing restrictive standard terms of use.

...

Openness as a global issue

The challenge is to keep the internet open, based on open platforms and open standards. Many issues can only be resolved through international cooperation. The ICT strategies in the EU have often been inward-looking, which is difficult to justify, given the globalisation of modern ICT and the internet.

...

Challenges of participatory web

The growth of the participatory web is adding new challenges and pressures on public administrations, as well as opportunities. Web 2.0 enables citizens to shift their relationship with government. There is increasing demand on administrations to become ever more transparent and open to citizen involvement both in the delivery of services and in the design of public policies. If managed correctly, these demands may lead to delivery of better, more personalised services at lower cost as well as more trust in the public administration. This also applies to key services such as health care and education, where practitioners and beneficiaries of the service alike can benefit from mutually enriching communities of interest.

This is all really important stuff; so if you are an EU citizen, please take part - you have until this Friday, 9 October. The good news is that you don't need to fill in the whole thing - you can just pick and choose the bits that matter to you. Usefully, you can download the questionnaire in a variety of languages before you fill it in online - I highly recommend doing so.

Follow me @glynmoody on Twitter or identi.ca.

Browser Ballot Screen: Time to Prepare

It looks like it's happening:


The European Commission will on 9 October 2009 formally invite comments from consumers, software companies, computer manufacturers and other interested parties on an improved proposal by Microsoft to give present and future users of the Windows PC operating system a greater choice of web browsers. The commitments have been offered by Microsoft after the Commission expressed specific concerns that Microsoft may have infringed EC Treaty rules on abuse of a dominant position (Article 82) by tying its web browser (Internet Explorer) to its client PC operating system Windows, and are an improved version of the proposals made by Microsoft in July 2009 (see MEMO/09/352 ). The improvements concern greater information to consumers about web browsers, the features of each browser, an improved user experience as well as a review by the Commission to ensure the proposals genuinely work to benefit consumers. Interested parties can submit comments within one month. The Commission welcomes Microsoft’s proposal as it has the potential to give European consumers real choice over how they access and use the internet. Following the market test, the Commission could decide to adopt a decision under Article 9 (1) of Regulation 1/2003, which would make the commitments legally binding on Microsoft.

It's hard to comment on this until we see what form the ballot screen will take, but I'm prepared to accept that this may be done in a fair manner. Assuming it is, what might the implications be?

Perhaps the most important one is that Firefox needs to be prepared for a massive onslaught when this goes live. I have heard the slightly tongue-in-cheek suggestion that Microsoft is hoping to bring Firefox's servers to their collective digital knees by allowing such a ballot screen; even assuming that's not the case, it's certainly true that Mozilla must start planning for the sudden peak in interest that is likely to follow the implementation of the ballot screen idea. It would be a terrible shame if people tried to download Firefox and failed because the Mozilla servers keel over.

Follow me @glynmoody on Twitter or identi.ca.

Meet Microsoft, the Delusional

This is hilarious:

Jean Philippe Courtois, president of Microsoft Europe, described the company as an underdog in Paris today.

He said Bing had between three and five percent market share in search and could only grow - although he admitted it could take a long time.

...

Despite Microsoft having to live with open source software for 10 years, it had retained its share in the market place, he said.

Er, what, like the browser sector, where Firefox now has nearly 24% market share worldwide, and Microsoft's share is decreasing? Or Apache's 54% in the Web server world, where Microsoft's share is decreasing? Or GNU/Linux's 88% market share of the top 500 supercomputers in the world, where Microsoft's share is static?

Microsoft the underdog? Or just a dog?

Follow me @glynmoody on Twitter or identi.ca.

Becta Says: Teach Us a Lesson...

...which is surely an offer we can't refuse.

For many years, Becta was one of the main obstacles to getting open source used within UK schools: it simply refused to budge from an almost pathological dependence on Microsoft and its products. Today, the situation is slowly improving, but it will take years to undo the harm caused by Becta's insistence on propagating the Microsoft monoculture in education.

At least Teach Us a Lesson seems to be starting off on the right foot:


Becta’s Teach us a Lesson competition launches today, Wednesday 7 October, following the speech that Kevin Brennan, the Minister for Further Education, made at the Learning Revolution Expo yesterday.

The competition seeks to find the brightest and best ideas for developing online resources for people to find informal learning opportunities that interest them. This will happen by having entries submitted to the competition website, where they will be commented on and rated by other site users.

This, then, is about opening up in terms of drawing on ideas outside Becta. More specifically:

There are some things we are trying to avoid:

* Using proprietary products which will not permit open sharing or which run counter to Government policy on open standards

At long last, Becta seems to have learned its lesson...

Follow me @glynmoody on Twitter or identi.ca.

06 October 2009

Postcodes: Royal Fail

Here's a perfect example of why intellectual commons should not be enclosed.

The UK Postcode data set is obviously crucial information for businesses and ordinary citizens - something that is clearly vital to the smooth running of everyday life. But more than that, it is geographic information that allows all kinds of innovative services to be provided by people with clever ideas and some skill.

That's exactly what happened when the Postcode database was leaked on to the Internet recently. People used that information to do all sorts of things that hadn't been done before, presumably because the company that claims to own this information, Royal Mail, was charging an exorbitant amount for access to it.

And then guess what happened? Yup, the nasties started arriving:

On Friday the 2nd October we received correspondence from the Royal Mail demanding that we close this site down (see below). One of the directors of Ernest Marples Postcodes Ltd has also been threatened personally.

We are not in a position to mount an effective legal challenge against the Royal Mail’s demands and therefore have closed the ErnestMarples.com API effective immediately.

We understand that this will cause harm and considerable inconvenience to the many people who are using or intend to use the API to power socially useful tools, such as HealthWhere, JobcentreProPlus.com and PlanningAlerts.com. For this, we apologise unreservedly.

Specifically, intellectual monopolies of a particularly stupid kind are involved:

Our client is the proprietor of extensive intellectual property rights in the Database, including copyright in both the Database and the software, and database rights.

Here's what Wikipedia has to say about these "database rights":

The Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 on the legal protection of databases is a European Union directive in the field of copyright law, made under the internal market provisions of the Treaty of Rome. It harmonizes the treatment of databases under copyright law, and creates a new sui generis right for the creators of databases which do not qualify for copyright.

Before 1996, these sui generis "database rights" did not exist; they were created in the EU because lobbyists put forward the argument that they would offer an incentive to create more databases than the Americans, whose database publishers strangely didn't seem to need this new "right" to thrive, and so make the mighty EU even mightier - at least as far as those jolly exciting databases were concerned.

Rather wisely, the EU afterwards decided to do some research in this area, comparing database production before and after the new sui generis right was brought in, to see just how great that incentive proved to be - a unique opportunity to test the theory that underpins intellectual monopolies. Here are the results of that research:

Introduced to stimulate the production of databases in Europe, the “sui generis” protection has had no proven impact on the production of databases.

According to the Gale Directory of Databases, the number of EU-based database “entries” was 3095 in 2004 as compared to 3092 in 1998 when the first Member States had implemented the “sui generis” protection into national laws.

It is noteworthy that the number of database “entries” dropped just as most of the EU-15 had implemented the Directive into national laws in 2001. In 2001, there were 4085 EU-based “entries” while in 2004 there were only 3095.

While the evidence taken from the GDD relies on the number of database “entries” and not on the overall turnover achieved or the information supplied by means of databases, they remain the only empirical data available.

So, the official EU study finds that the sui generis protection has had no proven impact on the production of databases; in fact, the number of databases went *down* after it was introduced.

Thus these "database rights" have been shown to stifle the production of databases - negating the whole claimed point of their introduction. Moreover, the Royal Mail's bullying of a couple of people who were trying to offer useful services that would not otherwise exist shows the danger of entrusting such a critical data commons to commercial entities, who then enclose it by claiming "database rights" over it: they will always be tempted to maximise their own profit, rather than the value to society as a whole.

Giving the Royal Mail a monopoly on this critical dataset - one that for all practical purposes can never be created again - is like giving a genetics company a monopoly on the human genome. That was attempted (hello, Celera) but, fortunately for us, thwarted, thanks largely to free software. Today, the human genome is an intellectual commons (well, most of it), and the Postcode data should be, too.

Follow me @glynmoody on Twitter or identi.ca.

Blogger's Massive Fail

I can't believe this. When posting the previous entry, I got this message:


Blogger currently allows a maximum of 10 labels per post, and 2000 labels per blog. To get rid of this message, you will have to correct the appropriate label counts.

It seems that I have exceeded my quota of 2000 labels per blog: how insane is that? How can I limit myself to a set number of labels, given that the world moves on and new ideas come along that need new labels?

Time to explore those export options....

Open Source and the Fear of Failure

Yesterday I took part in an interesting event organised by BT called "Accelerating Enterprise adoption of Open Source Software" (disclaimer: filthy lucre was involved). One topic that elicited much comment was why the public sector has singularly failed to deploy open source. As well as political issues (Tony Blair was - and presumably still is - manifestly in awe of (Sir) Bill Gates), there's another important issue to do with a fear of failure.

Nobody in government wants to take a chance on something new, so they stick with the old suppliers and the old solutions. When those (almost inevitably) fail, this causes people to be even more cautious, and so the vicious circle continues.

That's clearly bad news for open source, but here's a particularly good articulation of why the fear of failure is bad for governments more generally:

When I’ve spoken with government people, they confess a phobia of failure. Yet without the opportunity to fail, government – like industry and media – cannot experiment and thus innovate. We must give government the license to fail. That is difficult, especially because it is the citizenry that must grant that permission. I think government must begin to recast its relationship by opening up pilot projects to input and discussion, to smart ideas and improvements. I’m not suggesting for a second that every decision be turned into a vote, that law become a wiki. Government still exercises its responsibility. But it needs to use the new mechanisms of the web to hear those ideas. I would look for examples to Dell’s Ideastorm, Starbucks’ My Starbucks Idea, and Best Buy’s Idea Exchange.

Follow me @glynmoody on Twitter or identi.ca.

01 October 2009

Korea Cottons on to the Microsoft Monoculture

I've written several times about the extraordinary situation in South Korea - otherwise one of the most advanced technological nations - which maintains an almost total dependence on Microsoft's ActiveX technology for banking and government connections. Now it seems that the Koreans themselves are finally waking up to the disadvantages - and dangers - of that situation:

The bizarre coexistence of advanced hardware and an outdated user environment is a result of the country's overreliance on the technology of Microsoft, the U.S. software giant that owns the Korean computing experience like a fat kid does a cookie jar.

It is estimated that around 99 percent of Korean computers run on Microsoft's Windows operating system, and a similar rate of Internet users rely on the company's Internet Explorer (IE) Web browser to connect to cyberspace.

The article points out the obvious security issues with this approach:

This is a risky arrangement, since Active-X controls require full access to the Windows operating system and are often abused by cyber criminals who spread malicious programs to direct the browser to download files that compromise the user's control of the computer.

But it seems that the problem goes *much* deeper:

Even Microsoft seems ready to bail on Active-X, looking to phase out the program over security concerns and compatibility issues. However, in Korea, where most Web sites rely on Active-X to enable a variety of functions from online transactions to simple flash features, the program is abundant and critical as air.

This leads to awkwardness whenever Microsoft introduces a new product here. The release of Windows Vista caused massive disruption when Active-X used by banks and online shopping sites didn't function properly.

And the Korean Internet users sweated over Microsoft's initial plans to reduce its support for Active-X in IE8, the latest version of the company's Web browser. Although IE8 did end up backing Active-X, strengthened security features have made its use more complicated.

The reliance on Active-X has locked Korean computer users into a depressing cycle where they are prevented from venturing off to other operating systems and browsers, and stuck with outdated technologies their creator can't wait to dispel.

That is, by instituting a monoculture, and becoming completely dependent not just on one manufacturer, but on one particular - and very unsatisfactory - technology used by that manufacturer, the Koreans find themselves trapped, left behind even by Microsoft, which wants to move on.

There could be no better demonstration of why mandating one proprietary technology in this way, rather than choosing an open standard with multiple implementations and scope for future development, is folly.

Unfortunately, the article quoted above doesn't seem very optimistic on the chances of openness breaking out in South Korea any time soon, so it may well be that all its superb Internet infrastructure will go to waste as it remains locked into aging and increasingly obsolete technology on the software side. (Via Mozilla in Asia.)

Follow me @glynmoody on Twitter or identi.ca.

30 September 2009

What Light on Yonder gov.uk Site Breaks?

The first glint of hope for openness in the UK government begins to sparkle:


From today we are inviting developers to show government how to get the future public data site right - how to find and use public sector information.

The developer community, through initiatives such as Show Us a Better Way, the Power of Information Taskforce, MySociety and Rewired State, has consistently demonstrated its eagerness and abilities to "Code a Better Country". You have given us evidence and examples to help drive this forward within government.

We have an early preview of what the site could look like; we are now inviting interaction and comment from the developer community. With over 1000 existing data sets, from 7 departments (brought together in re-useable form for the first time) and community resources, we want developers to work with us to use the data to create great applications; give us feedback on the early operational community; and tell us how to develop what we have into a single point of access for government-held public data.

We know it is still work in progress, and there’s still a lot to do. That’s why we need you to help us get this right. Let us know what features or changes would make the site better for you, and what other data sources you would like to see here.

Now there's an offer you can't refuse...get stuck in, people. (Via Glyn Wintle.)

Follow me @glynmoody on Twitter or identi.ca.

29 September 2009

Thanks for Keeping us in the Picture

Although e-petitions don't often accomplish much (the apology for Alan Turing being a notable exception), they do have the virtue of forcing the UK government to say something. In response to this:

“We the undersigned petition the Prime Minister to remove new restrictions on photography in public places.”

we got this:

It is a statutory defence for a person to prove that they had a reasonable excuse for eliciting, publishing or communicating the relevant information. Legitimate journalistic activity (such as covering a demonstration for a newspaper) is likely to constitute such an excuse. Similarly, an innocent tourist or other sight-seer taking a photograph of a police officer is likely to have a reasonable excuse.

Since most people can't *prove* they had a reasonable excuse for taking a photo - is "because it was a nice shot" *reasonable*? And how do you *prove* it was reasonable at the time? - this very high legal bar obviously implies that non-journalistic Brits had better not take any snaps of Plod because, otherwise, you're nicked.

Follow me @glynmoody on Twitter or identi.ca.

26 September 2009

Freedom is Slavery, Slavery is Freedom

The Competitive Enterprise Institute is always good for a laugh thanks to its transparent agenda (the use of the weasel word "competitive" gives it away), and it doesn't disappoint in the following, which is about the evils of net neutrality and openness:

Consider the Apple iPhone. The remarkably successful smartphone has arguably been a game-changer in the wireless world, having sold tens of millions of handsets since its 2007 launch and spurring dozens of would-be “iPhone killers” in the process. If you listen to net neutrality advocates’ mantra, you would assume the iPhone must be a wide open device with next to no restrictions. You would be mistaken. In fact, the iPhone is a prototypical “walled garden.” Apple vets every single iPhone app, and Apple reserves the right to reject iPhone apps if they “duplicate [iPhone] functionality” or “create significant network congestion.”

Why, then, has the iPhone enjoyed such popularity? It’s because consumer preferences are diverse and constantly evolving. Most users, it seems, do not place openness on the same pedestal that net neutrality advocates do. Proprietary platforms like the iPhone have advantages of their own - a cohesive, centrally-managed user experience, for one - but have disadvantages as well.

Which is fair enough. But it then goes on to say:

But under the FCC’s proposed neutrality rules, the iPhone and similar devices that place limits on the content and applications that users can access would likely be against the law.

Net neutrality has nothing to do with the edges - which is where the iPhone resides - and everything to do with the wiring that connects those edges. It is about preventing those who control the networks from blocking innovative services - like the iPhone - that are offered across them. It would only apply if Apple owned the network and refused to allow third parties to offer rival services to its iPhone - clearly not the case. It does not stop Apple from choosing which apps to run on the iPhone, any more than it forces Microsoft to go open source.

Painting the freedom of net neutrality as a kind of slavery in this way is really a tour-de-force of topsy-turvism, even by the high standards of the Competitive Enterprise Institute.

Follow me @glynmoody on Twitter or identi.ca.

25 September 2009

Won't Someone Please Think of the, er, Plants?

I've tweeted this, but it's so good, I just have to blog it too:

CO2 is not a pollutant. CO2 makes Earth green because it supports all plant life. It is Earth's greatest airborne fertilizer. Even man-made CO2 contributes to plant growth that in turn sustains humanity and ecosystems.

CO2 Is Green is working to insure that all federal laws or regulations are founded upon science and not politics or scientific myths. No one wants the plant and animal kingdoms, including humanity, to be harmed if atmospheric CO2 is reduced. The current dialog in Washington needs to reflect these inalterable facts of nature. We cannot afford to make mistakes that would actually harm both the plant and animal kingdoms.

Oh lordy, those poor little plants and animals - deprived of the life-giving CO2. How could mankind be so cruel and insensate? How could we have overlooked such an obvious thing until now?

Update: Don't miss Adam Pope's super-sleuthing in the comments that suggests this site just might have something to do with the gas and oil industries...

Follow me @glynmoody on Twitter or identi.ca.

24 September 2009

More Evil from the Intellectual Monopolies Mob

One of the best windows into the otherwise dark and murky world of backroom deals among proponents of intellectual monopolies can be found in the reports on the U.S.-EU IPR Enforcement Working Group (doesn't that word "enforcement" really say it all?). Here are a couple of the highlights of the latest one:

The U.S. and EU both expressed a desire to engage labor movements in delivering a “positive and constructive message” about IPR protection and enforcement. The RIAA (Recording Industry Association of America) and IIPA (International Intellectual Property Alliance) were both very enthusiastic about this proposal.

Basically, the IM mob are desperately trying to con unions into doing their dirty work by pushing out propaganda on intellectual monopolies. I just love the line "The RIAA (Recording Industry Association of America) and IIPA (International Intellectual Property Alliance) were both very enthusiastic about this proposal": you bet they are. Their own ham-fisted efforts have backfired so spectacularly that they are desperate for someone else - someone not tainted by their inept approach of punishing consumers - to try.

The following is also significant:

The discussion on future work mostly focused on climate change. General Electric and Microsoft were particularly outspoken in highlighting their fear that some current negotiations over green technology and IPR would weaken IPR. They also denounced the inclusion of proposals that limit patentable subject matter and recommend compulsory licenses or licenses of right.

As well as Microsoft's usual bleating about not being allowed to patent software in some jurisdictions, it's interesting to note that both it and General Electric seem to rate the preservation of intellectual monopolies rather higher than the preservation of our planet. Pure evil. (Via Ray Corrigan.)

Follow me @glynmoody on Twitter or identi.ca.

Cracks in the ACTA Wall of Secrecy

I've lamented many times the totally unjustified secrecy of the ACTA negotiations: these affect billions of people who have a right to know what their elected representatives are up to before this stuff is simply imposed on us. Hitherto, there's been no suggestion of any dissension within the ACTA ranks; so this comment - in a blog post from Jamie Love about a lunch meeting of civil society NGOs held by the UK's Intellectual Property Office during the WIPO meeting - is intriguing:


The UK IP office said it had complained frequently of the secrecy of the ACTA negotiations.

Perhaps if we can get a few more of the insiders moaning about this unnecessary lack of transparency, things will finally start moving.

Follow me @glynmoody on Twitter or identi.ca.

23 September 2009

Big Win for GNU GPL in France

One of the fallback positions for purveyors of FUD is that the GNU GPL may not be valid, because it hasn't been properly tested in court. That's getting increasingly implausible as a stance. After being upheld in Germany a few times, here's a big decision in its favour in France:

In a landmark ruling that will set legal precedent, the Paris Court of Appeals decided last week that the company Edu4 violated the terms of the GNU General Public License (GPL) when it distributed binary copies of the remote desktop access software VNC but denied users access to its corresponding source code. The suit was filed by Association pour la formation professionnelle des adultes (AFPA), a French education organization.

...

The events of the case go back to early 2000, when Edu4 was hired to provide new computer equipment in AFPA's classrooms. Shortly thereafter, AFPA discovered that VNC was distributed with this equipment. Despite repeated requests, with mediation from the Free Software Foundation France, Edu4 refused to provide AFPA with the source code to this version of VNC. Furthermore, FSF France later discovered that Edu4 had removed copyright and license notices in the software. All of these activities violate the terms of the GNU GPL. AFPA filed suit in 2002 to protect its rights and obtain the source code.

There are a couple of important points about this decision. The first is noted in the post quoted above:

"what makes this ruling unique is the fact that the suit was filed by a user of the software, instead of a copyright holder. It's a commonly held belief that only the copyright holder of a work can enforce the license's terms - but that's not true in France. People who received software under the GNU GPL can also request compliance, since the license grants them rights from the authors."

The other point flows from this. The French legal system has many novel aspects, so it's important that the GNU GPL was upheld here, just as it was in Germany. It means that not only is the approach that the GPL takes being upheld by courts, it is being upheld in courts that look at things from different legal perspectives. That augurs well for future rulings in other jurisdictions.

Follow me @glynmoody on Twitter or identi.ca.

21 September 2009

Microsoft, Monsanto and Intellectual Monopolies

Here's a brilliant, must-read feature exposing some of the hidden agendas of the Green Revolution and the dark side of the Gates Foundation's work in Africa. In particular, it makes explicit the symmetry of Microsoft and Monsanto in their use of intellectual monopolies to make their users increasingly powerless:

The preference for private sector contributions to agriculture shapes the Gates Foundation's funding priorities. In a number of grants, for instance, one corporation appears repeatedly--Monsanto. To some extent, this simply reflects Monsanto's domination of industrial agricultural research. There are, however, notable synergies between Gates and Monsanto: both are corporate titans that have made millions through technology, in particular through the aggressive defense of proprietary intellectual property. Both organizations are suffused by a culture of expertise, and there's some overlap between them. Robert Horsch, a former senior vice president at Monsanto, is, for instance, now interim director of Gates's agricultural development program and head of the science and technology team. Travis English and Paige Miller, researchers with the Seattle-based Community Alliance for Global Justice, have uncovered some striking trends in Gates Foundation funding. By following the money, English told us that "AGRA used funds from the Bill and Melinda Gates Foundation to write twenty-three grants for projects in Kenya. Twelve of those recipients are involved in research in genetically modified agriculture, development or advocacy. About 79 percent of funding in Kenya involves biotech in one way or another." And, English says, "so far, we have found over $100 million in grants to organizations connected to Monsanto."

This isn't surprising in light of the fact that Monsanto and Gates both embrace a model of agriculture that sees farmers suffering a deficit of knowledge--in which seeds, like little tiny beads of software, can be programmed to transmit that knowledge for commercial purposes. This assumes that Green Revolution technologies--including those that substitute for farmers' knowledge--are not only desirable but neutral. Knowledge is never neutral, however: it inevitably carries and influences relations of power.

I fear that, with hindsight, we will see that - contrary to the almost universal view that Gates is redeeming his bad boy years at Microsoft with the good boy promises of his Foundation - he will actually do even more damage in the realm of agriculture than he has in the world of computing. (Via Roy Schestowitz.)

Follow me @glynmoody on Twitter or identi.ca.

On the Road to Mendeley

Vic Keegan had an interesting article in the Guardian last week about a new site mendeley.com:


The music radio site Last.fm is one of the great ideas from the UK during the first dotcom boom. Users can listen to their own songs and other tracks recommended by Last.fm's algorithms based on their tastes, including iTunes, and those of friends. It could easily have been a one-trick pony. But now a few academics have applied its serendipity to scientific research. Why can't researchers, instead of waiting anywhere up to three years for their papers to jump all the hurdles, be part of a real-time market place – a fusion of iTunes and Last.fm for science? They pitched the idea, among others, to two of Last.fm's investors: Spencer Hyman and Stefan Glaenzer, newly enriched by the sale of Last.fm to CBS. They bought into the idea of using the site's principles to aggregate users' data (anonymously) while building up a databank of articles. Now the show is on the road and expanding fast. It is free, but a premium version will be added soon.

What's particularly fascinating is to see the cross-over of ideas from arts to science, and that both are driven by the insight that sharing with others brings huge benefits to them and to you.

Even though it's not open source, it's good to see that from the start there's a GNU/Linux version of the Mendeley client. Since the power of the site comes from the network effects of sharing, not the secret sauce hidden in the code, there doesn't seem to be any reason why that code shouldn't be opened up, and plenty of benefits in doing so. Now that Mendeley has started on its journey of sharing, let's hope they go the whole way.

Follow me @glynmoody on Twitter or identi.ca.

17 September 2009

Analogue or Digital? - Both, Please

Recently, I bought the complete works of Brahms. Of course, I was faced with the by-now common problem of whether to buy nostalgic CDs, or evanescent MP3s. The price was about the same, so there was no guidance there. Ecologically, I should have gone for the downloads, but in the end I chose the CDs - partly for the liner stuff you never get with an MP3, and partly because I have the option of degrading the CD bits to lossy MP3, which doesn't work so well the other way.

So imagine my surprise - and delight - when I discovered after paying for said CDs that the company - Deutsche Grammophon - had also given me access to most of the CDs as streams from its Web site, for no extra cost (I imagine the same would have been true of the MP3s). This was a shrewd move because (a) it made me feel good about the company, even though it cost them very little, and (b) I'm now telling people about this fact, which is great publicity for them.

But maybe my delight is actually a symptom of something deeper: that having access to both analogue and digital instantiations of information is getting the best of both worlds.

This struck me when I read the following story:

Google will make some 2 million out-of-copyright books that it has digitally scanned available for on-demand printing in a deal announced Thursday. The deal with On Demand Books, a private New York-based company, lets consumers print a book in about 10 minutes, and any title will cost around $8.

The books are part of a 10 million title corpus of texts that Google has scanned from libraries in the U.S. and Europe. The books were published before 1923, and therefore do not fall under the copyright dispute that pits Google against interests in technology, publishing and the public sector that oppose the company's plans to allow access to the full corpus.

That, in itself, is intriguing: Google getting into analogue goods? But the real importance of this move is hinted at in the following:

On Demand already has 1.6 million titles available for print, but the Google books are likely to be more popular, as they can be searched for and examined through Google's popular engine.

That's true, but not really the key point, which is that as well as being able to search *for* books, you can search *through* them. That is, Google is giving you an online search capability for the physical books you buy from them.

This is a huge breakthrough. At the moment, you have to choose between the pleasure of reading an analogue artefact, and the convenience of its digital equivalent. With this new scheme, Google will let you find a particular phrase - or even word - in the book you have in your hands, because the latter is a physical embodiment of the one you use on the screen to search through its text.

The trouble is, of course, that this amazing facility is only available for those books out of copyright that Google has scanned. Which gives us yet another reason for repealing the extraordinarily stupid copyright laws that stop this kind of powerful service being offered for *all* text.

Follow me @glynmoody on Twitter or identi.ca.

16 September 2009

From the GNU GPL to GISAID's EpiFlu

A few months ago, I wrote about GISAID, which takes a rather interesting and - to readers of this blog, at least - familiar approach to sharing genomic data:

Registered users can upload data relating to sequences, clinical manifestations in humans, epidemiology, observations in poultry and other animals, etc. These data will be accessible to all other registered users, but not to others unless they have agreed to the same terms of use. This maintains confidentiality of the data.

This is, of course, the same as the GNU GPL: do as you would be done by - if you want to use the GPL'd code, you can, but you must share with everyone the results of your work if you decide to share it with anyone.

The GNU GPL was radical in its time, and the GISAID approach with its EpiFlu database, containing flu virus sequences, is also challenging - and meeting its own obstacles:

Today, the GISAID database (which is called EpiFlu) features both genomic and epidemiological data on tens of thousands of virus samples. At least until recently, the project seemed to be working. During the H1N1 outbreak, so many sequences were being submitted so quickly that researchers were literally watching clusters of outbreaks in real time.

Then, in July of 2009, the Swiss Institute of Bioinformatics (SIB) in Geneva, which has managed the database since 2006, removed EpiFlu from the GISAID Web site, making it available only to users redirected to SIB's Web site. SIB claims that GISAID had breached contract by failing to pay its bills on time, thereby relinquishing its rights to the database.

Let's hope that the SIB comes to its senses before it loses more of its credibility as a modern scientific organisation. Its high-handed claiming of "rights" to a commons created by others is simply not acceptable in the 21st century - which, if it has a future, will be one based around precisely the kind of sharing practised by GISAID.

Follow me @glynmoody on Twitter or identi.ca.

15 September 2009

Nonplussed by Non-Commercial

One of the vexed issues in the world of Creative Commons licensing is what, exactly, is meant by "non-commercial" use. In an attempt to clarify things, the Creative Commons people have commissioned a study, which has now appeared. Here are some of the highlights according to the press release:

Creative Commons noncommercial licenses preclude use of a work “in any manner that is primarily intended for or directed toward commercial advantage or private monetary compensation.” The majority of respondents (87% of creators, 85% of users) replied that the definition was “essentially the same as” (43% of creators, 42% of users) or “different from but still compatible with” (44% of creators, 43% of users) theirs. Only 7% of creators and 11% of users replied that the term was “different from and incompatible with” their definition.

Other highlights from the study include the rating by content creators and users of different uses of online content as either “commercial” or “noncommercial” on a scale of 1-100, where 1 is “definitely noncommercial” and 100 is “definitely commercial.” On this scale, creators and users (84.6 and 82.6, respectively) both rate uses in connection with online advertising generally as “commercial.” However, more specific use cases revealed that many interpretations are fact-specific. For example, creators and users gave the specific use case “not-for-profit organization uses work on its site, organization makes enough money from ads to cover hosting costs” ratings of 59.2 and 71.7, respectively.

On the same scale, creators and users (89.4 and 91.7, respectively) both rate uses in which money is made as being commercial, yet again those ratings are lower in use cases specifying cost recovery or use by not-for-profits. Finally, both groups rate “personal or private” use as noncommercial, though creators did so less strongly than users (24.3 and 16.0, respectively, on the same scale).

In open access polls, CC’s global network of “friends and family” rate some uses differently from the U.S. online population—although direct empirical comparisons may not be drawn from these data. For example, creators and users in these polls rate uses by not-for-profit organizations with advertisements as a means of cost recovery at 35.7 and 40.3, respectively—somewhat more noncommercial. They also rate “personal or private” use as strongly noncommercial—8.2 and 7.8, respectively—again on a scale of 1-100 where 1 is “definitely noncommercial” and 100 is “definitely commercial.”

I hope you got all that, for I certainly didn't. All that comes across to me from these figures is that "non-commercial" is so fluid a concept as to be useless.

The Creative Commons people rather created a rod for their own backs when they allowed this particular licence, which was bound to be problematic. Indeed, it's striking that the GNU GPL, which doesn't allow this restriction, avoids all these issues entirely. Probably too late now to do anything about it...other than commissioning surveys, of course.

Follow me @glynmoody on Twitter or identi.ca.

14 September 2009

Wikipedia + Flickr = Fotopedia

I am a huge fan of Wikipedia, one of the greatest achievements of sharing; I also enjoy wandering around Flickr, although its lack of over-arching organisation makes that hard to do. Maybe this is the perfect solution: Fotopedia, "the first collaborative photo encyclopedia", which uses text from Wikipedia, but only to provide what amount to extended captions for the pix, which are generally very attractive.

It's not the first to do this - VisWiki has been around for some time - but Fotopedia seems to take a much more visual approach, which I find very pleasing, because it creates more than just a highly-illustrated version of Wikipedia. Articles and their pix are a little thin on the ground at the moment, but with any luck, that won't be the case for long once word gets out - and pictures start pouring in.

Follow me @glynmoody on Twitter or identi.ca.

MakeHuman Makes Open Source More Human

One of the canards about open source is that it only produces hardcore hacker programs - dev tools, infrastructural stuff etc. - that have little to offer the general, non-technical, *normal* user. While that may have been true ten years ago, things have moved on.

For example, here's MakeHuman, an amazing program that lets you create photorealistic 3D humanoid characters:

MakeHuman is an open source (so it's completely free), innovative and professional software for the modelling of 3-Dimensional humanoid characters. Features that make this software unique include a new, highly intuitive GUI and a high quality mesh, optimized to work in subdivision surface mode (for example, Zbrush). Using MakeHuman, a photorealistic character can be modeled in less than 2 minutes; MakeHuman is released under an Open Source Licence (GPL 3.0), and is available for Windows, Mac OS X and Linux.

The MakeHuman 0.9.1 Release Candidate was published in December 2007, prompting considerable community feedback.

Development effort is currently focused on the 1.0 Release, based on a new GUI and a 4th generation mesh. This release also incorporates considerable changes to the code base which now uses a small, efficient core application written in C, with most of the user functionality being implemented in Python. Because Python is an interpreted scripting language, this means that a wide range of scripts, plugins and utilities can be added without needing to rebuild the application.
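
That small-C-core-plus-Python-scripting split is a common and powerful pattern. As a rough illustration of why it makes extension so easy, here is a generic sketch of a Python plugin loader - an illustration of the approach, not MakeHuman's actual API:

    import importlib
    import pkgutil

    def load_plugins(package_name="plugins"):
        # Import every module found in the plugins package; each plugin
        # announces itself by exposing a register() function, so new
        # functionality can be dropped in without rebuilding the core.
        package = importlib.import_module(package_name)
        loaded = []
        for _, name, _ in pkgutil.iter_modules(package.__path__):
            module = importlib.import_module(package_name + "." + name)
            if hasattr(module, "register"):
                loaded.append(module.register())
        return loaded

Because the interpreter does this work at run time, adding a feature is simply a matter of dropping a new file into a directory.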

Even the alpha version is incredibly impressive - real drag and drop 3D humanoid manipulation (*very* eerie), with a simple-to-use interface. If you think that free software is only about important but boring stuff, try out MakeHuman, and be amazed.

Follow me @glynmoody on Twitter or identi.ca.

Checks Are Indeed Needed - on Reality

Here's an unbelievably shameless attempt by Sir Roger Singleton to shout down the justified concern about the insane UK government vetting scheme that he heads. Let's consider some of his comments.

It is not about interfering with the sensible arrangements which parents make with each other to take their children to schools and clubs.

Well, except for the fact that if I regularly take other people's children to a club, I have to register. So Sir Roger seems to be re-defining "sensible" to exclude this hitherto mundane activity.

It is not about subjecting a quarter of the population to intensive scrutiny of their personal lives

No, it's worse: it's allowing a quarter of the population to be at the mercy of *unsubstantiated* rumours, without any controls on calumnies, however misinformed, that may be circulating about them.

it is not about creating mistrust between adults and children

Er, apart from the fact that the line now being pushed by proponents of the scheme is that if you don't register you clearly have something to hide, and cannot therefore be trusted with children. Which means that children are now expected to distrust everyone who has not been vetted - a mere three-quarters of the population.

it is not about ... discouraging volunteering

Well, Sir Roger, I agree it's not *about* discouraging volunteering - this is about instilling yet more fear to make people more sheep-like and compliant - but it will certainly be a knock-on consequence. I, for one, will not be volunteering for anything in future, because I refuse to allow a faceless and largely unaccountable bureaucracy - one that has time and again proven itself utterly incompetent with sensitive, personal information - to make judgments about my trustworthiness.

So, all in all, your statements are a total disgrace, because you dismiss all the deeply-felt concerns of parents up and down the country without addressing them in the slightest. You have simply re-stated your own indifference to what the public thinks - a public you are supposed to serve.

If you had any decency, you would resign; but then, if you had any decency you wouldn't be running this divisive, authoritarian scheme that will continue to blight families, education and British society in general until such time as it is consigned to the political dustbin, which can't be soon enough.

Follow me @glynmoody on Twitter or identi.ca.

12 September 2009

On Opening Up with PHP

PHP is one of the big success stories of open source, so it's great to read this interview with its creator, Rasmus Lerdorf. I was especially struck by these words of wisdom:

in 1997, it basically came to the point where I was going to kill the project, because it was growing so fast and my mailbox was filling up with suggestions, complaints, patches, all these things. Up until then, I had been doing everything myself. Someone would make a suggestion, send me a patch and I'd rewrite the patch the way I thought it should be done.

I would disagree with people, I'd argue back and forth, and I just couldn't keep up any more. I was getting frustrated and sick of it all, [thinking]: "Why are all these people expecting me to fix their code? They're not paying me. What the hell am I doing working my ass off for these folks? I don't even know them – what the hell is going on here?"

So that was the time when I said: "This has to change. Give the guys who have been complaining over the last few years access to the code. The guy who has been complaining about the Oracle extension, he's been a pain in my ass for years, so it's yours now buddy. Any further issues or complaints about Oracle go directly to you." And that really empowered people.

When they felt that they now owned a slice of PHP, they started getting defensive. Instead of complaining to me about things – once they got ownership, and power, the whole atmosphere changed. And it got a lot more fun as well, because I didn't feel like it was just me against the world any more; now it was a real team effort.

That, in essence, is the secret of free software. Putting it into practice is slightly harder...

Follow me @glynmoody on Twitter or identi.ca.

Time for MPs to Face the Music on Sharing

Another ill-informed opinion piece from a politician about file-sharing:

Platinum selling artists Radiohead and Pink Floyd have said they are happy to see their music used as a sort of digital loss leader to sell other products, but these groups are the exception rather than the rule. The average musician earns less than £15,000 a year and losing royalties makes the day-to-day struggle even harder for them.

Those average musicians - just like average authors - will tell you the biggest problem they face is getting known, not getting paid. What musicians, and authors like me, struggle with is getting the word out about our work amongst the million other offerings out there. Believe it or not, simply having a distributor does not solve that problem: in my experience they pretty much expect *you* to do the marketing.

Paradoxical as it may seem, giving your stuff away is one of the best ways to make money. Not necessarily from the content - although that is possible, too, for example by selling physical CDs/books to people who have digital versions - but from ancillary revenue. This is not to be sneezed at: *all* the top pop musicians make much more from their live appearances than they do from their CDs (which is why an artist like Prince *gives away* CDs to people who attend his concerts).

As the quotation above concedes, giving away stuff isn't a difficulty for the top artists, and as I've indicated, giving it away is precisely the best way for less well-known musicians to break out of their low-income ghetto.

So, really, the only people who lose out from the sharing of music online are the record companies, who find themselves without a role. But the idea that civil liberties should be curtailed simply to keep afloat a dying industry - one widely hated by artists and consumers alike - is self-evidently absurd.

It's worrying that the author of this latest simplistic attack on file-sharing, apparently "a former member of Runrig", is unable to see this. He and other demagogues who attack sharing, for whatever reason, would do well to look at the facts, and not glibly regurgitate the propaganda of the industry and its lobbyists.

Follow me @glynmoody on Twitter or identi.ca.

Russia's New Holiday: Programmer's Day

Russia's President Medvedev has decreed a new holiday for the country:

Президент России Дмитрий Медведев своим указом установил профессиональный праздник - День программиста, который отмечается 13 сентября, если год високосный - 12 сентября, сообщает пресс-служба главы государства.

Неофициально День программиста отмечается в мире уже много лет на 256-й день каждого года. Число 256 выбрано потому, что это количество целых чисел, которое можно выразить с помощью одного восьмиразрядного байта и также это максимальная степень числа 2, которая меньше 365.

13 сентября уже давно стало неофициальной праздничной датой для программистов, напомнили в министерстве. Указ об официальном утверждении праздника был подготовлен Минкомсвязи после консультаций с профсоюзами и отраслевыми ассоциациями и внесен в правительство в июле 2009 года.

[Translation: Russia's President Dmitry Medvedev has issued a decree establishing a professional holiday - Programmer's Day - to be celebrated on 13 September, or on 12 September in leap years, the press office of the head of state reports.

Unofficially, Programmer's Day has been celebrated around the world for many years on the 256th day of each year. The number 256 was chosen because it is the number of integers that can be expressed in a single eight-bit byte, and is also the largest power of 2 that is less than 365.

13 September has long been an informal holiday for programmers, the ministry noted. The decree officially establishing the holiday was prepared by the Ministry of Communications after consultation with trade unions and industry associations, and was submitted to the government in July 2009.]
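
The date arithmetic is easy to verify - a quick Python sketch of the 256th-day rule:

    from datetime import date, timedelta

    def programmers_day(year):
        # The 256th day of the year is 1 January plus 255 days:
        # 13 September in ordinary years, 12 September in leap years.
        return date(year, 1, 1) + timedelta(days=255)

    print(programmers_day(2009))  # 2009-09-13
    print(programmers_day(2008))  # 2008-09-12 (leap year)
    print(2 ** 8)                 # 256: the number of values in one eight-bit byte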

Russia leads the way again....

Follow me @glynmoody on Twitter or identi.ca.

11 September 2009

Governments Have Political Agendas? Surely Not

This interview about the EU's intervention on the Oracle-Sun deal made me chortle:

Q: What is the motivation for the EC itself?

Weiss: We have a pretty common position in Gartner that there is either a misunderstanding or lack of knowledge on the part of the EC where it feels open source can be used as a competitive threat in the market. ... That commission is there to protect the European vendors and opportunities for European common market members. There are vendors with databases that would find Oracle an intimidating presence and may be competing with Oracle not only on the database level but also on the applications level.

Feinberg: It's a political agenda. And although it's pretty strong, for a lack of better term it is the re-emergence of protectionism by a governing body of some organization. The EU is looking for how it can protect the companies in Europe.

I see, so what they're saying is that the EU has a political agenda, and is trying to protect companies in Europe. And this would be different from what the US does, or Japan, or China, exactly *how*....?

Follow me @glynmoody on Twitter or identi.ca.

Why Gordon Brown is Not Turing Complete

Thousands of people have come together to demand justice for Alan Turing and recognition of the appalling way he was treated. While Turing was dealt with under the law of the time and we can’t put the clock back, his treatment was of course utterly unfair and I am pleased to have the chance to say how deeply sorry I and we all are for what happened to him.

Kudos to Gordon Brown for at last apologising to the memory of this poor man. Or at least partial kudos, since he doesn't quite seem to have taken those words fully to heart.

If we wish to render some justice to Turing, there would be no better way than to ensure the preservation of Bletchley Park, perhaps the central theatre of his work, as a monument to him, and to the thousands of others involved in the early years of code-breaking and computing in this country. If Gordon Brown is sincere in his apology, and these are to be more than a politician's easy words, he should make that happen now.

Follow me @glynmoody on Twitter or identi.ca.

10 September 2009

Marriage Made in Hell: FOI+DRM

This is not as it's supposed to be:

Secretive management at Southampton University are undermining the spirit of Freedom of Information (FOI) laws, if not necessarily contravening the letter.

Reg reader and founder of website publicwhip, Francis Irving, draws our attention to a fairly innocuous request to the University requesting details of the "total amount received by the purchase of printer credit at the University of Southampton for the academic years 2006/7, 2007/08 and 2008/09".

Hardly earth-shattering information. The author of the request, Adam Richardson, was therefore very surprised to receive back from the University – about a month later than the law suggests – a copy-protected PDF document, which requires the recipient to swear that they are indeed the person who made the FOI request before viewing the rest of the document.

...

In addition to these technological barriers, the University also adds an "Intellectual Property Rights Notice" which would appear to be a direct contravention of the law in this area. They claim to "reserve all intellectual property rights" in respect of material provided under the FOI request. The material may not be further used without the "written permission of the University".

They're claiming what? - intellectual monopolies on the facts? Talk about being true to the spirit of the law....

Follow me @glynmoody on Twitter or identi.ca.

BBC Worldwide Merging with Microsoft?

Against this background:

BBC Worldwide’s digital sales and business development head Peter Mercier is leaving to be Microsoft’s content acquisitions and strategy senior director - the latest in the revolving HR door between the two companies.

...

BBCWW hired Mercier from MobiTV as head of mobile in 2007 before he got a wider digital role in ‘08. Ashley Highfield left as CEO of BBCWW’s Kangaroo JV last year. Microsoft’s UK online services group VP Chris Dobson went the other way to be BBCWW’s WVP and GM of global ad sales, leading BBC.com ad sales in particular; he later took two BBCWWers with him.

Rather than try to cover up the symbiotic relationship between the two organisations, wouldn't it be simpler just to merge them now? At least then there would be no pretence of independence on the part of BBC Worldwide...

08 September 2009

The Open Dinosaur Project

Now this is what I call open science:

Hello, and thanks for dropping by at the Open Dinosaur Project. This blog is part of a wider project, in which we hope — with your help — to make some science. We want to put together a paper on the multiple independent transitions from bipedality to quadrupedality in ornithischians, and we want to involve everyone who’s interested in helping out. We’ll get to the details later, but the basic idea is to amass a huge database of measurements of the limb bones of ornithischian dinosaurs, to which we can apply various statistical techniques. Hopefully we’ll figure out how these transitions happened — for example, whether ceratopsians, thyreophorans and ornithopods all made it in the same way or differently.

Who are “we”, I hear you ask. The core ODP team is Andy Farke (curator at the Raymond M. Alf Museum of Paleontology, Claremont, California), Matt Wedel (Western University of Health Sciences, Pomona, California) and Mike Taylor (University College London). We’re all researching and publishing scientists, specialising in dinosaurs — although up until now Matt and Mike have concentrated on sauropods.

As for who you are: if you care about dinosaurs, and want to make some science, then you can be involved. It doesn’t matter whether you’re a seasoned professional palaeontologist, a high-school kid or a retired used-car salesman: so long as you can conduct yourself like a professional, you’re welcome here.

And beyond this great idea, there's lots of practical stuff on the site about how it will be done: this will clearly carry over to other projects, which makes it well worth studying for anyone contemplating similar collaborative open science efforts. (Via @BoraZ.)
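
To make that concrete, here is a minimal sketch of the kind of analysis such a limb-bone database enables - comparing forelimb-to-hindlimb proportions across groups. The file and column names are hypothetical, not the project's actual schema:

    import csv
    from collections import defaultdict

    # Hypothetical input: one row per specimen, with its taxonomic
    # group and limb-bone lengths in millimetres.
    ratios = defaultdict(list)
    with open("ornithischian_limbs.csv") as f:
        for row in csv.DictReader(f):
            # A humerus/femur ratio approaching 1 hints at weight-bearing
            # forelimbs - that is, quadrupedality.
            ratios[row["group"]].append(
                float(row["humerus_mm"]) / float(row["femur_mm"]))

    for group, values in sorted(ratios.items()):
        print(group, round(sum(values) / len(values), 2), "n =", len(values))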

Follow me @glynmoody on Twitter or identi.ca.

Intellectual (Monopolies) Ventures

I've been avoiding this story about Nathan Myhrvold's Intellectual Ventures using patent trolls as proxies because it's just beyond even the deepest irony, and thus immune to my pen. But then I came across this post that puts it so well, I felt I had to pass it on:

Until recently, one of the few points Myhrvold could make in his own favor is that he hadn’t started suing firms that declined to license his patent portfolio. I say “until recently” because we’re now learning that the lawsuits have started. IV has begun selling off chunks of its patent portfolio to people like Raymond Niro with well-deserved reputations for being “patent trolls.” Threatening to sell patents to a third party who will sue you is more subtle than threatening to sue you directly, but the threat is just as potent. Myhrvold’s “sales pitch” to prospective licensees just got a lot more convincing.

The fundamental question we should be asking about this business strategy is how it benefits anyone other than Myhrvold and the patent bar. Remember that the standard policy argument for patents is that they incentivize beneficial research and development. Yet IV’s business model is based on the opposite premise: produce no innovative products, spend minimal amounts on research and development, and make a profit by compelling firms that are producing products and investing in R&D to pay up. Not only does this enrich Myhrvold at everyone else’s expense, but it also reduces the incentive to innovate, because anyone who produces an innovative product is forced to share his profits with Intellectual Ventures. Patents are supposed to make innovation more profitable. Myhrvold is using the patent system in a way that does just the opposite. In thinking about how to reform the patent system, a good yardstick would be to look for policy changes that would tend to put Myhrvold and his firm out of business.

That last line is an absolute killer.

Follow me @glynmoody on Twitter or identi.ca.

Someone Has a Man with a Red Flag Moment

This is so misguided:

Digital personal property (DPP) is an attempt to make consumers treat digital media like physical objects. For instance, you might loan your car to a friend, a family member, or a neighbor. You might do so on many different occasions and for different lengths of time. But you are unlikely to leave the car out front of your house with the keys in it and a sign on it saying, "Take me!" If you did, you might never see the vehicle again.

But that's the whole point about digital content: you *can* leave it out in front of your virtual house, and allow people to take it, because you *still* have a copy. It's non-rivalrous - that's its amazing, wonderful nature. Trying to make it rivalrous is like putting a man with a red flag in front of a motor car because it goes too fast: it's *meant* to go fast.

And to those who riposte "what about the creators?", the usual answer applies. Being able to give away copies of your work freely is an opportunity, not a threat: it's called marketing, and its cost has just gone down to nothing.

Follow me @glynmoody on Twitter or identi.ca.

07 September 2009

In Praise of the Book Sprint

One of the things that I find fascinating about open source is the way it generates epiphenomena - things that don't really happen with conventional computing. Here's another one: the book sprint.


The event is another in the growing body of FLOSS Manuals Book Sprints, kicked off by our first meeting to write a manual for Inkscape. The aim of these sprints is to write a book in 5 days. Actually, we have done it in a shorter time – in February of this year we wrote a 260 page manual introducing newbies to the Command Line in 2 days. Though created quickly, these books are extremely well written texts: comprehensive, readable, and complete.

Needless to say, as well as being about free software, these creations are imbued with its spirit:

A 220 page manual in 5 days - not bad. And it's all free, libre and gratis. Some of the material is also now being translated by the FLOSS Manuals Finnish community, and we hope more translations will follow.

Present at the sprint was myself (Adam Hyde, founder of FLOSS Manuals), Jan Gerber (ffmpeg2theora developer), Jörn Seger (Ogg Tools developer), Holmes Wilson (FSF Campaigns manager) and Theora geeks Susanne Lang and David Kühling. A few popped in remotely to help out, for which we are always grateful – notably Silvia Pfeiffer and Ogg K.

In the end we have free documentation that you can read online, download as a PDF, or log in and improve. It's also available in dead tree format for those who'd like it on their shelf.

Follow me @glynmoody on Twitter or identi.ca.

Lies, Damned Lies and Media Industry Numbers

A few months back, I wrote about how some figures quoted in the "Copycats" report produced by University College London's CIBER for the UK government's Strategic Advisory Board for Intellectual Property Policy were based on nothing more than wishful thinking by the media industries. You would have thought that, having been caught red-handed once, they might have stuck to the truth. It seems not:

The British Government's official figures on the level of illegal file sharing in the UK come from questionable research commissioned by the music industry, the BBC has revealed.

Specifically, we're talking about that emotive "7 million people" that are engaged in allegedly illegal file sharing:

As if the Government taking official statistics directly from partisan sources wasn't bad enough, the BBC reporter Oliver Hawkins also found that the figures were based on some highly questionable assumptions.

The 7m figure had actually been rounded up from an actual figure of 6.7m. That 6.7m was gleaned from a 2008 survey of 1,176 net-connected households, 11.6% of which admitted to having used file-sharing software - in other words, only 136 people.

It gets worse. That 11.6% of respondents who admitted to file sharing was adjusted upwards to 16.3% "to reflect the assumption that fewer people admit to file sharing than actually do it." The report's author told the BBC that the adjustment "wasn't just pulled out of thin air" but based on unspecified evidence.

The 6.7m figure was then calculated based on the estimated number of people with internet access in the UK. However, Jupiter research was working on the assumption that there were 40m people online in the UK in 2008, whereas the Government's own Office of National Statistics claimed there were only 33.9m people online during that year.

If the BPI-commissioned Jupiter research had used the Government's online population figures, the total number of file sharers would be 5.6m. If the researchers hadn't adjusted their figures upwards, the total number of file sharers would be only 3.9m - or just over half the figure being bandied about by the Government.
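
The arithmetic behind those alternative figures is easy to reproduce. Here's a quick sketch using the numbers quoted above (the survey's exact weighting isn't public, so treat the outputs as approximations):

    sample_size = 1176        # net-connected households surveyed in 2008
    admitted_share = 0.116    # 11.6% admitted to using file-sharing software
    adjusted_share = 0.163    # the report's upward adjustment
    ons_online = 33.9e6       # ONS estimate of the UK online population in 2008

    print(round(sample_size * admitted_share))          # ~136 actual respondents
    print(round(adjusted_share * ons_online / 1e6, 1))  # ~5.5m (reported as 5.6m)
    print(round(admitted_share * ons_online / 1e6, 1))  # ~3.9m without the adjustment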

I don't want to focus on the way the government supinely relies on the media industry for its "data", or the fact that the media industry continues to resort to these fabricated figures to justify its insane actions. Instead, I'd like to look at two other aspects.

First, let's give some kudos to the BBC for deciding to investigate these figures. At a time when the BBC is under attack (a) from interested parties like James Murdoch for daring to exist, and (b) from trouble-makers like me over its weak coverage of the computing sector, it's heartening to see some genuinely fine reporting from it.

But what I really want to underline here is the own goal scored by the content industries. The more plausible 3.9 million figure mentioned above would have served their purposes admirably: it's quite big, and so is "shocking" enough. By foolishly going for the 7 million figure, the media moguls have dug their own grave.

By quoting that number, they are effectively saying that a vast swathe of the UK population is engaged in this activity. And as history teaches us, when so large a proportion of a nation is technically breaking a law, it shows not that those people are bad, but that the law itself is self-evidently unjust.

So, whether we believe it or not, we should use this 7 million figure, and throw it back in the face of the media industries as proof that they are totally alienated from their customers. And based on that, we should invite them either to show that they do indeed care about such people by changing their approach radically, or at least frankly to admit what seems obvious to any dispassionate observer: that they actually hate their customers for revealing them to be liars, bullies, cheats and fools.

Follow me @glynmoody on Twitter or identi.ca.