09 April 2009

*Truly* Open Education

Here's some brilliant out-of-the-box thinking by Tony Hirst on how higher education should really be done:

imagine this: when you start your degree, you sign up to the 100 quid a year subscription plan (maybe with subscription waiver while you’re still an undergrad). When you leave, you get a monthly degree top-up. Nothing too onerous, just a current awareness news bundle made up from content related to the undergrad courses you took. This content could be produced as a side effect of keeping currently taught courses current: as a lecturer updates their notes from one year to the next, the new slide becomes the basis for the top-up news item. Or you just tag each course, and then pass on a news story or two discovered using that tag (Martin, you wanted a use for the Guardian API?!;-)

Having the subscription in place means you get 100 quid a year per alumni, without having to do too much at all…and as I suspect we all know, and maybe most of us bear testament to, once the direct debit is in place, there can be quite a lot of inertia involved in stopping it…

But there’s more - because you also have an agreement with the alumni to send them stuff once a month (and as a result maybe keep the alumni contacts database up to date a little more reliably?). Like the top-up content that is keeping their degree current (err….? yeah, right…)…

…and adverts… adverts for proper top-up/CPD courses, maybe, that they can pay to take…

…or maybe they can get these CPD courses for “free” with the 1000 quid a year, all you can learn from, top-up your degree content plan (access to subscription content and library services extra…)

Or how about premium “perpetual degree” plans, that get you a monthly degree top-up and the right to attend one workshop a year “for free” (with extra workshops available at cost, plus overheads;-)

Quick: put this man in charge of the music industry....

Follow me on Twitter @glynmoody

Proof That Some Lawyers *Do* Get It

Not just right, but very well put:

I often explain to clients and prospective clients that the main reward for a great, original product is a successful business based on that product. Intellectual property notwithstanding, the best way to protect most great ideas is by consistently excellent execution, high quality, responsive customer service, continued innovation and overall staying ahead of the competition by delivering more value. Absent the rare circumstance of an entire industry dedicated to counterfeits, à la Vuitton, if an enterprise can’t fathom protecting its value proposition without some kind of gaudy trademark protection, ultimately something has to give.

Fender, according to the record in this opinion, understood the truth well for decades. It warned consumers to stick with its quality “originals” and not to be fooled by “cheap imitations,” and it flourished. But for all these years, Fender never claimed to think the sincerest form of flattery was against the law. Only in the feverish IP-crazy atmosphere of our current century did the company deem it “necessary” to spend a fortune that could have been used on product development, marketing or any darned thing on a quixotic quest for a trademark it never believed in itself. That is more than an impossible dream — it’s a crying shame.

(Via Luis Villa's blog.)

Follow me on Twitter @glynmoody

French "Three Strikes" Law Unexpectedly Thrown Out

In an incredible turn of events, the French HADOPI legislation, which seemed certain to become law, has been thrown out:

French lawmakers have unexpectedly rejected a bill that would have cut off the Internet connections of people who illegally download music or films.

On Open Enterprise blog.

Follow me on Twitter @glynmoody

Should an Open Source Licence Ever Be Patent-Agnostic?

Sharing lies at the heart of free software, and drives much of its incredible efficiency as a development methodology. It means that coders do not have to re-invent the wheel, but can borrow from pre-existing programs. Software patents, despite their name, are about locking down knowledge so that it cannot be shared without permission (and usually payment). But are there ever circumstances when software patents that require payment might be permitted by an open source licence? That's the question posed by a new licence that is being submitted to the Open Source Initiative (OSI) for review.

On Linux Journal.

Follow me on Twitter @glynmoody

08 April 2009

Time to Get Rid of ICANN

ICANN has always been something of a disaster area: it shows scant understanding of what the Internet really is, treats the Internet's users with contempt, and remains largely indifferent to its responsibilities as guardian of a key part of the Internet's infrastructure. Here's the latest proof that ICANN is not fit for purpose:

The familiar .com, .net, .org and 18 other suffixes — officially "generic top-level domains" — could be joined by a seemingly endless stream of new ones next year under a landmark change approved last summer by the Internet Corp. for Assigned Names and Numbers, the entity that oversees the Web's address system.

Tourists might find information about the Liberty Bell, for example, at a site ending in .philly. A rapper might apply for a Web address ending in .hiphop.

"Whatever is open to the imagination can be applied for," says Paul Levins, ICANN's vice president of corporate affairs. "It could translate into one of the largest marketing and branding opportunities in history."

Got that? This change is purely about "marketing and branding opportunities". The fact that it will fragment the Internet, sow confusion among hundreds of millions of users everywhere, and lead to the biggest explosion of speculative domain squatting and hoarding by parasites who see the Internet purely as a system to be gamed is apparently a matter of supreme indifference to those behind ICANN: the main thing is that it's a juicy business opportunity.

Time to sack the lot, and put control of the domain name system where it belongs: in the hands of engineers who care.

Follow me on Twitter @glynmoody

Second Life + Moodle = Sloodle

Moodle is one of open source's greatest success stories. It's variously described as an e-learning or course management system. Now, given that education is also one of the most popular applications of Second Life, melding Moodle and Second Life seems a natural fit. Enter Sloodle, whose latest version has just been released:

Version 0.4 integrates Second Life 3D classrooms with Moodle, the world’s most popular open source e-learning system with over 30 million users (http://www.moodle.org). This latest release allows teachers and students to prepare materials in an easy-to-use, web-based environment and then log into Second Life to put on lectures and student presentations using their avatars.

The new tools also let students send images from inside Second Life directly to their classroom blog. Students are finding this very useful during scavenger hunt exercises where teachers send them to find interesting content and bring it back to report to their classmates.

Tools that cross the web/3D divide are becoming more popular as institutions want to focus on the learning content rather than the technical overhead involved in orienting students into 3D settings and avatars.

As an open-source platform, SLOODLE is both freely available and easily enhanced and adapted to suit the needs of diverse student populations. And web hosts are lining up to support the platform. A number of third-party web hosts now offer Moodle hosting with SLOODLE installed either on request or as standard, making it easier than ever to get started with SLOODLE.

SLOODLE is funded and supported by Eduserv - http://www.eduserv.ac.uk/ and is completely free for use under the GNU GPL license.

The project was founded by Jeremy Kemp of San José State University, California, and Dr. Daniel Livingstone of the University of the West of Scotland, UK.

Follow me on Twitter @glynmoody

Forward to the Past with Forrester

Looking at Forrester's latest report on open source, I came across the following:

The bottom line is that in most application development shops, the use of open source software has been a low-level tactic instead of a strategic decision made by informed executives. As a result, while there’s executive awareness of the capital expenditure advantages of adopting OSS, other benefits, potential risks, and the structural changes required to take full advantage of OSS are still poorly understood.

On Open Enterprise blog.

Follow me on Twitter @glynmoody

Open Access as "Middleware"

Interesting analogy:

A "legacy system" in the world of computing provides a useful analogy for understanding the precarious state of contemporary academic publishing. This comparison might also keep us from stepping backward in the very act of stepping forward in promoting Open Access publishing and Institutional Repositories. I will argue that, vital as it is, the Open Access movement should really be seen in its current manifestation as academic "middleware" servicing the "legacy system" of old-school scholarship.

(Via Open Access News.)

Follow me on Twitter @glynmoody

07 April 2009

OpenStreetMap Navigates to Wikipedia

One of the powerful features of open source is re-use: you don't have to re-invent the wheel, but can build on the work of others. That's straightforward enough for software, but it can also be applied to other fields of openness. Here's a fantastic example: embedding OpenStreetMap in Wikipedia entries:

For some time, there have been efforts to bring OpenStreetMap (OSM) and Wikipedia closer together. Both projects have the mission of producing free knowledge and information through a collaborative community process. Because of the similarities, there are many users active in both projects – however, mutual integration is still lacking.

For this reason, Wikimedia Deutschland (WM-DE, the German chapter of Wikimedia) is providing funds of 15,000 Euro (almost $20k) and starting a corresponding pilot project. A group of interested Wikipedians and OSM users have partnered up to reach two goals: the integration of OSM maps in Wikipedia, and the installation of a map toolserver. The map toolserver will serve to prototype new mapping-related projects and prepare them for deployment on the main Wikimedia cluster.

Here's how it will work:

Maps are an important part of the information in encyclopaedic articles – however, currently mostly static maps are used. With free interactive maps and a marker system, a new way of presenting information can be created.

For some time there have been MediaWiki Extensions available for embedding OpenStreetMap maps into MediaWiki. That's a great start, but it isn't enough. If these extensions were deployed on Wikipedia without any kind of proxy set-up, the OpenStreetMap tile servers would struggle to handle the traffic.

One of our aims is to build an infrastructure in the Wikimedia projects that allows us to keep the OSM data, or at least the tile images, ready locally in the Wikimedia network. We still have to gain some experience with this, but we are optimistic. On one side, we have a number of Wikipedians on the team who are versed in MediaWiki and in scaling software systems; on the other, we have OSM users who can set up the necessary geo database.

We learned much from the Wikimedia Toolservers – for example, that a platform for experimentation produced far more useful tools than anyone predicted. Interested developers have a good starting point for developing new tools with new possibilities.

We expect similar results from the map toolserver. As soon as it is online, anyone who is interested can apply for an account by presenting their proposed development projects and their state of knowledge. We want to allow as many users as possible to implement their ideas without having to worry about the basic setup. We hope that, in the spirit of the creation and distribution of free content, many new maps and visualisations will emerge.

Now, it's happening:

There has been rapid progress on the subject of adding OpenStreetMap maps to Wikimedia projects (e.g. Wikipedia) during the MediaWiki Developer Meet-Up taking place right now in Berlin.

Maps linked to Wikipedia content *within* Wikipedia: I can't wait.
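To make the caching idea concrete, here's a minimal sketch in Python of a local tile cache - purely illustrative, and emphatically not the actual Wikimedia set-up: it answers standard /zoom/x/y.png tile requests from local disk, fetching from the public OpenStreetMap tile server only on a cache miss.

    import os
    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    CACHE_DIR = "tile-cache"                     # local tile store (illustrative name)
    UPSTREAM = "https://tile.openstreetmap.org"  # public OSM tile server

    class TileCacheHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # OSM tile URLs take the form /zoom/x/y.png.
            local_path = os.path.join(CACHE_DIR, self.path.lstrip("/"))
            if not os.path.exists(local_path):
                # Cache miss: fetch the tile once from upstream and store it.
                os.makedirs(os.path.dirname(local_path), exist_ok=True)
                request = urllib.request.Request(
                    UPSTREAM + self.path,
                    headers={"User-Agent": "tile-cache-sketch/0.1"})
                with urllib.request.urlopen(request) as upstream:
                    with open(local_path, "wb") as out:
                        out.write(upstream.read())
            # Serve the tile from local disk.
            with open(local_path, "rb") as f:
                data = f.read()
            self.send_response(200)
            self.send_header("Content-Type", "image/png")
            self.end_headers()
            self.wfile.write(data)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), TileCacheHandler).serve_forever()

Repeated requests for the same tile never touch the OSM servers again, which is exactly the load problem the proxy set-up described above is meant to solve.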

Follow me on Twitter @glynmoody

Transparency and Open Government

Not my words, but those of that nice Mr Obama:

My Administration is committed to creating an unprecedented level of openness in Government. We will work together to ensure the public trust and establish a system of transparency, public participation, and collaboration. Openness will strengthen our democracy and promote efficiency and effectiveness in Government.

Wow.

Specifically:

Government should be transparent. Transparency promotes accountability and provides information for citizens about what their Government is doing.

...

Government should be participatory. Public engagement enhances the Government's effectiveness and improves the quality of its decisions.

...

Government should be collaborative. Collaboration actively engages Americans in the work of their Government.

Read the whole thing - and weep for poor old, locked-up UK....

Follow me on Twitter @glynmoody

Google and Microsoft Agree: This is Serious

You know things are bad when a coalition includes Google and Microsoft agreeing on something...

On Open Enterprise blog.

Follow me on Twitter @glynmoody

RFCs: Request for Openness

There's a fascinating history of the RFCs in the New York Times, written by a person who was there at the beginning:

Our intent was only to encourage others to chime in, but I worried we might sound as though we were making official decisions or asserting authority. In my mind, I was inciting the wrath of some prestigious professor at some phantom East Coast establishment. I was actually losing sleep over the whole thing, and when I finally tackled my first memo, which dealt with basic communication between two computers, it was in the wee hours of the morning. I had to work in a bathroom so as not to disturb the friends I was staying with, who were all asleep.

Still fearful of sounding presumptuous, I labeled the note a “Request for Comments.” R.F.C. 1, written 40 years ago today, left many questions unanswered, and soon became obsolete. But the R.F.C.’s themselves took root and flourished. They became the formal method of publishing Internet protocol standards, and today there are more than 5,000, all readily available online.

For me, the most interesting comments are the following:

The early R.F.C.’s ranged from grand visions to mundane details, although the latter quickly became the most common. Less important than the content of those first documents was that they were available free of charge and anyone could write one. Instead of authority-based decision-making, we relied on a process we called “rough consensus and running code.” Everyone was welcome to propose ideas, and if enough people liked it and used it, the design became a standard.

After all, everyone understood there was a practical value in choosing to do the same task in the same way. For example, if we wanted to move a file from one machine to another, and if you were to design the process one way, and I was to design it another, then anyone who wanted to talk to both of us would have to employ two distinct ways of doing the same thing. So there was plenty of natural pressure to avoid such hassles. It probably helped that in those days we avoided patents and other restrictions; without any financial incentive to control the protocols, it was much easier to reach agreement.

This was the ultimate in openness in technical design and that culture of open processes was essential in enabling the Internet to grow and evolve as spectacularly as it has. In fact, we probably wouldn’t have the Web without it. When CERN physicists wanted to publish a lot of information in a way that people could easily get to it and add to it, they simply built and tested their ideas. Because of the groundwork we’d laid in the R.F.C.’s, they did not have to ask permission, or make any changes to the core operations of the Internet. Others soon copied them — hundreds of thousands of computer users, then hundreds of millions, creating and sharing content and technology. That’s the Web.

I think this is right: the RFCs are predicated on complete openness, where anyone can make suggestions and comments. The Web was built on that basis, extending the possibility of openness to everyone on the Internet. In the face of attempts to kill net neutrality in Europe, it's something we should be fighting for.

Follow me on Twitter @glynmoody

06 April 2009

The Latest Act in the ACTA Farce

I think the Anti-Counterfeiting Trade Agreement (ACTA) will prove something of a watershed in the negotiation of treaties. We have already gone from a situation where governments around the world all but denied the thing existed, to the point where the same people are now scrambling to create some semblance of openness without actually revealing too much.

Here's the latest attempt, which comes from the US team:

A variety of groups have shown their interest in getting more information on the substance of the negotiations and have requested that the draft text be disclosed. However, it is accepted practice during trade negotiations among sovereign states to not share negotiating texts with the public at large, particularly at earlier stages of the negotiation. This allows delegations to exchange views in confidence facilitating the negotiation and compromise that are necessary in order to reach agreement on complex issues. At this point in time, ACTA delegations are still discussing various proposals for the different elements that may ultimately be included in the agreement. A comprehensive set of proposals for the text of the agreement does not yet exist.

This is rather amusing. On the one hand, the negotiators have to pretend that "a comprehensive set of proposals for the text of the agreement does not yet exist", so that we can't find out the details; on the other, they want to finish off negotiations as quickly as possible, so as to prevent too many leaks. Of course, they can't really have it both ways, which is leading to this rather grotesque dance of the seven veils, whereby bits and pieces are revealed in an attempt to keep us quiet in the meantime.

The latest summary does contain some interesting background details that I'd not come across before:

In 2006, Japan and the United States launched the idea of a new plurilateral treaty to help in the fight against counterfeiting and piracy, the so-called Anti-Counterfeiting Trade Agreement (ACTA). The aim of the initiative was to bring together those countries, both developed and developing, that are interested in fighting counterfeiting and piracy, and to negotiate an agreement that enhances international co-operation and contains effective international standards for enforcing intellectual property rights.

Preliminary talks about such an anti-counterfeiting trade agreement took place throughout 2006 and 2007 among an initial group of interested parties (Canada, the European Commission, Japan, Switzerland and the United States). Negotiations started in June 2008 with the participation of a broader group of participants (Australia, Canada, the European Union and its 27 member states, Japan, Mexico, Morocco, New Zealand, Republic of Korea, Singapore, Switzerland and the United States).

The rest, unfortunately, is the usual mixture of half-truths and outright fibs. But this constant trickle of such documents shows that they are taking notice of us, and that we must up the pressure for full disclosure of what exactly is being negotiated in our name.

Follow me on Twitter @glynmoody

A Different Kind of Wörterbuch

Linguee seems to offer an interesting twist on a boring area - bilingual dictionaries:

With Linguee, you can search for words and expressions in many millions of bilingual texts in English and German. Every expression is accompanied by useful additional information and suitable example sentences.

...

When you translate texts to a foreign language, you usually look for common phrases rather than translations of single words. With its intelligent search and the significantly larger amount of stored text content, Linguee is the right tool for this task. You find:

* In what context a translation is used
* How frequent a particular translation is
* Example sentences: How have other people translated an expression?

By searching not just for a single word but for a word in its context, you can easily find a translation that fits that context. With its large number of entries, Linguee often retrieves translations of rare terms that you won't find anywhere else.

There are two other points of interest. The source of the texts:

Our most important source is the bilingual web. Other valuable sources include EU documents and patent specifications.

And the fact that a "GPL version of the Linguee dictionary" is available.
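As an aside, the core mechanism - phrase-in-context search over aligned sentence pairs - is simple enough to sketch. Here's a toy illustration in Python, with three made-up sentence pairs standing in for Linguee's millions of bilingual texts (the data and function names are mine, not Linguee's):

    # A tiny stand-in bilingual corpus of (English, German) sentence pairs.
    PAIRS = [
        ("How frequent is this expression?", "Wie häufig ist dieser Ausdruck?"),
        ("The expression fits the context.", "Der Ausdruck passt zum Kontext."),
        ("A single word is rarely enough.", "Ein einzelnes Wort reicht selten."),
    ]

    def concordance(phrase, pairs):
        """Return every sentence pair whose English side contains the phrase."""
        phrase = phrase.lower()
        return [(en, de) for en, de in pairs if phrase in en.lower()]

    # Searching for a whole phrase, not a single word, surfaces contextual matches.
    for en, de in concordance("the expression", PAIRS):
        print(en, "->", de)

Searching for the phrase rather than a lone word is what lets you see how a translation is actually used in context, which is the point the Linguee blurb is making.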

Follow me on Twitter @glynmoody

Google's Perpetual Monopoly on Orphan Works

Here's an interesting analysis of the Google Book Search settlement. This, you will recall, resolved the suit that authors and publishers brought against Google for scanning books without permission - something Google maintained it was entitled to do, since it only wanted to index the books' contents, not display them in their entirety.

At first this looked like an expensive and unnecessary way out for Google: many hoped that it would fight in the courts to determine what was permitted under fair use. But as people have had time to digest its implications, the settlement is beginning to look like a very clever move:

Thanks to the magic of the class action mechanism, the settlement will confer on Google a kind of legal immunity that cannot be obtained at any price through a purely private negotiation. It confers on Google immunity not only against suits brought by the actual members of the organizations that sued Google, but also against suits brought by anyone who doesn’t explicitly opt out. That means that Google will be free to mine the vast body of orphan works without fear of liability.

Any competitor that wants to get the same legal immunity Google is getting will have to take the same steps Google did: start scanning books without the publishers’ and authors’ permission, get sued by authors and publishers as a class, and then negotiate a settlement. The problem is that they’ll have no guarantee that the authors and publishers will play along. The authors and publishers may like the cozy cartel they’ve created, and so they may have no particular interest in organizing themselves into a class for the benefit of the new entrant. Moreover, because Google has established the precedent that “search rights” are something that need to be paid for, it’s going to be that much harder for competitors to make the (correct, in my view) argument that indexing books is fair use.

It seems to me that, in effect, Google has secured for itself a perpetual monopoly over the commercial exploitation of orphan works. Google’s a relatively good company, so I’d rather they have this monopoly than the other likely candidates. But I certainly think it’s a reason to be concerned.

Cunning.

Follow me on Twitter @glynmoody

All Tatarstan Schools Moving to Free Software

Tatarstan is the place to be:

До конца текущего года все школы Татарстана планируется перевести на свободное программное обеспечение на базе операционной системы «Linux».

...

По словам замминистра, в каждой школе республики на уровне кружков планируется открыть курсы по обучению работе в «Linux» учащихся. Но до этого предстоит еще подготовить специалистов, которые будут руководить этими кружками.

Людмила Нугуманова заявила, что Татарстан полностью перейдет на программное обеспечение с открытым кодом на основе операционной системы «Linux». Ведь в 2010 году закончится подписка на лицензию базового пакета программного обеспечения для школ на платформе «Microsoft». «За продолжение подписки придется платить немалые деньги, либо остаться на нашем отечественном продукте «Linux», - отметила она.

Как сообщила начальник отдела развития информационных технологий в образовании Министерства образования и науки РТ Надежда Сулимова, в прошлом году новый софт установлен в 612 школах республики (всего в Татарстане функционируют почти 2,4 тысячи общеобразовательных учреждений).

[Translation: By the end of this year, all schools in Tatarstan are to be moved to free software based on the Linux operating system.

...

According to the deputy minister, courses teaching students to work with Linux are planned at club level in every school in the republic. But before that, the specialists who will lead these clubs still have to be trained.

Lyudmila Nugumanova stated that Tatarstan will switch completely to open source software based on the Linux operating system. After all, the subscription to the licence for the basic software package for schools on the Microsoft platform runs out in 2010. "We would either have to pay considerable money to continue the subscription, or stay with our domestic Linux product," she noted.

As Nadezhda Sulimova, head of the department for the development of information technology in education at the Ministry of Education and Science of the Republic of Tatarstan, reported, the new software was installed in 612 schools of the republic last year (in total, almost 2,400 general-education institutions operate in Tatarstan).]

Follow me on Twitter @glynmoody

How Can We Save Thunderbird Now Email is Dying?

I like Thunderbird. I've been using it for years, albeit now more as a backup for my Gmail account than as my primary email client. But it's always been the Cinderella of the Mozilla family, rather neglected compared to its more glamorous sister Firefox. The creation of the Mozilla Messaging subsidiary of the Mozilla Foundation means that efforts are already underway to remedy that. But there's a deeper problem that Thunderbird needs to face, too....

On Open Enterprise blog.

Follow me on Twitter @glynmoody

05 April 2009

Top 10 Measurements for Open Government

One of the most exciting applications of openness in recent months has been to government. A year ago, open government was sporadic and pretty forlorn as a field; today it is flourishing, notably under the alternative name of "transparency". At the forefront of that drive is the Sunlight Foundation, which has just published a suggested top 10 measurements of just how open a government really is:

1. Open data: The federal government should make all data searchable, findable and accessible.

2. Disclose spending data: The government should disclose how it is spending taxpayer dollars, who is spending it and how it’s being spent.

3. Procurement data: How does the government decide where the money gets spent, who gets it and how they spend it? And how can we measure success?

4. Open portal for public requests for information: There should be a central repository for all Freedom of Information Act requests that is public, so that people can see in real time when requests come in and how fast the government responds to them.

5. Distributed data: The government should make sure it builds redundancy into its systems so that data is not held in just one location, but in multiple places, in case of a disaster, terrorist attack or some other event in which data is damaged. Redundancy would guarantee that government could rebuild the data for future use.

6. Open meetings: Government meetings should be open to the public so that citizens can tell who is trying to influence government. Schedules should be published as soon as meetings are arranged, so that people can see who is meeting with whom and who is trying to influence whom.

7. Open government research: Currently, when government conducts research, it usually does not report the data it collects until the project is finished. Government should report its research data in beta form while it is being collected. This would be a measure of transparency and would change the relationship that people have to government research as it is being collected.

8. Collection transparency: Government should disclose how it is collecting information, for whom it is collecting the data, and why it is relevant. The public should have the ability to judge whether or not it is valuable to them, and should be given the ability to comment on it.

9. Allowing the public to speak directly to the president: Recently, we saw the president participate in something called “Open for Questions,” where he gave the public the chance to ask questions. Allowing him to burst his bubble and be in touch with the American public directly is another measure of transparency.

10. Searchable, crawlable and accessible data: If the government were to make all data searchable, crawlable and accessible, we would go a long way toward realizing all the goals presented at the Gov 2.0 Camp.

Great stuff, exciting times. Now, if only the UK government could measure up to these....

Follow me on Twitter @glynmoody

Who Can Put the "Open" in Open Science?

One of the great pleasures of blogging is that your mediocre post tossed off in a couple of minutes can provoke a rather fine one that obviously took some time to craft. Here's a case in point.

The other day I wrote "Open Science Requires Open Source". This drew an interesting comment from Stevan Harnad, pretty much the Richard Stallman of open access, as well as some tweets from Cameron Neylon, one of the leading thinkers on and practitioners of open science. He also wrote a long and thoughtful reply to my post (including links to all our tweets, rigorous chap that he is). Most of it was devoted to pondering the extent to which scientists should be using open source:

It is easy to lose sight of the fact that for most researchers software is a means to an end. For the Open Researcher what is important is the ability to reproduce results, to criticize and to examine. Ideally this would include every step of the process, including the software. But for most issues you don’t need, or even want, to be replicating the work right down to the metal. You wouldn’t after all expect a researcher to be forced to run their software on an open source computer, with an open source chipset. You aren’t necessarily worried what operating system they are running. What you are worried about is whether it is possible read their data files and reproduce their analysis. If I take this just one step further, it doesn’t matter if the analysis is done in MatLab or Excel, as long as the files are readable in Open Office and the analysis is described in sufficient detail that it can be reproduced or re-implemented.

...

Open Data is crucial to Open Research. If we don’t have the data we have nothing to discuss. Open Process is crucial to Open Research. If we don’t understand how something has been produced, or we can’t reproduce it, then it is worthless. Open Source is not necessary, but, if it is done properly, it can come close to being sufficient to satisfy the other two requirements. However it can’t do that without Open Standards supporting it for documenting both file types and the software that uses them.

The point that came out of the conversation with Glyn Moody for me was that it may be more productive to focus on our ability to re-implement rather than to simply replicate. Re-implementability, while an awful word, is closer to what we mean by replication in the experimental world anyway. Open Source is probably the best way to do this in the long term, and in a perfect world the software and support would be there to make this possible, but until we get there, for many researchers, it is a better use of their time, and the taxpayer’s money that pays for that time, to do that line fitting in Excel. And the damage is minimal as long as source data and parameters for the fit are made public. If we push forward on all three fronts, Open Data, Open Process, and Open Source then I think we will get there eventually because it is a more effective way of doing research, but in the meantime, sometimes, in the bigger picture, I think a shortcut should be acceptable.

I think these are fair points. Science needs reproducibility in terms of the results, but that doesn't imply that the protocols must be copied exactly. As Neylon says, the key is "re-implementability" - the fact that you *can* reproduce the results with the given information. Using Excel instead of OpenOffice.org Calc is not a big problem as long as enough details are given.

However, it's easy to think of circumstances where *new* code is being written to run on proprietary engines, making it simply impossible to check the logic hidden in those black boxes. In such circumstances, it is critical that open source be used at all levels, so that others can see what was done and how.

But another interesting point emerged from this anecdote from the same post:

Sometimes the problems are imposed from outside. I spent a good part of yesterday battling with an appalling, password protected, macroed-to-the-eyeballs Excel document that was the required format for me to fill in a form for an application. The file crashed Open Office and only barely functioned in Mac Excel at all. Yet it was required, in that format, before I could complete the application.

Now, this is a social issue: the fact that scientists are being forced by institutions to use proprietary software in order to apply for grants or whatever. Again, it might be unreasonable to expect young scientists to sacrifice their careers for the sake of principle (although Richard Stallman would disagree). But this is not a new situation. It's exactly the problem that open access faced in the early days, when scientists just starting out in their career were understandably reluctant to jeopardise it by publishing in new, untested journals with low impact factors.

The solution in that case was for established scientists to take the lead by moving their work across to open access journals, allowing the latter to gain in prestige until they reached the point where younger colleagues could take the plunge too.

So, I'd like to suggest something similar for the use of open source in science. When established scientists with some clout come across unreasonable requirements - like the need to use Excel - they should refuse. If enough of them put their foot down, the organisations that lazily adopt these practices will be forced to change. It might require a certain courage to begin with, but so did open access; and look where *that* is now...

Follow me on Twitter @glynmoody

03 April 2009

Why We Should Teach Maths with Open Source

Recently, I was writing about science and open source (of which more anon); here are some thoughts on why maths ought to be taught using free software:

I personally feel it is terrible to *train* students mainly to use closed source commercial mathematics software. This is analogous to teaching students some weird version of linear algebra or calculus where they have to pay a license fee each time they use the fundamental theorem of calculus or compute a determinant. Using closed software is also analogous to teaching those enthusiastic students who want to learn the proofs behind theorems that it is illegal to do so (just as it is literally illegal to learn *exactly* how Maple and Mathematica work!). From a purely practical perspective, getting access to commercial math software is very frustrating for many students. It should be clear that I am against teaching mathematics using closed source commercial software.

Follow me on Twitter @glynmoody

User-Generated Content: Microsoft vs. Google

Back in November I was urging you to submit your views on a consultation document on the role of copyright in the knowledge economy, put out by the European Commission. The submissions have now been published online, and I'm deeply disappointed to see that not many of you took a blind bit of notice of my suggestion...

On Open Enterprise blog.

Follow me on Twitter @glynmoody

HADOPI Law Passed - by 12 Votes to 4

What a travesty of democracy:

Alors que le vote n'était pas prévu avant la semaine prochaine, les quelques députés présents à l'hémicycle à la fin de la discussion sur la loi Création et Internet ont été priés de passer immédiatement au vote, contrairement à l'usage. La loi a été adoptée, en attendant son passage en CMP puis au Conseil Constitutionnel.

On peine à en croire la démocratie dans laquelle on prétend vivre et écrire. Après 41 heures et 40 minutes d'une discussion passionnée sur le texte, il ne restait qu'une poignée de courageux députés autour de 22H45 jeudi soir lorsque l'Assemblée Nationale a décidé, sur instruction du secrétaire d'Etat Roger Karoutchi, de passer immédiatement au vote de la loi Création et Internet, qui n'était pas attendu avant la semaine prochaine. Un fait exceptionnel, qui permet de masquer le nombre important de députés UMP qui se seraient abstenus si le vote s'était fait, comme le veut la tradition, après les questions au gouvernment mardi soir. Ainsi l'a voulu Nicolas Sarkozy.

...

Quatre députés ont voté non (Martine Billard, Patrick Bloche et deux députés non identifiés), et une dizaine de mains se sont levées sur les bancs de la majorité pour voter oui. En tout, 16 députés étaient dans l'hémicycle au moment du vote.

[Translation: Although the vote was not expected until next week, the few deputies present in the chamber at the end of the discussion of the Creation and Internet law were asked to proceed immediately to a vote, contrary to usual practice. The law was adopted, pending its passage through the joint committee (CMP) and then the Constitutional Council.

It is hard to go on believing in the democracy we claim to live and write in. After 41 hours and 40 minutes of passionate discussion of the text, only a handful of courageous deputies remained at around 22:45 on Thursday evening, when the National Assembly decided, on the instructions of Secretary of State Roger Karoutchi, to move immediately to a vote on the Creation and Internet law, which had not been expected before next week. This exceptional step masks the large number of UMP deputies who would have abstained if the vote had taken place, as tradition dictates, after questions to the government on Tuesday evening. Such was the will of Nicolas Sarkozy.

...

Four deputies voted no (Martine Billard, Patrick Bloche and two unidentified deputies), and a dozen hands were raised on the benches of the majority to vote yes. In all, 16 deputies were in the chamber at the time of the vote.]

So one of the most important and contentious pieces of legislation in recent years has been passed by trickery. In this way, those pushing this law have shown their true colours and their contempt for the democratic process.

Follow me on Twitter @glynmoody

02 April 2009

"Piracy Law" Cuts *Traffic* not "Piracy"

This story is everywhere today:

Internet traffic in Sweden fell by 33% as the country's new anti-piracy law came into effect, reports suggest.

Sweden's new policy - the Local IPRED law - allows copyright holders to force internet service providers (ISP) to reveal details of users sharing files.

According to figures released by the government statistics agency - Statistics Sweden - 8% of the entire population use peer-to-peer sharing.

The implication in these stories is that this kind of law is "working", in the sense that it "obviously" cuts down copyright infringement, because it's cutting down traffic.

In your dreams.

All this means is that people aren't sharing so much stuff online. But now that you can pick up a 1 Terabyte external hard drive for less than a hundred quid - which can store about a quarter of a million songs - guess what people are going to turn to in order to swap files in the future?
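The back-of-the-envelope arithmetic behind that figure, assuming a typical MP3 of roughly 4 MB (my assumption, not a measured value), in Python:

    # Rough check of the "quarter of a million songs" claim.
    TERABYTE = 10**12             # drive makers' decimal terabyte, in bytes
    SONG_SIZE = 4 * 10**6         # assume ~4 MB per MP3
    print(TERABYTE // SONG_SIZE)  # prints 250000 - a quarter of a million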

Follow me on Twitter @glynmoody

Open Science Requires Open Source

As Peter Suber rightly points out, this paper offers a reversal of the usual argument, where open access is justified by analogy with open source:

Astronomical software is now a fact of daily life for all hands-on members of our community. Purpose-built software for data reduction and modeling tasks becomes ever more critical as we handle larger amounts of data and simulations. However, the writing of astronomical software is unglamorous, the rewards are not always clear, and there are structural disincentives to releasing software publicly and to embedding it in the scientific literature, which can lead to significant duplication of effort and an incomplete scientific record. We identify some of these structural disincentives and suggest a variety of approaches to address them, with the goals of raising the quality of astronomical software, improving the lot of scientist-authors, and providing benefits to the entire community, analogous to the benefits provided by open access to large survey and simulation datasets. Our aim is to open a conversation on how to move forward.

The central argument is important: that you can't do science with closed source software, because you can't examine its assumptions or logic (that "incomplete scientific record"). Open science demands open source.

Follow me on Twitter @glynmoody

Second Chance at Life

Two years ago, the virtual world Second Life was everywhere, as pundits and press alike rushed to proclaim it as the Next Big Digital Thing. Inevitably, the backlash began soon afterwards. The company behind it, Linden Lab, lost focus and fans; key staff left. Finally, last March, Second Life's CEO, creator and visionary, Philip Rosedale, announced that he was taking on the role of chairman of the board, and bringing in fresh leadership. But against an increasingly dismal background, who would want to step into his shoes?

From the Guardian.

Follow me on Twitter @glynmoody