14 April 2009

Up Next for UK: Ban on Photos of CCTVs?

This is too rich:

The man probing police conduct over the death of a newspaper seller during the G20 protests was wrong to claim there were no CCTV cameras in the area near the Bank of England, it was revealed today.

Several cameras could have captured footage of the incident two weeks ago, contradicting comments made by Nick Hardwick, the chairman of the Independent Police Complaints Commission.

Mr Hardwick made the claim in response to the IPCC being accused of sweeping away evidence of police brutality.

I don't know which is more breathtaking: the fact that he said it, or the fact that he thought it wouldn't be checked and found to be inconsistent with reality, as the Daily Mail pictures prove.

Which raises an interesting possibility: having banned photos of the police - so as to stop members of the public gathering evidence of police brutality - the next logical step would be to forbid people from photographing CCTVs or talking about their locations - because that would "help terrorists" - so that the police could then claim that no cameras exist in an area where police brutality has taken place.

And if you think that's utterly impossible, you haven't been paying attention.... (Via @stevepurkiss.)

Update: What a surprise, the IPCC has suddenly found those errant CCTV cameras. Amazing how a picture can change one's perception.

Follow me on Twitter @glynmoody

Channelling the Power of Open Source

This blog tends to concentrate on two broad aspects of open source: the issues that affect enterprise users, and the companies based around creating free software. But this misses out a crucial player, that of the “channel”, also known by the equally unhelpful name of “value-added resellers”, or VARs....

On Open Enterprise blog.

Follow me on Twitter @glynmoody

Let's Drop this Insulting “Digital Piracy” Meme

Until recently, piracy referred to the lawless times and characters of the 17th and 18th centuries – or, if closer to the present, to artful/humorous representations of them in books, theatre and film. This has allowed the media industries to appropriate the historical term, and re-fashion it for their own purposes. And they have been highly successful: even normally sane journalists now write about “software piracy”, or “music piracy”....

On Open Enterprise blog.

Follow me on Twitter @glynmoody

13 April 2009

Of Bruce's Law and Derek's Corollary

Much will be written about the events of the last few days concerning the leaked Labour emails, and the plans to create a scurrilous blog. The focus will rightly be on the rise of blogs as a powerful force within the world of journalism, fully capable of bringing down politicians. But here I'd like to examine an aspect that I suspect will receive far less attention.

At the centre of the storm are the emails: what they say, who sent them and who received them. One suggestion was that they were stolen from a cracked account, but that version seems increasingly discounted in favour of the idea that someone who disapproved of the emails' contents simply leaked them. What's interesting for me is how easy this has become.

Once upon a time – say, ten years ago – you would have needed to break into an office somewhere to steal a document in order to leak it. Now, though, the almost universal use of computers means that all this stuff is handily stored in digital form. As a result, sending it to other people is as simple as writing their name (or just the first few letters of it, given the intelligence built into email clients these days). This means that multiple copies probably exist in different physical locations.

Moreover, making a further copy leaves *no* trace whatsoever; indeed, the whole of the Internet is based on copies, so creating them is nothing special. Trying to stop copies being made of a digital document, once sent out, is an exercise in futility, because that implies being in control of multiple pre-existing copies at multiple locations – possibly widely separated.

Bruce Schneier has memorably written that "trying to make digital files uncopyable is like trying to make water not wet". I'd like to call this Bruce's Law. What has happened recently to the Labour emails is an inevitable consequence of Bruce's Law – the fact that digital documents, once circulated, can and will be copied. Perhaps we should dub this fact Derek's Corollary, in honour of one of the people who has done so much to bring its effects to our attention.

Follow me on Twitter @glynmoody

Why, Actually, Are They Hiding ACTA?

One of the curious aspects of articles and posts about the Anti-Counterfeiting Trade Agreement (ACTA) is that it's all a kind of journalistic shadow-boxing. In the absence of the treaty text, everybody has been relying on leaks, and nudges and winks in the form of official FAQs and “summaries” to give them some clue as to its content....

On Open Enterprise blog.

Follow me on Twitter @glynmoody

Mikeyy Update

OK, I seem to have regained control of my Twitter homepage, and cleared out the infection. But you might want to (a) be sceptical about this and other Twitter follow requests, (b) use a non-Web Twitter client, and (c) install the NoScript add-on for Firefox, which blocks the operation of Mikeyy and its ilk.

Apologies again.

Urgent: Do *Not* Visit My Twitter Page

My Twitter account has become infected with Mikeyy - ironically because I was checking out whether to block a new follower. Please ignore all my Twitter posts for the moment, especially the last one, which is fake and infected. And apologies to anyone who may already have been infected in this way.

It's slightly annoying that this is not the first, but the second wave of such infections: I wish Twitter would get this vulnerability sorted out, or it will make Twitter unusable.

10 April 2009

Tesla Model S Sedan Runs on GNU/Linux...

...well, its "Haptic Entertainment And Navigation System" does:


It's a 17-inch LCD touch computer screen that has 3G or wireless connectivity. When we were in the car, the screen featured Google Maps. Tesla’s website verifies that the screen will be able to feature sites like Google Maps and Pandora Music. From what we saw yesterday, the screen is divided vertically into three separate areas: the maps/navigation screen, radio/entertainment area, and climate controls. The navigation screen has several tabs: “internet,” “navigation,” “car,” “backup,” and “phone.” The entertainment section has several tabs, including “audio,” “media,” “streaming,” “playlists,” “artists” and “songs.” The climate controls seem pretty standard. Our driver (see video) says that the computer is going to be run on some kind of Google Maps software and will feature a “full browser.” It’s not surprising that Google Maps is integrated into the interface: Google co-founders Sergey Brin and Larry Page are investors in Tesla. The dashboard is also an LCD touch screen. Tesla has also confirmed to us that the computer/entertainment center will be Linux-based.

GNU/Linux: it's the future.

Follow me on Twitter @glynmoody

How Apt: Apt-urls Arrive

One of the unsung virtues of open source is the ease with which you can add, remove and upgrade programs. In part, this comes down to the fact that all software is freely available, so you don't need to worry about cost: if you want it, you can have it. This makes installation pretty much a one-click operation using managers like Synaptic.

Now things have become even easier:


As of this morning, apt-urls are enabled on the Ubuntu Wiki. What does this mean? In simple terms, this feature provides a simple, wiki-based interface for apt, the base of our software management system. It means that we can now insert clickable links on the wiki that can prompt users to install software from the Ubuntu repositories.

That's pretty cool, but even more amazing is the fact that when I click on the link in the example on the above page, it *already* works:

If you are a Firefox user on Ubuntu, you will also note that the link I’ve provided here works, too. This is because Firefox also allows apt-urls to work in regular web pages.
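
Under the hood there's no magic: an apt-url is just an ordinary hyperlink that uses the "apt:" URI scheme, with a package name as the payload; clicking it hands the URI to the apturl handler, which asks whether you want to install that package from the repositories. Here's a minimal sketch in Python – the package names and labels are purely illustrative – of how you might generate such links for a web page:

    #!/usr/bin/env python
    # Sketch: emit HTML links using the "apt:" URI scheme. On Ubuntu,
    # clicking one of these in Firefox invokes the apturl handler, which
    # offers to install the named package. Names below are just examples.

    packages = {
        "vlc": "VLC media player",
        "inkscape": "Inkscape vector graphics editor",
    }

    def apt_link(package, label):
        """Return an HTML anchor that prompts installation of a package."""
        return '<a href="apt:%s">Install %s</a>' % (package, label)

    for pkg, label in sorted(packages.items()):
        print(apt_link(pkg, label))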

Free software is just *so* far ahead of the closed stuff: how could anyone seriously claim that it doesn't innovate?

Follow me on Twitter @glynmoody

Open Sourcing 3D Printer Materials

I've written a fair amount about open source fabbers, but here's someone addressing another important aspect: open sourcing the recipe for the basic material used by 3D printers:

About five years ago, Mark Ganter, a UW mechanical engineering professor and longtime practitioner of 3-D printing, became frustrated with the high cost of commercial materials and began experimenting with his own formulas. He and his students gradually developed a home-brew approach, replacing a proprietary mix with artists' ceramic powder blended with sugar and maltodextrin, a nutritional supplement. The results are printed in a recent issue of Ceramics Monthly. Co-authors are Duane Storti, UW associate professor of mechanical engineering, and Ben Utela, a former UW doctoral student.

"Normally these supplies cost $30 to $50 a pound. Our materials cost less than a dollar a pound," said Ganter. He said he wants to distribute the free recipes in order to democratize 3-D printing and expand the range of printable objects.

(Via Boing Boing.)

Follow me on Twitter @glynmoody

09 April 2009

OpenSecrets Moves To 'Open Data' Model

More welcome transparency moves in the US:

Campaign finance clearinghouse OpenSecrets.org, which is run by the nonpartisan Center for Responsive Politics, is going "open data" next week, according to an e-mail circulated by the center on Thursday.

...

CRP is expecting all sorts of data mash-ups, maps and other cool projects to result from the new capability. Transparency group the Sunlight Foundation helped fund OpenSecrets.org's OpenData initiative to make millions of records available under a Creative Commons "Attribution-Noncommercial-Share Alike" license. CRP will continue to offer its data to commercial users for a fee.

Follow me on Twitter @glynmoody

*Truly* Open Education

Here's some brilliant out-of-the-box thinking by Tony Hirst on how higher education should really be done:

imagine this: when you start your degree, you sign up to the 100 quid a year subscription plan (maybe with subscription waiver while you’re still an undergrad). When you leave, you get a monthly degree top-up. Nothing too onerous, just a current awareness news bundle made up from content related to the undergrad courses you took. This content could be produced as a side effect of keeping currently taught courses current: as a lecturer updates their notes from one year to the next, the new slide becomes the basis for the top-up news item. Or you just tag each course, and then pass on a news story or two discovered using that tag (Martin, you wanted a use for the Guardian API?!;-)

Having the subscription in place means you get 100 quid a year per alumni, without having to do too much at all…and as I suspect we all know, and maybe most of us bear testament to, once the direct debit is in place, there can be quite a lot of inertia involved in stopping it…

But there’s more - because you also have an agreement with the alumni to send them stuff once a month (and as a result maybe keep the alumni contacts database up to date a little more reliably?). Like the top-up content that is keeping their degree current (err….? yeah, right…)…

…and adverts… adverts for proper top-up/CPD courses, maybe, that they can pay to take…

…or maybe they can get these CPD courses for “free” with the 1000 quid a year, all you can learn from, top-up your degree content plan (access to subscription content and library services extra…)

Or how about premium “perpetual degree” plans, that get you a monthly degree top-up and the right to attend one workshop a year “for free” (with extra workshops available at cost, plus overheads;-)

Quick: put this man in charge of the music industry....

Follow me on Twitter @glynmoody

Proof That Some Lawyers *Do* Get It

Not just right, but very well put:

I often explain to clients and prospective clients that the main reward for a great, original product is a successful business based on that product. Intellectual property notwithstanding, the best way to protect most great ideas is by consistently excellent execution, high quality, responsive customer service, continued innovation and overall staying ahead of the competition by delivering more value. Absent the rare circumstance of an entire industry dedicated to counterfeits, à la Vuitton, if an enterprise can’t fathom protecting its value proposition without some kind of gaudy trademark protection, ultimately something has to give.

Fender, according to the record in this opinion, understood the truth well for decades. It warned consumers to stick with its quality “originals” and not to be fooled by “cheap imitations,” and it flourished. But for all these years, Fender never claimed to think the sincerest form of flattery was against the law. Only in the feverish IP-crazy atmosphere of our current century did the company deem it “necessary” to spend a fortune that could have been used on product development, marketing or any darned thing on a quixotic quest for a trademark it never believed in itself. That is more than an impossible dream — it’s a crying shame.

(Via Luis Villa's blog.)

Follow me on Twitter @glynmoody

French "Three Strikes" Law Unexpectedly Thrown Out

In an incredible turn of events, the French HADOPI legislation, which seemed certain to become law, has been thrown out:

French lawmakers have unexpectedly rejected a bill that would have cut off the Internet connections of people who illegally download music or films.

On Open Enterprise blog.

Follow me on Twitter @glynmoody

Should an Open Source Licence Ever Be Patent-Agnostic?

Sharing lies at the heart of free software, and drives much of its incredible efficiency as a development methodology. It means that coders do not have to re-invent the wheel, but can borrow from pre-existing programs. Software patents, despite their name, are about locking down knowledge so that it cannot be shared without permission (and usually payment). But are there ever circumstances when software patents that require payment might be permitted by an open source licence? That's the question posed by a new licence that is being submitted to the Open Source Initiative (OSI) for review.

On Linux Journal.

Follow me on Twitter @glynmoody

08 April 2009

Time to Get Rid of ICANN

ICANN has always been something of a disaster area, showing scant understanding of what the Internet really is, contempt for its users, and indifference to its responsibilities as guardian of a key part of the Internet's infrastructure. Here's the latest proof that ICANN is not fit for purpose:


The familiar .com, .net, .org and 18 other suffixes — officially "generic top-level domains" — could be joined by a seemingly endless stream of new ones next year under a landmark change approved last summer by the Internet Corp. for Assigned Names and Numbers, the entity that oversees the Web's address system.

Tourists might find information about the Liberty Bell, for example, at a site ending in .philly. A rapper might apply for a Web address ending in .hiphop.

"Whatever is open to the imagination can be applied for," says Paul Levins, ICANN's vice president of corporate affairs. "It could translate into one of the largest marketing and branding opportunities in history."

Got that? This change is purely about "marketing and branding opportunities". The fact that it will fragment the Internet, sow confusion among hundreds of millions of users everywhere, and lead to the biggest explosion of speculative domain squatting and hoarding by parasites who see the Internet purely as a system to be gamed is apparently a matter of supreme indifference to those behind ICANN: the main thing is that it's a juicy business opportunity.

Time to sack the lot, and put control of the domain name system where it belongs: in the hands of engineers who care.

Follow me on Twitter @glynmoody

Second Life + Moodle = Sloodle

Moodle is one of open source's greatest success stories. It's variously described as an e-learning or course management system. And given that education is also one of the most popular applications of Second Life, melding Moodle and Second Life seems a natural fit. Enter Sloodle, whose latest version has just been released:

Version 0.4 integrates Second Life 3D classrooms with Moodle, the world’s most popular open source e-learning system with over 30 million users (http://www.moodle.org). This latest release allows teachers and students to prepare materials in an easy-to-use, web-based environment and then log into Second Life to put on lectures and student presentations using their avatars.

The new tools also let students send images from inside Second Life directly to their classroom blog. Students are finding this very useful during scavenger hunt exercises where teachers send them to find interesting content and bring it back to report to their classmates.

Tools that cross the web/3D divide are becoming more popular as institutions want to focus on the learning content rather than the technical overhead involved in orienting students into 3D settings and avatars.

As an open-source platform, SLOODLE is both freely available and easily enhanced and adapted to suit the needs of diverse student populations. And web hosts are lining up to support the platform: a number of third-party web hosts now offer Moodle hosting with SLOODLE installed, either on request or as standard, making it easier than ever to get started with SLOODLE.

SLOODLE is funded and supported by Eduserv (http://www.eduserv.ac.uk/) and is completely free for use under the GNU GPL license.

The project was founded by Jeremy Kemp of San José State University, California, and Dr. Daniel Livingstone of the University of the West of Scotland, UK.

Follow me on Twitter @glynmoody

Forward to the Past with Forrester

Looking at Forrester's latest report on open source, I came across the following:

The bottom line is that in most application development shops, the use of open source software has been a low-level tactic instead of a strategic decision made by informed executives. As a result, while there’s executive awareness of the capital expenditure advantages of adopting OSS, other benefits, potential risks, and the structural changes required to take full advantage of OSS are still poorly understood.

On Open Enterprise blog.

Follow me on Twitter @glynmoody

Open Access as "Middleware"

Interesting analogy:

A "legacy system" in the world of computing provides a useful analogy for understanding the precarious state of contemporary academic publishing. This comparison might also keep us from stepping backward in the very act of stepping forward in promoting Open Access publishing and Institutional Repositories. I will argue that, vital as it is, the Open Access movement should really be seen in its current manifestation as academic "middleware" servicing the "legacy system" of old-school scholarship.

(Via Open Access News.)

Follow me on Twitter @glynmoody

07 April 2009

OpenStreetMap Navigates to Wikipedia

One of the powerful features of open source is re-use: you don't have to re-invent the wheel, but can build on the work of others. That's straightforward enough for software, but it can also be applied to other fields of openness. Here's a fantastic example: embedding OpenStreetMap in Wikipedia entries:


For some time, there have been efforts to bring OpenStreetMap (OSM) and Wikipedia closer together. Both projects have the mission to produce free knowledge and information through a collaborative community process. Because of the similarities, there are many users active in both projects – however, mutual integration is still lacking.

For this reason, Wikimedia Deutschland (WM-DE, the German chapter of Wikimedia) is providing funds of 15,000 euros (almost $20k) and starting a corresponding pilot project. A group of interested Wikipedians and OSM users have partnered up to reach two goals: the integration of OSM maps in Wikipedia and the installation of a map toolserver. The map toolserver will serve to prototype new mapping-related projects and prepare them for deployment on the main Wikimedia cluster.

Here's how it will work:

Maps are an important part of the information in encyclopaedic articles – but at present, mostly static maps are used. Free interactive maps, combined with a marker system, make possible a new way of presenting information.

For some time there have been MediaWiki Extensions available for embedding OpenStreetMap maps into MediaWiki. That's a great start, but it isn't enough. If these extensions were deployed on Wikipedia without any kind of proxy set-up, the OpenStreetMap tile servers would struggle to handle the traffic.

One of our aims is to build an infrastructure in the Wikimedia projects that allows us to keep the OSM data, or at least the tile images, ready locally in the Wikimedia network. We still have to gain some experience with this, but we are optimistic: on one side, we have a number of Wikipedians in the team who are versed in MediaWiki and in scaling software systems, and on the other side we have OSM users who can set up the necessary geo database.

We learned much from the use of the Wikimedia Toolservers – for example, that far more useful tools were developed on a platform for experimentation than anyone had predicted. Interested developers have a good starting position for developing new tools with new possibilities.

We expect similar results from the map toolserver. As soon as it is online, anyone who is interested can apply for an account by presenting their development ideas and relevant experience. We want to allow as many users as possible to implement their ideas without having to worry about the basic setup. We hope that, in the spirit of the creation and distribution of free content, many new maps and visualisations will emerge.
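
To make the caching idea concrete, here's a minimal sketch of a local tile cache – to be clear, this is my own illustration, not the project's code; only the standard slippy-map z/x/y tile-naming scheme and the OSM tile server URL pattern are real, while the cache location and User-Agent string are invented:

    import os
    import urllib.request

    # Sketch: serve OpenStreetMap tiles from local storage, fetching from
    # the OSM tile servers only on a cache miss. Tiles follow the standard
    # slippy-map z/x/y.png naming scheme.
    TILE_URL = "https://tile.openstreetmap.org/{z}/{x}/{y}.png"
    CACHE_DIR = "tile-cache"  # illustrative location

    def get_tile(z, x, y):
        """Return the PNG bytes of one map tile, caching it on disk."""
        path = os.path.join(CACHE_DIR, str(z), str(x), "%d.png" % y)
        if not os.path.exists(path):  # cache miss: fetch once, keep locally
            os.makedirs(os.path.dirname(path), exist_ok=True)
            req = urllib.request.Request(
                TILE_URL.format(z=z, x=x, y=y),
                headers={"User-Agent": "tile-cache-sketch/0.1"})
            with urllib.request.urlopen(req) as resp, open(path, "wb") as out:
                out.write(resp.read())
        with open(path, "rb") as f:
            return f.read()

    # Example: tile (0, 0, 0) is the single tile covering the whole world.
    world = get_tile(0, 0, 0)

Once a tile has been fetched, every later request for it is served locally – which is precisely the kind of load-sparing the team describes, just at Wikipedia scale.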

Now, it's happening:

There has been rapid progress on the subject of adding OpenStreetMap maps to Wikimedia projects (e.g. Wikipedia) during the MediaWiki Developer Meet-Up taking place right now in Berlin.

Maps linked to Wikipedia content *within* Wikipedia: I can't wait.

Follow me on Twitter @glynmoody

Transparency and Open Government

Not my words, but those of that nice Mr Obama:


My Administration is committed to creating an unprecedented level of openness in Government. We will work together to ensure the public trust and establish a system of transparency, public participation, and collaboration. Openness will strengthen our democracy and promote efficiency and effectiveness in Government.

Wow.

Specifically:

Government should be transparent. Transparency promotes accountability and provides information for citizens about what their Government is doing.

...

Government should be participatory. Public engagement enhances the Government's effectiveness and improves the quality of its decisions.

...

Government should be collaborative. Collaboration actively engages Americans in the work of their Government.

Read the whole thing - and weep for poor old, locked-up UK....

Follow me on Twitter @glynmoody

Google and Microsoft Agree: This is Serious

You know things are bad when a coalition includes Google and Microsoft agreeing on something...

On Open Enterprise blog.

Follow me on Twitter @glynmoody

RFCs: Request for Openness

There's a fascinating history of the RFCs in the New York Times, written by a person who was there at the beginning:

Our intent was only to encourage others to chime in, but I worried we might sound as though we were making official decisions or asserting authority. In my mind, I was inciting the wrath of some prestigious professor at some phantom East Coast establishment. I was actually losing sleep over the whole thing, and when I finally tackled my first memo, which dealt with basic communication between two computers, it was in the wee hours of the morning. I had to work in a bathroom so as not to disturb the friends I was staying with, who were all asleep.

Still fearful of sounding presumptuous, I labeled the note a “Request for Comments.” R.F.C. 1, written 40 years ago today, left many questions unanswered, and soon became obsolete. But the R.F.C.’s themselves took root and flourished. They became the formal method of publishing Internet protocol standards, and today there are more than 5,000, all readily available online.

For me, the most interesting comments are the following:

The early R.F.C.’s ranged from grand visions to mundane details, although the latter quickly became the most common. Less important than the content of those first documents was that they were available free of charge and anyone could write one. Instead of authority-based decision-making, we relied on a process we called “rough consensus and running code.” Everyone was welcome to propose ideas, and if enough people liked it and used it, the design became a standard.

After all, everyone understood there was a practical value in choosing to do the same task in the same way. For example, if we wanted to move a file from one machine to another, and if you were to design the process one way, and I was to design it another, then anyone who wanted to talk to both of us would have to employ two distinct ways of doing the same thing. So there was plenty of natural pressure to avoid such hassles. It probably helped that in those days we avoided patents and other restrictions; without any financial incentive to control the protocols, it was much easier to reach agreement.

This was the ultimate in openness in technical design and that culture of open processes was essential in enabling the Internet to grow and evolve as spectacularly as it has. In fact, we probably wouldn’t have the Web without it. When CERN physicists wanted to publish a lot of information in a way that people could easily get to it and add to it, they simply built and tested their ideas. Because of the groundwork we’d laid in the R.F.C.’s, they did not have to ask permission, or make any changes to the core operations of the Internet. Others soon copied them — hundreds of thousands of computer users, then hundreds of millions, creating and sharing content and technology. That’s the Web.

I think this is right: the RFCs are predicated on complete openness, where anyone can make suggestions and comments. The Web built on that basis, extending the possibility of openness to everyone on the Internet. In the face of attempts to kill net neutrality in Europe, it's something we should be fighting for.

Follow me on Twitter @glynmoody

06 April 2009

The Latest Act in the ACTA Farce

I think the Anti-Counterfeiting Trade Agreement (ACTA) will prove something of a watershed in the negotiation of treaties. We have already gone from a situation where governments around the world all but denied the thing existed, to the point where the same people are now scrambling to create some semblance of openness without actually revealing too much.

Here's the latest attempt, which comes from the US team:

A variety of groups have shown their interest in getting more information on the substance of the negotiations and have requested that the draft text be disclosed. However, it is accepted practice during trade negotiations among sovereign states to not share negotiating texts with the public at large, particularly at earlier stages of the negotiation. This allows delegations to exchange views in confidence facilitating the negotiation and compromise that are necessary in order to reach agreement on complex issues. At this point in time, ACTA delegations are still discussing various proposals for the different elements that may ultimately be included in the agreement. A comprehensive set of proposals for the text of the agreement does not yet exist.

This is rather amusing. On the one hand, the negotiators have to pretend that "a comprehensive set of proposals for the text of the agreement does not yet exist", so that we can't find out the details; on the other, they want to finish off negotiations as quickly as possible, so as to prevent too many leaks. Of course, they can't really have it both ways, which is leading to this rather grotesque dance of the seven veils, whereby bits and pieces are revealed in an attempt to keep us quiet in the meantime.

The latest summary does contain some interesting background details that I'd not come across before:

In 2006, Japan and the United States launched the idea of a new plurilateral treaty to help in the fight against counterfeiting and piracy, the so-called Anti-Counterfeiting Trade Agreement (ACTA). The aim of the initiative was to bring together those countries, both developed and developing, that are interested in fighting counterfeiting and piracy, and to negotiate an agreement that enhances international co-operation and contains effective international standards for enforcing intellectual property rights.

Preliminary talks about such an anti-counterfeiting trade agreement took place throughout 2006 and 2007 among an initial group of interested parties (Canada, the European Commission, Japan, Switzerland and the United States). Negotiations started in June 2008 with the participation of a broader group of participants (Australia, Canada, the European Union and its 27 member states, Japan, Mexico, Morocco, New Zealand, Republic of Korea, Singapore, Switzerland and the United States).

The rest, unfortunately, is the usual mixture of half-truths and outright fibs. But this constant trickle of such documents shows that they are taking notice of us, and that we must up the pressure for full disclosure of what exactly is being negotiated in our name.

Follow me on Twitter @glynmoody

A Different Kind of Wörterbuch

Linguee seems to offer an interesting twist on a boring area - bilingual dictionaries:

With Linguee, you can search for words and expressions in many millions of bilingual texts in English and German. Every expression is accompanied by useful additional information and suitable example sentences.

...

When you translate texts to a foreign language, you usually look for common phrases rather than translations of single words. With its intelligent search and the significantly larger amount of stored text content, Linguee is the right tool for this task. You find:

* In what context a translation is used
* How frequent a particular translation is
* Example sentences: How have other people translated an expression?

By searching not only for a single word, but for the word in its context, you can easily find a translation that fits that context optimally. With its large number of entries, Linguee often retrieves translations of rare terms that you won't find anywhere else.
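
The core idea – showing a word alongside its translation in real sentence pairs – is simple enough to sketch. Here's a toy concordance in Python over a hand-made aligned corpus; the sentences are invented, and Linguee's real system is of course vastly more sophisticated:

    # Toy bilingual concordance: find a phrase in the English half of
    # aligned sentence pairs and print each hit with its German
    # translation, so the word is seen in context. The corpus is invented.

    corpus = [
        ("The contract enters into force today.",
         "Der Vertrag tritt heute in Kraft."),
        ("The new law will enter into force next year.",
         "Das neue Gesetz wird nächstes Jahr in Kraft treten."),
        ("May the force be with you.",
         "Möge die Macht mit dir sein."),
    ]

    def concordance(phrase, pairs):
        """Return all aligned pairs whose English side contains the phrase."""
        needle = phrase.lower()
        return [(en, de) for en, de in pairs if needle in en.lower()]

    for en, de in concordance("force", corpus):
        print("%s\n    -> %s\n" % (en, de))

Even this toy illustrates why context matters: "force" in a legal phrase corresponds to "Kraft", while the Star Wars sense becomes "Macht" – exactly the distinction that a single-word dictionary lookup flattens.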

There are two other points of interest. First, the source of the texts:

Our most important source is the bilingual web. Other valuable sources include EU documents and patent specifications.

And second, the fact that a "GPL version of the Linguee dictionary" is available.

Follow me on Twitter @glynmoody