22 March 2007

US and NATO Declare War on Net Neutrality

Here's a very stupid idea in the making:

Representatives of the US government have demanded that the Internet Engineering Task Force (IETF) come up with a solution for prioritizing certain data within government networks and at the interfaces to other networks. Representatives of the US Department of Defense and of the National Communications System (NCS), which is part of the Department of Homeland Security, are seeking to ensure that certain items of information can be guaranteed to arrive even in an emergency. This presupposes appropriate identification mechanisms in the servers. At the IETF meeting in Prague, Antonio Desimone of the US Department of Defense said that the switch to a "global grid" raised a number of issues, such as how delivery of a specific e-mail could be ensured within a defined period of time. What was needed was a way of prioritizing data, one that also took into account emergency and catastrophe scenarios.

"Some calls are more important than other calls, some chats more important than others or a certain content within a chat session may have priority," Mr. Desimone explained.

Why's it stupid? Well, it essentially kills net neutrality, and at the behest of the soldiers. If they want their own super-duper networks, let them build them, rather than attempting to steal the toys everyone else is sharing. And another reason this is asking for trouble is the following:

He said he was especially worried that prioritization might in reality not be confined to authorized persons. Should confinement fail script kids and hackers might find ways to use "priority bits" for their purposes, he observed.

"Might find ways"? Might???
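For the record, "priority bits" here almost certainly means something like the DSCP field in the IP header, which any application can already ask the kernel to set. A minimal sketch (the DSCP class chosen is just the conventional "Expedited Forwarding" value; whether any router actually honours the marking is pure network policy, which is exactly the trust problem Desimone is worrying about):

```python
import socket

# DSCP "Expedited Forwarding" (46) is the class conventionally used for
# high-priority traffic. The IP TOS byte carries the DSCP value shifted
# left two bits, past the ECN field.
DSCP_EF = 46
TOS_EF = DSCP_EF << 2  # 0xb8

def mark_priority(sock, dscp):
    """Ask the kernel to set the DSCP field on this socket's outgoing packets."""
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
mark_priority(s, DSCP_EF)
# On Linux, reading the option back returns the full TOS byte we set.
print(s.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))
s.close()
```

Which rather makes the point: nothing stops a script kiddie from calling the same one-liner, so the whole scheme stands or falls on gatekeeping that doesn't exist yet.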

OpenDocument Format: the Monograph

One of the great things about all things open is that their documentation is nearly always freely available. A case in point is this monograph on ODF, which can be downloaded in its entirety, or chapter-by-chapter. It's all about choice.... (Via GotzeBlogged.)

21 March 2007

Virtual Dosh: A Taxing Question

Here's the latest contribution to the (academic) debate about whether and in what circumstances virtual dosh should be taxed:

Although it seems intuitively the case that the person who auctions virtual property online for a living should be taxed on his or her earnings, or even that the player who occasionally sells a valuable item for real money should be taxed on the profits of those sales, what of the player who only accumulates items or virtual currency within a virtual world? Should the person whose avatar discovers or wins an item of value be taxed on the value of that item? And should a person who trades an item in-game with another player (for an item or virtual currency) be taxed on any increase in value of the item relinquished?

(Via Terra Nova.)

We Forbid You to Mention This Post

Well, this is going to work, isn't it?


Malaysia's traditional media has been ordered not to mention, quote or pursue stories exposed by bloggers and online news sites, which are emerging as a powerful new media force.

A security ministry circular dated March 13 told top editors of a dozen mainstream newspapers and five television stations that they must not "give any consideration whatsoever" to anti-government material posted online.

Ironically the circular, issued by the ministry's secretary general, was first exposed by the independent online magazine Malaysiakini.com on Saturday.

Further proof of the power - and importance - of blogs, especially in countries with a supine press. Come to think of it, they're also pretty important in countries with even a mostly-supine press - as in, everywhere. (Via Smart Mobs.)

Fresh Thoughts on DRM

One of the problems with the DRM battle is that it tends to get into a rut: the same old arguments for and against are trotted out. For those of us who care, it's a necessary price to pay for telling it as it is, but for onlookers, it's just plain boring.

That's what makes this piece, which reports on the recent conference "Copyright, DRM Technologies, and Consumer Protection", at UC Berkeley, quite simply the most interesting writing on DRM that I've come across for ages: as well as explaining the old arguments well, it includes a couple of new thoughts:

One good point a few panelists made is that successful DRM is likely to weaken the user's privacy. All DRM prevents computers and media devices from sharing files freely with each other. But in order to merely curb freedom, rather than end it entirely, DRM must identify which files can be shared and which can't, and which methods of sharing are permissible. The more sophisticated this process of determination becomes, the more it is necessary for devices to analyze information about the files in complex ways. The burden of this analysis will often be too great to implement in typical consumer electronics — so instead the data will be sent to an online server, which will figure out your rights and tell the client device what to do. But step back and consider where this is going: devices all over your house, sending information about your viewing and listening habits to a central server. Is this data certain to be subpoena-able someday? You bet. It probably already is.

Another point (made by Peter Swire among others) was the computer security implications of running DRM. The code in a DRM system must be a black box: it cannot be open source, because if the user could understand and change it, she could disable it and copy her files without restriction. But if the code is opaque, it cannot be examined for security flaws — and in fact, the Digital Millennium Copyright Act makes it illegal to even attempt such an examination in most circumstances. Basically, you have to run this code, for even if you are technically capable of modifying it, doing so would be illegal. (In response to this situation, Jim Blandy proposed a new slogan: "It's my computer, damn it!")

I believe that now is a critical moment in the fight against DRM: if we don't scotch the snake soon, it will turn into a hydra. To win, we need to convince "ordinary" people that DRM is mad, bad and dangerous to use; the points raised above could well prove important additions to the anti-DRM armoury.

Learning about Open Educational Resources

Major European studies on open source are two a penny these days (and that's good), but some of the other opens have yet to achieve this level of recognition. So the appearance of a major EU report on Open Educational Resources from the Open e-Learning Content Observatory Services (OLCOS) project is particularly welcome.

At present a world-wide movement is developing which promotes unencumbered open access to digital resources such as content and software-based tools to be used as a means of promoting education and lifelong learning. This movement forms part of a broader wave of initiatives that actively promote the “Commons” such as natural resources, public spaces, cultural heritage and access to knowledge that are understood to be part of, and to be preserved for, the common good of society. (cf. Tomales Bay Institute, 2006)

With reference to the Open Educational Resources (OER) movement, the William and Flora Hewlett Foundation justifies their investment in OER as follows: “At the heart of the movement toward Open Educational Resources is the simple and powerful idea that the world’s knowledge is a public good and that technology in general and the Worldwide Web in particular provide an extraordinary opportunity for everyone to share, use, and re-use knowledge. OER are the parts of that knowledge that comprise the fundamental components of education – content and tools for teaching, learning and research.”

Since the beginning of 2006, the Open e-Learning Content Observatory Services (OLCOS) project has explored how Open Educational Resources (OER) can make a difference in teaching and learning. Our initial findings show that OER do play an important role in teaching and learning, but that it is crucial to also promote innovation and change in educational practices. The resources we are talking about are seen only as a means to an end, and are utilised to help people acquire the competences, knowledge and skills needed to participate successfully within the political, economic, social and cultural realms of society.

Despite its title, it covers a very wide area, including open courseware, open access and even open source. It's probably the best single introduction to open educational resources around today - and it's free, as it should be. (Via Open Access News.)

Open Rights Group Needs You!

Well, it does if you are a British caricaturist, parodist, pasticheur or general masher-upper:


The Patent Office is charged with implementing the exciting recommendations suggested in the recent Gowers Review of IP. But they are yet to be convinced of the crucial need for some of these recommendations, mainly because they’re finding it hard to get in touch with the relevant practitioners. They are looking for concrete examples of creative practices inhibited by the law, to back up proposed exceptions for the purposes of “creative, transformative or derivative works” and “caricature, parody or pastiche”.

Would you, your colleagues, students or collaborators benefit from these exceptions? Are you working or have you worked on a project outlawed by the overly-protectionist copyright regime, which would have benefited from these kinds of exceptions? If so, please get in touch - info[at]openrightsgroup.org - and share your experience.

Three Cheers for MIT

Here's an interesting tale that highlights the absurdity of DRM in the context of scientific publishing - not a sphere where you normally expect to encounter it:

The MIT Libraries have canceled access to the Society of Automotive Engineers’ web-based database of technical papers, rejecting the SAE’s requirement that MIT accept the imposition of Digital Rights Management (DRM) technology.

...

When informed that the SAE feels the need to impose DRM to protect their intellectual property, Professor John Heywood, the Director of MIT’s Sloan Automotive Lab, who publishes his own work with the SAE, responded with a question: “Their intellectual property?” He commented that increasingly strict and limiting restrictions on use of papers that are offered to publishers for free is causing faculty to become less willing to “give it all away” when they publish.

Echoing Professor Heywood, Alan Epstein, Professor of Aeronautics and Astronautics, believes that “If SAE limits exposure to their material and makes it difficult for people to get it, faculty will choose to publish elsewhere.” He noted that “SAE is a not-for-profit organization and should be in this for the long term,” rather than imposing high prices and heavy restrictions to maximize short-term profit.

As this makes clear, the SAE is attempting to protect an intellectual monopoly it has on other people's work by imposing DRM, which adds insult to injury. Let's hope more institutions can follow MIT's fine example, and nip this DRM madness in the bud. (Via Open Access News.)

20 March 2007

Absolutely Criminal

This is turning into a complete disaster:

Nicola Zingaretti, the EU parliament's rapporteur for the EU directive on the planned penal regulations for the enforcement of intellectual property rights, has proposed that the mere "acceptance" of such violations be made a crime. The Italian Social-Democrat proposed that this vague term be included as part of amendments orally proposed as a "compromise" at the last minute to the Committee on Legal Affairs, which will be voting on the matter today. The FFII, a German organization for free information infrastructure, has called this proposed amendment a "broad concept of secondary liability" for "intentional" violations of copyright, patent, and trademarks. The FFII says that the proposal goes far beyond the much criticized original proposal made by the EU Commission to criminalize "inciting, aiding and abetting" legal violations.

To see why this legislation should be dropped completely, try replacing "enforcement of intellectual property rights" by "enforcement of intellectual monopolies": doesn't sound so good, eh?

A Lot of Copyright Whatnot

A superb example of how cavalier proponents of intellectual monopolies can be with figures:

Leaving aside the rhetoric, what is particularly remarkable about these comments is the claim that Canadian copyright law is costing the economy between $10 to $30 billion per year. Obviously any estimate that varies by up to $20 billion is not particularly credible. Further, even the low end figure looks ridiculous as it is four times the losses claimed by the MPAA in China and is more than three times the total amount of cultural goods that Canada imports from the U.S. every year. Or considered another way, the $10 billion figure is more than the Finance Minister committed yesterday to new health care initiatives, the environment, education, and special services for armed forces veterans combined. And that is the low end - the $30 billion figure represents nearly 13 percent of total government revenues and nearly equals the total amount of provincial transfers and subsidies. All of this from "a lot of counterfeiting of movies and songs and whatnot?"

ODF Now Free (as in Beer)

There is a certain irony in the fact that the OpenDocument Format, that essence of office suite freedom, has been locked up as a proprietary document costing the princely sum of 342 Swiss Francs.

Well, it's now been liberated - at least in the beery sense. You still can't do anything daring with it, like change it; but since it is meant to be a standard, I suppose that's not totally unreasonable.

Be warned, though: its 728 pages are not for those of a delicate disposition (but are, at least, better than the 6,000 pages of Microsoft's rival OOXML offering). (Via Rob Weir.)

Tragedy of the Water Commons

Sigh.

Some of the world's major rivers are reaching crisis point because of dams, shipping, pollution and climate change, according to the environment group WWF.

Its report, World's Top 10 Rivers at Risk, says the river "crisis" rivals climate change in importance.

In the (Marketing) Belly of the Beast

It's always a good idea to try to understand how Microsoft regards the world of free software, and there's no better way of doing that than reading its own materials aimed at beating open source. Here's a good example, called Linux Personas, which presents various kinds of GNU/Linux users and how to win them back to Windows.

Perhaps the most interesting category is the Linux Aficionado - hard-core open source geek, in other words. The two key approaches are the usual tired TCO studies - a pretty forlorn hope given the extent to which they have been debunked - and an argument based on the strength of Windows' integrated platform.

The latter has always struck me as one of the better points, since it is (currently) a key differentiator for Microsoft. I still don't see geeks going for it (their senior managers might, though). What's more important in this context, perhaps, is the rise of the open source stack, which is effectively building a counter-argument to this. (Via Slashdot.)

19 March 2007

Murdock Joins Sun: Watch Out GNU/Linux

I've noted in a number of places the impressive and continuing rise of Sun to become pretty much the leading defender of the GNU GPL faith. Anyone who had any doubts about its ultimate intentions might like to read this post from Ian Murdock, the -ian in Debian, and one of the senior figures in the GNU/Linux world:

I’m excited to announce that, as of today, I’m joining Sun to head up operating system platform strategy. I’m not saying much about what I’ll be doing yet, but you can probably guess from my background and earlier writings that I’ll be advocating that Solaris needs to close the usability gap with Linux to be competitive; that while I believe Solaris needs to change in some ways, I also believe deeply in the importance of backward compatibility; and that even with Solaris front and center, I’m pretty strongly of the opinion that Linux needs to play a clearer role in the platform strategy.

Watch out little GNU/Linux, there's a big OpenSolaris heading your way....

Open Knowledge, Open Greenery and Modularity

On Saturday I attended the Open Knowledge 1.0 meeting, which was highly enjoyable from many points of view. The location was atmospheric: next to Hawksmoor's amazing St Anne's church, which somehow manages the trick of looking bigger than its physical size, inside the old Limehouse Town Hall.

The latter had a wonderfully run-down, almost Dickensian feel to it; it seemed rather appropriate as a gathering place for a ragtag bunch of ne'er-do-wells: geeks, wonks, journos, activists and academics, all with dangerously powerful ideas on their minds, and all more dangerously powerful for coming together in this way.

The organiser, Rufus Pollock, rightly placed open source squarely at the heart of all this, and pretty much rehearsed all the standard stuff this blog has been wittering on about for ages: the importance of Darwinian processes acting on modular elements (although he called the latter atomisation, which seems less precise, since atoms, by definition, cannot be broken up, but modules can, and often need to be for the sake of increased efficiency.)

One of the highlights of the day for me was a talk by Tim Hubbard, leader of the Human Genome Analysis Group at the Sanger Institute. I'd read a lot of his papers when writing Digital Code of Life, and it was good to hear him run through pretty much the same parallels between open genomics and the other opens that I've made and make. But he added a nice twist towards the end of his presentation, where he suggested that things like the doomed NHS IT programme might be saved by the use of Darwinian competition between rival approaches, each created by local NHS groups.

The importance of the ability to plug into Darwinian dynamics also struck me when I read this piece by Jamais Cascio about carbon labelling:

In order for any carbon labeling endeavor to work -- in order for it to usefully make the invisible visible -- it needs to offer a way for people to understand the impact of their choices. This could be as simple as a "recommended daily allowance" of food-related carbon, a target amount that a good green consumer should try to treat as a ceiling. This daily allowance doesn't need to be a mandatory quota, just a point of comparison, making individual food choices more meaningful.

...

This is a pattern we're likely to see again and again as we move into the new world of carbon footprint awareness. We'll need to know the granular results of actions, in as immediate a form as possible, as well as our own broader, longer-term targets and averages.

Another way of putting this is that for these kinds of ecological projects to work, there needs to be a feedback mechanism so that people can see the results of their actions, and then change their behaviour as a result. This is exactly like open source: the reason the open methodology works so well is that a Darwinian winnowing can be applied to select the best code/content/ideas/whatever. But that is only possible when there are appropriate metrics that allow you to judge which actions are better, a reference point of the kind Cascio is writing about.

By analogy, we might call this particular kind of environmental action open greenery. It's interesting to see that here, too, the basic requirement of modularity turns out to be crucially important. In this case, the modularity is at the level of the individual's actions. This means that we can learn from other people's individual success, and improve the overall efficacy of the actions we undertake.

Without that modularity - call it closed-source greenery - everything is imposed from above, without explanation or the possibility of local, personal, incremental improvement. That may have worked in the 20th century, but given the lessons we have learned from open source, it's clearly not the best way.

Which Future for Adobe's Apollo?

I have mixed feelings about Adobe's new Apollo:

Apollo is a cross-OS runtime that allows developers to leverage their existing web development skills (Flash, Flex, HTML, Ajax) to build and deploy desktop RIA’s [Rich Internet Applications].

On the one hand, it has the F-word in there, and as readers of this blog may know, I am totally allergic to Flash. On the other hand, this seems promising:

We spent a considerable amount of time researching a number of HTML rendering engines for use in Apollo. We had four main criteria, all of which WebKit met:

* Open project that we could contribute to
* Proven technology, that web developers and end users are familiar with
* Minimum effect on Apollo runtime size
* Proven ability to run on mobile devices

While the final decision was difficult, we felt that WebKit is the best match for Apollo at this time.

We shall see (now, if only the Delphic oracle were still around....)

16 March 2007

Answer to Life, the Universe, and Everything: 326

Forget about 300, this year's hot number is 326:

On this day, we learn from IBM's attorney, David Marriott that the "mountain of code" SCO's CEO Darl McBride told the world about from 2003 onward ends up being a measly 326 lines of noncopyrightable code that IBM didn't put in Linux anyway.

On the other hand, SCO has infringed all 700,000 lines of IBM's GPL'd code in the Linux kernel.

Goodbye, Darl, it was vaguely fun while it lasted - well, not much actually - but now, it's over. So long, and thanks for all the fish.

But Is It Cricket?

This raises some interesting issues about what exactly copyright covers:

A cricketing website has found what it hopes is an inventive way to bypass copyright laws to show users action from the Cricket World Cup.

Despite the fact that Sky Television has the exclusive rights to broadcast the live action from the West Indies, Cricinfo.com is using computer animation to provide ball-by-ball coverage to non-Sky viewers.

...

Wisden said it had carefully consulted lawyers before going ahead with the simulations in this week's World Cup. "Cricinfo 3D is based on public domain information gathered by our scorers who record a number of factors such as where the ball pitched, the type of shot played and where the ball goes in the field," said a Wisden statement. "That data is then fed as an xml to anyone who has Cricinfo 3D running on their desktops and the software generates an animation based on this data."

The issue is whether the information about the match is in the public domain, and can thus be fed into a simulation, or whether the rights that Sky has bought cover that information in some way.

I'd say not, because you generally can't copyright (or patent) pure information: for intellectual monopolies to be granted, you need to go beyond the facts to add artistic expression in the case of copyright, or non-obvious inventive steps in the case of patents. Cricinfo 3D seems to be a new artistic interpretation of pure data, independent of Sky's own "artistic" images of the game (i.e., the camera shots they take).
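The technical side of this is worth spelling out, because it's what makes the legal argument work: the feed carries only facts, and the "artistic expression" is generated client-side. The article doesn't publish Cricinfo's actual schema, so every element name below is invented for illustration; this is just a sketch of what a ball-by-ball record and its consumer might look like:

```python
import xml.etree.ElementTree as ET

# A purely hypothetical ball-by-ball record. Cricinfo's real feed format
# isn't described in the article, so all of these element names are
# made up for illustration -- the point is that the payload is bare
# facts, not copyrightable footage.
feed = """
<delivery over="12" ball="3">
  <bowler>Hypothetical Bowler</bowler>
  <batsman>Hypothetical Batsman</batsman>
  <pitch x="0.4" y="6.2"/>   <!-- where the ball pitched -->
  <shot type="cover drive"/>
  <result runs="4"/>
</delivery>
"""

ball = ET.fromstring(feed)
runs = int(ball.find("result").get("runs"))
shot = ball.find("shot").get("type")
# The client turns these facts into its own animation -- the expression
# is created here, not copied from Sky's broadcast.
print(f"Over {ball.get('over')}.{ball.get('ball')}: {shot}, {runs} runs")
```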

Not that intellectual monopolies are known for their strict adherence to the laws of logic....

Mapping Social Networks

Social networks lie at the heart of Web 2.0 - and of the opens. So it is surprising that more hasn't been done to analyse and map the ebb and flow of ideas and influence across these networks.

Here's an interesting solution for enterprises, called Trampoline. There are clear financial benefits for companies if they can understand better how the social networks work within (and without) their walls, so it's a good fit there too.

In a sense, all this stuff is obvious:

We humans spent 200,000 years evolving all kinds of social behaviour for accumulating, filtering and passing on information. We're really good at it. So good we don't even think about it most of the time. However the way we use email, instant messaging, file sharing and so on disrupts these instincts and stops them doing their job. This is why we waste so much time scanning through emails we're not interested in and searching for documents we need.

Trampoline's approach is so refreshingly obvious it seems radical. We've gone right back to the underlying social behaviour and created innovative software that harnesses human instincts instead of disabling them. We describe this process of mirroring social behaviour in software as "sociomimetics".

Trampoline's products leverage the combined intelligence of the whole network to manage and distribute information more efficiently. Individuals get the information they need, unrecognised expertise becomes visible, the enterprise increases the reuse and value of its knowledge assets.

Given the simplicity of the idea, it should be straightforward to come up with open source implementations. And there would be a double hit: a project that was interesting in itself, and also directly applicable to open source collaboration. (Via Vecosys.)
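To show just how simple the core of such a tool is: here's a minimal sketch of the kind of analysis this implies, inferring who the informal hubs are purely from message metadata. The names and data are invented; a real system would read mail-server logs rather than a literal list, but the graph-building logic is the whole trick:

```python
from collections import Counter

# Hypothetical message metadata: (sender, recipients). In a real
# deployment this would come from mail-server or IM logs.
messages = [
    ("alice", ["bob", "carol"]),
    ("bob",   ["alice"]),
    ("carol", ["alice", "dave"]),
    ("dave",  ["carol"]),
    ("alice", ["dave"]),
]

# Build an undirected "who talks to whom" graph: one edge per pair
# that has exchanged at least one message.
edges = set()
for sender, recipients in messages:
    for r in recipients:
        edges.add(frozenset((sender, r)))

# Count each person's distinct contacts (unnormalised degree centrality).
degree = Counter()
for edge in edges:
    for person in edge:
        degree[person] += 1

# The highest-degree node is the likeliest informal hub -- the
# "unrecognised expertise" Trampoline talks about surfacing.
hub, contacts = degree.most_common(1)[0]
print(hub, contacts)  # alice has 3 distinct contacts here
```

Everything beyond this (visualisation, content analysis, privacy controls) is engineering, which is exactly why an open source implementation seems so feasible.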

15 March 2007

The Other Open Source Java

Sun's Java is not the only one going open source:

In spite of a deal with the U.S.-based software giant Microsoft, the government pledged Wednesday that it would continue promoting the use of open-source software.

State Minister for Research and Technology Kusmayanto Kadiman said that open-source software would greatly benefit Indonesia's technological development.

QualiPSo: EU OSS and Acronym Madness

This sounds great:

Leading European, Brazilian and Chinese information and communications technology (ICT) players announced today that they have joined forces to launch QualiPSo, a quality platform to foster the development and use of open source software to help their industries in the global race for growth.

The aim of QualiPSo is to help industries and governments fuel innovation and competitiveness in today’s and tomorrow’s global environment by providing the way to use trusted low-cost, flexible open source software to develop innovative and reliable information systems. To meet that goal, QualiPSo will define and implement the technologies, processes and policies to facilitate the development and use of open source software components, with the same level of trust traditionally offered by proprietary software.

Er, yes, and how will it do that?

Developing a long-lasting network of professionals caring for the quality of open source software for enterprise computing. Six Competence Centres – running the collaborative platforms, tools and process developed in this project – will be set up to support the development, deployment and adoption of OSS by private and public Information Systems Departments, large companies, SMEs, end users and ISVs.

Yes, yes, yes, and that will be done how?

Defining methods, development processes, and business models to facilitate the use of open source Software (OSS) by the industry.

Can't they just get stuck in and try it - you know, download, install, give it a go? Anything else?

Developing a new Capability Maturity Model-like approach to assessing the quality of OSS. This model will be discussed with CMM’s originators, the Software Engineering Institute (SEI), with a view to formalising it as an official extension of CMMI.

What? Maturity? What's this got to do with getting people to use the ruddy stuff?

QualiPSo is launched in synergy with Europe’s technology initiatives such as NESSI and Artemis, and will leverage Europe’s existing OSS initiatives such as EDOS, FLOSSWorld (http://flossworld.org/), tOSSad (http://www.tossad.org/) and others. The project will also leverage large OSS communities such as OW2 and Morfeo.

Oh, now I see: all this is just an excuse for more acronym madness. So it's basically just a waste of money, and a missed opportunity to do something practical.

But wait:

QualiPSo is the largest Open Source initiative ever funded by the EC.

OK, make that the biggest waste of money, and biggest missed opportunity yet.

Why couldn't they invest in a few hundred open source start-ups across Europe instead? Or, easier still, simply mandate ODF for all EU government documents? That single act alone would jump-start an entire open source economy in Europe. (Via Open Source Weblog.)

Webly Openness from the Horse's Mouth

Sir Tim has been talking to a bunch of boring politicians. Here's my favourite bit - a neat distillation of why Net neutrality matters:

When, seventeen years ago, I designed the Web, I did not have to ask anyone's permission. The Web, as a new application, rolled out over the existing Internet without any changes to the Internet itself. This is the genius of the design of the Internet, for which I take no credit. Applying the age old wisdom of design with interchangeable parts and separation of concerns, each component of the Internet and the applications that run on top of it are able to develop and improve independently. This separation of layers allows simultaneous but autonomous innovation to occur at many levels all at once. One team of engineers can concentrate on developing the best possible wireless data service, while another can learn how to squeeze more and more bits through fiber optic cable. At the same time, application developers such as myself can develop new protocols and services such as voice over IP, instant messaging, and peer-to-peer networks. Because of the open nature of the Internet's design, all of these continue to work well together even as each one is improving itself.

Red Hat Exchange: Apotheosis of the Stack

I've written several times on this blog and elsewhere about the rise of the open source enterprise stack. Its appearance signals both the increasing acceptance of a wide range of open source solutions in business, as well as the growing maturity of those different parts. Essentially, the rise of the stack represents part of a broader move to create an interdependent free software ecosystem.

Red Hat has been active in this area, notably through the acquisition of JBoss, but now it has gone even further with the announcement of its Red Hat Exchange:

Red Hat has worked with customers and partners to develop Red Hat Exchange (RHX), which provides pre-integrated business application software stacks including infrastructure software from Red Hat and business application software from Red Hat partners.

RHX is a single source for research, purchase, online fulfillment and support of open source and other commercial software business application stacks. Through RHX, customers will be able to acquire pre-integrated open source software solutions incorporating infrastructure software from Red Hat and business application software from Red Hat partners. Red Hat will provide a single point of delivery and support for all elements of the software stacks.

Through RHX, Red Hat seeks to reduce the complexity of deploying business applications and support the development of an active ecosystem of commercial open source business application partners. RHX will be available later this year.

It's obviously too early to tell how exactly this will work, and how much success it will have. But it's nonetheless an important signal that the open source enterprise stack and the associated ecosystem that feeds it are rapidly becoming two of the most vibrant ideas in the free software world.

IT's Got to be Local and Open

Nice story in the Guardian today about a local UK health system that works - unlike the massive, doomed, centralised NHS system currently being half-built at vast cost. It makes some important points:


Next week the annual Healthcare Computing conference in Harrogate will buzz with accusations that the national programme has held back progress. There are two reasons behind this charge. First, under the £1bn contracts signed early in the programme, hospitals have to replace their administrative systems which record patients' details with systems from centrally chosen suppliers. As this involves considerable local effort for little benefit, progress is painfully slow. The second problem is the potential threat to confidentiality arising from making records available on a national scale.

Quite: if there is no local benefit, there will be no buy-in, and little progress. Think local, act local, and you get local achievement. The other side is that if you impose a central system, security is correspondingly weaker. Hello, ID card....

Of course, there are many areas where you want to be able to bring together information from local stores for particular purposes. That's still possible - provided you adopt open standards everywhere. Hello, ODF....

14 March 2007

Infoethics, Open Access, ODF and Open Source

Now here's something you might not expect from UNESCO every day:

The Infoethics Survey of Emerging Technologies prepared by the NGO Geneva Net Dialogue at the request of UNESCO aims at providing an outlook to the ethical implications of future communication and information technologies. The report further aims at alerting UNESCO’s Member States and partners to the increasing power and presence of emerging technologies and draws attention to their potential to affect the exercise of basic human rights. Perhaps as its most salient deduction, the study signals that these days all decision makers, developers, the corporate sector, scholars and users are entrusted with a profound responsibility with respect to technological developments and their impact on the future orientation of knowledge societies.

It touches on a rather motley bunch of subjects, including the semantic Web, RFID, biometrics and mesh networking. But along the way it says some sensible things:

One primary goal of infoethics is to extend the public domain of information; that is, to define an illustrative set of knowledge, information, cultural and creative works that should be made available to every person.

Even more surprising, to me at least, was this suggestion:

UNESCO should meanwhile support open standards and protocols that are generated through democratic processes not dominated by large corporations.

The use of OpenDocument Format and other open formats should also be encouraged as they help mitigate lock-in to certain technologies. Other initiatives to consider include pursuing free and open software, as well as the “Roadmap for Open ICT Ecosystems” developed last year.

(Via Heise Online.)