
26 March 2009

Open Knowledge Conference (OKCon) 2009

If open knowledge is your thing, London is the place, Saturday the time:

The Open Knowledge Conference (OKCon) is back for its fourth installment, bringing together individuals and groups from across the open knowledge spectrum for a day of discussions and workshops.

This year the event will feature dedicated sessions on open knowledge and development and on the semantic web and open data. Plus there's the usual substantial allocation of 'Open Space' -- sessions, workshops and discussions proposed either via the CFP or on the day.
Follow me on Twitter @glynmoody

25 June 2007

Open Textbook Joins the Family

Today we are pleased to announce the launch of http://www.opentextbook.org/, a place to list and keep track of news about textbooks that are open in accordance with the Open Knowledge Definition — i.e. free to use, reuse, and redistribute. We welcome participation in the project, and if anyone has a textbook or notes they’d like to see listed, or would like to be a contributor to the site, please head on over to http://www.opentextbook.org/.

More here.

Of Open Knowledge and Closed Minds

Extraordinary:

US university students will not be able to work late at the campus, travel abroad, show interest in their colleagues' work, have friends outside the United States, engage in independent research, or make extra money without the prior consent of the authorities, according to a set of guidelines given to administrators by the FBI.

Better shut down that pesky Internet thingy while you're at it - who knows what knowledge may be seeping out through it? (Via The Inquirer.)

30 April 2007

Of Modules, Atoms and Packages

I commented before that I thought Rufus Pollock's use of the term "atomisation" in the context of open content didn't quite capture what he was after, so I was pleased to find that he's done some more work on the concept and come up with the following interesting refinements:

Atomization

Atomization denotes the breaking down of a resource such as a piece of software or collection of data into smaller parts (though the word atomic connotes irreducibility, it is never clear what the exact irreducible, or optimal, size for a given part is). For example, a given software application may be divided up into several components or libraries. Atomization can happen on many levels.

At a very low level, when writing software we break things down into functions and classes, into different files (modules), and even group together different files. Similarly, when creating a dataset in a database we divide things into columns, tables, and groups of inter-related tables.

But such divisions are only visible to the members of that specific project. Anyone else has to get the entire application or entire database in order to use one particular part of it. Furthermore, anyone working on any given part of the application or database needs to be aware of, and interact with, anyone else working on it — decentralization is impossible or extremely limited.

Thus, atomization at such a low level is not what we are really concerned with; rather, our concern is with atomization into Packages:

Packaging

By packaging we mean the process by which a resource is made reusable by the addition of an external interface. The package is therefore the logical unit of distribution and reuse and it is only with packaging that the full power of atomization’s “divide and conquer” comes into play — without it there is still tight coupling between different parts of a given set of resources.

Developing packages is a non-trivial exercise precisely because developing good stable interfaces (usually in the form of a code or knowledge API) is hard. One way to manage this need to provide stability but still remain flexible in terms of future development is to employ versioning. By versioning the package and providing ‘releases’ those who reuse the packaged resource can use a specific (and stable) release while development and changes are made in the ‘trunk’ and become available in later releases. This practice of versioning and releasing is already ubiquitous in software development — so ubiquitous it is practically taken for granted — but is almost unknown in the area of knowledge.

Tricky stuff, but I'm sure it will be worth the effort if the end-result is a practical system for modularisation, since this will allow open content to enjoy many of the evident advantages of open code.
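To make the distinction concrete, here is a minimal sketch in Python (my own illustration, not Pollock's; the module name and all of the functions are invented for the example). The low-level atomisation is the split into small functions; the packaging is the versioned public interface that outsiders can depend on:

# wordstats.py - a hypothetical "packaged" knowledge resource
# (names invented for illustration; not code from Pollock's essay)

from collections import Counter

__version__ = "1.2.0"                   # releases give reusers a stable target
__all__ = ["word_counts", "top_words"]  # the package's external interface

def _tokenise(text):
    # Internal helper: a low-level "atom", free to change between releases.
    return text.lower().split()

def word_counts(text):
    """Count how often each word occurs (public, stable)."""
    return Counter(_tokenise(text))

def top_words(text, n=3):
    """Return the n most common words (public, stable)."""
    return word_counts(text).most_common(n)

print(top_words("to be or not to be"))  # [('to', 2), ('be', 2), ('or', 1)]

A reuser pins a particular release (say, 1.2.x) and calls only word_counts and top_words; the underscore-prefixed internals can be rewritten in the development 'trunk' without breaking anyone, which is exactly the loose coupling the quoted passage is after.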

19 March 2007

Open Knowledge, Open Greenery and Modularity

On Saturday I attended the Open Knowledge 1.0 meeting, which was highly enjoyable from many points of view. The location was atmospheric: next to Hawksmoor's amazing St Anne's church, which somehow manages the trick of looking bigger than its physical size, and inside the old Limehouse Town Hall.

The latter had a wonderfully run-down, almost Dickensian feel to it; it seemed rather appropriate as a gathering place for a ragtag bunch of ne'er-do-wells: geeks, wonks, journos, activists and academics, all with dangerously powerful ideas on their minds, and all more dangerously powerful for coming together in this way.

The organiser, Rufus Pollock, rightly placed open source squarely at the heart of all this, and pretty much rehearsed all the standard stuff this blog has been wittering on about for ages: the importance of Darwinian processes acting on modular elements (although he called the latter atomisation, which seems less precise, since atoms, by definition, cannot be broken up, but modules can, and often need to be for the sake of increased efficiency).

One of the highlights of the day for me was a talk by Tim Hubbard, leader of the Human Genome Analysis Group at the Sanger Institute. I'd read a lot of his papers when writing Digital Code of Life, and it was good to hear him run through pretty much the same parallels between open genomics and the other opens that I've made and make. But he added a nice twist towards the end of his presentation, where he suggested that things like the doomed NHS IT programme might be saved by the use of Darwinian competition between rival approaches, each created by local NHS groups.

The importance of the ability to plug into Darwinian dynamics also struck me when I read this piece by Jamais Cascio about carbon labelling:

In order for any carbon labeling endeavor to work -- in order for it to usefully make the invisible visible -- it needs to offer a way for people to understand the impact of their choices. This could be as simple as a "recommended daily allowance" of food-related carbon, a target amount that a good green consumer should try to treat as a ceiling. This daily allowance doesn't need to be a mandatory quota, just a point of comparison, making individual food choices more meaningful.

...

This is a pattern we're likely to see again and again as we move into the new world of carbon footprint awareness. We'll need to know the granular results of actions, in as immediate a form as possible, as well as our own broader, longer-term targets and averages.

Another way of putting this is that for these kinds of ecological projects to work, there needs to be a feedback mechanism so that people can see the results of their actions, and then change their behaviour as a result. This is exactly like open source: the reason the open methodology works so well is that a Darwinian winnowing can be applied to select the best code/content/ideas/whatever. But that is only possible when there are appropriate metrics that allow you to judge which actions are better, a reference point of the kind Cascio is writing about.
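As a toy illustration of that winnowing (my own sketch; the variants and their 'scores' are entirely invented), all you need is rival approaches, a shared metric, and a selection step:

import random

# Hypothetical example: three rival variants of some practice, each with
# an underlying effect we can only observe noisily (e.g. kg of CO2 saved
# per week). The shared metric is what makes selection possible.
true_effect = {"variant-a": 2.0, "variant-b": 3.5, "variant-c": 1.2}
results = {name: [] for name in true_effect}

random.seed(0)
for _ in range(50):  # 50 noisy trials of each variant
    for name, effect in true_effect.items():
        results[name].append(effect + random.gauss(0, 1))

# Feedback step: everyone can see each variant's average outcome...
averages = {name: sum(xs) / len(xs) for name, xs in results.items()}

# ...so the best-scoring variant can be identified and copied by others.
best = max(averages, key=averages.get)
print("winner:", best, "average:", round(averages[best], 2))

Remove the metric and the loop collapses: with nothing to compare against, there is no way to tell which variant deserves to spread.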

By analogy, we might call this particular kind of environmental action open greenery. It's interesting to see that here, too, the basic requirement of modularity turns out to be crucially important. In this case, the modularity is at the level of the individual's actions. This means that we can learn from other people's individual success, and improve the overall efficacy of the actions we undertake.

Without that modularity - call it closed-source greenery - everything is imposed from above, without explanation or the possibility of local, personal, incremental improvement. That may have worked in the 20th century, but given the lessons we have learned from open source, it's clearly not the best way.

06 November 2006

Why Open Knowledge Will Ultimately Beat Closed

Because:

each user of the knowledge pool becomes a contributor back to the pool. As the pool grows it is ever more attractive to new users, so they use (and contribute to) it rather than any competing closed set of knowledge. This results in a strong positive feedback mechanism.
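That dynamic is easy to model crudely. In the following sketch (the assumptions are mine, not the quotation's: each year 50 newcomers pick a pool in proportion to its size, a closed pool grows only through its owner, and an open pool also grows through contributions from every user it has attracted so far), two pools that start level diverge quickly:

# Toy model of the positive feedback loop (assumptions invented for
# illustration): open pools compound because users contribute back.
open_pool, closed_pool = 100.0, 100.0
open_users, closed_users = 0.0, 0.0

for year in range(1, 11):
    open_share = open_pool / (open_pool + closed_pool)
    open_users += 50 * open_share            # newcomers follow pool size
    closed_users += 50 * (1 - open_share)
    closed_pool += 10                        # owner's additions only
    open_pool += 10 + 0.5 * open_users       # owner plus user contributions
    print(year, round(open_pool), round(closed_pool))

Even from a dead-level start, the open pool's share of newcomers compounds year on year, which is the strong positive feedback the quotation describes.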

15 July 2006

The Value of the Public Domain

More light reading - this time about the public domain. Or rather, a little beyond the traditional public domain, as the author Rufus Pollock states:

Traditionally, the public domain has been defined as the set of intellectual works that can be copied, used and reused without restriction of any kind. For the purposes of this essay I wish to widen this a little and make the public domain synonymous with ‘open’ knowledge, that is, all ideas and information that can be freely used, redistributed and reused. The word ‘freely’ must be loosely interpreted – for example, the requirement of attribution, or even that derivative works be re-shared, does not render a work unfree.

It's quite academic in tone, but has some useful references (even if it misses out a crucial one - not that I'm bitter...).

10 May 2006

Open Knowledge Development

The Open Knowledge Foundation has some thoughts on the principles of open knowledge development:

Open knowledge means porting much more of the open source stack than just the idea of open licensing. It is about porting many of the processes and tools that attach to the open development process — the process enabled by the use of an open approach to knowledge production and distribution.

03 May 2006

What is Open Knowledge?

If you were wondering, then perhaps the Open Knowledge Foundation might be able to help. They have come up with an Open Knowledge Definition (they actually call it The Open Knowledge Definition, but that seems a tad ambitious). The full half-hour argument is here.

Some wise words from the introduction:

The concept of openness has already started to spread rapidly beyond its original roots in academia and software. We already have 'open access' journals, open genetics, open geodata, open content etc. As the concept spreads, so we are seeing a proliferation of licenses and a potential blurring of what is open and what is not.

Well, that sounds familiar.