
26 July 2007

cc Learn

One of the most compelling applications of open content is in the educational sphere. After all, it's crazy for teachers to keep on creating the same content again and again: the whole idea of knowledge is to build on what has been learned. So it's good to see the Creative Commons setting up a new arm aimed specifically at promoting the re-use of materials here. It's called ccLearn: at the moment, there's not much to see there apart from the Open Education Search project, but I'm sure that things will grow quickly - the logic is compelling.

06 July 2007

The Language of Copyright

Even though IANAL, I rather enjoy the intricacies of copyright law. Maybe it's because copyright occupies such a central place for both free software - which depends on it to enforce licences - and for free content, where it's often more of a hindrance than a help. Maybe it's just because I was, am and always will be a mathematician who likes dealing with logical systems; or maybe I'm just sad.

Whatever the case, here's something I've found interesting: a short guide to (US) copyright for linguists.

Why do linguists need to bother about this? Isn’t this what lawyers are for? There probably was a time when individuals involved in scholarly linguistic work, whether functioning as fieldworkers, authors, or editors, didn’t have to concern themselves with such matters, but this is no longer the case. (It is striking—and somewhat embarrassing to me—that the Newman and Ratliff (2001) fieldwork volume, whose preparation began barely a decade ago, doesn’t include a single mention of copyright.) There are numerous reasons why the situation is very different now from before, but let me mention just three. First, copyright protection—what I prefer to call copyright “shackles”—now lasts for an inordinate amount of time, anywhere from 70 to 120 years, as compared with the 28 years that formerly was the norm in the U.S. Second, contrary to what used to be the case, the publishing of academic journals has turned out to be extremely profitable. Putting out journals is less and less a labor of love by dedicated colleagues committed to promoting scholarship in their fields and more and more a money-making enterprise by large, often transnational, publishers. Nowadays journals and the scholars who publish in them are not necessarily on the same wavelength, and they often have conflicting interests. Third, and most obvious, the internet presents new threats to traditional publishing while simultaneously providing new opportunities for fast and effective scholarly communication and the commercial exploitation of that scholarship.

The copyright world has changed. Almost daily we discover that the failure of scholars to pay attention to such matters has had serious negative consequences. For example, older classic works in our field that ideally should be an open part of our intellectual legacy turn out to be off limits, and in general copyright restricts our ability to make creative use of previous works, including our own (!). When we fail to pay attention to copyright matters, we inadvertently give up scholarly rights that we would like to have and needn’t have lost, such as the right to post papers on our private websites or the right to duplicate our own papers for students in classes that we are teaching. In the normal course of things, field linguists might not appreciate the relevance of copyright rules to their work, but the fact is that to protect yourself and your scholarly goals and objectives, you really do need to understand basic concepts in copyright law and how it affects you.

(Via Language Log.)

12 June 2007

Do Your SELF a Favour

Interesting:

The SELF Platform aims to be the central platform with high quality educational and training materials about Free Software and Open Standards. It is based on world-class Free Software technologies that permit both reading and publishing free materials, and is driven by a worldwide community.

The SELF Platform will have two main functions. It will be simultaneously a knowledge base and a collaborative production facility: On the one hand, it will provide information, educational and training materials that can be presented in different languages and forms: from course texts, presentations, e-learning programmes and platforms to tutor software, e-books, instructional and educational videos and manuals. On the other hand, it will offer a platform for the evaluation, adaptation, creation and translation of these materials. The production process of such materials will be based on the organisational model of Wikipedia.

(Via Creative Commons.)

06 June 2007

Open Cities Toronto 2007

Open Cities Toronto 2007 is a weekend-long web of conversation and celebration that asks: how do we collaboratively add more open to the urban landscape we share? What happens when people working on open source, public space, open content, mash up art, and open business work together? How do we make Toronto a magnet for people playing with the open meme?

Sounds like my sort of place. (Via Boing Boing.)

30 April 2007

Of Modules, Atoms and Packages

I commented before that I thought Rufus Pollock's use of the term "atomisation" in the context of open content didn't quite capture what he was after, so I was pleased to find that he's done some more work on the concept and come up with the following interesting refinements:

Atomization

Atomization denotes the breaking down of a resource such as a piece of software or collection of data into smaller parts (though the word atomic connotes irreducibility, it is never clear what the exact irreducible, or optimal, size for a given part is). For example, a given software application may be divided up into several components or libraries. Atomization can happen on many levels.

At a very low level, when writing software we break things down into functions and classes, into different files (modules) and even group together different files. Similarly, when creating a dataset in a database we divide things into columns, tables, and groups of inter-related tables.

But such divisions are only visible to the members of that specific project. Anyone else has to get the entire application or entire database to use one particular part of it. Furthermore, anyone working on any given part of the application or database needs to be aware of, and interact with, anyone else working on it — decentralization is impossible or extremely limited.
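
To make this concrete, here is a minimal sketch of my own (the names are invented, and it is not anything from Pollock's post) of low-level atomisation in Python: a hypothetical "textstats" tool split into functions and modules, divisions that only its own developers ever see.

    # Purely illustrative: a hypothetical textstats tool, atomised at a low
    # level into functions and modules. These divisions are internal to the
    # project; an outsider still has to take the whole thing.

    # --- file: textstats/tokenize.py ---
    import re

    def tokenize(text):
        """Split raw text into lowercase word tokens."""
        return re.findall(r"[a-z']+", text.lower())

    # --- file: textstats/frequency.py ---
    from collections import Counter

    def word_frequencies(tokens):
        """Count how often each token occurs."""
        return Counter(tokens)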

Thus, atomization at such a low level is not what we are really concerned with; rather, it is atomization into Packages:

Packaging

By packaging we mean the process by which a resource is made reusable by the addition of an external interface. The package is therefore the logical unit of distribution and reuse and it is only with packaging that the full power of atomization’s “divide and conquer” comes into play — without it there is still tight coupling between different parts of a given set of resources.

Developing packages is a non-trivial exercise precisely because developing good stable interfaces (usually in the form of a code or knowledge API) is hard. One way to manage this need to provide stability but still remain flexible in terms of future development is to employ versioning. By versioning the package and providing ‘releases’ those who reuse the packaged resource can use a specific (and stable) release while development and changes are made in the ‘trunk’ and become available in later releases. This practice of versioning and releasing is already ubiquitous in software development — so ubiquitous it is practically taken for granted — but is almost unknown in the area of knowledge.
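
Again as a hedged sketch (invented names, not Pollock's own example), packaging the hypothetical textstats code above means giving it an explicit, versioned external interface, so that a reuser can rely on a stable release while development carries on in the trunk:

    # Sketch of packaging: the textstats modules gain a declared public
    # interface and a version, so released snapshots can be reused while
    # changes continue in the trunk.

    # --- file: textstats/__init__.py ---
    __version__ = "1.2.0"                        # a stable, released interface
    __all__ = ["tokenize", "word_frequencies"]   # the public API; the rest is internal

    from .tokenize import tokenize
    from .frequency import word_frequencies

    # --- elsewhere: a reuser depends only on the released interface ---
    import textstats

    counts = textstats.word_frequencies(textstats.tokenize("the cat sat on the mat"))
    print(textstats.__version__, counts)

The analogue for open content would presumably be the same discipline applied to a dataset or text: a declared interface (schema, format, licence) plus versioned releases.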

Tricky stuff, but I'm sure it will be worth the effort if the end-result is a practical system for modularisation, since this will allow open content to enjoy many of the evident advantages of open code.

14 March 2007

What Open Access Can Do for Open Content

One of the central ideas behind openness is re-use - the ability to build on what has gone before, rather than re-inventing the wheel. And yet, as this fascinating article demonstrates, there is sometimes surprisingly little sharing and re-use between the various opens:

This study demonstrates that, among a sample of 100 Wikipedia entries, which included 168 sources or references, only two percent of the entries provided links to open access research and scholarship. However, it proved possible to locate, using Google Scholar and other search engines, relevant examples of open access work for 60 percent of a sub-set of 20 Wikipedia entries. The results suggest that much more can be done to enrich and enhance this encyclopedia’s representation of the current state of knowledge. To assist in this process, the study provides a guide to help Wikipedia contributors locate and utilize open access research and scholarship in creating and editing encyclopedia entries.

I can't help feeling that there is a larger lesson here, and that all the various opens should be doing more to build on each other's strengths as well as their own. After all, it's partly what all this openness is about. Perhaps we need a meta-open movement?

14 February 2007

Free Cultural Works vs. Open Content

Now I wonder where they got the idea for this:

This document defines "Free Cultural Works" as works or expressions which can be freely studied, applied, copied and/or modified, by anyone, for any purpose. It also describes certain permissible restrictions that respect or protect these essential freedoms. The definition distinguishes between free works, and free licenses which can be used to legally protect the status of a free work. The definition itself is not a license; it is a tool to determine whether a work or license should be considered "free."

Here's a further hint:

We discourage you from using other terms to identify Free Cultural Works which do not convey a clear definition of freedom, such as "Open Content" and "Open Access." These terms are often used to refer to content which is available under "less restrictive" terms than those of existing copyright laws, or even for works that are just "available on the Web".

Now, who do we know that prefers the word "free" to "open"?

02 November 2006

Collaborative, Interactive, Open Music

One of the problems with open content is that it's hard to work on it collaboratively and interactively in real time, rather than simply sequentially. This is largely a question of tools: there just aren't any. Well, there weren't: it looks like netpd is a neat distributed open source solution. (Via Futurismic.)

20 July 2006

Open Content: Some Get It, Some Don't

Larry Sanger (who does) explains to Jason Calacanis (who doesn't) what all this open content is really about - and why it isn't going away once companies start waving fistfuls of dosh in the air.

15 July 2006

The Value of the Public Domain

More light reading - this time about the public domain. Or rather, a little beyond the traditional public domain, as the author Rufus Pollock states:

Traditionally, the public domain has been defined as the set of intellectual works that can be copied, used and reused without restriction of any kind. For the purposes of this essay I wish to widen this a little and make the public domain synonymous with ‘open’ knowledge, that is, all ideas and information that can be freely used, redistributed and reused. The word ‘freely’ must be loosely interpreted – for example, the requirement of attribution, or even that derivative works be re-shared, does not render a work unfree.

It's quite academic in tone, but has some useful references (even if it misses out a crucial one - not that I'm bitter...).