11 April 2006

From Blogroll to Searchroll

I confess it: I'm a sucker for analogical thought. So taking the idea of a blogroll - even if I don't actually use the things - and coming up with a "searchroll" - a personalised list of sites across which you can carry out searches - is intrinsically appealing. This is what Rollyo has done: I know it's not exactly new, but the last time I looked there wasn't much to see. Now there is.

For example, I was rather taken with this Russian library searchroll, which somehow makes the idea nicely concrete. I can see that you might well want to search through a group of related sites, rather than wade through several million Google hits.
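For the programmers among you, here's a back-of-the-envelope sketch of what a searchroll boils down to: nothing more than restricting a query to a hand-picked list of sites. The domains and the site: convention below are purely illustrative, not Rollyo's actual machinery.

```python
def searchroll_query(keywords, sites):
    """Turn a personal list of sites into a single site-restricted query."""
    site_filter = " OR ".join(f"site:{s}" for s in sites)
    return f"{keywords} ({site_filter})"

# Placeholder domains standing in for a real searchroll's site list.
russian_libraries = ["library-one.example", "library-two.example"]
print(searchroll_query("manuscripts", russian_libraries))
# manuscripts (site:library-one.example OR site:library-two.example)
```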

The only problem, of course, is that you either have to set up the searchrolls yourself, or try to find one that suits your needs. Fortunately Rollyo has done the obvious, and come up with a search engine for searchrolls. From here you can enter keywords and try to piggy-back off someone else's collection (all in the spirit of sharing, of course).

I was impressed with the range of open source searchrolls that are already available. Open access searchrolls are thinner on the ground, while open content is waiting for its first example. Any offers?

10 April 2006

Webaroo - Yawnaroo

Convincing proof that Web 2.0 is a replay of Web 1.0 comes in the form of Webaroo. As this piece from Om Malik explains, this start-up aims to offer users a compressed "best of the Web" that they can carry around on their laptops and use even when they're offline.

Sorry, this idea was invented back in 1995, when Frontier Technologies released its SuperHighway Access CyberSearch, a CD-ROM that contained a "best of the Web" based on Lycos - at the time, one of the best search engines. As I wrote in September 1995:

Not all of the Lycos base has been included: contained in the 608 Mbytes on the disc is information on around 500,000 pages. The search engine is also simplified: whereas Lycos possesses a reasonably powerful search language, the CyberSearch tool allows you to enter just a word or phrase.

Only the scale has changed....

Open Peer Review

Open access does not aim to subvert the peer review process that lies at the heart of academic publishing: it just wants to open things up a little. But you know how it is: once you start this subversive stuff, it's really hard to stop.

So what did I come across recently, but this fascinating hint of what opening up peer review might achieve (as for the how, think blogs or wikis). Maybe an idea whose time has (almost) come.

09 April 2006

(Patently) Right

Paul Graham is a master stylist - indeed, one of the best writers on technology around. Reading his latest essay, "Are Software Patents Evil?" is like floating in linguistic cream. And that's the problem. His prose is so seductive that it is too easy to be hypnotised by his gently-rhythmic cadences, too pleasurable to be lulled into a complaisant state, until you find yourself nodding mechanically in agreement - even with ideas that are, alas, fundamentally wrong.

Take his point about algorithms, where he tries to argue that software patents are OK, even when they are essentially algorithms, because hardware is really only an instantiation of an algorithm.

The problem is that if you allow patents on algorithms, you block everyone else from using what is just a mathematical technique. And if you allow patents on algorithms of any kind, then you can patent mathematics itself and its representations of physics (what we loosely call the Laws of Physics are in fact just algorithms for calculating reality).

But let's look at the objection he raises: that hardware is really just an algorithm made physical. Maybe it is; but the point is that you have to work out how to make that algorithm physical - and that is what the patent is for, not the algorithm itself. Note that such a patent does not block anyone else from coming up with a different physical manifestation of the same algorithm: they are simply stopped from copying your particular implementation.

It's instructive to look at another area where patents are being hugely abused: in the field of genes. Thanks to a ruling in 1980 that DNA could be patented, there has been a flood of completely insane patent applications, some of which have been granted (mostly in the US, of course). Generally, these concern genes - DNA that codes for particular proteins. The argument is that these proteins do useful things, so the DNA that codes for them can therefore be patented.

The problem is that there is no way of coming up with an alternative to that gene: it is "the" gene for some particular biological function. So a patent on it blocks everyone from using that genomic information, for whatever purpose. What should be patentable - because, let me be clear here, patents do serve a useful purpose when granted appropriately - is a particular use of the protein, the physical instantiation of what is effectively a genomic algorithm, not the DNA itself.

Allowing patents on a particular industrial use for a protein - not a patent on its function in nature - leaves the door open for others to find other chemicals that can do the same job for the industrial application. It also leaves the DNA as information/algorithm, outside the realm of patents.

This test - whether a patent allows alternative implementations of the underlying idea - can be applied fruitfully to the equally-vexed question of business method patents. Amazon's famous "one-click" method of making purchases online is clearly total codswallop as a patent. It is a patent on an idea, and blocks everyone else from implementing that (obvious) idea.

The same can be said of an earlier patent that Oracle applied for, which apparently involved the conversion of one markup language into another. As any programmer will tell you, this is essentially trivial, in the mathematical sense that you can define a set of rules - an algorithm - and the whole conversion drops out automatically. Apply the test above - does it block other implementations? - and this one clearly fails: if such a patent were granted, it would stop everyone else from coming up with algorithms for such conversions. Worse, there would be no other way to do it, since the process is simply a restatement of the problem.
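To see just how trivial, here's a toy sketch - my own invented rules, nothing to do with Oracle's patent - showing that once the rules are written down, the conversion simply drops out:

```python
import re

# A toy rule set for converting one (invented) markup into another: each rule
# is nothing more than a pattern and a replacement. Write down the rules and
# the conversion follows automatically - there is no further invention left.
RULES = [
    (re.compile(r"\*\*(.+?)\*\*"), r"<b>\1</b>"),      # **bold**  -> <b>bold</b>
    (re.compile(r"_(.+?)_"), r"<i>\1</i>"),            # _italic_  -> <i>italic</i>
    (re.compile(r"^= (.+)$", re.M), r"<h1>\1</h1>"),   # = Heading -> <h1>Heading</h1>
]

def convert(text):
    """Apply each rule in turn; the 'algorithm' is just the rules themselves."""
    for pattern, replacement in RULES:
        text = pattern.sub(replacement, text)
    return text

print(convert("= A title\nSome **bold** and _italic_ text."))
# <h1>A title</h1>
# Some <b>bold</b> and <i>italic</i> text.
```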

I was heartened to see that a blog posting on this case by John Lambert, a lawyer specialising in intellectual property, called forth a whole series of comments that explored the ideas I've sketched out above. I urge you to read it. What's striking is that the posts - rather like this one - lack the polish and poise of Graham's writing, but they more than make up for it in the passion they display, and in the fact that they are (patently) right.

08 April 2006

Death to the Podcast

"Podcast" is such a cool word. It manages to be familiar, made up as it is of the odd little "pod" and suffix "-cast", as in "broadcast", and yet cheekily new. Pity, then, that it's completely the wrong term for what it describes.

These are simply downloadable mp3 files. The "pod" bit is a misnomer, because the iPod is but one way to listen to them: any mp3 player will do. And the "-cast" is wrong, too, because they are not broadcast in any sense - you just download them. And if they were broadcast across the Internet, then you'd call them streams - as in "podstream", rather than "podcast".

Given my long-standing dislike of this term - and its unthinking adoption by a mainstream press terrified of looking uncool - I was pleased to come across Jack Schofield's opinion on the subject, where he writes:

[P]odcasting's main appeal at the moment is time-shifting professionally-produced programmes. It's a variant of tape recording, and should probably be called AOD (audio on demand).

AOD: that sounds good to me, Jack.

His wise suggestion comes in a piece commenting on the release of a typically-expensive ($249 for six pages) piece of market research on this sector from Forrester Research.

Many people have taken its results - the fact that only 1% of online households in the US regularly download and listen to AOD - to indicate the death of the medium. I don't agree: I think people will continue to enjoy audio on demand in many situations. For example, I regularly return to the excellent Chinesepod site, a shining example of how to use AOD well.

But even if the downloads live on, I do hope that we might see the death of the term "podcast".

07 April 2006

Another Day, Another Open

Open content is an area that I follow quite closely. I've just finished the second of a series of articles for LWN.net that traces the growth of open content and its connections with open source. The first of these is on open access, while the most recent looks at Project Gutenberg and the birth of open content. The next will look at open content in education, including the various open courseware projects.

Here's a report from UNESCO on the area, which it has dubbed open educational resources, defined as

the creation of open source software and development tools, the creation and provision of open course content, and the development of standards and licensing tools.

I'm not quite sure we really needed a new umbrella term for this, but it's good to see the matter being discussed at high levels within the global education community.

A Nod's as Good as a Wink

As I mentioned, I have started playing with Google Analytics for this blog. It's early days yet, but already some fascinating results have dropped out - I'll be reporting on them once the trends become slightly more significant than those based on two days' data....

But one thing just popped up that I thought I'd pass on. Some of my traffic has come from Wink, which describes itself as a social search engine. More specifically:

Wink analyzes tags and submissions from Digg, Furl, Slashdot, Yahoo MyWeb, and other services, plus user-imported tags from del.icio.us, and favorites marked at Wink, and figure[s] out which pages are most relevant.

So basically Wink aims to filter standard Web search results through the grid of social software like Digg, del.icio.us etc. It's a clever idea, although the results at the moment are a little, shall we say, jejune. But I'm grateful for the tip that Google Analytics - and one of my readers - has given me. Duly noted.
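As far as I can tell - and this is my guess at the general approach, not a description of Wink's actual code - the core of the idea looks something like this:

```python
from collections import Counter

# A toy sketch of social re-ranking: pages that more people have tagged or
# submitted on services like Digg or del.icio.us float towards the top of an
# ordinary result list. The URLs and 'votes' below are invented.

def social_rerank(results, bookmarked_urls):
    """Re-order a search engine's results by how often each URL was bookmarked."""
    votes = Counter(bookmarked_urls)
    original_rank = {url: i for i, url in enumerate(results)}
    # Most-bookmarked first; the engine's own ranking breaks ties.
    return sorted(results, key=lambda url: (-votes[url], original_rank[url]))

results = ["http://a.example", "http://b.example", "http://c.example"]
bookmarks = ["http://c.example", "http://c.example", "http://b.example"]
print(social_rerank(results, bookmarks))
# ['http://c.example', 'http://b.example', 'http://a.example']
```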

UNDPAPDIPIOSN - A Name to Remember?

The UN is such a huge, amorphous organisation that it is no surprise that there are bits of it that rarely make it into the limelight. A case in point is the UN Development Programme (UNDP), "the UN's global development network, an organization advocating for change and connecting countries to knowledge, experience and resources to help people build a better life."

Given its task, and its doubtless limited resources, it is only natural that the UNDP has been promoting free software use around the world longer than most (I first talked to them about it in 1997), and its efforts in this sphere are becoming significant. It now has a separate arm, the United Nations Development Programme's (UNDP) Asia Pacific Development Information Programme (APDIP) International Open Source Network - UNDPAPDIPIOSN for short.

As a quick glance at the home page shows, there's a lot of good stuff going on, with all the right buzzwords. For example, news on the Asian Commons, a press release about the UNDPAPDIPIOSN joining the ODF Alliance, which pushes for ODF adoption (and complements the OpenDocument Fellowship I mentioned yesterday), plus some free software primers.

What I like about these is that they take a truly global view of things, providing information about open source adoption around the world that is hard to come by elsewhere, particularly in a consolidated form. They deserve to be better known - as does the UNDPAPDIPIOSN itself - although probably not under that name....(IOSN seems to be the preferred abbreviation).

06 April 2006

Microsoft's Open Source Blog

Not something you see every day: a Web site called Port 25, with the explanatory line "Communications from the Open Source Software Lab @ Microsoft". Yup, you read that right. It will be interesting to see what they do with this apparent attempt to reach out (port 25, right?) - especially if they can get rid of the Russian spam on Bill Hilf's welcoming post....

The Commons Becomes Commoner

I've already written about how the "commons" meme is on the rise, with all that this implies in terms of co-operation, sharing and general open source-iness. Now here are two more.

The first is the Co-operation Commons, "an interdisciplinary framework for understanding cooperation" (an excellent, fuller explanation can be found here). The second is the Credibility Commons, "an experimental environment enabling individuals the opportunity to try out different approaches to improving access to credible information on the World Wide Web."

As the commons becomes, er, commoner, I find that it is all just getting more and more interesting.

ODF Petition Hits 10,000

I'm a big fan of the OpenDocument Fellowship - there's something very Tolkienian about it, and it's one of the best places to keep on top of developments in this important area.

One of its projects is a petition trying to persuade Microsoft to support the Open Document Format. This may be somewhat of a forlorn hope, but then they do say that a gentleman only supports lost causes.

If you wish to add your name to the 10,000 or so who have shown their gentlemanly/ladylike credentials in this way, you can do so here.

Why VOIP Needs Crypto

The ever-wise Bruce Schneier (whom I had the pleasure of interviewing a couple of years ago) spells out in words of one syllable why the hot Voice over IP digital 'phone systems absolutely need encryption. He also links to the perfect solution: Phil Zimmermann's latest wheeze, Zfone - an open source VOIP encryption program.

In Praise of Nakedness

The impudence of Microsoft knows no bounds. According to this report in ZDNet UK, it is now doing the nudge-nudge, wink-wink to PC dealers that selling "naked" systems - those with no operating system installed - might be inadvisable, know what I mean?

According to the original article in its Partner Update magazine, "Microsoft is recruiting two 'feet on the street' personnel whose role will be to provide proactive assistance during customer visits" - in other words, threatening to send in the heavy mob to duff up customers, too. Although Microsoft hurriedly "retracted" this part of the story, if it was a slip, it was surely Freudian.

From this and other statements, it's clear that Microsoft sees every PC as its birthright, and naked PCs as, well, positively obscene. I'm an OS naturist, myself.

05 April 2006

After Wikia, Qwika: the Wiki Search Engine

The last time I wrote about Qwika, it seemed to be a solution in search of a problem. A recent press release suggests that it's managed to come up with an answer to that conundrum: Qwika has turned into a dedicated wiki search engine.

At first sight, you might think that's rather redundant. After all, wikis are essentially just Web pages, and one or two search engine companies seem to have that sector sorted out. But if you only want to look in wikis, and don't want the other million hits on ordinary Web pages that common words throw up, a dedicated wiki search engine makes sense.

Moreover, wikis do have some special characteristics, as Qwika's Luke Metcalfe explained to me:

[W]ikis are quite different to html documents - they have a good amount of metadata associated with them - edit histories, user information, and data embedded within the WikiMedia format. They conform also [to] a certain writing style, which makes things easier to parse from a computational linguistic perspective. Other search engines are only interested in them as html documents with links pointing to them, so they miss out on a lot.
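To make that concrete, here's a rough sketch of the kind of structure a wiki-only crawler can pull out of raw wikitext that an ordinary HTML crawler never sees. It handles only a toy subset of MediaWiki markup, and the field names are mine, not Qwika's.

```python
import re

# Extract a little of the metadata buried in (a toy subset of) MediaWiki
# markup: categories, internal links and templates - structure that is lost
# once the page is flattened into HTML for a general-purpose search engine.

def wiki_metadata(wikitext):
    return {
        "categories": re.findall(r"\[\[Category:(.+?)\]\]", wikitext),
        "internal_links": re.findall(r"\[\[(?!Category:)([^|\]]+)", wikitext),
        "templates": re.findall(r"\{\{(\w+)", wikitext),
    }

sample = "{{Infobox}} A [[library]] in [[Russia]]. [[Category:Libraries]]"
print(wiki_metadata(sample))
# {'categories': ['Libraries'], 'internal_links': ['library', 'Russia'], 'templates': ['Infobox']}
```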

It's early days yet - both for Qwika and the wikis it indexes (1,158 at the time of writing). But recent moves like Wales' Wikia relaunch, which I wrote about the other day, mean that the wiki space is starting to hot up.

So, in the "One to Watch" category, to Wikia, add Qwika.

Blender - Star of the First Open Source Film

Blender is one of the jewels in the open source crown. As its home page puts it:

Blender is the open source software for 3D modeling, animation, rendering, post-production, interactive creation and playback. Available for all major operating systems under the GNU General Public License.

It's a great example of how sophisticated free software can be - if you haven't tried it, I urge you to do so. It's also an uplifting story of how going open source can really give wings to a project.

Now Blender is entering an exciting new phase. A few days ago, the premiere of Elephant's Dream, the first animated film made using Blender, took place.

What's remarkable is not just that this was made entirely with open source software, but also that the film and all the Blender files are being released under a Creative Commons licence - making it perhaps the first open source film.

Given that most commercial animation films are already produced on massive GNU/Linux server farms, it seems likely that some companies, at least, will be tempted to dive even deeper into free software and shift from expensive proprietary systems to Blender. Whether using all this zero-cost, luvvy-duvvy GPL'd software makes them any more sympathetic to people sharing their films for free remains to be seen....

United States of Patent Absurdity

If you ever wondered how the US got into such a mess with patents on software and business methods - and wondered how the European Union can avoid making the same mistakes - take a look at this excellent exposition. As far as I can tell, it's a condensed version of the full half-hour argument in the 2004 book Innovation and Its Discontents: How Our Broken Patent System is Endangering Innovation and Progress, and What to Do About It.

Read either - or both; then weep.

Daily Me 2.0

One of the problems with blogs for advertisers is their fragmented nature: to get a reach comparable to mainstream media generally involves faffing around with dozens of sites. The obvious solution is to bundle, and that's precisely what Federated Media Publishing does. As its roster of blogs indicates, it operates mainly in the field of tech blogs, but the model can clearly be extended.

To the average blog-reader on the Clapham Omnibus (probably the 319 these days), more interesting than the business side of things is the possibility of doing blog mashups. And lo and behold, Federated has produced such a thing (note that the URL begins significantly with "tech", hinting at non-tech things to come...).

What struck me about this federated news idea is that it could be extended beyond the bundles. It would be easy - well, easy if you're a skilled programmer - to knock up a tool offering a range of newspage formats that let you drag and drop newsfeeds into predefined slots to produce the same kind of effect as the Federated Media/Tech one.

RSS aggregators already do this crudely, but lack the design element that would help to make the approach more popular. You would also need some mechanism for flagging up which stories had changed on the page, or for allowing new stories from designated key blogs to rise to the top of the dynamically-generated newspage.
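Roughly speaking - and this is just a sketch of the plumbing, using the third-party feedparser library and made-up slot names and feed URLs - the tool would only need to map feeds onto named layout slots and flag anything not seen on the previous visit:

```python
import feedparser  # third-party library: pip install feedparser

# Map newsfeeds onto named slots of a newspage layout, flagging entries that
# were not seen on the previous visit so new stories can rise to the top.
# Slot names and feed URLs are placeholders; a real tool would persist 'seen'.

SLOTS = {
    "lead":    ["http://keyblog.example/feed.rss"],
    "sidebar": ["http://other.example/feed.rss"],
}

def build_page(slots, seen):
    page = {}
    for slot, feeds in slots.items():
        entries = []
        for url in feeds:
            for entry in feedparser.parse(url).entries:
                entries.append({
                    "title": entry.get("title", ""),
                    "link": entry.get("link", ""),
                    "new": entry.get("link") not in seen,
                })
        # New stories from the designated key blogs float to the top of the slot.
        page[slot] = sorted(entries, key=lambda e: not e["new"])
    return page

print(build_page(SLOTS, seen=set()))
```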

The result, of course, is the Daily Me that everyone has been wittering on about for years. But it comes with an important twist. This Daily Me 2.0 is not just a cosmetic mixing of traditional media news, but a very different kind of online newspage, based on the very different perspective offered by blogs.

One reason why Daily Me 1.0 never took off was because traditional media are simply too greedy to contemplate sharing with anyone else. Blogs have no such qualms - indeed, they have different kinds of sharing (quotations, links, comments) at their core. I think we'll be reading more about this....

Privacy Policy

As part of my exploration of Google, I've signed up for Google Analytics. This means that these pages now collect some anonymous traffic data - nothing personal. For the same reason, the site will ask whether it can set a cookie (one of Netscape's more enduring legacies). If you don't want one on your computer, just refuse: it won't make any difference to your access.

04 April 2006

Exploring the Digital Universe

The Digital Universe - a kind of "When Larry (Sanger) left Jimmy (Wales)" story - remains a somewhat nebulous entity. In some ways, it's forward to the past, representing a return to the original Nupedia that Larry Sanger worked on before Wikipedia was founded. In other respects, it's trying a new kind of business model that looks brave, to put it mildly.

Against this background, any insight into the what and how of the Digital Universe is welcome, and this article on the "eLearning Scotland" site (CamelCase, anyone?) provides both (via Open Access News). Worth taking a look.

Coughing Genomic Ink

One of the favourite games of scholars working on ancient texts that have come down to us from multiple sources is to create a family tree of manuscripts. The trick is to look for groups of textual divergences - a word added here, a mis-spelling there - to spot the gradual accretions, deletions and errors wrought by incompetent, distracted or bored copyists. Once the tree has been established, it is possible to guess what the original, founding text might have looked like.

You might think that this sort of thing is on the way out; on the contrary, though, it's an extremely important technique in bioinformatics - hardly a dusty old discipline. The idea is to treat genomes deriving from a common ancestor as a kind of manuscript, written using just the four letters - A, C, G and T - found in DNA.

Then, by comparing the commonalities and divergences, it is possible to work out which manuscripts/genomes came from a common intermediary, and hence to build a family tree. As with manuscripts, it is then possible to hazard a guess at what the original text - the ancestral genome - might have looked like.
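For the computationally-minded, here's a toy illustration of the principle, with invented sequences and a crude majority vote standing in for the real phylogenetic machinery:

```python
from collections import Counter
from itertools import combinations

# Treat each aligned genome as a 'manuscript' written in A, C, G and T.
# Counting pairwise differences hints at which copies share a recent common
# intermediary; a simple majority vote at each position gives a crude guess
# at the ancestral text. The sequences below are invented.

descendants = {
    "human": "ACGTTGCA",
    "mouse": "ACGTTGGA",
    "dog":   "ACCTTGCA",
}

def pairwise_distance(a, b):
    """Number of aligned positions at which two sequences disagree."""
    return sum(x != y for x, y in zip(a, b))

def consensus(sequences):
    """Majority letter at each aligned position - a crude ancestral guess."""
    return "".join(Counter(col).most_common(1)[0][0] for col in zip(*sequences))

for (name1, seq1), (name2, seq2) in combinations(descendants.items(), 2):
    print(name1, "vs", name2, "->", pairwise_distance(seq1, seq2), "differences")
print("inferred ancestor:", consensus(descendants.values()))
```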

That, broadly, is the idea behind some research that David Haussler at the University of California at Santa Cruz is undertaking, and which is reported on in this month's Wired magazine (freely available thanks to the magazine's enlightened approach to publishing).

As I described in Digital Code of Life, Haussler played an important role in the closing years of the Human Genome Project:

Haussler set to work creating a program to sort through and assemble the 400,000 sequences grouped into 30,000 BACs [large-scale fragments of DNA] that had been produced by the laboratories of the Human Genome Project. But in May 2000, when one of his graduate students, Jim Kent, inquired how the programming was going, Haussler had to admit it was not going well. Kent had been a professional programmer before turning to research. His experience in writing code against deadlines, coupled with a strongly-held belief that the human genome should be freely available, led him to volunteer to create the assembly program in short order.

Kent later explained why he took on the task:

There was not a heck of a lot that the Human Genome Project could say about the genome that was more informative than 'it's got a lot of As, Cs, Gs and Ts' without an assembly. We were afraid that if we couldn't say anything informative, and thereby demonstrate 'prior art', much of the human genome would end up tied up in patents.

Using 100 800 MHz Pentiums - powerful machines in the year 2000 - running GNU/Linux, Kent was able to lash up a program, assemble the fragments and save the human genome for mankind.

Haussler's current research depends not just on the availability of the human genome, but also on all the other genomes that have been sequenced - the different manuscripts written in DNA that have come down to us. Using bioinformatics and even more powerful hardware than that available to Kent back in 2000, it is possible to compare and contrast these genomes, looking for tell-tale signs of common ancestors.

But the result is no mere dry academic exercise: if things go well, the DNA text that will drop out at the end will be nothing less than the genome of one of our ancient forebears. Even if Wired's breathless speculations about recreating live animals from the sequence seem rather wide of the mark - imagine trying to run a computer program recreated in a similar way - the genome on its own will be treasure enough. Certainly not bad work for those scholars who "cough in ink" in the world of open genomics.

Ozymandias in Blogland

A fascinating post on Beebo (via C|net): a list of the top 50 blogs, six years ago. It's interesting to see some familiar names at the top, but even more interesting to see so many (to me) unknown ones.

"Look on my works, ye mighty, and despair!" was my first thought. My second, was to create a blog called "Ozymandias" on Blogger, so that I could link to it from this post. But somebody already beat me to it.

Its one - and only - post is dated Sunday, January 07, 2001.

Look on my works, ye mighty, and despair!

03 April 2006

To DRM or Not to DRM - That is the Question

Digital Rights Management - or Digital Restrictions Management as Richard Stallman likes to call it - is a hot topic at the moment. It figured largely in an interview I did with the FSF's Eben Moglen, which appeared in the Guardian last week. Here's the long version of what he had to say on DRM:

In the year 2006, the home is some real estate with appliances in it. In the year 2016, the home is a digital entertainment and data processing network with some real estate wrapped around it.

The basic question then is, who has the keys to your home? You or the people who deliver movies and pizza? The world that they are thinking about is a world in which they have the keys to your home because the computers that constitute the entertainment and data processing network which is your home work for them, rather than for you.

If you go to a commercial MIS director and you say, Mr VP, I want to put some computers inside your walls, inside your VPN, on which you don't have root, and you can't be sure what's running there. But people outside your enterprise can be absolutely certain what software is running on that device, and they can make it do whatever they think necessary. How do you feel about that? He says, No, thank you. And if we say to him, OK, how about then if we do that instead in your children's house? He says, No, thank you there either.

That's what this is about for us. Users' rights have no more deep meaning than who controls the computer your kid uses at night when he comes home. Who does that computer work for? Who controls what information goes in and out on that machine? Who controls who's allowed to snoop, about what? Who controls who's allowed to keep tabs, about what? Who controls who's allowed to install and change software there? Those are the questions which actually determine who controls the home in 2016.

This stuff seems far away now because, unless you manage computer security for a business, you aren't fully aware of what it is to have computers you don't control as part of your network. But 10 years from now, everybody will know.

Against this background, discussions about Sun's open source DRM solution DReaM - derived from "DRM/everywhere available", apparently - seem utterly moot. Designing open source DRM is a bit like making armaments in an energy-efficient fashion: it rather misses the point.

DRM serves one purpose, and one purpose only: to control users. It is predicated on the assumption that most people - not just a few - are criminals ready to rip off a company's crown jewels - its "IP" - at a moment's notice unless the equivalent of titanium steel bars are soldered all over the place.

I simply do not accept this. I believe that most people are honest, and the dishonest ones will find other ways to get round DRM (like stealing somebody's money to pay for it).

I believe that I am justified in making a copy of a CD, or a DVD, or a book provided it is for my own use: what that use is, is no business of the company that sold it to me. What I cannot do is make a copy that I sell to someone else for their use: clearly that takes away something from the producers. But if I make a backup copy of a DVD, or a second copy of a CD to play in the car, nobody loses anything, so I am morally quite justified in this course of action.

Until the music and film industries address the fundamental moral issues - and realise that the vast majority of their customers are decent, honest human beings, not crypto-criminals - the DRM debate will proceed on a false basis, and inevitably be largely vacuous. DRM is simply the wrong solution to the wrong problem.

The Birth of Emblogging

I've written before about the blogification of the cyber union - how everything is adopting a blog-like format. Now comes a complementary process: emblogging, or embedding blogs directly into other formats.

This flows from the observation that blogs are increasingly at the sharp end of reporting, beating staider media like mere newspapers (even their online incarnations) to the punch. There is probably some merit in this idea, since bloggers - the best of them - are indeed working around the clock, unable to unplug, whereas journalists tend to go home and stop. And statistically this means that some blogger, somewhere, is likely to be online and writing when any given big story breaks. So why not embed the best bits from the blogs into slow-moving media outlets? Why not emblog?

Enter BlogBurst, a new company that aims to mediate between those bloggers and the traditional publications (I discovered this, belatedly, via TechCrunch). The premise seems sensible enough, but I have my doubts about the business model. At the moment, the mainstream media get the goods, BlogBurst gets the dosh, and the embloggers get, er, the glory.

Still, an interesting and significant development in the rise and rise of the blog.

02 April 2006

Wiki Wiki Wikia

Following one of my random wanders through the blogosphere I alighted recently on netbib. As the site's home page explains, this is basically about libraries, but there's much more than this might imply.

As well as a couple of the obligatory wikis (one on public libraries, the other - the NetBibWiki - containing a host of diverse information, such as a nice set of links for German studies), there is also a useful collection of RSS feeds from the library world, saved on Bloglines.

The story that took me here was a post about something called Wikia, which turns out to be Jimmy Wales' wiki company (and a relaunch of the earlier Wikicities). According to the press release:

Wikia is an advertising-supported platform for developing and hosting community-based wikis. Specifically, Wikia enables groups to share information, news, stories, media and opinions that fall outside the scope of an encyclopedia. Jimmy Wales and Angela Beesley launched Wikia in 2004 to provide community-based wikis inspired by the model of Wikipedia--the free, open source encyclopedia founded by Jimmy Wales and operated by the Wikimedia Foundation, where Wales and Beesley serve as board members.

Wikia is committed to openness, inviting anyone to contribute web content. Authors retain their own copyrights, but allow others to freely reuse their content under the GNU Free Documentation License, allowing widespread distribution of knowledge and ideas.

Wikia supports the development of the open source software that runs both Wikipedia and Wikia, as well as thousands of other wiki sites. Among other contributions, Wikia plans to enhance the software with usability features, spam prevention, and vandalism control. All of Wikia's development work will, of course, be fed back into the open source code.

In a sense, then, this is yet more of the blogification of the online world, this time applied to wikis.

But I'm not complaining: if that nice Mr Wales can make some money and feed back improvements to the underlying MediaWiki software used by Wikipedia and many other wikis, all to the good. I just hope that the dotcom 2.0 bubble lasts long enough (so that's why they used the Hawaiian word for "quick" in the full name "wiki wiki").

01 April 2006

Open Access Opens the Throttle

It's striking that, so far, open access has had a relatively difficult time making the breakthrough into the mainstream - despite the high-profile example of open source to help pave the way. Whether this says something about institutional inertia, or the stubbornness of the forces ranged against open access, is hard to say.

Against this background, a post (via Open Access News) on the splendidly-named "The Imaginary Journal of Poetic Economics" blog (now why couldn't I have thought of something like that?) is good news.

Figures from that post speak for themselves:

In the last quarter, over 780,000 records have been added to OAIster, suggesting that those open access archives are beginning to fill! There are 170 more titles in DOAJ, likely an understated increase due to a weeding project. 78 titles have been added to DOAJ in the past 30 days, a growth rate of more than 2 new titles per day.

OAIster refers to a handy central search site for "freely available, previously difficult-to-access, academically-oriented digital resources", while DOAJ is a similarly-indispensable directory of open access journals. The swelling holdings of both augur well for open access, and offer the hope that the breakthrough may be close.

Update: An EU study on the scientific publishing market comes down squarely in favour of open access. As Peter Suber rightly notes, "this is big", and is likely to give the movement a considerable boost.