08 August 2009

Patenting the Barcode of Life

Talking of DNA, another brilliant use of it - and brilliantly obvious like all great ideas - is DNA Barcoding:

DNA barcoding is a new technique that uses a short DNA sequence from a standardized and agreed-upon position in the genome as a molecular diagnostic for species-level identification. DNA barcode sequences are very short relative to the entire genome and they can be obtained reasonably quickly and cheaply. The "Folmer region" at the 5' end of the cytochrome c oxidase subunit 1 mitochondrial region (COI) is emerging as the standard barcode region for almost all groups of higher animals. This region is 648 nucleotide base pairs long in most groups and is flanked by regions of conserved sequences, making it relatively easy to isolate and analyze. A growing number of studies have shown that COI sequence variability within species is very low (generally less than 1-2%), while the COI sequences of even closely related species differ by several percent, making it possible to identify species with high confidence.
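The identification logic itself is simple enough to sketch in a few lines of Python (a toy illustration of my own, not anything from the barcoding projects - the 2% threshold and the stubby example sequences are just assumptions standing in for real 648-bp COI barcodes):

# Toy sketch of barcode-based identification: pick the reference species
# whose COI barcode is closest to the query, using simple per-site
# divergence on pre-aligned, equal-length sequences. The 2% threshold
# and the short example sequences are assumptions for illustration only.

def divergence(a, b):
    """Fraction of aligned positions at which two sequences differ."""
    assert len(a) == len(b), "sequences must be aligned to the same length"
    return sum(1 for x, y in zip(a, b) if x != y) / len(a)

def identify(query, references, threshold=0.02):
    """Return (species, divergence) for the closest barcode, or None
    if nothing falls within the threshold."""
    species, barcode = min(references.items(),
                           key=lambda item: divergence(query, item[1]))
    d = divergence(query, barcode)
    return (species, d) if d <= threshold else None

references = {  # 20-base stand-ins for real 648-bp COI barcodes
    "Species A": "ATGCGTACGTTAGCCTAGGA",
    "Species B": "ATGCGTACGTTAGCCTTCCA",
}
print(identify("ATGCGTACGTTAGCCTAGGA", references))  # ('Species A', 0.0)

Real barcoding pipelines obviously do far more - alignment, quality filtering, proper statistical models of divergence - but the core idea really is that small.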

However, readers of this will probably have guessed the fly in the ointment here: DNA barcoding is such a powerful idea that the parasites have moved in, and started trying to *patent* bits of the idea:

Systematics and phylogenetics, indeed much of evolutionary science, have a long and great tradition of making resources and knowledge freely available to other researchers. Instead of cash, all an author asks for is a citation or a credit. Therefore, it sounded incredible to me that one researcher was trying to patent a DNA barcode snippet for a plant gene that was being worked on over several years by a large group of researchers.

It's a classic situation: not only are scientific techniques being patented, they are techniques that are well established and have been used for years - something that is explicitly excluded even in the most deranged patent regimes. And people say the system is working just fine... (Via Jonathan Eisen.)

Follow me @glynmoody on Twitter and identi.ca.

The Real Hope for Nanotechnology

Nanotechnology is one of those subjects that seem to veer between hope and hype. DNA-based solutions look among the most promising, because the material has evolved to solve many of the same problems that nanotechnology faces; more subtly, it is inherently digital, which makes its manipulation much easier - and promises structures of almost infinite complexity under computer control.
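Just to illustrate that "inherently digital" point (a trivial sketch of my own, nothing to do with caDNAno's code): a DNA strand is simply a string over a four-letter alphabet, so operations like taking the reverse complement - roughly the kind of step involved in designing the "staple" strands that hold a DNA origami shape together - are ordinary string processing:

# A DNA strand as a plain string over {A, C, G, T}: "digital"
# manipulation is just string processing. Illustrative sketch only.

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(strand):
    """Sequence of the strand that would base-pair with the input."""
    return strand.translate(COMPLEMENT)[::-1]

def gc_content(strand):
    """Fraction of G/C bases - a rough proxy for binding strength."""
    return sum(strand.count(base) for base in "GC") / len(strand)

segment = "ATGCCGTTAAGC"
print(reverse_complement(segment))    # GCTTAACGGCAT
print(round(gc_content(segment), 2))  # 0.5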

To do that, of course, you need software, so it's great to see that there is already free software that lets you create DNA-based nanostructures:

caDNAno is open-source software based on the Adobe AIR platform for design of three-dimensional DNA origami nanostructures. It was written with the goal of providing a fast and intuitive means to create and modify DNA origami designs. You can learn how to use it, download a copy of the program and some example designs, or even modify the source code.

The software makes heavy use of several fantastic open-source libraries and resources, especially Papervision3D for 3D rendering, Michael Baczynski's AS3 data structures and tutorials, the Tango Desktop Project for icons, and the Blueprint CSS framework for this website. Additional people and resources are acknowledged on the links page.

As you can see from this, there's already quite a rich ecosystem of free code in this area, which augurs well for the future. The last thing we need is for nanotechnology to turn into the smallest black box ever made.

Follow me @glynmoody on Twitter and identi.ca.

07 August 2009

The Most Hated Man Online?

Well, not quite, but it's clear somebody really dislikes the Twitter user @cyxymu: it seems that the coordinated attacks on Twitter, Facebook and LiveJournal were mounted to silence him:

A Georgian blogger with accounts on Twitter, Facebook, LiveJournal and Google's Blogger and YouTube was targeted in a denial of service attack that led to the site-wide outage at Twitter and problems at the other sites on Thursday, according to a Facebook executive.

The blogger, who uses the account name "Cyxymu," (the name of a town in the Republic of Georgia) had accounts on all of the different sites that were attacked at the same time, Max Kelly, chief security officer at Facebook, told CNET News.

"It was a simultaneous attack across a number of properties targeting him to keep his voice from being heard," Kelly said. "We're actively investigating the source of the attacks and we hope to be able to find out the individuals involved in the back end and to take action against them if we can."

Sounds pretty incredible, but the chap himself confirms it on his Twitter account:

да, меня ДДоСили

which roughly means "yup, I was DDoS'd", and he also opines:

this hackers was from Russian KGB

Supporting this view is the fact that his LiveJournal blog is still unreachable.

Fascinating, of course, to see how events in the Caucasus - today's the first anniversary of Georgia's ill-advised attack on South Ossetia, and of Russia's gleeful counter-attack on Georgia - reach and affect even global online worlds like Twitter and Facebook. Interesting times.

Follow me @glynmoody on Twitter and identi.ca.

06 August 2009

UK ID Card Technology Cloned...

...in 12 minutes:


Embedded inside the card for foreigners is a microchip with the details of its bearer held in electronic form: name, date of birth, physical characteristics, fingerprints and so on, together with other information such as immigration status and whether the holder is entitled to State benefits.

This chip is the vital security measure that, so the Government believes, will make identity cards 'unforgeable'.

But as I watch, Laurie picks up a mobile phone and, using just the handset and a laptop computer, electronically copies the ID card microchip and all its information in a matter of minutes.

He then creates a cloned card, and with a little help from another technology expert, he changes all the information the card contains - the physical details of the bearer, name, fingerprints and so on. And he doesn't stop there.

With a few more keystrokes on his computer, Laurie changes the cloned card so that whereas the original card holder was not entitled to benefits, the cloned chip now reads 'Entitled to benefits'.

No surprises there, of course; but what's significant is that it's the Daily Mail that's pushing this jolly news out to its assembled readers. This means the message is going out to groups beyond the obvious Guardian greeny-lefties and Telegraph Tories.

The UK government will presumably just carry on blithely ignoring these inconvenient demonstrations of the deep lack of security at the heart of this lunatic project. Worryingly, it comes in a week when the UK High Court ruled that the government is not liable for the consequences of its errors, which means that when your ID card is cloned and abused, *you* will have to foot the bill.... (Via Ray Corrigan.)

Rupert's Roller-Coaster

It's hard keeping up with Rupert Murdoch's fortunes on the Internet.

First, he blew it: he ignored the Net, declaring it of no interest. Then he hit the jackpot, buying MySpace for what seemed an incredibly low price: just $580 million, when Facebook was being valued at billions. That's looking expensive today:

News Corp specifically blames MySpace for a loss of $363 million to the company’s bottom line

And now, it looks like Rupert has really lost it:

"Quality journalism is not cheap," said Murdoch. "The digital revolution has opened many new and inexpensive distribution channels but it has not made content free. We intend to charge for all our news websites."

Good luck with that, Rupe.

I think it's interesting that I almost never quote from or link to News International titles: there's simply too little there of interest. By contrast, I *do* link quite often to New York Times and Guardian stories, both of which offer stuff not covered elsewhere. So I don't think I'm going to miss Mr Murdoch's titles when they suddenly fall off the digital radar...

05 August 2009

Opencast Matterhorn

Daft name, great idea:

As a growing number of worldwide learners log on, free of charge, to video and podcast lectures and events at the University of California, Berkeley, the campus is leading an international effort to build a communal Webcasting platform to more easily record and distribute its popular educational content.

Dubbed "Opencast Matterhorn" and funded with grants from the Andrew W. Mellon and William and Flora Hewlett foundations totaling $1.5 million, the project will bring together programmers and educational technology experts from an international consortium of higher education institutions, including ETH Zürich in Switzerland, University of Osnabrück in Germany, Cambridge University in the United Kingdom and Canada's University of Saskatchewan.

...

The software will support the scheduling, capture, encoding and delivery of educational content to video-and-audio sharing sites such as YouTube and iTunes, so that learners can access lectures when and where they need it. With additional funding, expertise and labor from other members of the consortium, the Opencast Matterhorn platform is scheduled to be up and running by summer 2010.

They've got a new word for it (well, new to me):

Coursecasting is a growing trend in educational technology, enabling students and the general public to download audio and video recordings of class lectures to their computers and portable media devices.

Daft names aside, it's great to see institutions working together on a common platform like this; it should give a real boost to opencourseware - and maybe even coursecasting. (Via Open Education News.)

Follow me @glynmoody on Twitter and identi.ca.

04 August 2009

Level Playing-Fields and Open Access

Yesterday, I wrote elsewhere about open standards, and how they sought to produce a level playing field for all. Similar thoughts have occurred to Stuart Shieber in this post about open access:


In summary, publishers see an unlevel playing field in choosing between the business models for their journals exactly because authors see an unlevel playing field in choosing between journals using the business models.

He has an interesting solution:

To mitigate this problem—to place open-access processing-fee journals on a more equal competitive footing with subscription-fee journals—requires those underwriting the publisher's services for subscription-fee journals to commit to a simple “compact” guaranteeing their willingness to underwrite them for processing-fee journals as well.

He concludes:

If all schools and funders committed to the compact, a publisher could more safely move a journal to an open-access processing-fee business model without fear that authors would desert the journal for pecuniary reasons. Support for the compact would also send a signal to publishers and scholarly societies that research universities and funders appreciate and value their contributions and that universities and funders promoting self-archiving have every intention of continuing to fund publication, albeit within a different model. Publishers willing to take a risk will be met by universities and funding agencies willing to support their bold move.

The new US administration could implement such a system through simple FRPAA-like legislation requiring funding agencies to commit to this open-access compact in a cost-neutral manner. Perhaps reimbursement would be limited to authors at universities and research institutions that themselves commit to a similar compact. As funding agencies and universities take on this commitment, we might transition to an efficient, sustainable journal publishing system in which publishers choose freely among business models on an equal footing, to the benefit of all.

Follow me @glynmoody on Twitter and identi.ca.

03 August 2009

Wolfram Alpha Does Not Understand Copyright

Remember Wolfram Alpha? That super-duper search engine - sorry, "computational knowledge engine" - that was going to change the way we looked for and found information, and also cure the common cold (OK, I made that last one up)? Seems to have disappeared without trace, no? I'm not surprised, if it misunderstands copyright as badly as this post suggests:

Try cutting and pasting from the results page. You can't, and with good reason. According to Wolfram Alpha's terms of use, its knowledge engine is "an authoritative source of information," because "in many cases the data you are shown never existed before in exactly that way until you asked for it." Therefore, "failure to properly attribute results from Wolfram Alpha is not only a violation of [its license terms], but may also constitute academic plagiarism or a violation of copyright law."

Copyright, as Wolfram seems not to understand, is a bargain between creators and their public. As an *incentive* to create, the former are given a time-limited monopoly by governments. Note that it is *not* a reward for having created: it is an incentive to create again.

Now consider Wolfram Alpha. This is essentially a computational process - remember, it's a "computational knowledge engine". So, it is simply a bunch of algorithms acting on data. Algorithms don't need incentives to create: outputting is what they do if they're useful. So copyright is completely inappropriate, just as it would be for the output of any other program processing information on its own (obviously, if that information is words fed in by a human, copyright would exist in those words because they were created by someone).

Wolfram's ridiculous claim to copyright in its results does have the virtue of providing a nice illustration of the real limits of this intellectual monopoly. For the rest, it might try finding out a bit more about copyright so that it can amend its licence accordingly - I suggest using a good search engine like Google.

Follow me @glynmoody on Twitter and identi.ca.

02 August 2009

If You Have Nothing to Hide...

Er, shouldn't this utter insanity be sounding one or two alarm bells...please?


Thousands of the worst families in England are to be put in “sin bins” in a bid to change their bad behaviour, Ed Balls announced yesterday.

The Children’s Secretary set out £400million plans to put 20,000 problem families under 24-hour CCTV supervision in their own homes.

They will be monitored to ensure that children attend school, go to bed on time and eat proper meals.

Private security guards will also be sent round to carry out home checks, while parents will be given help to combat drug and alcohol addiction.

Despite certain protestations to the contrary, isn't this rather clearly a total surveillance society, complete with jack-booted "security" guards? Why not just call them "Security Services" - "SS" for short - and be done with it?

Follow me @glynmoody on Twitter and identi.ca.

01 August 2009

Glad They Chose Windows?

I doubt it somehow:

Many potential buyers of laptops priced under $300 in the U.S. had an unpleasant surprise over the weekend: The machines would not be eligible for a free upgrade to Microsoft's upcoming Windows 7 operating system.

Wal-Mart and Best Buy attracted plenty of buyers during a promotional offering of laptops priced under $300. Some of those laptops sold out just one day after the offers began. The prices were respectable considering the generous features, including large screens, better graphics and DVD drives, which are not typically found in most low-cost netbooks.

However, the laptops came preloaded with the Windows Vista Home Basic operating system, which does not include a free upgrade to Windows 7 in the U.S. Instead, consumers will have to shell out about $120 to upgrade the operating system.

So, that's a $120 hidden cost of choosing Windows: nice move.

Follow me @glynmoody on Twitter and identi.ca.

31 July 2009

Hell Goes Sub-Zero, Sony Does Open Source

For me, Sony has always been the antithesis of open source. So this comes as something of a shock:

Sony Pictures Imageworks, the award-winning visual effects and digital character animation unit of Sony Pictures Digital Productions, is launching an open source development program, it was announced today by Imageworks' chief technology officer, Rob Bredow. Five technologies will be released initially:

* OSL, a programmable shading language for rendering
* Field3d, a voxel data storage library
* Maya Reticule, a Maya Plug-in for camera masking
* Scala Migration, a database migration tool
* Pystring, python-like string handling in C++

Imageworks' production environment, which is known for its photo-real visual effects, digital character performances, and innovative technologies to facilitate their creation, has incorporated open source solutions, most notably the Linux operating system, for many years. Now the company is contributing back to the open source community by making these technologies available. The software can be used freely around the world by both large and small studios. Each project has a team of passionate individuals supporting it who are interested in seeing the code widely used. The intention of the open source release is to build larger communities to adopt and further refine the code.

OK, so it's only a tiny, non-mainstream bit of Sony, but it's surely a sign of the times....

Follow me @glynmoody on Twitter and identi.ca.

Why Single Sign On Systems Are Bad

Wow, here's a really great article about identity management from, um, er, Microsoft. Actually, it's a rather remarkable Microsoft article, since it contains the following sentences:

On February 14, 2006, Microsoft Chairman Bill Gates declared that passwords would be gone where the dinosaurs rest in three to four years.

But as I write this in March 2009, it is pretty clear that Bill was wrong.

But it's not for that frisson that you should read it; it's for the following insight, which really needs hammering home:

The big challenge with respect to identity is not in designing an identity system that can provide SSO [Single Sign On], even though that is where most of the technical effort is going. It's not even in making the solution smoothly functioning and usable, where, unfortunately, less effort is going. The challenge is that users today have many identities. As I mentioned above, I have well over 100. On a daily basis, I use at least 20 or 25 of those. Perhaps users have too many identities, but I would not consider that a foregone conclusion.

The purist would now say that "SSO can fix that problem." However, I don't think it is a problem. At least it is not the big problem. I like having many identities. Having many identities means I can rest assured that the various services I use cannot correlate my information. I do not have to give my e-mail provider my stock broker identity, nor do I have to give my credit card company the identity I use at my favorite online shopping site. And only I know the identity I use for the photo sharing site. Having multiple identities allows me to keep my life, and my privacy, compartmentalized.

Yes yes yes yes yes. *This* is what the UK government simply does not want to accept: creating a single, all-powerful "proof" of identity is actually exactly the wrong thing to do. Once compromised, it is hugely dangerous. Moreover, it gives too much power to the provider of that infrastructure - which is precisely why the government *loves* it. (Via Ideal Government.)
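Here's a toy illustration of why that compartmentalisation matters (my own example, not from the Microsoft piece - all the identities and records are invented): if two services hold data keyed by the same identity, anyone holding both databases can join them in one line of code; distinct per-service identities leave nothing to join on.

# Toy demonstration: a shared identity lets two unrelated services'
# records be correlated by a trivial join; per-service pseudonyms do not.
# All identities and records here are invented.

def correlate(db_a, db_b):
    """Join two databases on whatever identity keys they share."""
    shared = db_a.keys() & db_b.keys()
    return {uid: {**db_a[uid], **db_b[uid]} for uid in shared}

# One identity everywhere: the join succeeds.
email_provider = {"alice@example.org": {"contacts": 412}}
stock_broker = {"alice@example.org": {"portfolio": "tech-heavy"}}
print(correlate(email_provider, stock_broker))
# {'alice@example.org': {'contacts': 412, 'portfolio': 'tech-heavy'}}

# Separate identities per service: nothing to correlate.
email_provider = {"a.writer.1977": {"contacts": 412}}
stock_broker = {"quiet-investor": {"portfolio": "tech-heavy"}}
print(correlate(email_provider, stock_broker))  # {}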

Follow me @glynmoody on Twitter and identi.ca.

Open Source Cognitive Science

A new site with the self-explanatory name of "Open source cognitive science" has an interesting opening post about Tools for Psychology and Neuroscience, pointing out that:

Open source tools make new options available for designing experiments, doing analysis, and writing papers. Already, we can see hardware becoming available for low-cost experimentation. There is an OpenEEG project. There are open source eye tracking tools for webcams. Stimulus packages like VisionEgg can be used to collect reaction times or to send precise timing signals to fMRI scanners. Neurolens is a free functional neuroimage analysis tool.

It also has this information about the increasingly fashionable open source statistics package R that was news to me, and may be of interest to others:

R code can be embedded directly into a LaTeX or OpenOffice document using a utility called Sweave. Sweave can be used with LaTeX to automatically format documents in APA style (Zahn, 2008). With Sweave, when you see a graph or table in a paper, it’s always up to date, generated on the fly from the original R code when the PDF is generated. Including the LaTeX along with the PDF becomes a form of reproducible research, rooted in Donald Knuth’s idea of literate programming. When you want to know in detail how the analysis was done, you need look no further than the source text of the paper itself.
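Sweave itself interleaves R chunks with LaTeX, but the underlying idea - every number, table and graph in the paper regenerated from the raw data each time the document is built - works in any language. A rough Python analogue (the data values and the output filename are my own inventions, purely for illustration):

# Sketch of the reproducible-research idea behind Sweave, in Python:
# the results that appear in the paper are regenerated from the raw data
# on every build, never pasted in by hand. Data and filename are invented.

import statistics

reaction_times_ms = [512, 478, 530, 495, 488, 502]  # toy "raw data"

mean_rt = statistics.mean(reaction_times_ms)
sd_rt = statistics.stdev(reaction_times_ms)

# Write a LaTeX fragment for the paper to \input{}, so the table in the
# final PDF is always consistent with the analysis code and the data.
with open("results_table.tex", "w") as out:
    out.write("\\begin{tabular}{lr}\n")
    out.write(f"Mean RT (ms) & {mean_rt:.1f} \\\\\n")
    out.write(f"SD (ms) & {sd_rt:.1f} \\\\\n")
    out.write("\\end{tabular}\n")

print(f"mean={mean_rt:.1f} ms, sd={sd_rt:.1f} ms -> results_table.tex")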

Follow me @glynmoody on Twitter and identi.ca.

30 July 2009

Transparency Saves Lives

Here's a wonderful demonstration that the simple fact of transparency can dramatically alter outcomes - and, in this case, save lives:

Outcomes for adult cardiac patients in the UK have improved significantly since publication of information on death rates, research suggests.

The study also found more elderly and high-risk patients were now being treated, despite fears surgeons would not want to take them on.

It is based on analysis of more than 400,000 operations by the Society for Cardiothoracic Surgery.

Fortunately, people are drawing the right conclusions:

Experts said all surgical specialties should now publish data on death rates.

Follow me @glynmoody on Twitter and identi.ca.

Profits Without Intellectual Monopolies

Great interview with Mr Open Innovation, Eric von Hippel, who has these wise words of advice:


It is true that the most rapidly developing designs are those where many can participate and where the intellectual property is open. Think about open source software as an example of this. What firms have to remember is that they have many ways to profit from good new products, independent of IP. They’ve got brands; they’ve got distribution; they’ve got lead time in the market. They have a lot of valuable proprietary assets that are not dependent on IP.

If you’re going to give out your design capability to others, users specifically, then what you have to do is build your business model on the non-design components of your mix of competitive advantages. For instance, recall the case of custom semiconductor firms I mentioned earlier. Those companies gave away their job of designing the circuit to the user, but they still had the job of manufacturing those user-designed semiconductors, they still had the brand, they still had the distribution. And that’s how they make their money.

Follow me @glynmoody on Twitter and identi.ca.

29 July 2009

RIAA's War on Sharing Begins

Words matter, which is why the RIAA has always framed copyright infringement in terms of "piracy". But it has a big problem: most people call it "sharing"; and as everyone was told by their mother, it's good to share. So the RIAA needs to redefine things, and it seems that it's started doing just that in the Joel Tenenbaum trial:


"We are here to ask you to hold the defendant responsible for his actions," said Reynolds, a partner in the Boulder, Colorado office of Holme, Robert & Owen. "Filesharing isn't like sharing that we teach our children. This isn't sharing with your friends."

Got that? P2P sharing isn't *real* sharing, because it's not sharing with your friends; this is *evil* sharing, because it's bad to share with strangers. Apparently.

Watch out for more of this meme in the future.

Follow me @glynmoody on Twitter and identi.ca.

It's Not Open Science if it's Not Open Source

Great to see a scientist come out with this in an interesting post entitled "What, exactly, is Open Science?":


granting access to source code is really equivalent to publishing your methodology when the kind of science you do involves numerical experiments. I’m an extremist on this point, because without access to the source for the programs we use, we rely on faith in the coding abilities of other people to carry out our numerical experiments. In some extreme cases (i.e. when simulation codes or parameter files are proprietary or are hidden by their owners), numerical experimentation isn’t even science. A “secret” experimental design doesn’t give skeptics the ability to repeat (and hopefully verify) your experiment, and the same is true with numerical experiments. Science has to be “verifiable in practice” as well as “verifiable in principle”.

The rest is well worth reading too.

(Via @phylogenomics.)

Follow me @glynmoody on Twitter and identi.ca.

28 July 2009

Why Hackers Will Save the World

For anyone that might be interested, my keynote from the recent Gran Canaria Desktop Summit is now online as an Ogg video.

24 July 2009

Bill Gates Shows His True Identity

And so it starts to come out:


Microsoft is angling to work on India’s national identity card project, Mr. Gates said, and he will be meeting with Nandan Nilekani, the minister in charge. Like Mr. Gates, Mr. Nilekani stopped running the technology company he helped to start, Infosys, after growing it into one of the biggest players in the business. He is now tasked with providing identity cards for India’s 1.2 billion citizens starting in 2011. Right now in India, many records like births, deaths, immunizations and driving violations are kept on paper in local offices.

Mr. Gates was also critical of the United States government’s unwillingness to adopt a national identity card, or allow some businesses, like health care, to centralize data keeping on individuals.

Remind me again why we bother listening to this man...

Follow me @glynmoody on Twitter or identi.ca.

Why the GNU GPL v3 Matters Even More

A little while back, I wrote a post called "Why the GNU GPL Still Matters". I was talking in general terms, and didn't really distinguish between the historical GNU GPL version 2 and the new version 3. That's in part because I didn't really have any figures on how the latter was doing. Now I do, because Matt Asay has just published some plausible estimates:

In July 2007, version 3 of the GNU General Public License barely accounted for 164 projects. A year later, the number had climbed past 2,000 total projects. Today, as announced by Google open-source programs office manager Chris DiBona, the number of open-source projects licensed under GPLv3 is at least 56,000.

And that's just counting the projects hosted at Google Code.

In a hallway conversation with DiBona at OSCON, he told me roughly half of all projects on Google Code use the GPL and, of those, roughly half have moved to GPLv3, or 25 percent of all Google Code projects.

With more than 225,000 projects currently hosted at Google Code, that's a lot of GPLv3.

If we make the reasonable assumption that other open-source project repositories Sourceforge.net and Codehaus have similar GPLv3 adoption rates, the numbers of GPLv3 projects get very big, very fast.
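Asay's back-of-the-envelope sums are easy to check - here they are spelled out, using only the figures quoted above:

# Reproducing the rough arithmetic in the quoted estimate.
google_code_projects = 225_000
gpl_share = 0.5            # "roughly half of all projects ... use the GPL"
gplv3_share_of_gpl = 0.5   # "of those, roughly half have moved to GPLv3"

gplv3_projects = google_code_projects * gpl_share * gplv3_share_of_gpl
print(f"~{gplv3_projects:,.0f} GPLv3 projects "
      f"({gpl_share * gplv3_share_of_gpl:.0%} of Google Code)")
# ~56,250 GPLv3 projects (25% of Google Code) - consistent with the
# "at least 56,000" figure from DiBona.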

This is important not just because it shows that there's considerable vigour in the GNU GPL licence yet, but because version 3 addresses a particularly hot area at the moment: software patents. The increasing use of GPL v3, with its stronger, more developed response to that threat, is therefore very good news indeed.

Follow me @glynmoody on Twitter or identi.ca.

22 July 2009

Pat "Nutter" Brown Strikes Again

To change the world, it is not enough to have revolutionary ideas: you also need the inner force to realise them in the face of near-universal opposition/indifference/derision. Great examples of this include Richard Stallman, who ploughed his lonely GNU furrow for years before anyone took much notice, and Michael Hart, who did the same for Project Gutenberg.

Another of these rare beings with both vision and tenacity is Pat Brown, a personal hero of mine. Not content with inventing one of the most important experimental tools in genomics - DNA microarrays - Brown decided he wanted to do something ambitious: open access publishing. This urge turned into the Public Library of Science (PLoS) - and even that is just the start:


PLoS is just part of a longer range plan. The idea is to completely change the way the whole system works for scientific communication.

At the start, I knew nothing about the scientific publishing business. I just decided this would be a fun and important thing to do. Mike Eisen, who was a post-doc in my lab, and I have been brain-storming a strategic plan, and PLoS was a large part of it. When I started working on this, almost everyone said, “You are completely out of your mind. You are obviously a complete idiot about how publishing works, and besides, this is a dilettante thing that you're doing.” Which I didn't feel at all.

I know I'm serious about it and I know it's doable and I know it's going to be easy. I could see the thermodynamics were in my favor, because the system is not in its lowest energy state. It's going to be much more economically efficient and serve the customers a lot better being open access. You just need a catalyst to GET it there. And part of the strategy to get it over the energy barrier is to apply heat—literally, I piss people off all the time.

In case you hadn't noticed, that little plan "to completely change the way the whole system works for scientific communication" is coming along quite nicely. So, perhaps buoyed up by this, Brown has decided to try something even more challenging:

Brown: ... I'm going to do my sabbatical on this: I am going to devote myself, for a year, to trying to the maximum extent possible to eliminate animal farming on the planet Earth.

Gitschier: [Pause. Sensation of jaw dropping.]

Brown: And you are thinking I'm out of my mind.

Gitschier: [Continued silence.]

Brown: I feel like I can go a long way toward doing it, and I love the project because it is purely strategy. And it involves learning about economics, agriculture, world trade, behavioral psychology, and even an interesting component of it is creative food science.

Animal farming is by far the most environmentally destructive identified practice on the planet. Do you believe that? More greenhouse production than all transportation combined. It is also the major single source of water pollution on the planet. It is incredibly destructive. The major reason reefs are dying off and dead zones exist in the ocean—from nutrient run-off. Overwhelmingly it is the largest driving force of deforestation. And the leading cause of biodiversity loss.

And if you think I'm bullshitting, the Food and Agricultural Organization of the UN, whose job is to promote agricultural development, published a study, not knowing what they were getting into, looking at the environmental impact of animal farming, and it is a beautiful study! And the bottom line is that it is the most destructive and fastest growing environmental problem.

Gitschier: So what is your plan?

Brown: The gist of my strategy is to rigorously calculate the costs of repairing and mitigating all the environmental damage and make the case that if we don't pay as we go for this, we are just dumping this huge burden on our children. Paying these costs will drive up the price of a Big Mac and consumption will go down a lot. The other thing is to come up with yummy, nutritious, affordable mass-marketable alternatives, so that people who are totally addicted to animal foods will find alternatives that are inherently attractive to eat, so much so that McDonald's will market them, too. I want to recruit the world's most creative chefs—here's a REAL creative challenge!

I've talked with a lot of smart people who are very keen on it actually. They say, “You have no chance of success, but I really hope you're successful.” That's just the kind of project I love.

Pat, the world desperately needs nutters like you. Let's just hope that the thermodynamics are in your favour once more.

Follow me @glynmoody on Twitter or identi.ca.

No Patents for Circuits? Since You Insist...

I love this argument:

Arguments against software patents have a fundamental flaw. As any electrical engineer knows, solutions to problems implemented in software can also be realized in hardware, i.e., electronic circuits. The main reason for choosing a software solution is the ease in implementing changes, the main reason for choosing a hardware solution is speed of processing. Therefore, a time critical solution is more likely to be implemented in hardware. While a solution that requires the ability to add features easily will be implemented in software. As a result, to be intellectually consistent those people against software patents also have to be against patents for electronic circuits.

People seem to think that this is an invincible argument *for* software patents; what the poor darlings fail to notice is that it's actually an invincible argument *against* patents for circuits.

Software is just algorithms, which are just maths, which cannot be patented; and since, as this clever chap points out, circuits are just software made out of hardware, it follows that we shouldn't allow patents for circuits (though they can still be protected by copyright, just as software can).
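The software/hardware equivalence the argument leans on is real enough, and easy to illustrate with a toy example of my own: any Boolean function can be expressed either as executable logic or as a fixed lookup table - the software analogue of wiring the answer into a circuit - and nothing about the underlying maths changes as you move between the two forms.

# The same function expressed two ways: as executable logic ("software")
# and as a hard-wired truth table (the analogue of a "circuit").
# Both compute identical results - the equivalence the argument relies on.

def xor_as_logic(a, b):
    """XOR written as an expression: the 'software' form."""
    return (a | b) & ~(a & b) & 1

# The same function as a fixed mapping from inputs to outputs: the
# analogue of a gate whose behaviour is baked into its wiring.
XOR_AS_TABLE = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

for a in (0, 1):
    for b in (0, 1):
        assert xor_as_logic(a, b) == XOR_AS_TABLE[(a, b)]
print("logic and lookup table agree on every input")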

So, thanks for the help in rolling back what is patentable...

21 July 2009

Has Google Forgotten Celera?

One of the reasons I wrote my book Digital Code of Life was that the battle between the public Human Genome Project and the privately-funded Celera mirrored so closely the battle between free software and Microsoft - with the difference that it was our genome that was at stake, not just a bunch of bits. The fact that Celera ultimately failed in its attempt to sequence and patent vast chunks of our DNA was the happiest of endings.

It seems someone else knows the story:

Celera was the company founded by Craig Venter, and funded by Perkin Elmer, which played a large part in sequencing the human genome and was hoping to make a massively profitable business out of selling subscriptions to genome databases. The business plan unravelled within a year or two of the publication of the first human genome. With hindsight, the opponents of Celera were right. Science is making and will make much greater progress with open data sets.

Here are some rea[s]ons for thinking that Google will be making the same sort of mistake as Celera if it pursues the business model outlined in its pending settlement with the AAP and the Author's Guild....

Thought provoking stuff, well worth a read.

Follow me @glynmoody on Twitter or identi.ca.

Building on Open Data

One of the great things about openness is that it lets people do incredible things by adding to it in a multiplicity of ways. The beauty is that those releasing material don't need to try to anticipate future uses: it's enough that they make it as open as possible. Indeed, the more open they make it, the more exciting the re-uses will be.

Here's an unusual example from the field of open data, specifically, the US government data held on Data.gov:


The purpose of Data.gov is to increase public access to high value, machine readable datasets generated by the Executive Branch of the Federal Government. Although the initial launch of Data.gov provides a limited portion of the rich variety of Federal datasets presently available, we invite you to actively participate in shaping the future of Data.gov by suggesting additional datasets and site enhancements to provide seamless access and use of your Federal data. Visit today with us, but come back often. With your help, Data.gov will continue to grow and change in the weeks, months, and years ahead.


Here's how someone intends to go even further:

Today I’m happy to announce Sunlight Labs is stealing an idea from our government. Data.gov is an incredible concept, and the implementation of it has been remarkable. We’re going to steal that idea and make it better. Because of politics and scale there’s only so much the government is going to be able to do. There are legal hurdles and boundaries the government can’t cross that we can. For instance: there’s no legislative or judicial branch data inside Data.gov and while Data.gov links off to state data catalogs, entries aren’t in the same place or format as the rest of the catalog. Community documentation and collaboration are virtual impossibilities because of the regulations that impact the way Government interacts with people on the web.

We think we can add value on top of things like Data.gov and the municipal data catalogs by autonomously bringing them into one system, manually curating and adding other data sources and providing features that, well, Government just can’t do. There’ll be community participation so that people can submit their own data sources, and we’ll also catalog non-commercial data that is derivative of government data like OpenSecrets. We’ll make it so that people can create their own documentation for much of the undocumented data that government puts out and link to external projects that work with the data being provided.

This is the future.

Follow me @glynmoody on Twitter or identi.ca.

20 July 2009

British Library Turns Traitor

I knew the British Library was losing its way, but this is ridiculous:

The British Library Business & IP Centre at St Pancras, London can help you start, run and grow your business.

And how might it do that?

Intellectual property can help you protect your ideas and make money from them.

Our resources and workshops will guide you through the four types of intellectual property: patents, trade marks, registered designs and copyright.

This once-great institution used to be about opening up the world's knowledge for the benefit and enjoyment of all: today, it's about closing it down so that only those who can afford to pay get to see it.

What an utter disgrace.

Follow me @glynmoody on Twitter or identi.ca.