
31 January 2008

Amazing Mozilla Metrics

Much of the power and popularity of Firefox and, to a lesser extent, Thunderbird, comes from their add-ons. The following figures show just how powerful and popular:

Earlier this week, [addons.mozilla.org] served its 600 millionth add-on download. That’s original downloads, not including updates. We currently have over 4000 add-ons hosted on the site and between 800,000 and 1 million downloads every day. The site has around 4.5 million pageviews per day, not including services hosted on AMO such as update checks and blocklisting.

AMO now receives around 100 million add-on update pings every day, which means that of those 600 million downloads, about 100 million add-ons are still installed.
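As a back-of-envelope check of that inference (my own sketch, assuming roughly one update ping per installed add-on per day, which is how Mozilla's numbers read):

```python
# Figures quoted from the AMO post above.
total_downloads = 600_000_000     # original downloads, excluding updates
daily_update_pings = 100_000_000  # assumption: ~one ping per installed add-on per day

# If each installed add-on pings once a day, daily pings approximate installs.
implied_installs = daily_update_pings
retention = implied_installs / total_downloads
print(f"Implied installs: {implied_installs:,}")
print(f"Implied retention: {retention:.0%}")  # roughly 17% of downloads still in use
```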

Amazing Mozilla. (Via Mozilla Links.)

03 January 2008

Open Source: A Question of Metrics

Here's a characteristically thought-provoking post from Chris Messina:

I’ve probably said it before, and will say it again, and I’m also sure that I’m not the first, or the last to make this point, but I have yet to see an example of an open source design process that has worked.

Indeed, I’d go so far as to wager that “open source design” is an oxymoron. Design is far too personal, and too subjective, to be given over to the whims and outrageous fancies of anyone with eyeballs in their head.

Call me elitist in this one aspect, but with all due respect to code artistes, it’s quite clear whether a function computes or not; the same quantifiable measures simply do not exist for design and that critical lack of objective review means that design is a form of Art, and its execution should be treated as such.

What interests me is that he's actually articulating something deep about the open source methodology: it can only be usefully applied when there is a metric that lets you judge objectively when things get better.

That's why free software works: you take some code and improve it - making it faster, more compact or less buggy - or, ideally, all three. It's why collaborative novels and symphonies rarely work: there's no clear way to improve on what's already there - anyone can tinker, but there will always be different views on whether that tinkering works. It's also why Wikipedia more or less works: although it is based on facts and their citations, there's still plenty of room for disagreement over how they should be presented.
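To make that concrete, here is a minimal sketch (my example, not anything from Messina's post) of the kind of objective metric code enjoys: two implementations of the same function, timed head to head. Whichever is faster wins, and no amount of taste can argue with the stopwatch:

```python
import timeit

# Two implementations of the same task: the sum of squares of 0..n-1.
def loop_version(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

def closed_form(n):
    # Uses the identity sum(i^2 for i in 0..n-1) = (n-1)*n*(2n-1)/6.
    return (n - 1) * n * (2 * n - 1) // 6

n = 100_000
assert loop_version(n) == closed_form(n)  # identical behaviour, so the race is fair

# The objective metric: wall-clock time over the same workload.
for fn in (loop_version, closed_form):
    t = timeit.timeit(lambda: fn(n), number=100)
    print(f"{fn.__name__}: {t:.4f}s")
```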

29 May 2007

The Wisdom of Metrics

I like reading Nicholas Carr's stuff because it is often provocative and generally thought-provoking. A good example is his recent "Ignorance of Crowds", which asserts:

Wikipedia’s problems seem to stem from the fact that the encyclopedia lacks the kind of strong central authority that exerts quality control over the work of the Linux crowd. The contributions of Wikipedia’s volunteers go directly into the product without passing through any editorial filter. The process is more democratic, but the quality of the product suffers.

I think this misses a key point about the difference between open source and open content, one that has nothing to do with authority. Software has clear metrics for success: the code runs faster, requires less memory, is less CPU-intensive, and so on. There is no such metric for content, where much of the time it essentially comes down to matters of opinion. Without a metric, monotonic improvement is impossible: the best you can hope for is a series of jumps that may or may not make things "better" - whatever that means in this context.

This is an important issue for the many domains where the open source "method" is being applied: the better the metric available, the more sustained and unequivocal the progress will be. For example, the prospects for open science, powered by open access + open data, look good, since a general metric is available in the form of the fit of theory to experiment.
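As a toy illustration of that last metric (my own sketch, with made-up numbers): the root-mean-square error between a theory's predictions and the measured data reduces "fit of theory to experiment" to a single figure that any proposed refinement either improves or doesn't.

```python
import math

# Hypothetical measurements and two candidate theories' predictions.
observed = [1.0, 2.1, 2.9, 4.2, 5.1]
theory_a = [1.0, 2.0, 3.0, 4.0, 5.0]
theory_b = [0.8, 2.3, 2.7, 4.5, 4.8]

def rmse(predicted, measured):
    """Root-mean-square error: lower means a better fit to experiment."""
    return math.sqrt(
        sum((p - m) ** 2 for p, m in zip(predicted, measured)) / len(measured)
    )

for name, prediction in (("theory A", theory_a), ("theory B", theory_b)):
    print(f"{name}: RMSE = {rmse(prediction, observed):.3f}")
```

Here theory A fits better; swap in sharper predictions and the number moves, visibly and uncontroversially.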