I like reading Nicholas Carr's stuff because it is provocative and generally thought-provoking. A good example is his recent "The Ignorance of Crowds", which asserts:
Wikipedia’s problems seem to stem from the fact that the encyclopedia lacks the kind of strong central authority that exerts quality control over the work of the Linux crowd. The contributions of Wikipedia’s volunteers go directly into the product without passing through any editorial filter. The process is more democratic, but the quality of the product suffers.
I think this misses a key point about the difference between open source and open content that has nothing to do with authority. Software has clear metrics for success: the code runs faster, uses less memory, handles more load, and so on. There is no such metric for content, where quality largely comes down to matters of opinion. Without a metric, monotonic improvement is impossible to achieve: the best you can hope for is a series of jumps that may or may not make things "better", whatever that means in this context.
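To make the contrast concrete, here is a minimal sketch (in Python, with hypothetical function names) of what a "clear metric" looks like in software: two implementations of the same task can be checked for agreement and then timed, and the numbers settle the argument in a way no edit to an encyclopedia article ever can.

```python
import timeit

def total_naive(values):
    # Straightforward loop implementation.
    acc = 0
    for v in values:
        acc += v
    return acc

def total_builtin(values):
    # Candidate "improved" implementation using the built-in sum().
    return sum(values)

data = list(range(10_000))

# Correctness gate: both versions must agree before speed matters.
assert total_naive(data) == total_builtin(data)

# Objective metric: wall-clock time per batch of calls.
t_naive = timeit.timeit(lambda: total_naive(data), number=200)
t_builtin = timeit.timeit(lambda: total_builtin(data), number=200)

print(f"naive:   {t_naive:.4f}s")
print(f"builtin: {t_builtin:.4f}s")
```

Whichever version is faster wins, unambiguously; there is no analogous measurement for whether one revision of prose is "better" than another.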
This is an important issue for the many domains where the open source "method" is being applied: the better the metric available, the more sustained and unequivocal the progress will be. For example, the prospects for open science, powered by open access plus open data, look good, since a general metric is available in the form of fit of theory to experiment.