
10 June 2012

Spotify In A Box: Why Sharing Will Never Be Stopped

Most people will be familiar with Moore's Law, usually stated in the form that processing power doubles every two years (or 18 months in some versions). But just as important are the equivalent compound gains for storage and connectivity speeds, sometimes known as Kryder's Law and Nielsen's Law respectively.
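
The point of all three laws is compound growth. Purely as an illustration (using only the doubling periods quoted above, not real-world measurements), here is a minimal Python sketch of how quickly repeated doubling stacks up:

# Minimal sketch of compound doubling, as in Moore's, Kryder's and
# Nielsen's laws: a quantity that doubles every fixed period grows as
# 2 ** (elapsed_years / doubling_period_years).

def growth_factor(years: float, doubling_period_years: float) -> float:
    """Multiplier after `years` for something that doubles every
    `doubling_period_years` (processing power, storage, bandwidth...)."""
    return 2 ** (years / doubling_period_years)

# The two doubling periods quoted for Moore's Law above: 2 years and 18 months.
for period in (2.0, 1.5):
    for years in (10, 20):
        print(f"doubling every {period} years -> "
              f"x{growth_factor(years, period):,.0f} after {years} years")

A 2-year doubling gives roughly a 1,000-fold gain over 20 years; an 18-month doubling gives about ten times that, which is why the cost of storing and sharing a whole music catalogue keeps collapsing.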

On Techdirt.

07 March 2012

Why Digital Texts Need A New Library Of Alexandria -- With Physical Books

Amidst the growing enthusiasm for digital texts -- ebooks and scans of illustrated books -- it's easy to overlook some important drawbacks. First, you don't really own ebooks, as various unhappy experiences with Amazon's Kindle have brought home. Second, a scan of an illustrated book is only as good as the scanning technology available when it is made: there's no way to upgrade a scan to higher-quality images without rescanning the whole thing.

On Techdirt.

29 December 2009

The Lost Decades of the UK Web

This is a national disgrace:

New legal powers to allow the British Library to archive millions of websites are to be fast-tracked by ministers after the Guardian exposed long delays in introducing the measures.

The culture minister, Margaret Hodge, is pressing for the faster introduction of powers to allow six major libraries to copy every free website based in the UK as part of their efforts to record Britain's cultural, scientific and political history.

The Guardian reported in October that senior executives at the British Library and National Library of Scotland (NLS) were dismayed at the government's failure to implement the powers in the six years since they were established by an act of parliament in 2003.

The libraries warned that they had now lost millions of pages recording events such as the MPs' expenses scandal, the release of the Lockerbie bomber and the Iraq war, and would lose millions more, because they were not legally empowered to "harvest" these sites.

So, 20 years after Sir Tim Berners-Lee invented the technology, and well over a decade after the Web became a mass medium, the British Library *still* isn't archiving every Web site?

History - assuming we have one - will judge us harshly for this extraordinary UK failure to preserve the key decades of the quintessential technology of our age. It's like burning down a local, digital Library of Alexandria all over again.

Follow me @glynmoody on Twitter or identi.ca.

28 November 2007

Millions of Book Projects

There are so many book-scanning projects underway at the moment that it's hard to keep up. Google's may have the highest profile, but it suffers from one big problem: it won't routinely make full texts available. That's not the case for the Universal Digital Library, aka the Million Book Project - a name that's no longer appropriate:

The Million Book Project, an international venture led by Carnegie Mellon University in the United States, Zhejiang University in China, the Indian Institute of Science in India and the Library at Alexandria in Egypt, has completed the digitization of more than 1.5 million books, which are now available online.

For the first time since the project was initiated in 2002, all of the books, which range from Mark Twain’s “A Connecticut Yankee in King Arthur’s Court” to “The Analects of Confucius,” are available through a single Web portal of the Universal Library (www.ulib.org), said Gloriana St. Clair, Carnegie Mellon’s dean of libraries.

“Anyone who can get on the Internet now has access to a collection of books the size of a large university library,” said Raj Reddy, professor of computer science and robotics at Carnegie Mellon. “This project brings us closer to the ideal of the Universal Library: making all published works available to anyone, anytime, in any language. The economic barriers to the distribution of knowledge are falling,” said Reddy, who has spearheaded the Million Book Project.

Though Google, Microsoft and the Internet Archive all have launched major book digitization projects, the Million Book Project represents the world’s largest, university-based digital library of freely accessible books. At least half of its books are out of copyright, or were digitized with the permission of the copyright holders, so the complete texts are or eventually will be available free.

The main problem with the site seems to be insufficient computing wellie: I keep on getting "connection timed out" when I try to use it. Promising, nonetheless. (Via Open Access News.)

Update: Here's a good post on some of the issues surrounding book projects.