Showing posts with label Wired.

30 December 2009

What Took Wired So Loongson?

I've been writing about the Loongson chip for three years now. As I've noted several times, this chip is important because (a) it's a home-grown Chinese chip (albeit based on one from MIPS) and (b) Windows doesn't run on it, but GNU/Linux does.

It looks like Wired magazine has finally woken up to the story (better late than never):


Because the Loongson eschews the standard x86 chip architecture, it can’t run the full version of Microsoft Windows without software emulation. To encourage adoption of the processor, the Institute of Computing Technology is adapting everything from Java to OpenOffice for the Loongson chip and releasing it all under a free software license. Lemote positions its netbook as the only computer in the world with nothing but free software, right down to the BIOS burned into the motherboard chip that tells it how to boot up. It’s for this last reason that Richard “GNU/Linux” Stallman, granddaddy of the free software movement, uses a laptop with a Loongson chip.

Because GNU/Linux distros have already been ported to the Loongson chip, neither Java nor OpenOffice.org needs "adapting" so much as recompiling - hardly a challenging task. As for "releasing it all under a free software license", they had no choice.

But at least Wired got it right about the potential impact of the chip:

Loongson could also reshape the global PC business. “Compared to Intel and IBM, we are still in the cradle,” concedes Weiwu Hu, chief architect of the Loongson. But he also notes that China’s enormous domestic demand isn’t the only potential market for his CPU. “I think many other poor countries, such as those in Africa, need low-cost solutions,” he says. Cheap Chinese processors could corner emerging markets in the developing world (and be a perk for the nation’s allies and trade partners).

And that’s just the beginning. “These chips have implications for space exploration, intelligence gathering, industrialization, encryption, and international commerce,” says Tom Halfhill, a senior analyst for Microprocessor Report.

Yup.

Follow me @glynmoody on Twitter or identi.ca.

01 June 2009

Why Security by Obscurity Fails, Part 674

Great story in Wired about a master lock-picker, opening what are supposedly the most secure locks in the world:

These were the same Medeco locks protecting tens of thousands of doors across the planet

...

One by one, brand-new Medeco locks were unsealed. And, as the camera rolled, one by one these locks were picked open. None of the Medeco3 locks lasted the minimum 10 to 15 minutes necessary to qualify for the "high security" rating. One was cracked in just seven seconds. By Roberson's standards, Tobias and Bluzmanis had done the impossible.

Although these are physical rather than software locks, the lesson is the same: just as there is no such thing as an unpickable lock, there is no such thing as unhackable software, even if it's closed and encrypted. Since *someone* will be able to find the flaws in your software, you may as well open it up so that they can be found and fixed. Go open source.

09 March 2009

Wired's Open Government Data Wiki

Wired has an idea:

If you're a fan of free data flow into and out of the government, Vivek Kundra seems like an ally. But we can't rest on our laurels. Now is exactly the time when lobbying for particular data and documents to be made accessible could be most effective.

Data.gov is coming: Let's help build it.

The solution? You - and a wiki:

We've established this wiki to help focus attention on valuable data resources that need to be made more accessible or usable. Do you know of a legacy dataset in danger of being lost? How about a set of Excel (or — shudder — Lotus 1-2-3) spreadsheets that would work better in another format? Data locked up in PDF's?

This is your place to report where government data is locked up by design, neglect or misapplication of technology. We want you to point out the government data that you need or would like to have. Get involved!

Based on what you contribute here, we'll follow up with government agencies to see what their plans are for that data — and track the results of the emerging era of Data.gov.

With your help, we can combine the best of new social media and old-school journalism to get more of the data we've already paid for in our hands.

We could do with something similar here: Free Our Data, are you listening?

27 July 2008

The Church of Openness

In Digital Code of Life, I explained at length - some would say at excessive length - how the Human Genome Project was a key early demonstration of the transformative power of openness. Here's one of the key initiators of that project, George Church, who wants to open up genomics even more. Why? Because:

Exponentials don't just happen. In Church's work, they proceed from two axioms. The first is automation, the idea that by automating human tasks, letting a computer or a machine replicate a manual process, technology becomes faster, easier to use, and more popular. The second is openness, the notion that sharing technologies by distributing them as widely as possible with minimal restrictions on use encourages both the adoption and the impact of a technology.

And Church believes in openness so much that he's even applying it to his sequencer:

In the past three years, more companies have joined the marketplace with their own instruments, all of them driving toward the same goal: speeding up the process of sequencing DNA and cutting the cost. Most of the second-generation machines are priced at around $500,000. This spring, Church's lab undercut them all with the Polonator G.007 — offered at the low, low price of $150,000. The instrument, designed and fine-tuned by Church and his team, is manufactured and sold by Danaher, an $11 billion scientific-equipment company. The Polonator is already sequencing DNA from the first 10 PGP volunteers. What's more, both the software and hardware in the Polonator are open source. In other words, any competitor is free to buy a Polonator for $150,000 and copy it. The result, Church hopes, will be akin to how IBM's open-architecture approach in the early '80s fueled the PC revolution.

03 July 2008

In Praise of Wikileaks

Nice background piece on Wired.

25 February 2008

The Value of Nothing

One of those joining this blog in pointing out the power of pricing at zero is Chris Anderson. His next book is called simply "Free", and he's published a convenient synopsis in the form of an article in his personal publishing vehicle, Wired:

It took decades to shake off the assumption that computing was supposed to be rationed for the few, and we're only now starting to liberate bandwidth and storage from the same poverty of imagination. But a generation raised on the free Web is coming of age, and they will find entirely new ways to embrace waste, transforming the world in the process.

Judging by the article, the book will be highly anecdotal - no bad thing for a populist tome. My only concern is that the emphasis will be too much on the "free as in beer" side, neglecting the fact that the "free as in freedom" aspect is actually even more important.

01 January 2008

In a Bit of a Scrape

"Mix and mash" lies at the heart of the power of openness: it allows people to come up with new, often better, uses of data, notably by cross-referencing it with yet more of the stuff to create higher levels of information. But as this Wired feature details, there is a tension between what the scrapers and the scraped want:

there's an awkward dance going on, an unregulated give-and-take of information for which the rules are still being worked out. And in many cases, some of the big guys that have been the source of that data are finding they can't — or simply don't want to — allow everyone to access their information, Web2.0 dogma be damned. The result: a generation of businesses that depend upon the continued good graces of a relatively small group of Internet powerhouses that philosophically agree information should be free — until suddenly it isn't.

Striking the right balance is tricky, but I think there's a way out. After all, if everyone can use everyone else's data, there's a quid pro quo. The problem comes when some give and others only take. (Via ReadWriteWeb.)

06 December 2007

Wired Uses the 'B'-word

I write about commons a lot here - digital commons, analogue commons - and about how we can nurture them. Whales form a commons, and one that came perilously close to becoming a tragedy. Which is why Japan's resumption of commercial whaling under the flimsy pretext of "scientific" whaling sticks in my craw. Obviously, I'm not the only one; here's the copy chief at Wired:

But more and more the Japanese are turning to the cultural-tradition defense, a blatant if clumsy attempt to portray themselves as the victims of cultural prejudice. That, too, is bilge water. This is no time for the world to cave in to some misguided sense of political correctness. On the contrary, pressure should be applied to stop. If Japan won't stop, a boycott of Japanese goods would not be unreasonable.

Oooh, look: there's the "b"-word. I predict we'll be hearing a lot more of it if Japan persists in this selfish destruction of a global commons.

01 March 2007

Undermining Digg

Digg occupies such an emblematic place in the Web 2.0 world that it's important to understand what's really going on with this increasingly powerful site (on the rare occasions that I've had stories dugg, my traffic has been stratospheric for a day or two before sagging inexorably down to its usual footling levels).

So this story from Annalee Newitz on Wired News is at once fascinating and frightening:

I can tell you exactly how a pointless blog full of poorly written, incoherent commentary made it to the front page on Digg. I paid people to do it. What's more, my bought votes lured honest Diggers to vote for it too. All told, I wound up with a "popular" story that earned 124 diggs -- more than half of them unpaid. I also had 29 (unpaid) comments, 12 of which were positive.

Although it's worrying that Digg can be gamed so easily, there's hope too:

Ultimately, however, my story did get buried. If you search for it on Digg, you won't find it unless you check the box that says "also search for buried stories." This didn't happen because the Digg operators have brilliant algorithms, however -- it happened because many people in the Digg community recognized that my blog was stupid. Despite the fact that it was rapidly becoming popular, many commenters questioned my story's legitimacy. Digg's system works only so long as the crowds on Digg can be trusted.

Digg remains a fascinating experiment in progress; let's hope it works out.

23 February 2007

Fake Steve Jobs: Suck 2.0?

I was heartened to see that the future of the Fake Steve Jobs blog now seems assured, following a deal with Wired (kudos). God knows we need more such snarky sites in an increasingly humourless and pusillanimous world.

Taking advantage of this new stability, I settled down to read a few of the many postings that I'd missed, and a distant cyber-bell began ringing. I thought: "I have been here before... I know the grass beyond the door", and then it struck me.

Once upon a time, in an online world far away, there was a little Web site called Suck. Its motto:

"a fish, a barrel, and a smoking gun"

It came out of nowhere, starting on 28 August 1995, and ran for nearly six years. It was wonderful: it deflated the growing hype bubble that was Dotcom 1.0, and it did it with cool, mordant wit. (If you want the whole, roller-coaster story of Suck, read it here in Wired - rather appropriately, as it turns out.)

FSJ is Suck 2.0: it punctures that which must be punctured, and it does it with a different kind of wit, this time black and scabrous. But along with the similarities, there is an important difference between the two sites.

Suck, for all its undoubted virtues, took itself far too seriously, as any adolescent genius might. FSJ, by contrast, is more mature, more cynical; it takes nothing seriously, least of all itself (it is a parody site, after all). In other words, FSJ is the perfect mirror for the Web 2.0 world we live in.

Namaste, Steve.

19 January 2007

It Ain't Over Until Blake Ross Sings

There are three names that most people would associate with Firefox. Ben Goodger, who works for Google, and whose blog is pretty quiet these days. Asa Dotzler, who has an articulate and bulging blog. And then there's Blake Ross, also with a lively blog, but probably better known for being the cover-boy of Wired when it featured Firefox.

Given his background - and the immense knock-on effect his Firefox work has had - Ross is always worth listening to. That's particularly the case for this long interview, because it's conducted for the Opera Watch blog, which lends it both technological depth and a subtle undercurrent of friendly competition:

I think Opera is better geared toward advanced users out of the box, whereas Firefox is tailored to mainstream users by default and relies on its extension model to cater to an advanced audience. However, I see both browsers naturally drifting toward the middle. Firefox is growing more advanced as the mainstream becomes Web-savvier, and I see Opera scaling back its interface, since it started from the other end of the spectrum.

(Via LXer.)

22 November 2006

TV's Spiralling Vortex of Ruin

Sounds good to me, in a double sense: form and content. (Via IP Democracy.)

21 September 2006

Open Prosthetics

Here's a fascinating project: Open Prosthetics. It's exactly what it says, free designs for prosthetics, although the exact licensing isn't entirely clear (anyone?). The back-story is told in Wired. (Via BoingBoing.)

30 August 2006

Wired's Wikified Wiki Words Work?

This is one of those things that you just want to work.

Wired has put up one of its stories - on wikis - to be freely edited by anyone. Or rather, anyone who registers: this seems to be a threshold requirement to stop the random vandalism experienced the last time this was tried.

Judging by the results, the registration barrier seems to be working. The piece is eminently readable, and shows no evidence (as I write) of desecration. Maybe the wiki world is growing up. (Via Many-to-Many.)

21 July 2006

Something's Rotten in the Domain Name System

Although I can't quite claim to go back to the very first commercial domain, I do remember the Wired story about how many major US corporations had neglected to register relevant domains. And I also remember how around $7.5 million was paid for the utterly generic and pointless business.com domain.

So I've seen a thing or two. And yet I can still be disgusted by the depths to which the scammers can sink when it comes to domain names. Try this, for example: a company that seems to be magically reserving domain names shortly after people have entered them in a Whois search - only to dump them if they don't pull in any traffic.

It's this kind of parasitical business model that is pushing the domain name system close to breakdown, and making the Internet far less efficient than it could be.

19 May 2006

Linus Speaks

Linus rarely gives interviews (I hit very lucky some ten years ago). So this one, on CNN, is something of a rarity. Nothing new, but it's not bad as an intro to the man and his methods. (Via LXer.)

04 April 2006

Coughing Genomic Ink

One of the favourite games of scholars working on ancient texts that have come down to us from multiple sources is to create a family tree of manuscripts. The trick is to look for groups of textual divergences - a word added here, a mis-spelling there - to spot the gradual accretions, deletions and errors wrought by incompetent, distracted or bored copyists. Once the tree has been established, it is possible to guess what the original, founding text might have looked like.

You might think that this sort of thing is on the way out; on the contrary, though, it's an extremely important technique in bioinformatics - hardly a dusty old discipline. The idea is to treat genomes deriving from a common ancestor as a kind of manuscript, written using just the four letters - A, C, G and T - found in DNA.

Then, by comparing the commonalities and divergences, it is possible to work out which manuscripts/genomes came from a common intermediary, and hence to build a family tree. As with manuscripts, it is then possible to hazard a guess at what the original text - the ancestral genome - might have looked like.

That, broadly, is the idea behind some research that David Haussler at the University of California at Santa Cruz is undertaking, and which is reported on in this month's Wired magazine (freely available thanks to the magazine's enlightened approach to publishing).
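For the technically minded, here is a tiny Python sketch of that idea - purely a toy of my own devising, nothing like Haussler's actual methods. A handful of short, already-aligned sequences stand in for the surviving manuscripts, pairwise differences hint at who shares a recent ancestor with whom, and a crude majority vote per position stands in for the far more sophisticated ancestral reconstruction used in the real research:

# Toy illustration only: treat a few short, aligned "genomes" as copies of a
# lost original, count how much each pair has diverged, and guess the
# ancestral text by majority vote at each position.
from itertools import combinations

copies = {
    "scribe_A": "ACGTACGTTA",
    "scribe_B": "ACGTACCTTA",  # one "mis-spelling"
    "scribe_C": "ACGAACGTTA",  # a different single change
    "scribe_D": "ACGAACGTGA",  # shares scribe_C's change, plus one of its own
    "scribe_E": "ACGTACGTTC",  # another independent change
}

def divergence(a, b):
    # Number of positions at which two equal-length sequences disagree.
    return sum(x != y for x, y in zip(a, b))

for (n1, s1), (n2, s2) in combinations(copies.items(), 2):
    print(n1, "vs", n2, ":", divergence(s1, s2), "differences")

def majority_consensus(seqs):
    # Naive guess at the ancestor: the commonest letter at each position.
    return "".join(max(set(col), key=col.count) for col in zip(*seqs))

print("inferred ancestor:", majority_consensus(list(copies.values())))

Because scribe_C and scribe_D share a change the others lack, they end up grouped on the same branch of the family tree; and the majority vote across all five copies hands back what is presumably the original text, with the accumulated errors stripped away.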

As I described in Digital Code of Life, Haussler played an important role in the closing years of the Human Genome Project:

Haussler set to work creating a program to sort through and assemble the 400,000 sequences grouped into 30,000 BACs [large-scale fragments of DNA] that had been produced by the laboratories of the Human Genome Project. But in May 2000, when one of his graduate students, Jim Kent, inquired how the programming was going, Haussler had to admit it was not going well. Kent had been a professional programmer before turning to research. His experience in writing code against deadlines, coupled with a strongly-held belief that the human genome should be freely available, led him to volunteer to create the assembly program in short order.

Kent later explained why he took on the task:

There was not a heck of a lot that the Human Genome Project could say about the genome that was more informative than 'it's got a lot of As, Cs, Gs and Ts' without an assembly. We were afraid that if we couldn't say anything informative, and thereby demonstrate 'prior art', much of the human genome would end up tied up in patents.

Using a cluster of one hundred 800 MHz Pentiums - powerful machines in the year 2000 - running GNU/Linux, Kent was able to lash up a program, assemble the fragments and save the human genome for mankind.
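To give a flavour of what "assembling the fragments" actually involves, here is a deliberately tiny Python sketch of the core idea - my own toy, nothing like Kent's real program, which had to cope with sequencing errors, repeats and hundreds of thousands of fragments. Overlapping reads from an unknown text are repeatedly merged, longest overlap first, until a single reconstructed sequence remains:

# Toy greedy assembly: repeatedly merge the pair of fragments with the
# longest suffix/prefix overlap until only one sequence is left.
def overlap(a, b):
    # Length of the longest suffix of a that is also a prefix of b.
    for length in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:length]):
            return length
    return 0

def greedy_assemble(fragments):
    frags = list(fragments)
    while len(frags) > 1:
        # Find the ordered pair with the biggest overlap...
        olap, i, j = max(((overlap(a, b), i, j)
                          for i, a in enumerate(frags)
                          for j, b in enumerate(frags) if i != j),
                         key=lambda t: t[0])
        # ...and merge them into one longer fragment.
        merged = frags[i] + frags[j][olap:]
        frags = [f for k, f in enumerate(frags) if k not in (i, j)] + [merged]
    return frags[0]

# Overlapping fragments of "ATGCCGTAACGGT", presented out of order:
reads = ["CGTAACG", "ACGGT", "ATGCCGTA"]
print(greedy_assemble(reads))  # prints ATGCCGTAACGGT

The real task was vastly harder - hundreds of thousands of error-prone fragments, repeats that make overlaps ambiguous, and a hard deadline - but the basic move of stitching pieces together by their overlaps is the same.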

Haussler's current research depends not just on the availability of the human genome, but also on all the other genomes that have been sequenced - the different manuscripts written in DNA that have come down to us. Using bioinformatics and even more powerful hardware than that available to Kent back in 2000, it is possible to compare and contrast these genomes, looking for tell-tale signs of common ancestors.

But the result is no mere dry academic exercise: if things go well, the DNA text that will drop out at the end will be nothing less than the genome of one of our ancient forebears. Even if Wired's breathless speculations about recreating live animals from the sequence seem rather wide of the mark - imagine trying to run a computer program recreated in a similar way - the genome on its own will be treasure enough. Certainly not bad work for those scholars who "cough in ink" in the world of open genomics.

29 March 2006

Linus Torvalds' First Usenet Posting

It was 15 years ago today that Linus made his first Usenet posting, to the comp.os.minix newsgroup. This is how it began:

Hello everybody,
I've had minix for a week now, and have upgraded to 386-minix (nice), and duly downloaded gcc for minix. Yes, it works - but ... optimizing isn't working, giving an error message of "floating point stack exceeded" or something. Is this normal?

Minix was the Unix-like operating system devised by Andy Tanenbaum as a teaching aid, and gcc a key hacker program that formed part of Stallman's GNU project. Linus' question was pretty standard beginner's stuff, and yet barely two days later, he answered a fellow-newbie's question as if he were some Minix wizard:

RTFSC (Read the F**ing Source Code :-) - It is heavily commented and the solution should be obvious (take that with a grain of salt, it certainly stumped me for a while :-).

He may have been slightly premature in according himself this elevated status, but it wasn't long before he not only achieved it but went far beyond. For on Sunday, 25 August, 1991, he made another posting to the comp.os.minix newsgroup:

Hello everybody out there using minix -
I'm doing a (free) operating system (just a hobby, won't be big and professional like gnu) for 386(486) AT clones. This has been brewing since april, and is starting to get ready.

The hobby, of course, was Linux, and this was its official announcement to the world.

But imagine, now, that Linus had never made that first posting back in March 1991. It could have happened: as Linus told me in 1996 when I interviewed him for a feature in Wired, back in those days:

I was really so shy I didn't want to speak in classes. Even just as a student I didn't want to raise my hand and say anything.

It's easy to imagine him deciding not to “raise his hand” in the comp.os.minix newsgroup for fear of looking stupid in front of all the Minix experts (including the ultimate professor of computing, Tanenbaum himself). And if he'd not plucked up courage to make that first posting, he probably wouldn't have made the others or learned how to hack a simple piece of code he had written for the PC into something that grew into the Linux kernel.

What would the world look like today, had Linux never been written? Would we be using the GNU Hurd – the kernel that Stallman intended to use originally for his GNU operating system, but which was delayed so much that people used Linux instead? Or would one of the BSD derivatives have taken off instead?

Or perhaps there would simply be no serious free alternative to Microsoft Windows, no open source movement, and we would be living in a world where computing was even more under the thumb of Bill Gates. In this alternative reality, there would be no Google either, since it depends on the availability of very low-cost GNU/Linux boxes for the huge server farms that power all its services.

It's amazing how a single post can change the world.