
15 February 2023

Incoming: Spare Slots for Freelance Work in 2023


I will soon have spare slots in my freelance writing schedule for regular weekly or monthly work, and major projects. Here are the main areas that I've been covering, some for nearly three decades. Any commissioning editors interested in talking about them or related subjects, please contact me at glyn.moody@gmail.com. I am also available to speak on these topics at relevant conferences around the world, something I have done many times in the past.

Privacy, Surveillance, Encryption, Freedom of Speech 

Over the last decade, I have written hundreds of articles about these crucial areas, for Techdirt, Privacy News Online, and Ars Technica. Given the increasing challenges facing society in these areas, they will remain an important focus for my work in the future. 

Copyright

I have also written many hundreds of articles about copyright. These have been mainly for Techdirt, where I have published nearly 1,900 posts, CopyBuzz, and Walled Culture. Most recently, I have written a 300-page book, also called Walled Culture, detailing the history of digital copyright, its huge problems, and possible solutions. Free ebook versions of its text are available.

EU Tech Policy and EU Trade Agreements: DSA, DMA, TTIP, CETA 

I have written about EU tech policy for CopyBuzz, focussing on the EU Copyright Directive, and for Privacy News Online, dealing with major initiatives such as the Digital Services Act, the Digital Markets Act, and the Artificial Intelligence Act. Another major focus of my writing has been so-called "trade agreements" like TTIP, CETA, TPP and TISA. "So-called", because they go far beyond traditional discussions of tariffs, and have major implications for many areas normally subject to democratic decision making, notably tech policy. In addition to 51 TTIP Updates that I originally wrote for Computerworld UK, I have covered this area extensively for Techdirt and Ars Technica, including a major feature on TTIP for the latter.

Free Software/Open Source

I started covering this topic in 1995, wrote the first mainstream article on Linux for Wired in 1997, and the first (and still only) detailed history of the subject, Rebel Code: Linux and the Open Source Revolution in 2001, for which I interviewed the world’s top 50 hackers at length. 

Open Access, Open Data, Open Science, Open Government, Open Everything 

As the ideas underlying openness, sharing and online collaboration have spread, so has my coverage of them, particularly for Techdirt. I wrote one of the most detailed histories of Open Access, for Ars Technica, and its history and problems also form Chapter 3 of my book Walled Culture, mentioned above. 

Europe 

As a glance at some of my 580,000 (sic) posts to Twitter, and 18,000 posts on Mastodon, will indicate, I read news sources in a number of languages (Italian, German, French, Russian, Spanish, Portuguese, and Georgian in descending order of capability). This means I can offer a fully European perspective on any of the topics above - something that may be of interest to publications wishing to provide global coverage that goes beyond purely anglophone reporting. The 25,000 or so followers that I have across these social networks also mean that I can push out links to my articles, something that I do as a matter of course to boost their readership and encourage engagement.


London 2023

08 January 2018

Incoming: Spare Slots for Freelance Work in 2018


I will soon have spare slots in my freelance writing schedule for regular weekly or monthly work, and major projects. Here are the main areas that I've been covering, some for more than two decades. Any commissioning editors interested in talking about them or related subjects, please contact me at glyn.moody@gmail.com (PGP available). I am also available to speak on these topics at relevant conferences.

Surveillance, Encryption, Privacy, Freedom of Speech

For the last two years, I have written hundreds of articles about these crucial areas, for Ars Technica UK (http://arstechnica.co.uk/author/glyn_moody/), Privacy News Online (https://www.privateinternetaccess.com/blog/author/glynmoody/) and Techdirt (https://www.techdirt.com/user/glynmoody). Given the challenges facing society this year, they are likely to be an important focus for my work in 2018.

China

Another major focus for me this year will be China. I follow the world of Chinese IT closely, and have written numerous articles on the topic. Since I can read sources in the original, I am able to spot trends early and to report faithfully on what are arguably some of the most important developments happening in the digital world today.

Free Software/Open Source

I started covering this topic in 1995, wrote the first mainstream article on Linux for Wired in 1997 (https://www.wired.com/1997/08/linux-5/), and the first (and still only) detailed history of the subject, Rebel Code (https://en.wikipedia.org/wiki/Rebel_Code) in 2001, for which I interviewed the top 50 hackers at length. I have also written about the open source coders and companies that have risen to prominence in the last decade and a half, principally in my Open Enterprise column for Computerworld UK, which ran from 2008 to 2015.

Open Access, Open Data, Open Science, Open Government, Open Everything

As the ideas underlying openness, sharing and online collaboration have spread, so has my coverage of them. I wrote one of the most detailed histories of Open Access, for Ars Technica (http://arstechnica.com/science/2016/06/what-is-open-access-free-sharing-of-all-human-knowledge/).

Copyright, Patents, Trade Secrets

The greatest threat to openness is its converse: intellectual monopolies, which prevent sharing. This fact has led me to write many articles about copyright, patents and trade secrets. These have been mainly for Techdirt, where I have published over 1,500 posts, and also include an in-depth feature on the future of copyright for Ars Technica (http://arstechnica.co.uk/tech-policy/2015/07/copyright-reform-for-the-digital-age/).

Trade Agreements - TTIP, CETA, TISA, TPP

Another major focus of my writing has been so-called "trade agreements" like TTIP, CETA, TPP and TISA. "So-called", because they go far beyond traditional discussions of tariffs, and have major implications for many areas normally subject to democratic decision making. In addition to 51 TTIP Updates that I originally wrote for Computerworld UK (http://opendotdotdot.blogspot.nl/2016/01/the-rise-and-fall-of-ttip-as-told-in-51.html), I have covered this area extensively for Techdirt and Ars Technica UK, including a major feature on TTIP (http://arstechnica.co.uk/tech-policy/2015/05/ttip-explained-the-secretive-us-eu-treaty-that-undermines-democracy/) for the latter.

Europe

As a glance at some of my 318,000 (sic) posts to Twitter, identi.ca and Google+ will indicate, I read news sources in a number of languages (Italian, German, French, Spanish, Russian, Portuguese, Dutch, Greek, Swedish in descending order of capability). This means I can offer a fully European perspective on any of the topics above - something that may be of interest to publications wishing to provide global coverage that goes beyond purely anglophone reporting. The 30,000 or so followers that I have across these social networks also mean that I can push out links to my articles, something that I do as a matter of course to boost their readership.

04 January 2017

Spare Slots for Regular Freelance Work Soon Available


I may soon have spare slots in my freelance writing schedule for regular work, or for larger, longer-term projects. Here are the main areas that I've been covering, some for more than two decades. Any commissioning editors interested in talking about them or related subjects, please contact me at glyn.moody@gmail.com (PGP available).

Digital Rights, Surveillance, Encryption, Privacy, Freedom of Speech

During the last two years, I have written hundreds of articles about these crucial areas, for Ars Technica UK and Techdirt. Given the challenges facing society this year, they are likely to be an important focus of my work in 2017.

China

Another major focus for me this year will be China. I follow the world of Chinese IT closely, and have written numerous articles on the topic for Techdirt and Ars Technica. Since I can read sources in the original, I am able to spot trends early and to report faithfully on what are arguably some of the most important developments happening in the digital world today.

Free Software/Open Source

I started covering this topic in 1995, wrote the first mainstream article on Linux, for Wired in 1997, and the first (and still only) detailed history of the subject, Rebel Code, in 2001, for which I interviewed the top 50 hackers at length. I have also written about the open source coders and companies that have risen to prominence in the last decade and a half, principally in my Open Enterprise column for Computerworld UK, which ran from 2008 to 2015.

Open Access, Open Data, Open Science, Open Government, Open Everything

As the ideas underlying openness, sharing and online collaboration have spread, so has my coverage of them. I recently wrote one of the most detailed histories of Open Access, for Ars Technica.

Copyright, Patents, Trademarks, Trade Secrets

The greatest threat to openness is its converse: intellectual monopolies. This fact has led me to write many articles about copyright, patents and trade secrets. These have been mainly for Techdirt, where I have published over 1,400 posts, and also include an in-depth feature on the future of copyright for Ars Technica.

Trade Agreements - TTIP, CETA, TISA, TPP

Another major focus of my writing has been so-called "trade agreements" like TTIP, CETA, TPP and TISA. "So-called", because they go far beyond traditional discussions of tariffs, and have major implications for many areas normally subject to democratic decision making. In addition to 51 TTIP Updates that I originally wrote for Computerworld UK, I have covered this area extensively for Techdirt and Ars Technica UK, including a major feature on TTIP for the latter.

Europe

As a glance at some of my 244,000 (sic) posts to Twitter, identi.ca, Diaspora, and Google+ will indicate, I read news sources in a number of languages (Italian, German, French, Spanish, Russian, Portuguese, Dutch, Greek, Swedish in descending order of capability). This means I can offer a fully European perspective on any of the topics above - something that may be of interest to publications wishing to provide global coverage that goes beyond purely anglophone reporting. The 30,000 or so followers that I have across these social networks also mean that I can push out links to my articles, something that I do as a matter of course to boost their impact and readership.

26 July 2014

Microsoft Goes Open Access; When Will It Go Open Source?

Even though Microsoft is no longer the dominant player or pacesetter in the computer industry -- those roles are shared by Google and Apple these days -- it still does interesting work through its Microsoft Research arm. Here's some welcome news from the latter: it's moving to open access for its researchers' publications.

On Techdirt.

25 July 2014

Open Source Genomics

There's a revolution underway. It's digital, but not in the computing sector. I'm referring to the world of genomics, which deals with the data that resides inside all living things: DNA. As most people know, DNA uses four chemical compounds - adenine, cytosine, guanine and thymine - to encode various structures, most notably proteins, which are represented by stretches of DNA called genes. 

On Open Enterprise blog.

24 July 2014

Resisting Surveillance on an Unprecedented Scale III

(The previous two parts of this essay appeared earlier.)

Or maybe not. There is a rough consensus among cryptography experts that the theoretical underpinnings of encryption - the mathematical foundations - remain untouched. The problem lies in the implementation and the environment in which encryption is used. Edward Snowden probably knows better than most what the true situation is, and here's how he put it:

Encryption works. Properly implemented strong crypto systems are one of the few things that you can rely on. Unfortunately, endpoint security is so terrifically weak that NSA can frequently find ways around it.

That's a hugely important clue as to what we need to do. It tells us that there is nothing wrong with crypto as such, just the corrupted implementations of otherwise strong encryption techniques. That is confirmed by recent leaks of information that show computer software companies complicit in weakening the supposedly safe products they sell - truly a betrayal of the trust placed in them by their customers.

The good news is that we have an alternative. For the last few decades, free software/open source has been building a software ecosystem that is outside the control of the traditional computer industry. That makes it much harder for the NSA to subvert, since the code is developed openly, which allows anyone to inspect it and look for backdoors - secret ways to spy on and control the software.

That's not to say free software is completely immune to security issues. Many open source products come from companies, and it's possible that some of them may have been pressured to weaken aspects of their work. Free software applications might be subverted as they are converted from the source code, which can be easily checked for backdoors, to the binaries - the versions that actually run on a computer - which can't. There is also potential for online holdings of open source programs to be broken into and tampered with in subtle ways.

Despite those problems, open source is still the best hope we have when it comes to using strong encryption. But in the wake of Snowden's revelations, the free software community needs to take additional precautions so as to minimise the risk that code is still vulnerable to attacks and subversion by spy agencies.

Beyond such measures, the open source world should also start thinking about writing a new generation of applications with strong crypto built in. These already exist, but are often hard to use. More needs to be done to make them appropriate for general users: the latter may not care much about the possibility that the NSA or GCHQ is monitoring everything they do online, but if they are offered great tools that make it easy to resist such efforts, more people may adopt them, just as millions have switched to the Firefox browser - not because it supports open standards, but because it is better.

Although the scale of the spying revealed by Snowden's leaks is staggering, and the leaks about the thoroughgoing and intentional destruction of the Internet's entire trust and security systems are shocking, there is no reason for despair. Even in the face of widespread public ignorance and indifference to the threat such total surveillance represents to democracy, as far as we know we can still use strong encryption implemented in open source software to protect our privacy.

Indeed, this may be an opportunity for open source to be embraced by a wider public, since we now know definitively that commercial software cannot be trusted, and is effectively spyware that you have to pay for. And just as Moore's Law allows the NSA and GCHQ to pull in and analyse ever-more of our data, so free software, too, can benefit.

For as Moore's Law continues to drive down the prices of personal computing devices - whether PCs, smartphones or tablets - so more people in developing countries around the world are able to acquire them. Many will adopt free software, since Western software companies often price their products at unreasonably-high levels compared to local disposable income. As open source is used more widely, so the number of people keen and able to contribute to such projects will grow, the software will improve, and more people will use it. In other words, there is a virtuous circle that produces its own kind of scaling that will help to counteract the more malign kind that underlies the ever-expanding surveillance activities of the NSA and GCHQ. As well as tools of repression, computers can also be tools of resistance when powered by free software, which is called that for a reason.

Resisting Surveillance on an Unprecedented Scale II

(The first part of this three-part essay appeared yesterday.)

The gradual but relentless shift from piecemeal, small-scale analogue eavesdropping to constant and total surveillance may also help to explain the public's relative equanimity in the face of these revelations. Once we get beyond the facile idea that if you have nothing to hide, you have nothing to fear - everybody has something to hide, even if it is only the private moments in their lives - there is another common explanation that people offer as to why they are not particularly worried about the activities of the NSA and GCHQ. This is that "nobody would be interested" in what they are up to, and so they are confident that they have not been harmed by the storage and analysis of their Internet data.

This is based on a fundamentally analogue view of what is going on. These people are surely right that no spy is sitting at a keyboard reading their emails or Facebook posts. That's clearly not possible, even if the will were there. But it's not necessary, since the data can be "read" by tireless programs that extract key information at an accelerating pace and diminishing cost thanks to Moore's Law.

People are untroubled by this because most of them can't imagine what today's top computers can do with their data, and think again in analogue terms - the spy sifting slowly through so much information as to be swamped. And that's quite understandable, since even computer experts struggle to keep up with the pace of development, and to appreciate the ramifications.

A post on the Google Search blog from last year may help to provide some sense of just how powerful today's systems are:

When you enter a single query in the Google search box, or just speak it to your phone, you set in motion as much computing as it took to send Neil Armstrong and eleven other astronauts to the moon. Not just the actual flights, but all the computing done throughout the planning and execution of the 11-year, 17 mission Apollo program. That’s how much computing has advanced.

Now add in the fact that three billion Google queries are entered each day, and that the NSA's computing capability is probably vastly greater than Google's, and you have some idea of the raw power available for the analysis of the "trivial" data gathered about all of us, and how that might lead to very non-trivial knowledge about our most intimate lives.

In terms of how much information can be held, a former NSA technical director, William Binney, estimates that one NSA data centre currently being built in Utah will be able to handle and process five zettabytes of data - that's five million million gigabytes. If you were to print out that information as paper documents, and store them in traditional filing cabinets, it would require around 42 million million cabinets occupying 17 million square kilometres of floor space.
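To put those units in context, here is a minimal back-of-the-envelope sketch in Python, using only the figures quoted above (decimal prefixes assumed; the per-cabinet number is simply derived from Binney's own illustration, not an independent estimate):

# Sanity check of the units quoted above (decimal prefixes assumed).
ZETTABYTE = 10 ** 21   # bytes
GIGABYTE = 10 ** 9     # bytes

holdings = 5 * ZETTABYTE
print(f"5 zettabytes = {holdings // GIGABYTE:,} gigabytes")  # 5,000,000,000,000 GB

# Implied size of each of Binney's hypothetical filing cabinets,
# derived from his own figures rather than measured independently.
cabinets = 42 * 10 ** 12
print(f"~{holdings / cabinets / 10**6:.0f} MB of printed text per cabinet")  # ~119 MB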

Neither computing power nor the vast holdings of personal data on their own are a direct threat to our privacy and freedom. But putting them together means that the NSA can not only find anything in those 42 million million virtual cabinets more or less instantly, but that it can cross-reference any word on any piece of paper in any cabinet - something that can't even be contemplated as an option for human operators, let alone attempted.

It is this unprecedented ability to consolidate all the data about us, along with the data of our family, friends and acquaintances, and their family, friends and acquaintances (and sometimes even the acquaintances of our acquaintances' acquaintances) that creates the depth of knowledge the NSA has at its disposal whenever it wants it. And while it is unlikely to call up that knowledge for most of us, it only takes a tiny anomalous event somewhere deep in the chain of acquaintance for a suspicion to propagate back through the links to taint all our innocent records, and to cause them to be added to the huge pile of data that will be cross-referenced and sifted and analysed in the search for significant patterns so deep that we are unlikely to be aware of them.

Given this understandable, if regrettable, incomprehension on the part of the public about the extraordinary power at the disposal of the NSA, and what it might be able to extract as a result, the key question then becomes: what can we do to bolster our privacy? Until a few weeks ago, most people working in this field would have said "encrypt everything". But the recent revelations that the NSA and GCHQ have succeeded in subverting just about every encryption system that is widely used online seem to destroy even that last hope.

(In tomorrow's instalment: the way forward.)

Resisting Surveillance on an Unprecedented Scale I

Netzpolitik.org is the leading site covering digital rights in German. It played a key role in helping to stop ACTA last year, and recently has been much occupied with the revelations about NSA spying, and its implications. As part of that, it has put together a book/ebook (in German) as a first attempt to explore the post-Snowden world we now inhabit. I've contributed a new essay, entitled "Resisting Surveillance on an Unprecedented Scale", which is my own attempt to sum up what happened, and to look forward to what our response should be. I'll be publishing it here, split up into three parts, over the next few days.


Despite being a journalist who has been writing about the Internet for 20 years, and a Briton who has lived under the unblinking eye of millions of CCTV cameras for nearly as long, I am nonetheless surprised by the revelations of Edward Snowden. I have always had a pretty cynical view of governments and their instruments of power such as the police and secret services; I have always tried to assume the worst when it comes to surveillance and the assaults on my privacy. But I never guessed that the US and UK governments, aided and abetted to varying degrees by other countries, could be conducting what amounts to total, global surveillance of the kind revealed by Snowden's leaked documents.

I don't think I'm alone in this. Even though some people are now claiming this level of surveillance was "obvious", and "well-known" within the industry, that's not my impression. Judging by the similarly shocked and outraged comments from many defenders of civil liberties and computer experts, particularly in the field of security, they, like me, never imagined that things were quite this bad. That raises an obvious question: how did it happen?

Related to that outrage in circles that concern themselves with these issues is something else that needs explaining: the widespread lack of outrage among ordinary citizens. To be sure, some countries are better than others in understanding the implications of what has been revealed to us by Snowden (and some are worse - the UK in particular). But given the magnitude and thoroughgoing nature of the spying that is being conducted on our online activities, the response around the world has been curiously muted. We need to understand why, otherwise the task of rolling back at least some of the excesses will be rendered even more difficult.

The final question that urgently requires thought is what can, in fact, be done? Since the level of public concern is relatively low, even in those countries that are traditionally sensitive about privacy issues - Germany, for example - what are the alternatives to stricter government controls, which seem unlikely to be forthcoming?

Although there was a Utopian naivety in the mid-1990s about what the Internet might bring about, it has been clear for a while that the Internet has its dark side, and could be used to make people less, not more, free. This has prompted work to move from a completely open network, with information sent unencrypted, to one where Web connections using the HTTPS technology shield private information from prying eyes. It's remarkable that it has only been in recent years that the pressure to move to HTTPS by default has grown strong.

That's perhaps a hint of how the current situation of total surveillance has arisen. Although many people knew that unencrypted data could be intercepted, there was a general feeling that it wouldn't be possible to find the interesting streams amongst the huge and growing volume flooding every second of the day through the series of digital tubes that make up the Internet.

But that overlooked one crucial factor: Moore's Law, and its equivalents for storage and connectivity. Crudely stated, this asserts that the cost of a given computational capability will halve every 18 months or so. Put another way, for a given expenditure, the available computing power doubles every year and a half. And it's important to remember that this is geometric growth: after ten years, Moore's Law predicts that computing power for a given cost increases roughly a hundredfold.
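To see where that hundredfold figure comes from, here is a minimal sketch in Python of the compounding, assuming only the 18-month doubling period stated above:

# Growth in computing power per unit cost, assuming it doubles every 18 months.
DOUBLING_PERIOD_YEARS = 1.5

def growth_factor(years):
    """Computing power available for a fixed cost after the given number of years."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

for years in (5, 10, 20):
    print(f"After {years} years: ~{growth_factor(years):,.0f}x")
# After 10 years the factor is ~102, i.e. roughly a hundredfold for the same spend.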

Now add in the fact that the secret services are one of the least constrained when it comes to spending money on the latest and fastest equipment, since the argument can always be made that the extra power will be vitally important in getting information that could save lives and so on. One of the first and most extraordinary revelations conveyed from Snowden by the Guardian gave an insight into how that extra and constantly increasing computing power is being applied, in what was called the Tempora programme:

By the summer of 2011, GCHQ had probes attached to more than 200 internet links, each carrying data at 10 gigabits a second. "This is a massive amount of data!" as one internal slideshow put it. That summer, it brought NSA analysts into the Bude trials. In the autumn of 2011, it launched Tempora as a mainstream programme, shared with the Americans.

The intercept probes on the transatlantic cables gave GCHQ access to its special source exploitation. Tempora allowed the agency to set up internet buffers so it could not simply watch the data live but also store it - for three days in the case of content and 30 days for metadata.

As that indicates, two years ago the UK's GCHQ was pulling in data at the rate of 2 terabits a second: by now it is certain to be far higher than that. Thanks to massive storage capabilities, GCHQ could hold the complete Internet flow for three days, and its metadata for 30 days.
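For a rough sense of scale, the sketch below (Python, using only the 200 links and 10 gigabits per second quoted from the Guardian above) works out what that intake implies; the three-day buffer total is my own extrapolation, not a figure from the report:

# Back-of-the-envelope arithmetic from the Guardian figures quoted above.
LINKS = 200                 # tapped internet links
GBIT_PER_LINK = 10          # gigabits per second per link

total_gbit_per_s = LINKS * GBIT_PER_LINK          # 2,000 Gbit/s, i.e. 2 Tbit/s
total_bytes_per_s = total_gbit_per_s * 10**9 / 8  # ~250 GB of data every second

# Extrapolation (not from the report): total content held in a three-day buffer.
three_days = 3 * 24 * 3600
buffer_bytes = total_bytes_per_s * three_days
print(f"Intake: {total_gbit_per_s / 1000:.0f} Tbit/s (~{total_bytes_per_s / 10**9:.0f} GB/s)")
print(f"Three-day buffer: ~{buffer_bytes / 10**15:.0f} PB")  # roughly 65 PB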

There is one very simple reason why GCHQ is doing this: because at some point it realised it could, not just practically, because of Moore's Law, but also legally. The UK legislation that oversees this activity - the Regulation of Investigatory Powers Act (RIPA) - was passed in 2000, and drawn up based on the experience of the late 1990s. It was meant to regulate one-off interception of individuals, and most of it is about carrying out surveillance of telephones and the postal system. In other words, it was designed for an analogue world. The scale of the digital surveillance now taking place is so far beyond what was possible ten years ago that RIPA's framing of the law - never mind its powers - is obsolete, and GCHQ is essentially able to operate without either legal or technical constraints.

(In tomorrow's instalment: why isn't the public up in arms over this?)

26 January 2014

Interview: Linus Torvalds - "I don't read code any more"


(This was originally published in The H Open in November 2012.)

I was lucky enough to interview Linus quite early in the history of Linux – back in 1996, when he was still living in Helsinki (you can read the fruits of that meeting in this old Wired feature.) It was at an important moment for him, both personally – his first child was born at this time – and in terms of his career. He was about to join the chip design company Transmeta, a move that didn't really work out, but led to him relocating to America, where he remains today.

That makes his trips to Europe somewhat rare, and I took advantage of the fact that he was speaking at the recent LinuxCon Europe 2012 in Barcelona to interview him again, reviewing the key moments for the Linux kernel and its community since we last spoke.

Glyn Moody: Looking back over the last decade and half, what do you see as the key events in the development of the kernel?

Linus Torvalds: One big thing for me is all the scalability work that we did. We've gone from being OK on 2 or 4 CPUs to the point where basically you can throw 4000 [at it] – you won't scale perfectly, but most of the time it's not the kernel that's the bottleneck. If your workload is somewhat sane we actually scale really well. And that took a lot of effort.

SGI in particular worked a lot on scaling past a few hundred CPUs. Their initial patches could just not be merged. There was no way we could take the work they did and use it on a regular PC because they added all this infrastructure to work on thousands of CPUs. That was way too expensive to do when you had only a couple.

I was afraid for the longest time that we would have the high-performance kernel for the big machines, and the source code would be separate from the normal kernel. People worked a lot on just making sure that we had a clean code base where you can say at compile time that, hey, I want the kernel that works for 4000 CPUs, and it generates the code for that, and at the same time, if you say no, I want the kernel that works on 2 CPUs, the same source code compiles.

It was something that in retrospect is really important because it actually made the source code much better. All the effort that SGI and others spent on unifying the source code, actually a lot of it was clean-up – this doesn't work for a hundred CPUs, so we need to clean it up so that it works. And it actually made the kernel more maintainable. Now on the desktop 8 and 16 CPUs are almost common; it used to be that we had trouble scaling to an 8, now it's like child's play.

But there's been other things too. We spent years again at the other end, where the phone people were so power conscious that they had ugly hacks, especially on the ARM side, to try to save power. We spent years doing power management in general, doing the kind of same thing - instead of having these specialised power management hacks for ARM, and the few devices that cellphone people cared about, we tried to make it across the kernel. And that took like five years to get our power management working, because it's across the whole spectrum.

Quite often when you add one device, that doesn't impact any of the rest of the kernel, but power management was one of those things that impacts all the thousands of device drivers that we have. It impacts core functionality, like shutting down CPUs, it impacts schedulers, it impacts the VM, it impacts everything.

It not only affects everything, it has the potential to break everything which makes it very painful. We spent so much time just taking two steps forward, one step back because we made an improvement that was a clear improvement, but it broke machines. And so we had to take the one step back just to fix the machines that we broke.

Realistically, every single release, most of it is just driver work. Which is kind of boring in the sense there is nothing fundamentally interesting in a driver, it's just support for yet another chipset or something, and at the same time that's kind of the bread and butter of the kernel. More than half of the kernel is just drivers, and so all the big exciting smart things we do, in the end it pales when compared to all the work we just do to support new hardware.

Glyn Moody: What major architecture changes have there been to support new hardware?

Linus Torvalds: The USB stack has basically been re-written a couple of times just because some new use-case comes up and you realise that hey, the original USB stack just never took that into account, and it just doesn't work. So USB 3 needs new host controller support and it turns out it's different enough that you want to change the core stack so that it can work across different versions. And it's not just USB, it's PCI, and PCI becomes PCIe, and hotplug comes in.

That's another thing that's a huge difference between traditional Linux and traditional Unix. You have a [Unix] workstation and you boot it up, and it doesn't change afterwards - you don't add devices. Now people are taking adding a USB device for granted, but realistically that did not use to be the case. That whole being able to hotplug devices, we've had all these fundamental infrastructure changes that we've had to keep up with.

Glyn Moody: What about kernel community – how has that evolved?

Linus Torvalds: It used to be way flatter. I don't know when the change happened, but it used to be me and maybe 50 developers - it was not a deep hierarchy of people. These days, patches that reach me sometimes go through four levels of people. We do releases every three months; in every release we have like 1000 people involved. And 500 of the 1000 people basically send in a single line change for something really trivial – that's how some people work, and some of them never do anything else, and that's fine. But when you have a thousand people involved, especially when some of them are just these drive-by shooting people, you can't have me just taking patches from everybody individually. I wouldn't have time to interact with people.

Some people just specialise in drivers, they have other people who they know who specialise in that particular driver area, and they interact with the people who actually write the individual drivers or send patches. By the time I see the patch, it's gone through these layers, it's seldom four, but it's quite often two people in between.

Glyn Moody: So what impact does that have on your role?

Linus Torvalds: Well, the big thing is I don't read code any more. When a patch has already gone through two people, at that point, I can either look at the patch and say: no, all your work was wasted, and micromanage at that level – and quite frankly I don't want to do that, and I don't have the capacity to do that.

So most of the time, when it comes to the major subsystem maintainers, I trust them because I've been working with them for 5, 10, 15 years, so I don't even look at the code. They tell me these are the changes and they give me a very high-level overview. Depending on the person, it might be five lines of text saying this is roughly what has changed, and then they give me a diffstat, which just says 15 lines have changed in that file, and 25 lines have changed in that file and diffstat might be a few hundred lines because there's a few hundred files that have changed. But I don't even see the code itself, I just say: OK, the changes happen in these files, and by the way, I trust you to change those files, so that's fine. And then I just say: I'll take it.

Glyn Moody: So what's your role now?

Linus Torvalds: Largely I'm managing people. Not in the logistical sense – I obviously don't pay anybody, but I also don't have to worry about them having access to hardware and stuff like that. Largely what happens is I get involved when people start arguing and there's friction between people, or when bugs happen.

Bugs happen all the time, but quite often people don't know who to send the bug report to. So they will send the bug report to the Linux Kernel mailing list – nobody really is able to read it much. After people don't figure it out on the kernel mailing list, they often start bombarding me, saying: hey, this machine doesn't work for me any more. And since I didn't even read the code in the first place, but I know who is in charge, I end up being a connection point for bug reports and for the actual change requests. That's all I do, day in and day out, is I read email. And that's fine, I enjoy doing it, but it's very different from what I did.

Glyn Moody: So does that mean there might be scope for you to write another tool like Git, but for managing people, not code?

Linus Torvalds: I don't think we will. There might be some tooling, but realistically most of the things I do tend to be about human interaction. So we do have tools to figure out who's in charge. We do have tools to say: hey, we know the problem happens in this area of the code, so who touched that code last, and who's the maintainer of that subsystem, just because there are so many people involved that trying to keep track of them any other way than having some automation just doesn't work. But at the same time most of the work is interaction, and different people work in different ways, so having too much automation is actually painful for people.

We're doing really well. The kind of pain points we had ten years ago just don't exist any more. And that's largely because we used to be this flat hierarchy, and we just fixed our tools, we fixed our work flows. And it's not just me, it's across the whole kernel there's no single person who's in the way of any particular workflow.

I get a fair amount of email, but I don't even get overwhelmed by email. I love reading email on my cellphone when I travel, for example. Even during breaks, I'll read email on my cellphone because 90% of them I can just read for my information that I can archive. I don't need to do anything, I was cc'd because there was some issue going on, I need to be aware of it, but I don't need to do anything about that. So I can do 90% of my work while travelling, even without having a computer. In the evening, when I go back to the hotel room, I'll go through [the other 10%].

Glyn Moody: 16 years ago, you said you were mostly driven by what the outside world was asking for; given the huge interest in mobiles and tablets, what has been their impact on kernel development?

Linus Torvalds: In the tablet space, the biggest issue tends to be power management, largely because they're bigger than phones. They have bigger batteries, but on the other hand people expect them to have longer battery life and they also have bigger displays, which use more battery. So on the kernel side, a tablet from the hardware perspective and a usage perspective is largely the same thing as a phone, and that's something we know how to do, largely because of Android.

The user interface side of a tablet ends up being where the pain points have been – but that's far enough removed from the kernel. On a phone, the browser is not a full browser - they used to have the mobile browsers; on the tablets, people really expect to have a full browser – you have to be able to click that small link thing. So most of the tablet issues have been in the user space. We did have a lot of issues in the kernel over the phones, but tablets kind of we got for free.

Glyn Moody: What about cloud computing: what impact has that had on the kernel?

Linus Torvalds: The biggest impact has been that even on the server side, but especially when it comes to cloud computing, people have become much more aware [of power consumption]. It used to be that all the power work originally happened for embedded people and cellphones, and just in the last three-four years it's the server people have become very power aware. Because they have lots of them together; quite often they have high peak usage. If you look at someone like Amazon, their peak usage is orders of magnitude higher than their regular idle usage. For example, just the selling side of Amazon, late November, December, the one month before Christmas, they do as much business as they do the rest of the year. The point is they have to scale all their hardware infrastructure for the peak usage that most of the rest of the year they only use a tenth of that capacity. So being able to not use power all the time [is important] because it turns out electricity is a big cost of these big server providers.

Glyn Moody: Do Amazon people get involved directly with kernel work?

Linus Torvalds: Amazon is not the greatest example, Google is probably better because they actually have a lot of kernel engineers working for them. Most of the time the work gets done by Google themselves. I think Amazon has had a more standard components thing. Actually, they've changed the way they've built hardware - they now have their own hardware reference design. They used to buy hardware from HP and Dell, but it turns out that when you buy 10,000 machines at some point it's just easier to design the machines yourself, and to go directly to the original equipment manufacturers and say: I want this machine, like this. But they only started doing that fairly recently.

I don't know whether [Amazon] is behind the curve, or whether Google is just more technology oriented. Amazon has worked more on the user space, and they've used a fairly standard kernel. Google has worked more on the kernel side, they've done their own file systems. They used to do their own drivers for their hard discs because they had some special requirements.

Glyn Moody: How useful has Google's work on the kernel been for you?

Linus Torvalds: For a few years - this is five or ten years ago - Google used to be this black hole. They would hire kernel engineers and they would completely disappear from the face of the earth. They would work inside Google, and nobody would ever hear from them again, because they'd do this Google-specific stuff, and Google didn't really feed back much.

That has improved enormously, probably because Google stayed a long time on our previous 2.4 releases. They stayed on that for years, because they had done so many internal modifications for their specialised hardware for everything, that just upgrading their kernel was a big issue for them. And partly because of the whole Android project they actually wanted to be much more active upstream.

Now they're way more active, people don't disappear there any more. It turns out the kernel got better, to the point where a lot of their issues just became details instead of being huge gaping holes. They were like, OK, we can actually use the standard kernel and then we do these small tweaks on top instead of doing these big surgeries to just make it work on their infrastructure.

Glyn Moody: Finally, you say that you spend most of your time answering email: as someone who has always seemed a quintessential hacker, does that worry you?

Linus Torvalds: I wouldn't say that worries me. I end up not doing as much programming as sometimes I'd like. On the other hand, it's like some kinds of programming I don't want to do any more. When I was twenty I liked doing device drivers. If I never have to do a single device driver in my life again, I will be happy. Some kind of headaches I can do without.

I really enjoyed doing Git, it was so much fun. When I started the whole design, started doing programming in user space, which I had not done for 15 years, it was like, wow, this is so easy. I don't need to worry about all these things, I have infinite stack, malloc just works. But in the kernel space, you have to worry about locking, you have to worry about security, you have to worry about the hardware. Doing Git, that was such a relief. But it got boring.

The other project I still am involved in is the dive computer thing. We had a break-in on the kernel.org site. It was really painful for the maintainers, and the FBI got involved just figuring out what the hell happened. For two months we had almost no kernel development – well, people were still doing kernel development, but the main site where everybody got together was down, and a lot of the core kernel developers spent a lot of time checking that nobody had actually broken into their machines. People got a bit paranoid.

So for a couple of months my main job, which was to integrate work from other people, basically went away, because our main integration site went away. And I did my divelog software, because I got bored, and that was fun. So I still do end up doing programming, but I always come back to the kernel in the end.

23 November 2013

Will CyanogenMod Get the Business Blues?

Last week, I wrote an article pointing out that the NSA's assault on cryptography, bad as it was, had a silver lining for open source, which was less vulnerable to being subverted than closed-source applications produced by companies. However, that raises the question: what about the mobile world? 

On Open Enterprise blog.

UK Gov: Smaller, Better, Faster, Stronger...Opener.

One of the recurrent themes on this blog has been the UK government's use - or failure to use - open source and open data. To be fair, on the open data side, things are going pretty well. Open source was previously conspicuous by its absence, and that is finally changing, albeit rather slower than many of us would wish.

On Open Enterprise blog.

How Network Neutrality Promotes Innovation

As I've pointed out many times in previous posts, one of the key benefits of mandating network neutrality is that it promotes innovation by creating a level playing field. Such statements are all very well, but where's the evidence? An important new study entitled "The innovation-enhancing effects of network neutrality" [.pdf], commissioned by the Dutch Ministry of Economic Affairs from the independent SEO Economic Research unit provides perhaps the best survey and analysis of why indeed network neutrality is so beneficial:

On Open Enterprise blog.

A New Chapter for Open Source?

Back in April, I wrote about an interesting new venture from the Linux Foundation called the OpenDaylight Project. As I pointed out then, what made this significant was that it showed how the Linux Foundation was beginning to move beyond its historical origins of supporting the Linux ecosystem, towards the broader application of the important lessons it has learnt about open source collaboration in the process. Following that step, we now have this:

On Open Enterprise blog.

Open Source in the UK: Sharing the Fire

As even a cursory glance at articles on Open Enterprise over the last few years will indicate, open source is a massive success in practically every market. Except, unfortunately, on the desktop (famously) and, more generally, for consumers. And as Aral Balkan points out in an important post from a few weeks ago, that's a real problem:

On Open Enterprise blog.

Is Apache the Most Important Open Source Project?

Back in the mists of time - I'm talking about 2000 here - when free software was still viewed by many as a rather exotic idea, I published a book detailing its history up to that point. Naturally, I wrote about Apache (the Web server, not the foundation) there, since even in those early days it was already the sectoral leader. As I pointed out:

On Open Enterprise blog.

Is This Finally the Year of Open Source...in China?

One of the long-running jokes in the free software world is that this year will finally be the year of open source on the desktop - just like it was last year, and the year before that. Thanks to the astounding rise of Android, people now realise that the desktop is last decade's platform, and that mobile - smartphones and tablets - are the future. But I'd argue that there is something even more important than these, and that is the widespread deployment of open source in China.

On Open Enterprise blog.

27 October 2013

Could Open Source Make GMOs More Palatable?

As a recent DailyDirt noted, opinions on the safety of genetically modified organisms (GMOs) are sharply divided. But that heated argument tends to obscure another problem that Techdirt has often written about in other fields: the use of patent monopolies to exert control, in this case over the food chain. By inserting DNA sequences into plants and animals and obtaining patents, the biotech industry is granted surprisingly wide-ranging powers over how its products are used, as the Bowman case made clear. That's potentially problematic when those products are the foods that keep us alive. 

On Techdirt.

19 September 2013

Another Reason Why Open Source Wins: Fairness

I've written a number of posts looking at less-familiar advantages of open source over closed source, and here's another one. Proprietary systems can't be forked, which means that it's not possible to change the underlying ethos, for example by tweaking the software or using code on a different platform. But you can with open source, as this interesting example shows.

On Open Enterprise blog.

18 September 2013

Why We Need Open Source: Three Cautionary Tales

Open Enterprise mostly writes about "obvious" applications of open source - situations where money can be saved, or control regained, by shifting from proprietary to open code. That battle is more or less won: free software is widely recognised as inherently superior in practically all situations, as its rapid uptake across many markets demonstrates. But there are also some circumstances where it may not be so obvious that open source is the solution, because it's not always clear what the problem is.

On Open Enterprise blog.

Happy 10th Anniversary, Groklaw

One of the amazing things about free software is how it has managed to succeed against all the odds - and against the combined might of some of the world's biggest and most wealthy companies. That shows two things, I think: the power of a simple idea like open collaboration, and how individuals, weak on their own, collectively can achieve miracles.

On Open Enterprise blog.