
17 June 2009

Open Source in the Enterprise: Safely Boring

Yesterday I popped into part of the London Open Source Forum. This was a laudable effort organised by Red Hat in conjunction with some of its partners to corrupt young and innocent minds – well, senior managers, at least – and convince them about the immanent wonderfulness of open source. To that end, they wheeled out some of the big names in the enterprise free software world like Matt Asay, Simon Phipps and Jan Wildeboer....

On Open Enterprise blog.

09 May 2009

Should Software Developers Be Liable for their Code?

Should Microsoft pay for the billions of dollars of damage that flaws in its software have caused around the world? It might have to, if a new European Commission consumer protection proposal becomes law. Although that sounds an appealing prospect, one knock-on consequence could be that open source coders would also be liable for any damage that errors in their software caused....

On Linux Journal.

24 December 2008

Alan Cox and the End of an Era

In the beginning, free software was an activity conducted on the margins - using spare time on a university's computers, or the result of lonely bedroom hacking. One of the key moments in the evolution of free software was when hackers began to get jobs - often quite remunerative jobs - with one of the new open source companies that sprang up in the late 1990s. For more or less the first time, coders could make a good salary doing what they loved, and businesses could be successful paying them to write code that would be given away.

On Open Enterprise blog.

13 September 2007

Westminster eForum: Sermon of the Day

No posting yesterday, since I was up at the Westminster eForum talking about open source (now, there's a surprise), along with a few core open-type people like Mark Taylor, Alan Cox and Becky Hogge. However, sadly few Westminster-type people were there whose ear could be bent; mostly it was just preaching to the choir. Here's my sermon:

I have had the privilege of writing about free software and open source for over 12 years now. I say privilege for at least two reasons.

First, the people I have met and interviewed in this world have been pretty extraordinary - and certainly very different in many respects from those I have encountered elsewhere in computing. In particular, they are driven by something that can only be called a passion for writing great programs, and a deeply-held belief that these should be made available as widely as possible.

The second reason my time covering this area has been such a privilege is that the ideas underpinning open source have turned out to be deep and far-reaching. This wasn't really clear a decade ago - certainly not to me - when the idea of writing software collaboratively across the Internet, and then giving it away, was so radical that many people thought it would either fizzle out completely, or remain a kind of weird, beard-and-sandals niche.

But today, open source has entered the mainstream: most of the Internet runs on free software; companies like Google depend on it, and more and more governments are deploying it - well, outside the UK, at least. And as open source has become almost commonplace in certain sectors of computing, it has also become clear that this is not just about software. It is about a profound shift that is beginning to make its presence felt elsewhere.

For example, most people know of Wikipedia, which is created collaboratively across the Internet, and made freely available to all - in other words, an open source encyclopaedia. The fact that there are now over two million entries - and that's just the English-language version - shows just what that approach can achieve outside software.

Most people have heard of the Human Genome Project, but not many realise that the reason it succeeded - and prevented US companies from patenting huge swathes of our DNA - was that it was conducted collaboratively, across the Internet, and that its results were placed in the public domain immediately, as a matter of policy. In other words, it applied the open source methodology to genomics.

Less well-known, perhaps, is open access. Here the idea is that the scientific and academic research funded by the taxpayer should be freely available online for anyone to read, and for other scientists to use and to build on. Not an unreasonable wish, you would have thought, and yet one that is being fought fiercely by certain large - and highly-profitable - scientific publishers. The similarity of the idea to software collaboration is evident, and indeed the open access pioneers were directly inspired by open source.

There are other examples, but my allotted time is running out, and we can perhaps explore this area in the question and answer session - or indeed anytime afterwards (you can Google me for contact details). The main point I'd like to leave with you is this: that open source is not about computers, it's about people. It's about how we create, how we share, and how we live and work together in the age of the Internet.

So, far from being some minor technical issue, of interest only to a few anoraks, open source and the larger ideas behind it are, in fact, absolutely central to the way society, democracy and government will function in the 21st century. What we are discussing today is just the beginning.

15 August 2007

O'Reilly? I Think Not

Once again, Matt gets it, and Tim doesn't:

"I will predict that virtually every open source company (including Red Hat) will eventually be acquired by a big proprietary software company."

Thus spake Tim O'Reilly in the comments to one of his other posts. Tim believes that open source, at least as defined by open-source licensing, has a short shelf-life that will be consumed by Web 2.0 (i.e., web companies hijacking open-source software to deliver proprietary web services) or by traditional proprietary software vendors.

In other words, why don't I just give up, sell out, and go home? I guess I would if I thought that Tim were right. He's not, not in this instance.

There's something more fundamental going on here than "Proprietary software meets open source. Proprietary software decides to commandeer open source. Open source proves to be a nice lapdog to proprietary software." I actually believe that open source, not proprietary software, is the natural state of the industry, and that Tim's proprietary world is anomalous.

I particularly liked this distinction between the service aspects of software, and the attempts to view it as an instantiation of various intellectual monopolies:

Suddenly, the license matters more, not less, because it is the license that ensures the conversation focuses on the right topic - service - rather than on inane jabberings that only vendors care about. You know, like intellectual property.

And there's another crucial reason why proprietary software companies can't just open their chequebooks and acquire those pesky open source upstarts. Unlike companies who seem to think that they are co-extensive with the intellectual monopolies they foist on customers, open source outfits know they are defined by the high-quality people - both employees and those out in the community - that code for the customers.

For example, one reason people take out subscriptions to Red Hat's offerings is that they get to stand in line for the use of Alan Cox's brain. Imagine, now, that proprietary company X "buys" Red Hat: well, what exactly does it buy? Certainly not Alan Cox's brain, which will leave with him (one hopes) when he moves immediately to another open source company (or just hacks away in Wales for pleasure). Sure, the purchaser will have all kinds of impressive legal documents spelling out what it "owns" - but precious little to offer customers anymore, who are likely to follow wherever Alan Cox and his ilk go.

10 August 2007

The Liability of Closed Source Software

It's a pity that reports from the House of Lords Science and Technology Committee are so long, because they contain buckets of good stuff - not least because they draw on top experts. A case in point is the most recent, looking at personal Internet security, which includes luminaries such as Bruce Schneier and Alan Cox.

The recommendations are a bit of a mixed bag, but one thing that caught my eye was in the context of making suppliers liable for their software. As Bruce puts it:

“We are paying, as individuals, as corporations, for bad security of products”—by which payment he meant not only the cost of losing data, but the costs of additional security products such as firewalls, anti-virus software and so on, which have to be purchased because of the likely insecurity of the original product. For the vendors, he said, software insecurity was an “externality … the cost is borne by us users.” Only if liability were to be placed upon vendors would they have “a bigger impetus to fix their products”

Of course, product liability might be a bit problematic for free software, but again Schneier has a solution:

Any imposition of liability upon vendors would also have to take account of the diversity of the market for software, in particular of the importance of the open source community. As open source software is both supplied free to customers, and can be analysed and tested for flaws by the entire IT community, it is both difficult and, arguably, inappropriate, to establish contractual obligations or to identify a single “vendor”. Bruce Schneier drew an analogy with “Good Samaritan” laws, which, in the United States and Canada, protect those attempting to help people who are sick or injured from possible litigation. On the other hand, he saw no reason why companies which took open source software, aggregated it and sold it along with support packages—he gave the example of Red Hat, which markets a version of the open source Linux operating system—should not be liable like other vendors.

19 January 2007

Alan Cox Stands up for Closed Source

These aren't words you'd expect to issue from the mouth of one of the most senior Linux hackers:

Cox said that closed-source companies could not be held liable for their code because of the effect this would have on third-party vendor relationships: "[Code] should not be the [legal] responsibility of software vendors, because this would lead to a combinatorial explosion with third-party vendors. When you add third-party applications, the software interaction becomes complex. Rational behaviour for software vendors would be to forbid the installation of any third-party software." This would not be feasible, as forbidding the installation of third-party software would contravene anti-competition legislation, he noted.

But, of course, he's absolutely right - which emphasises how lucky we are to have someone as sane as Alan representing the free software community when too many self-styled supporters present quite a different image.

23 October 2006

GPLv3: What Linus, Alan, Greg, Andrew and Dave Said

Few subjects in the world of free software have provoked as much discussion as the new GNU GPLv3 licence. Mostly it's outsiders (like me) sounding off about this, but what do the people involved really think?

I decided to ask them, and write up the result for the Guardian. I was expecting a couple of lines back to my emailed questions if I was lucky, but badly underestimated hacker generosity. Linus and his mates sent me back long and typically thoughtful replies, while RMS wanted to talk about it at length.

Since I was only able to use a tiny fraction of this material in the Guardian article that resulted, I thought it might be a useful contribution to the GPLv3 debate to post it here (with the permission of those concerned).

For length reasons, I've split it up into two postings. Below are the replies of the kernel coders, which I received on 3 October, 2006 (placed in the order in which their authors appear in the recent GPLv3 poll). The interview with RMS can be found above.

Linus Torvalds

I don't think there will necessarily be a lot of _practical_ fallout from it, so in that sense it probably doesn't matter all that much. It's not like we haven't had license "discussions" before (the whole BSD vs GPL flame-war seemed to go on for years back in the early nineties). And in many ways, it's not like the actual split between the "Open Source" and the "Free Software" mentality is in any way new, or even brought about by the GPLv3 license.

So while I think there is still a (admittedly pretty remote) chance of some kind of agreement, I don't think that it's a disaster if we end up with a GPLv2 and a new and incompatible GPLv3. It's not like we haven't had licenses before either, and most of them haven't been compatible.

In some ways, I can even hope that it clears the air for all the stupid tensions to just admit that there are differences of opinion, and that the FSF might even just stop using the name "GNU/Linux", finally admitting that Linux never was a GNU project in the first place.

The real downside, I suspect, is just the confusion by yet another incompatible license - and one that shares the same name (licenses such as OSL and GPL were both open source licenses and they were incompatible with each other, but at least they had clear differentiation in their names).

And there's bound to be some productivity loss from all the inevitable arguments, although in all honesty, it's not like open source developers don't spend a lot of time arguing _anyway_, so maybe that won't be all that big of a factor - just a shift of area rather than any actual new lost time ;)

One of the reasons the thing gets so heated is that people (very much me included) feel very strongly about their licenses. It's more than just a legal paper, it's deeply associated with what people have been working on for in some cases decades. So logically I don't think the disagreement really matters a whole lot, but a lot of it is about being very personally attached to some license choice.

Alan Cox

On Tue, 2006-10-03 at 13:57 +0100, glyn moody wrote:
> Since it seems likely that the kernel will remain under v2, while the
> rest of GNU goes for v3, I was wondering whether you think this is
> going to cause you and others practical problems in your work on the
> kernel. What about companies and end-users of GNU/Linux: will there


There is no such thing as GNU/Linux. For an article like this it's really important to understand and clarify that (and from the US view also as a trademark matter).

I mean there is no abstract entity even that is properly called "GNU/Linux". It's a bit of spin-doctoring by the FSF to try and link themselves to Linux. Normally it's just one of those things they do and people sigh about, but when you look at the licensing debate the distinction is vital. (It's also increasingly true that FSF-owned code is a minority part of Linux.)

Linux is not and never has been an FSF project. I would say the majority of the kernel developers don't buy the FSF political agenda. Linus likewise chose the license for the pragmatic reason it was a good license for the OS, not because he supported the GNU manifesto.

Thus this isn't about the Linux people splitting from the FSF; it's a separate project that happens to have been consulted as to whether it would like to use a new, allegedly better, variant of the license it chose.

Linux does use FSF tools but that doesn't make it a GNU project any more than this article will be an IBM project because it was typed on a PC, or a BT project because it used an ADSL line.

The Linux kernel being GPLv2 isn't a problem we can see for the future. It is a work distinct from the applications that run on it, just as the Windows kernel is distinct from Windows applications. The more awkward corner cases will be LGPL and similar licenses where you want the benefits and flexibility. The FSF have indicated they understand that and will ensure it works out. The licenses are about having barriers to abuse, not barriers to use.

> be negative consequences for them, or do you think that life will
> just go on as before?


I'm not sure what will happen with the rest of the GPL licensed software world. It really is too early to say because the license is a draft at this point and various areas around patents and optional clauses remain open to correction and improvement.

Most GPL licensed code is not controlled by the FSF and probably has too many contributors to relicense. Stuff that is new or has a few owners might change license if the new license is good. However, given that most of the work on the FSF-owned projects is done by non-FSF people, then if the license is bad I imagine all the developers will continue the GPLv2 branch and the FSF will be left out in the cold. The FSF know this too and that's why it takes time to build a new license and consensus.

It may well be the new license is mostly used with new code.

> What's the main problem you have with GPLv3?

For the kernel there are a few; the big one that is hard to fix is the DRM clause. Right now the GPLv2 covers things like DRM keys in generic language and it means the law can interpret that sanely. It's vague but flexible, which lawyers don't like of course. There isn't any case law, but out-of-court settlements support the fact that this is enforceable.

The GPLv3 variant is much stronger and it appears to cover things like keys to rented devices where the DRM logic is less clear.

The big one though for the kernel is not a legal matter or even a specifically GPLv3 matter. Many people contributed to the kernel under a set of understood terms. Not only would all those people have to agree to a change in those terms but those terms changing would prevent some of the existing users from continuing to use it in the manner they do now.

You can't really make an agreement like that and then change the rules on people who've contributed time, money and code to the Linux project. I support Linus' assertion that legal issues aside he doesn't have the moral right to change the rules this way.

Greg Kroah-Hartman

> My question concerns the timing of the recent white paper: why was it
> released now, and not at the beginning of the GPLv3 consultation process
> when it might have been able to influence things?


The process is not over, and we still hope to influence things. We would not have written that letter otherwise. The main reason it was not done earlier is that we just did not think it was going to be a problem, as the kernel was not going to change licenses. But the more we realized that this was going to be a problem beyond just the kernel, and would affect the whole community, the more we felt that we should at least voice our opinions.

Also, please note that the DRM issues have changed over time from being very broad (which was at least admirable), to being explicitly targeted at only the Linux kernel. Now the license is worded to try to stop the "tivoization" issue.

This is where a bootloader or BIOS determines whether the crypto signature of the kernel is acceptable before it decides to run it. This means that only "approved" kernels that come from the company will run properly on the hardware.

Now this kind of restriction pretty much _only_ affects the kernel, not any other type of program. This is because only if you can control the kernel can you ensure that the system is "secure".
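
To make the mechanism Greg describes a little more concrete, here is a minimal illustrative sketch of the kind of check a locked-down boot path performs before handing control to a kernel. This is not Tivo's actual code: the function name, the choice of RSA with SHA-256, and the use of Python's cryptography package are my own assumptions for illustration.

```python
# Illustrative sketch only: a "tivoized" boot path refuses to run any kernel
# image that is not signed with the vendor's private key.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding


def vendor_kernel_accepted(kernel_image: bytes, signature: bytes,
                           vendor_pubkey_pem: bytes) -> bool:
    """Return True only if the image carries a valid vendor signature."""
    public_key = serialization.load_pem_public_key(vendor_pubkey_pem)
    try:
        # RSA PKCS#1 v1.5 over SHA-256 is just one plausible signing scheme.
        public_key.verify(signature, kernel_image,
                          padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False


# A kernel you rebuild yourself fails this check, because you cannot produce
# a signature with the vendor's private key, however open the source is.
```

Because only the vendor holds the private signing key, publishing the GPL source changes nothing in practice: no kernel the user compiles will ever pass the check.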

So it seems that the FSF is only targeting the Tivo issue, even though we kernel developers have explicitly stated in public that it is acceptable to use _our_ code in this manner. So they are now trying to tell another group (us) what we should do to our code.

As the FSF has made no contribution to the Linux kernel, and has nothing to do with it in general, we kernel developers are now a bit upset that someone else is trying to tell us that something we explicitly stated was an acceptable use of our code is suddenly bad and wrong.

> Given that the FSF is unlikely to throw away all the work it has done, or
> even modify it substantially/substantively, do you have any thoughts on
> what's going to happen?


I really have no idea, but I would hope that things change for the better. We are already hearing rumors from the people on the different GPLv3 committees that our statement has had an effect, but we will not know for sure until the next draft comes out.

Andrew Morton

Well gee. We're programmers and we spend our time programming, not swanning around at meetings talking about legal matters and playing politics. We find things like licensing to be rather a distraction, and dull. So most people largely ignored it all.

It was only later in the process when the thing started to take shape, when we saw where it was headed and when we began to hear the concerns of various affected parties that there was sufficient motivation to get involved.

In fact this points at a broad problem with the existing process: I'm sure that a large majority of the people who actually write this code haven't made their opinions felt to the FSF. Yet the FSF presumes to speak for them, and proposes to use their work as ammunition in the FSF's campaigns.

And why haven't these programmers made their opinions known? Some are busy. Many work for overlawyered companies and are afraid that they might be seen to be speaking for their companies. Some don't speak English very well. Almost all of them find it to be rather dull and a distraction.

Dave Miller

For the kernel I'm pretty sure things will go on as they have before.

The problems are most likely for the projects under the GNU Project umbrella. The copyrights to those projects, such as GCC, Binutils, etc., are all assigned to the GNU Project. So the FSF could, and almost certainly will, make all of those projects use the GPL v3.

As an aside, I will note that originally the FSF used to say that they wanted copyright assigned to them "to make it easier to enforce the GPL in court for software projects under the GNU Project umbrella." But as is clear today, it's also a power thing, in that having all the copyrights assigned to them allows the FSF to choose the licensing of the code as they see fit since they are the copyright holder of the complete work.

At the point of a relicense to GPL v3 for these GNU Project source trees one of two things could happen. Either the developers are OK with this, even if to simply "grin and bear it" and things go on under GPL v3. Or, the developers are unhappy with this, and fork off a GPL v2 copy of the tree and do development there.

In the end, even though they've assigned their copyrights to the FSF, the developers do control the ultimate licensing of these GNU projects. If they don't like GPL v3 and work on the GPL v2 fork instead, the FSF is at a loss because, while they can mandate whatever they like, such mandates are useless if the developers don't want to contribute to the GPL v3 variant.

So being the ones who do the development work is actually a kind of power which permeates through all of the politics. If the political folks do something stupid, the developers can just take their talent and efforts elsewhere.

I'm more than familiar with this process, since I was part of the group that forked the GCC compiler project many years ago because the majority of the GCC developers found the head maintainer (Richard Kenner) impossible to work with. Although he was quite upset about it, there wasn't much that Richard Stallman and the FSF could do about it. In the end the fork became the "real GCC" under GNU Project umbrella once more.

So the opinion of the developers matters a lot, especially when it comes to licensing. It could get messy if a lot of these projects fork, but the GPL v3 isn't a done deal yet, so the FSF still has time to fix things up and make it more palatable to people.

> Alone among those polled for their views on the v2 and v3 you choose
>
> 0 I don't really care at all
>
> why is this when everybody else seems to hold such extreme views on the
> subject? Do you think they're getting worked up over nothing?


First, I think the poll was pretty useless.

The poll asked what people think of the GPL v3 draft, which is by definition in draft state and therefore not ready for final consumption. Of course the GPL v3 still needs some fixing. So asking about actually using it in a major software project right now is totally pointless. What would have been more interesting would have been to ask what the developers think about the "core issues" rather than the specific implementation of those issues in the current GPL v3 draft.

For example, polling on what the kernel developers thought about "keying of the Linux kernel" in the way that Tivo does would have been much more interesting. In my opinion, I believe you would have seen about an even split down the middle on this one. But even the people who are against keying think that the DRM language in the GPL v3 meant to combat this is not done correctly.

Several kernel developers believe that GPL v2 already has enough language to make restrictions such as keying be not allowed.

Personally, I'm against keying and I'm quite unhappy with what Tivo did with the Linux kernel. This kind of keying sets a very bad precedent. For example, in the future vendors could get away with some questionable things using keying. Say a vendor sells a piece of hardware, and provides the GPL source to the kernel drivers of the new things in that piece of hardware. Then, they only allow kernel binaries signed with a special key to load. This makes the publishing of their drivers effectively useless. The vendor still controls everything and nobody gains from the code they've submitted. Nobody can recompile a kernel with their changes and actually test it on their hardware, since they have no way to produce a signed kernel that the device will actually allow to boot. So the value of this "contribution" is absolutely zero. This is, in my opinion, totally against the spirit of the GPL v2.

I would have been perfectly fine with Tivo using another OS for their product. All the world does not have to be Linux, and if Linux's license doesn't suit someone, they are free to not use it.

In most examples I've ever been shown where this kind of lockdown is supposedly "legitimate", the owner of the device is effectively making this lockdown decision. For example I'm OK with electronic voting machines used in real elections having their software locked down. The government owns those machines, and is well within their rights to lock down that hardware in order to ensure a proper and fair election to the public by preventing software tampering.

But if I purchase an electronic voting machine of my own, and it uses the Linux kernel, I very much want to tinker with it, build my own kernels, and try to poke holes in the device. How else could we validate that electronic voting machines are safe if the public has no way to test these claims out for themselves, in particular when such devices use open technologies such as the Linux kernel?