
02 February 2014

Interview: Eben Moglen - "surveillance becomes the hidden service wrapped inside everything"

(This was originally published in The H Open in March 2010.)

Free software has won: practically all of the biggest and most exciting Web companies like Google, Facebook and Twitter run on it.  But it is also in danger of losing, because those same services now represent a huge threat to our freedom as a result of the vast stores of information they hold about us, and the in-depth surveillance that implies.

Better than almost anyone, Eben Moglen knows what's at stake.  He was General Counsel of the Free Software Foundation for 13 years, and helped draft several versions of the GNU GPL.  As well as being Professor of Law at Columbia Law School, he is the Founding Director of the Software Freedom Law Center.  And he has an ambitious plan to save us from those seductive but freedom-threatening Web service companies.  He explained what the problem is, and how we can fix it.

GM: So what's the threat you are trying to deal with?

EM:  We have a kind of social dilemma which comes from architectural creep.  We had an Internet that was designed around the notion of peerage -  machines with no hierarchical relationship to one another, and no guarantee about their internal architectures or behaviours, communicating through a series of rules which allowed disparate, heterogeneous networks to be networked together around the assumption that everybody's equal. 

In the Web the social harm done by the client-server model arises from the fact that logs of Web servers become the trails left by all of the activities of human beings, and the logs can be centralised in servers under hierarchical control.  Web logs become power.  With the exception of search, which is a service that nobody knows how to decentralise efficiently, most of these services do not actually rely upon a hierarchical model.  They really rely upon the Web - that is, the non-hierarchical peerage model created by Tim Berners-Lee, and which is now the dominant data structure in our world.

The services are centralised for commercial purposes.  The power that the Web log holds is monetisable, because it provides a form of surveillance which is attractive to both commercial and governmental social control.  So the Web with services equipped in a basically client-server architecture becomes a device for surveilling as well as providing additional services.  And surveillance becomes the hidden service wrapped inside everything we get for free.

The cloud is a vernacular name which we give to a significant improvement in the server side of the Web - the server, decentralised.  Instead of a lump of iron, it becomes a digital appliance which can be running anywhere.  This means that for all practical purposes servers cease to be subject to significant legal control.  They no longer operate in a policy-directed manner, because they are no longer iron subject to territorial orientation of law.  In a world of virtualised service provision, the server which provides the service, and therefore the log which is the result of the hidden service of surveillance, can be projected into any domain at any moment and can be stripped of any legal obligation pretty much equally freely.

This is a pessimal result.

GM:  Was perhaps another major factor in this the commercialisation of the Internet, which saw power being vested in a company that provided services to the consumer?

EM:  That's exactly right.  Capitalism also has its architectural Bauplan, which it is reluctant to abandon.  In fact, much of what the network is doing to capitalism is forcing it to reconsider its Bauplan via a social process which we call by the crappy name of disintermediation.  Which is really a description of the Net forcing capitalism to change the way it takes.  But there's lots of resistance to that, and what's interesting to all of us I suspect, as we watch the rise of Google to pre-eminence, is the ways in which Google does and does not - and it both does and does not - wind up behaving rather like Microsoft in the course of growing up.  There are sort of gravitational propositions that arise when you're the largest organism in an ecosystem. 

GM:  Do you think free software has been a little slow to address the problems you describe?

EM:  Yes, I think that's correct.  I think it is conceptually difficult, and it is to a large degree difficult because we are having generational change.  After a talk [I gave recently], a young woman came up to me and she said: I'm 23 years old, and none of my friends care about privacy.  And that's another important thing, right?, because we make software now using the brains and hands and energies of people who are growing up in a world which has been already affected by all of this.  Richard or I can sound rather old-fashioned.

GM:  So what's the solution you are proposing?

EM:  If we had a real intellectually-defensible taxonomy of services, we would recognise that a number of the services which are currently highly centralised, and which account for a lot of the surveillance built in to the society that we are moving towards, are services which do not require centralisation in order to be technologically deliverable.  They are really the Web repackaged. 

Social networking applications are the most crucial.  They rely in their basic metaphors of operation on a bilateral relationship called friendship, and its multilateral consequences.  And they are eminently modelled by the existing structures of the Web itself.  Facebook is free Web hosting with some PHP doodads and APIs, and spying thrown in for free all the time - not actually a deal we can't do better than. 

My proposal is this: if we could disaggregate the logs, while providing the people all of the same features, we would have a Pareto-superior outcome.  Everybody - well, except Mr Zuckerberg - would be better off, and nobody would be worse off.  And we can do that using existing stuff.

The most attractive hardware is the ultra-small, ARM-based, plug it into the wall, wall-wart server, the SheevaPlug.  An object can be sold to people at a very low one-time price, and brought home and plugged into an electrical outlet and plugged into a wall jack for the Ethernet, or whatever is there, and you're done.  It comes up, it gets configured through your Web browser on whatever machine you want to have in the apartment with it, and it goes and fetches all your social networking data from all the social networking applications, closing all your accounts.  It backs itself up in an encrypted way to your friends' plugs, so that everybody is secure in the way that would be best for them, by having their friends holding the secure version of their data.

And it begins to do all the things that we assume we need in a social networking appliance.  It's the feed, it maintains the wall your friends write on - it does everything that provides feature compatibility with what you're used to. 

But the log is in your apartment, and in my society at least, we still have some vestigial rules about getting into your house: if people want to check the logs they have to get a search warrant. In fact, in every society, a person's home is about as sacred as it gets.

And so, basically, what I am proposing is that we build a social networking stack based around the existing free software we have, which is pretty much the same existing free software the server-side social networking stacks are built on; and we provide ourselves with an appliance which contains a free distribution everybody can make as much of as they want, and cheap hardware of a type which is going to take over the world whether we do it or we don't, because it's so attractive a form factor and function, at the price. 

We take those two elements, we put them together, and we also provide some other things which are very good for the world.  Like automatically VPNing everybody's little home network place with my laptop wherever I am, which provides me with encrypted proxies so my web searching, wherever I am, is not going to be spied on.  It means that we have a zillion computers available to the people who live in China and other places where there's bad behaviour.  So we can massively increase the availability of free browsing to other people in the world.  If we want to offer people the option to run onion routeing, that's where we'll put it, so that there will be a credible possibility that people will actually be able to get decent performance on onion routeing networks.

And we will of course provide convenient encrypted email for people - including putting their email not in a Google box, but in their house, where it is encrypted, backed up to all their friends and other stuff.  In the fullness of time we can begin to return email to a condition of - if not being a private mode of communication - at least not being postcards to the secret police every day.

So we would also be striking a blow for electronic civil liberties in a way that is important, which is very difficult to conceive of doing in a non-technical way.

GM:  How will you organise and finance such a project, and who will undertake it?

EM:  Do we need money? Yeah, but tiny amounts.  Do we need organisation? Yes, but it could be self-organisation.  Am I going to talk about this at DEF CON this summer, at Columbia University? Yes.  Could Mr Shuttleworth do it if he wanted to? Yes.  It's not going to be done by clicking our heels together, it's going to be done the way we do stuff: somebody's going to begin by reeling off a Debian stack or Ubuntu stack or, for all I know, some other stack, and beginning to write some configuration code and some glue and a bunch of Python to hold it all together.  From a quasi-capitalist point of view I don't think this is an unmarketable product.  In fact, this is the flagship product, and we ought to all put just a little pro bono time into it until it's done.

GM:  How are you going to overcome the massive network effects that make it hard to persuade people to swap to a new service?

EM:  This is why the continual determination to provide social networking interoperability is so important. 

For the moment, my guess is that while we go about this job, it's going to remain quite obscure for quite a while.  People will discover that they are being given social network portability.  [The social network companies] undermine their own network effect because everybody wants to get ahead of Mr Zuckerberg before his IPO.  And as they do that they will be helping us, because they will be making it easier and easier to do what our box has to do, which is to come online for you, and go and collect all your data and keep all your friends, and do everything that they should have done.

So part of how we're going to get people to use it and undermine the network effect, is that way.  Part of it is, it's cool; part of it is, there are people who want no spying inside; part of it is, there are people who want to do something about the Great Firewall of China but don't know how.  In other words, my guess is that it's going to move in niches just as some other things do.

GM:  With mobile taking off in developing countries, might it not be better to look at handsets to provide these services?

EM:  In the long run there are two places where we can conceivably put your identity: one is where you live, and the other is in your pocket.  And a stack that doesn't deal with both of those is probably not a fully adequate stack.

The thing I want to say directed to your point “why don't we put our identity server in our cellphone?”, is that our cellphones are very vulnerable.  In most parts of the world, you stop a guy on the street, you arrest him on a trumped-up charge of any kind, you get him back to the station house, you clone his phone, you hand it back to him, you've owned him.

When we fully commoditise that [mobile] technology, then we can begin to do the reverse of what the network operators are doing.  The network operators around the world are basically trying to eat the Internet, and excrete proprietary networking.  If telephony technology becomes free, we can play the reverse: we can eat proprietary networks and excrete the public Internet.  And if we do that then the power game begins to be more interesting.

26 January 2014

Interview: Linus Torvalds - "I don't read code any more"


(This was originally published in The H Open in November 2012.)

I was lucky enough to interview Linus quite early in the history of Linux – back in 1996, when he was still living in Helsinki (you can read the fruits of that meeting in this old Wired feature.) It was at an important moment for him, both personally – his first child was born at this time – and in terms of his career. He was about to join the chip design company Transmeta, a move that didn't really work out, but led to him relocating to America, where he remains today.

That makes his trips to Europe somewhat rare, and I took advantage of the fact that he was speaking at the recent LinuxCon Europe 2012 in Barcelona to interview him again, reviewing the key moments for the Linux kernel and its community since we last spoke.

Glyn Moody: Looking back over the last decade and a half, what do you see as the key events in the development of the kernel?

Linus Torvalds: One big thing for me is all the scalability work that we did. We've gone from being OK on 2 or 4 CPUs to the point where basically you can throw 4000 [at it] – you won't scale perfectly, but most of the time it's not the kernel that's the bottleneck. If your workload is somewhat sane we actually scale really well. And that took a lot of effort.

SGI in particular worked a lot on scaling past a few hundred CPUs. Their initial patches could just not be merged. There was no way we could take the work they did and use it on a regular PC because they added all this infrastructure to work on thousands of CPUs. That was way too expensive to do when you had only a couple.

I was afraid for the longest time that we would have the high-performance kernel for the big machines, and the source code would be separate from the normal kernel. People worked a lot on just making sure that we had a clean code base where you can say at compile time that, hey, I want the kernel that works for 4000 CPUs, and it generates the code for that, and at the same time, if you say no, I want the kernel that works on 2 CPUs, the same source code compiles.
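That compile-time switch survives in today's kernels as ordinary Kconfig options: the same source tree builds anything from a uniprocessor kernel to one sized for thousands of CPUs.  A sketch of the two ends of the spectrum in .config form - the option names come from the mainline tree, but the values here are purely illustrative:

```
# Large NUMA machine: SMP support compiled in, sized for
# thousands of CPUs (the per-CPU data structures scale with this)
CONFIG_SMP=y
CONFIG_NR_CPUS=4096

# Small desktop: the very same source tree, configured down -
# the expensive many-CPU infrastructure simply isn't generated
# CONFIG_SMP is not set
```

This is exactly the unification Torvalds describes: rather than forking a "big iron" kernel, the cost of scalability was made a build-time parameter of one shared code base.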

It was something that in retrospect is really important because it actually made the source code much better. All the effort that SGI and others spent on unifying the source code, actually a lot of it was clean-up – this doesn't work for a hundred CPUs, so we need to clean it up so that it works. And it actually made the kernel more maintainable. Now on the desktop 8 and 16 CPUs are almost common; it used to be that we had trouble scaling to an 8, now it's like child's play.

But there have been other things too.  We spent years again at the other end, where the phone people were so power conscious that they had ugly hacks, especially on the ARM side, to try to save power.  We spent years doing power management in general, doing the same kind of thing - instead of having these specialised power management hacks for ARM, and the few devices that cellphone people cared about, we tried to make it work across the kernel.  And that took something like five years to get our power management working, because it's across the whole spectrum.

Quite often when you add one device, that doesn't impact any of the rest of the kernel, but power management was one of those things that impacts all the thousands of device drivers that we have. It impacts core functionality, like shutting down CPUs, it impacts schedulers, it impacts the VM, it impacts everything.

It not only affects everything, it has the potential to break everything which makes it very painful. We spent so much time just taking two steps forward, one step back because we made an improvement that was a clear improvement, but it broke machines. And so we had to take the one step back just to fix the machines that we broke.

Realistically, every single release, most of it is just driver work. Which is kind of boring in the sense there is nothing fundamentally interesting in a driver, it's just support for yet another chipset or something, and at the same time that's kind of the bread and butter of the kernel. More than half of the kernel is just drivers, and so all the big exciting smart things we do, in the end it pales when compared to all the work we just do to support new hardware.

Glyn Moody: What major architecture changes have there been to support new hardware?

Linus Torvalds: The USB stack has basically been re-written a couple of times just because some new use-case comes up and you realise that hey, the original USB stack just never took that into account, and it just doesn't work. So USB 3 needs new host controller support and it turns out it's different enough that you want to change the core stack so that it can work across different versions. And it's not just USB, it's PCI, and PCI becomes PCIe, and hotplug comes in.

That's another thing that's a huge difference between traditional Linux and traditional Unix. You have a [Unix] workstation and you boot it up, and it doesn't change afterwards - you don't add devices. Now people are taking adding a USB device for granted, but realistically that did not use to be the case. That whole being able to hotplug devices, we've had all these fundamental infrastructure changes that we've had to keep up with.

Glyn Moody: What about kernel community – how has that evolved?

Linus Torvalds: It used to be way flatter. I don't know when the change happened, but it used to be me and maybe 50 developers - it was not a deep hierarchy of people. These days, patches that reach me sometimes go through four levels of people. We do releases every three months; in every release we have like 1000 people involved. And 500 of the 1000 people basically send in a single line change for something really trivial – that's how some people work, and some of them never do anything else, and that's fine. But when you have a thousand people involved, especially when some of them are just these drive-by shooting people, you can't have me just taking patches from everybody individually. I wouldn't have time to interact with people.

Some people just specialise in drivers, they have other people who they know who specialise in that particular driver area, and they interact with the people who actually write the individual drivers or send patches. By the time I see the patch, it's gone through these layers, it's seldom four, but it's quite often two people in between.

Glyn Moody: So what impact does that have on your role?

Linus Torvalds: Well, the big thing is I don't read code any more. When a patch has already gone through two people, at that point, I can either look at the patch and say: no, all your work was wasted, and micromanage at that level – and quite frankly I don't want to do that, and I don't have the capacity to do that.

So most of the time, when it comes to the major subsystem maintainers, I trust them because I've been working with them for 5, 10, 15 years, so I don't even look at the code. They tell me these are the changes and they give me a very high-level overview. Depending on the person, it might be five lines of text saying this is roughly what has changed, and then they give me a diffstat, which just says 15 lines have changed in that file, and 25 lines have changed in that file and diffstat might be a few hundred lines because there's a few hundred files that have changed. But I don't even see the code itself, I just say: OK, the changes happen in these files, and by the way, I trust you to change those files, so that's fine. And then I just say: I'll take it.
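For readers who haven't seen one, a diffstat is just that per-file summary: file names, change counts and a row of +/- marks, with no code shown at all.  A minimal sketch of the idea in Python, using difflib in place of git's diff engine (the kernel file names below are invented for illustration):

```python
import difflib

def diffstat(old: dict, new: dict) -> str:
    """Summarise changes between two trees given as {path: contents}."""
    rows, total_add, total_del = [], 0, 0
    for path in sorted(set(old) | set(new)):
        a = old.get(path, "").splitlines()
        b = new.get(path, "").splitlines()
        added = removed = 0
        for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(None, a, b).get_opcodes():
            if tag in ("replace", "delete"):
                removed += i2 - i1
            if tag in ("replace", "insert"):
                added += j2 - j1
        if added or removed:
            # one row per changed file: total churn, then +/- marks
            rows.append(f" {path} | {added + removed} " + "+" * added + "-" * removed)
            total_add += added
            total_del += removed
    rows.append(f" {len(rows)} files changed, "
                f"{total_add} insertions(+), {total_del} deletions(-)")
    return "\n".join(rows)

print(diffstat(
    {"fs/ext4/inode.c": "a\nb\n", "mm/vmscan.c": "x\n"},
    {"fs/ext4/inode.c": "a\nb\nc\n", "mm/vmscan.c": "y\n"},
))
```

The point of the format is exactly what Torvalds relies on: a maintainer can see at a glance which files a pull touches, and whether the submitter stayed inside the area they are trusted with, without reading a line of the code itself.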

Glyn Moody: So what's your role now?

Linus Torvalds: Largely I'm managing people. Not in the logistical sense – I obviously don't pay anybody, but I also don't have to worry about them having access to hardware and stuff like that. Largely what happens is I get involved when people start arguing and there's friction between people, or when bugs happen.

Bugs happen all the time, but quite often people don't know who to send the bug report to. So they will send the bug report to the Linux Kernel mailing list – nobody really is able to read it much. After people don't figure it out on the kernel mailing list, they often start bombarding me, saying: hey, this machine doesn't work for me any more. And since I didn't even read the code in the first place, but I know who is in charge, I end up being a connection point for bug reports and for the actual change requests. That's all I do, day in and day out, is I read email. And that's fine, I enjoy doing it, but it's very different from what I did.

Glyn Moody: So does that mean there might be scope for you to write another tool like Git, but for managing people, not code?

Linus Torvalds: I don't think we will. There might be some tooling, but realistically most of the things I do tend to be about human interaction. So we do have tools to figure out who's in charge. We do have tools to say: hey, we know the problem happens in this area of the code, so who touched that code last, and who's the maintainer of that subsystem, just because there are so many people involved that trying to keep track of them any other way than having some automation just doesn't work. But at the same time most of the work is interaction, and different people work in different ways, so having too much automation is actually painful for people.
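The automation Torvalds alludes to lives in tools like `git log` and `git blame` (who touched this code last) and the kernel's `scripts/get_maintainer.pl`, which matches changed file paths against subsystem entries in the MAINTAINERS file.  A toy Python sketch of that lookup - the subsystems, addresses and globs below are all invented:

```python
import fnmatch

# Invented excerpt in the spirit of the kernel's MAINTAINERS file:
# each subsystem lists its maintainers and the file globs it owns.
MAINTAINERS = [
    ("USB SUBSYSTEM", ["alice@example.org"], ["drivers/usb/*", "include/linux/usb*"]),
    ("EXT4 FILESYSTEM", ["bob@example.org"], ["fs/ext4/*"]),
]

def get_maintainer(path: str) -> list:
    """Return maintainer addresses whose file globs match this path."""
    hits = []
    for _subsystem, people, globs in MAINTAINERS:
        if any(fnmatch.fnmatch(path, g) for g in globs):
            hits.extend(people)
    return hits

print(get_maintainer("drivers/usb/core/hub.c"))
```

Given a patch touching `drivers/usb/core/hub.c`, the lookup returns the invented USB maintainer; the real script additionally consults git history for recent committers, which is the "who touched that code last" half of the question.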

We're doing really well. The kind of pain points we had ten years ago just don't exist any more. And that's largely because we used to be this flat hierarchy, and we just fixed our tools, we fixed our work flows. And it's not just me, it's across the whole kernel there's no single person who's in the way of any particular workflow.

I get a fair amount of email, but I don't even get overwhelmed by email. I love reading email on my cellphone when I travel, for example. Even during breaks, I'll read email on my cellphone because 90% of them I can just read for my information that I can archive. I don't need to do anything, I was cc'd because there was some issue going on, I need to be aware of it, but I don't need to do anything about that. So I can do 90% of my work while travelling, even without having a computer. In the evening, when I go back to the hotel room, I'll go through [the other 10%].

Glyn Moody: 16 years ago, you said you were mostly driven by what the outside world was asking for; given the huge interest in mobiles and tablets, what has been their impact on kernel development?

Linus Torvalds: In the tablet space, the biggest issue tends to be power management, largely because they're bigger than phones. They have bigger batteries, but on the other hand people expect them to have longer battery life and they also have bigger displays, which use more battery. So on the kernel side, a tablet from the hardware perspective and a usage perspective is largely the same thing as a phone, and that's something we know how to do, largely because of Android.

The user interface side of a tablet ends up being where the pain points have been – but that's far enough removed from the kernel. On a phone, the browser is not a full browser - they used to have the mobile browsers; on the tablets, people really expect to have a full browser – you have to be able to click that small link thing. So most of the tablet issues have been in the user space. We did have a lot of issues in the kernel over the phones, but tablets kind of we got for free.

Glyn Moody: What about cloud computing: what impact has that had on the kernel?

Linus Torvalds: The biggest impact has been that even on the server side, but especially when it comes to cloud computing, people have become much more aware [of power consumption].  It used to be that all the power work originally happened for embedded people and cellphones, and just in the last three or four years it's the server people who have become very power aware.  Because they have lots of them together, quite often they have high peak usage.  If you look at someone like Amazon, their peak usage is orders of magnitude higher than their regular idle usage.  For example, just the selling side of Amazon: in late November and December, the one month before Christmas, they do as much business as they do the rest of the year.  The point is they have to scale all their hardware infrastructure for that peak usage, even though most of the rest of the year they only use a tenth of that capacity.  So being able to not use power all the time [is important], because it turns out electricity is a big cost for these big server providers.

Glyn Moody: Do Amazon people get involved directly with kernel work?

Linus Torvalds: Amazon is not the greatest example, Google is probably better because they actually have a lot of kernel engineers working for them. Most of the time the work gets done by Google themselves. I think Amazon has had a more standard components thing. Actually, they've changed the way they've built hardware - they now have their own hardware reference design. They used to buy hardware from HP and Dell, but it turns out that when you buy 10,000 machines at some point it's just easier to design the machines yourself, and to go directly to the original equipment manufacturers and say: I want this machine, like this. But they only started doing that fairly recently.

I don't know whether [Amazon] is behind the curve, or whether Google is just more technology oriented. Amazon has worked more on the user space, and they've used a fairly standard kernel. Google has worked more on the kernel side, they've done their own file systems. They used to do their own drivers for their hard discs because they had some special requirements.

Glyn Moody: How useful has Google's work on the kernel been for you?

Linus Torvalds: For a few years - this is five or ten years ago - Google used to be this black hole. They would hire kernel engineers and they would completely disappear from the face of the earth. They would work inside Google, and nobody would ever hear from them again, because they'd do this Google-specific stuff, and Google didn't really feed back much.

That has improved enormously, probably because Google stayed a long time on our previous 2.4 releases. They stayed on that for years, because they had done so many internal modifications for their specialised hardware for everything, that just upgrading their kernel was a big issue for them. And partly because of the whole Android project they actually wanted to be much more active upstream.

Now they're way more active, people don't disappear there any more. It turns out the kernel got better, to the point where a lot of their issues just became details instead of being huge gaping holes. They were like, OK, we can actually use the standard kernel and then we do these small tweaks on top instead of doing these big surgeries to just make it work on their infrastructure.

Glyn Moody: Finally, you say that you spend most of your time answering email: as someone who has always seemed a quintessential hacker, does that worry you?

Linus Torvalds: I wouldn't say that worries me. I end up not doing as much programming as sometimes I'd like. On the other hand, it's like some kinds of programming I don't want to do any more. When I was twenty I liked doing device drivers. If I never have to do a single device driver in my life again, I will be happy. Some kind of headaches I can do without.

I really enjoyed doing Git, it was so much fun. When I started the whole design, started doing programming in user space, which I had not done for 15 years, it was like, wow, this is so easy. I don't need to worry about all these things, I have infinite stack, malloc just works. But in the kernel space, you have to worry about locking, you have to worry about security, you have to worry about the hardware. Doing Git, that was such a relief. But it got boring.

The other project I still am involved in is the dive computer thing. We had a break-in on the kernel.org site. It was really painful for the maintainers, and the FBI got involved just figuring out what the hell happened. For two months we had almost no kernel development – well, people were still doing kernel development, but the main site where everybody got together was down, and a lot of the core kernel developers spent a lot of time checking that nobody had actually broken into their machines. People got a bit paranoid.

So for a couple of months my main job, which was to integrate work from other people, basically went away, because our main integration site went away. And I did my divelog software, because I got bored, and that was fun. So I still do end up doing programming, but I always come back to the kernel in the end.

20 July 2013

The Future of Commercial Open Source: Foundations

Remember MySQL? Most famous for being part of the LAMP stack, and thus powering the bulk of the innovative work done in the field of ecommerce for a decade or more, it's rather fallen off the radar recently. It's not hard to see why: Oracle's acquisition of Sun, which had earlier bought MySQL for the not inconsiderable sum of $1 billion, meant that one of the key hacker projects was now run by the archetypal big-business mogul, Larry Ellison. So it was natural that people were unsure about MySQL's future, and started looking for alternatives.

On Open Enterprise blog.

10 March 2013

Python Trademark At Risk In Europe: Python Software Foundation Appeals For Help

The open source programming language Python -- named after the British comedy series "Monty Python" -- became popular in the 1990s, along with two other languages beginning with "P": Perl and PHP. Later, they formed a crucial part of the famous "LAMP" stack -- the GNU/Linux operating system + Apache Web server + MySQL database + Python/Perl/PHP as scripting languages -- that underpinned many of the most successful startups from this time. 

On Techdirt.

29 November 2010

Dissecting the Italian Non-Squirrel

A couple of days ago I wrote about the deal between the regional government of Puglia and Microsoft, noting that it was frustrating that we couldn't even see the terms of that deal. Well, now we can, in all its glorious officialese, and it rather confirms my worst fears.

Not, I hasten to add, because of the overall framing, which speaks of many worthy aims such as fighting social exclusion and improving the quality of life, and emphasises the importance of "technology neutrality" and "technological pluralism". It is because of how this deal will play out in practice.

That is, we need to read between the lines to find out what the fairly general statements in the agreement will actually mean. For example, when we read:

analisi congiunta delle discontinuità tecnologiche in atto e dello stato dell’arte in materia di ricerca e sviluppo informatico, sia in area desktop che nei data center (come ad es. il cloud computing e la mobilità);

[joint analysis of the technological discontinuities underway and of the state of the art in IT research and development, both on the desktop and in the data centre (for example, cloud computing and mobile)]

will Microsoft and representatives of the Puglia administration work together to discuss the latest developments in mobile, on the desktop, or data centres, and come to the conclusion: "you know, what would really be best for Puglia would be replacing all these expensive Microsoft Office systems by free LibreOffice; replacing handsets with low-cost Android smartphones; and adopting open stack solutions in the cloud"? Or might they just possibly decide: "let's just keep Microsoft Office on the desktop, buy a few thousand Windows Mobile 7 phones (they're so pretty!), and use Windows Azure, and Microsoft'll look after all the details"?

And when we read:

Favorire l’accesso e l’utilizzo del mondo scolastico e dei sistemi dell’istruzione alle tecnologie ed agli strumenti informatici più aggiornati

[To encourage schools and the education system to access and use the most up-to-date IT technologies and tools]

will this mean that teachers will explain how they need low-cost solutions that students can copy and take home so as not to disadvantage those unable to pay hundreds of Euros for desktop software, and also software that can be modified, ideally by the students themselves? And will they then realise that the only option that lets them do that is free software, which can be copied freely and examined and modified?

Or will Microsoft magnanimously "donate" hundreds of zero price-tag copies of its software to schools around the region, as it has in many other countries, to ensure that students are brought up to believe that word processing is the same as Word, and spreadsheets are always Excel? But no copying, of course ("free as in beer" doesn't mean "free as in freedom", does it?), and no peeking inside the magic black box - but then nobody really needs to do that stuff, do they?

And when we see that:

Microsoft si impegna a:

individuare e comunicare alla Regione le iniziative e risorse (a titolo esemplificativo: personale tecnico e specialistico, eventuali strumenti software necessari alle attività da svolgere congiuntamente) che intende mettere a disposizione per sostenere la creazione del centro di competenza congiunto Microsoft-Regione;

[Microsoft undertakes to:

specify and communicate to the Region the initiatives and resources (for example: technical and specialist personnel, any software tools necessary for the joint activities) which it intends to make available to support the creation of the joint Microsoft-Region competence centre]

are we to imagine that Microsoft will diligently provide a nicely balanced selection of PCs running Windows, some Apple Macintoshes, and PCs running GNU/Linux? Will it send along specialists in open source? Will it provide examples of all the leading free software packages to be used in the joint competency centre? Or will it simply fill the place to the gunwales with Windows-based, proprietary software, and staff it with Windows engineers?

The point is that the "deal" with Microsoft is simply an invitation for Microsoft to colonise everywhere it can. And to be fair, there's not much else it can do: it has little deep knowledge of free software, so it would be unreasonable to expect it to explore or promote it. But it is precisely for that reason that this agreement is completely useless; it can produce one result, and one result only: recommendations to use Microsoft products at every level, either explicitly or implicitly.

And that is not an acceptable solution because it locks out competitors like free software - despite the following protestations of support for "interoperability":

Microsoft condivide l’approccio delle politiche in materia adottato dalla Regione Puglia ed è parte attiva, a livello internazionale, per promuovere iniziative rivolte alla interoperabilità nei sistemi, indipendentemente dalle tecnologie usate.

[Microsoft shares the approach adopted by the Puglia Region, and is an active part of initiatives at an international level to promote the interoperability of systems, independently of the technology used.]

In fact, Microsoft is completely interoperable only when it is forced to be, as was the case with the European Commission:

In 2004, Neelie Kroes was appointed European Commissioner for Competition; one of her first tasks was to oversee the fine imposed on Microsoft by the European Commission in what became known as the European Union Microsoft competition case. The case resulted in a requirement that Microsoft release documentation to aid commercial interoperability, and included a €497 million fine.

That's clearly not an approach that will be available in all cases. The best way to guarantee full interoperability is to mandate true open standards - ones made freely available with no restrictions, just as the World Wide Web Consortium insists on for Web standards. On the desktop, for example, the only way to create a level playing-field for all is to use products based entirely on true open standards like Open Document Format (ODF).

If the Puglia region wants to realise its worthy aims, it must set up a much broader collaboration with a range of companies and non-commercial groups that represent the full spectrum of computing approaches - including Microsoft, of course. And at the heart of this strategy it must place true open standards.

Update: some good news about supporting open source and open standards has now been announced.

Follow me @glynmoody on Twitter or identi.ca.

10 September 2010

Project Canvas Will be *Linux* Based

I've been pretty sceptical - and critical - of the BBC's TV over IP efforts, including Project Canvas:

Project Canvas is a proposed partnership between Arqiva, the BBC, BT, C4, Channel Five, ITV and Talk Talk to build an open internet-connected TV platform, subject to BBC Trust approval.

The partners intend to form a venture to promote the platform to consumers and the content, service and developer community.

Like the UK's current free-to-air brands Freeview and Freesat, a consumer brand (not Canvas) will be created, and licensed to device manufacturers and internet service providers who meet the specifications.

‘Canvas compliant’ devices (eg set-top boxes), built to a common technical standard, would provide seamless access to a range of third-party services through a common, simple, user experience.

That's despite - or maybe even *because of* - the fact that it proclaims itself as "open":

A technology project to build an open, internet-connected TV platform

As well as a lack of standards in the internet-connected TV market, there is no open platform. This creates two main problems:

* The UK's current free to air TV platforms Freeview and Freesat have been unable to evolve and keep pace with technical innovation in the consumer electronics industry. While some internet services are emerging on some commercially-owned/ pay-TV platforms - these platforms are working to their own (proprietary) closed standards. A fragmented market is emerging, which could put internet-connected TV out of the reach of consumers who don't want to subscribe to pay-TV.
* The internet services need to have a commercial relationship with the TV platform to obtain a route to the shared screen. This, combined with a fragmented market of varying standards, is slowing the development of internet-connected TV services.

Project Canvas intends to build, run and promote a platform that solves both problems: providing an upgrade for free-to-air TV, and an open platform of scale that will bring a wide range of internet services to the shared screen.

We all know how debased the term "open" has become, so frankly I expected the worst when the technical details were released. Looks like I was wrong [.pdf]:

Linux has been selected as the Operating System for the Device.

Linux has been ported to run on a large number of silicon products, and is currently supported by the vast majority of hardware and software vendors in the connected television ecosystem. Porting to new hardware is relatively simple due to the architecture of the kernel and the features that it supports. The Linux environment provides the following functionality as a basis for the development and operation of the Device software:

• Multi-processing.
• Real-time constraints and priority-based scheduling.
• Dynamic memory management.
• A robust security model.
• A mature and full-featured IP stack.

Linux is deployed on millions of PCs and consumer electronics devices, and the skills to develop and optimise for it are common in the industry. In addition, a wide range of open source products have been developed for, or ported to Linux.

It's pretty amazing to read this panegyric to Linux: it shows just how far Linux has come, and how it is taking over the embedded world.

Even though content will be "protected" - from you, the user, that is - which means the platform can't really be regarded as totally open, the Project Canvas designers and managers still deserve kudos for opting for Linux, and for publicly extolling its virtues in this way.

Update: I haven't really made clear why that's a good thing, so here are some thoughts.

Obviously, this is not a pure free software project: it's a walled garden with DRM. But there are still advantages for open source.

For example, assuming this project doesn't crash and burn, I expect it will influence similar moves elsewhere in the world, which may be encouraged to use Linux too. Even if that doesn't happen, its use by Project Canvas will increase the profile of Linux, and also the demand for people who are skilled in this area (thus probably helping to drive up the salaries of Linux coders). More generally, the Linux ecosystem will grow as a result of this choice, even if there are non-free elements higher up the stack. Correspondingly, non-free solutions will lose market share and developer mind-share.

And finally, having Linux at the heart of the Project Canvas project will surely make it easier to root...

Follow me @glynmoody on Twitter or identi.ca.

09 September 2010

Welcome to the Civic Commons

One of the core reasons why sharing works is that it spreads the effort, and avoids the constant re-invention of the wheel. One area that seems made for this kind of sharing is government IT: after all, the problems faced are essentially the same, so a piece of software built for one entity might well be usable - or adaptable - for another.

That's the key idea behind the new Civic Commons:

Government entities at all levels face substantial and similar IT challenges, but today, each must take them on independently. Why can’t they share their technology, eliminating redundancy, fostering innovation, and cutting costs? We think they can. Civic Commons helps government agencies work together.

Why not indeed?

Moreover, by bringing together all the pieces, it may be possible to create something approaching a "complete" solution for government bodies - a "civic stack":

The "civic stack" is a shared body of software and protocols for civic entities, built on open standards. A primary goal of Civic Commons is to make it easy for jurisdictions at all levels to deploy compatible software. Pooling resources into a shared civic stack reduces costs and avoids duplicated effort; equally importantly, it helps make civic IT expertise more cumulative and portable across jurisdictions, for civil servants, for citizens, and for vendors.

Civic Commons is currently identifying and pulling together key elements of the civic stack. If you work in civic IT and would like to suggest a technology or category for the civic stack, please let us know. As we survey what's being used in production, we will adjust this list to emphasize proven technologies that have been deployed in multiple jurisdictions.

It's still early days for all this stuff, but the idea seems so right it must succeed...surely?

Follow me @glynmoody on Twitter or identi.ca.

22 March 2010

Free Software's Second Era: The Rise and Fall of MySQL

If the first era of free software was about the creation of the fully-rounded GNU/Linux operating system, the second saw a generation of key enterprise applications being written to run on that foundation. Things got moving with the emergence and rapid adoption of the LAMP stack – a term coined in 1998 - a key part of which was (obviously) MySQL (the “M”).

On The H Open.

20 April 2009

What are the Legal Implications of Cloud Computing?

To say that cloud computing is trendy would be an understatement: the topic is almost inescapable in the world of computing these days. I've written about it from the viewpoint of open source several times, because there are a number of important issues arising out of clouds: much of their infrastructure is based on free software, and there are interesting questions to do with licensing that clouds pose for applications. But one aspect almost never considered is even higher up the stack: the legal side of their use....

On Open Enterprise blog.

Follow me on Twitter @glynmoody.

13 March 2009

Shining Light on Why Microsoft Loves LAMP to Death

Here's an interesting little tale:


I was fortunate enough to spend last Thursday with a group of LAMP engineers who have some experience with Windows Server and IIS, and who are based in Japan.

The three - Kimio Tanaka, the president of Museum IN Cloud; Junpei Hosoda, the president of Yokohama System Development; and Hajime Taira, with Hewlett-Packard Japan - won a competition organized by impress IT and designed to get competitive LAMP engineers to increase the volume of technical information around PHP/IIS and application compatibility. The competition was titled "Install Maniax 2008".

A total of 100 engineers were chosen to compete and seeded with Dell server hardware and the Windows Web Server 2008 operating system. They were then required to deploy Windows Server/IIS and make the Web Server accessible from the Internet. They also had to run popular PHP/Perl applications on IIS and publish technical documentation on how to configure those applications to run on IIS.

The three winners were chosen based on the number of ported applications on IIS, with the prize being a trip to Redmond. A total of 71 applications out of the targeted 75 were ported onto IIS, of which 47 were newly ported to IIS, and related new "how to" documents were published to the Internet. Some 24 applications were also ported onto IIS based on existing "how to" documents.

So let's just deconstruct that, shall we?

A competition was held in Japan "to get competitive LAMP engineers to increase the volume of technical information around PHP/IIS and application compatibility"; they were given the challenge of getting "popular PHP/Perl applications on IIS", complete with documentation. They "succeeded" to such an extent that "71 applications out of the targeted 75 were ported onto IIS, of which 47 were newly ported to IIS".

But that wasn't the real achievement: the real result was that a further 47 PHP/Perl apps were ported *from* GNU/Linux (LAMP) *to* Windows - in effect, extracting the open source solutions from the bottom of the stack, and substituting Microsoft's own software.

This has been going on for a while, and is part of a larger move by Microsoft to weaken the foundations of open source - especially GNU/Linux - on the pretext that they are simply porting some of the top layers to its own stack. But the net result is that it diminishes the support for GNU/Linux, and makes those upper-level apps more dependent on Microsoft's good graces. The plan is clearly to sort out GNU/Linux first, before moving on up the stack.

It's clever, and exactly the sort of thing I would expect from the cunning people at Microsoft. That I understand; what I don't get is why these LAMP hackers are happy to cut off the branch they're sitting on by aiding and abetting Microsoft in its plans. Can't they see what's being done to their LAMP?

22 January 2009

Mobilising Open Source

I've been wittering on about open source mobiles for ages, but here's someone who actually knows what he's talking about:


Whether it be the proliferation of phone development activity around Google’s Android stack, the phenomenal operator gravitation toward the LiMo Foundation, or Symbian’s intriguing announcement to open source its end-of-life cycle stack, the mobile industry is breaking out of the traditional controlled development environment to favor collaboration that accelerates innovation. The use of open source software in mobile is exploding from the operating system all the way up to the user experience, and Linux-based open source stacks are moving well beyond alpha stage with backing by industry heavy weights.

This post is in the context of the Mobile World Congress being held in Barcelona in February:

26 years after GSM was created to design a pan-European mobile technology, Mobile World Congress number 13 is set to take place in Barcelona in February. This time around, as they did when GSM World Congress was first held in Madrid in 1995, mobile network operators will dominate the scene.

Next month, however, the topic of discussion will not be new network deployments, or the latest tranche of jazzy new devices, or the next best application. Rather, Open Source will be topic Number 1 on the operator agenda in 2009.

Good to hear it.

15 January 2009

OLPC: Out; OLPH: In

As regular readers of this blog may have noticed, I've given up on One Laptop Per Child. Happily, I've now come across something to fill the meme-sized hole that leaves:

Gdium.Com is launching the One Laptop Per Hacker program.

Here is your opportunity to contribute to the Gdium revolution.

The Gdium Team is opening a site and a program centralizing all the developer centric resources for the Gdium.

The OLPH program is supporting developers, contributors, creative artists, and other innovators who wish to:

* Optimize and improve the OS, human interface and/or application stack of an “education centric” netbook.
* Experiment with the look and feel.
* Provide and disseminate their new application stacks.
* Redesign the artwork.
* Modify the hardware or integrate some nifty gadgets.
* Experiment with the Gdium to support new vertical markets.

For a limited set of selected contributors the Gdium Team provides a set of materials and services, enabling them to get an early start on this machine.

Gdium, in case you were wondering (as I was), are creating the groovy Gdium Liberty, which rather idiosyncratically uses Mandriva (remember that?).

04 December 2008

Microsoft's Tired TCO Toffee

Those with good memories may recall a phase that Microsoft went through in which it issued (and generally commissioned) a stack of TCO studies that “proved” Windows was better/cheaper than GNU/Linux. Of course, they did nothing of the sort, since the methodology was generally so flawed you could have proved anything.

I'd thought that even Microsoft had recognised that this was a very weak form of attack, so I was surprised to come across this....

On Open Enterprise blog.

26 November 2008

Sun's Open Source Appliances

When Sun announced at the beginning of this year that it was buying MySQL for the not inconsiderable sum of a billion dollars, the question most people posed to themselves was how Sun was going to recoup its investment. I was initially worried that Sun might try to push Solaris over GNU/Linux in the LAMP stack, but Sun's CEO, Jonathan Schwartz, was adamant that wasn't going to happen.

Now, nearly a year later, we're beginning to see what exactly Sun has in mind....

On Open Enterprise blog.

14 November 2008

ARMed and Dangerous - to Microsoft

It's often forgotten that one of the strengths of GNU/Linux is the extraordinary range of platforms it supports. Where the full Windows stack is only available for Intel processors - even Windows CE, a distinct code-base, only supports four platforms - GNU/Linux is available on a dizzying array of other hardware.

Here's an interesting addition to the list:


ARM and Canonical Ltd, the commercial sponsor of Ubuntu, today announced that they will bring the full Ubuntu Desktop operating system to the ARMv7 processor architecture to address demand from device manufacturers. The addition of the new operating system will enable new netbooks and hybrid computers, targeting energy-efficient ARM technology-based SoCs, to deliver a rich, always-connected, mobile computing experience, without compromising battery life.

The combination of a commercially supported, optimized Ubuntu distribution for ARM, together with Canonical’s ability to tailor solutions to specific ARM technology-based devices and OEM requirements, ensures that highly-optimized systems can be rapidly deployed into the fast growing mobile computing market. ARM’s wide partnership with leading semiconductor and device manufacturers strengthens the mobile computing software ecosystem and extends the market reach for Ubuntu-based products.

Since ARM is based on original work by the ancient Acorn Computers (hello, BBC Micro), this represents a nice coming together of two British-based companies, albeit with global reach.

12 November 2008

Opening up Business Process Management

I wrote earlier this week about the increasing maturity of open source ERP solutions, and how this represented a fleshing out of the open source enterprise stack. An obvious question to ask is: what's going to be the next area of activity? One candidate is business process management (BPM)....

On Open Enterprise blog.

05 November 2008

Why is the BBC Running Microsoft Ads?

I wrote below about Microsoft's rather desperate BizSpark. It all seemed pretty transparent to me. But not to the BBC, apparently, which has fallen hook, line and sinker for the Microsoft line:

"The rising tide of people building new companies, building successful companies using our product is good for us because we share in that over time. The goal is to remove any barriers to getting going." he told BBC News.

Except, of course, there are no barriers to getting going as far as software is concerned, because the LAMP stack has always been there, always free and always excellent - as evidenced by the fact that it's currently running 99.9% of Web 2.0.

But it's obviously too much to expect a technology reporter in Silicon Valley to mention such trivia in the face of the *real* story about Microsoft's perfervid altruism.

If You Can't Beat Them...

...bribe them:


Microsoft BizSpark is a global program designed to help accelerate the success of early stage startups by providing key resources when they need it the most:

* Software. Receive fast and easy access to current full-featured Microsoft development tools, platform technologies, and production licenses of server products for immediate use in developing and bringing to market innovative and interoperable solutions. There is no upfront cost to enroll.

Fortunately, people don't choose the LAMP stack predominantly because it's free, but because it's better.

What next - *paying* people to use Microsoft's products? Oh, wait....

19 September 2008

The *Other* Vista: Successful and Open Source

There is a clear pattern to open source's continuing rise. The first free software to be deployed was at the bottom of the enterprise software stack: GNU/Linux, Apache, Sendmail, BIND. Later, databases and middleware layers were added in the form of popular programs like MySQL and JBoss. More recently, there have been an increasing number of applications serving the top of the software stack, addressing sectors like enterprise content management, customer relationship management, business intelligence and, most recently, data warehousing.

But all of these are generic programs, applicable to any industry: the next frontier for free software will be vertical applications serving particular sectors. In fact, we already have one success in this area, but few people know about it outside the industry it serves. Recent events mean that may be about to change....

On Linux Journal.

29 July 2008

Open Domotics

Marc Fleury has already made computer history once: when he set up JBoss, he adopted a new model of holding all the copyright in the code - hitherto, coders usually owned their own contributions, as is the case for the Linux kernel - and made a bold move up the enterprise stack into open source middleware. That paid off very nicely for him - and why not? - and now he's back with what looks like another very interesting move:

I have been studying a new industry lately, it is called Home Automation or Domotics in Europe. It is really a fancy name to describe the age old problem of "why can't my mom operate my remote". Every self respecting geek has one day felt the urge to program his or her house. Home Automation in the field is lights, AV, AC, Security. Today it is a bit of an expensive hobby, even downright elitist in some cases, but the technology is rapidly democratizing, due to Wifi, Commodity software/hardware, the iPhone and the housing crisis.

Although Fleury is a hard-headed business man who speaks his mind, he's also a true-blue hacker with his geekish heart in the right place:

We are an Open Community in Domotics; product design is rather open. We provide a hardware reference implementation on Java/Linux: it will help us develop, but also provides the physical bridge to IR/RS/Ethernet/wifi. On the software side we use JBoss as the base for our server, leveraging its packaging and installation. It is an application of JBoss in a way. We use Java to map protocols.

Open domotics - worth doing for the name alone.