Because of the nature of the Open Source model, there seems to be a higher tendency (unscientifically speaking) to just copy a piece of code and reuse it in other components. This means that if a piece of code turns out to be flawed, not only must it be fixed, but maintainers must also find every place they might have reused that blob of code. A visual inspection showed me that many of these were the multiple vulnerabilities affecting Firefox, Mozilla and Thunderbird. In a typical example, Firefox packages were fixed, then Mozilla packages were fixed 4 days later, then Thunderbird was fixed 4 days after that.
Note that it says "In a typical example, Firefox packages were fixed, then Mozilla packages were fixed 4 days later". So one reason why Red Hat has more vulnerabilities is that it includes far more packages, many of which duplicate functions - like Firefox and Mozilla. The point is, you wouldn't install both Firefox and Mozilla: you'd choose one, so a flaw shared by both should only be counted once. Not only that, but Red Hat is penalised because it actually offers much more than Windows.
I don't know what the other vulnerabilities were, but I'd guess they involved similar over-counting - either through duplication, or simply because Red Hat offered extra packages. By all means compare Windows and Red Hat, but make it a fair comparison.
Oh, come on now. You're a mathematician: you know what the actual point is, which is that whichever of the two browsers someone chooses, it will have a bug in it. The actual number of vulnerabilities is irrelevant, because whichever way you turn you're at risk.
I also have a serious problem with the idea that Red Hat 'offers much more' than a typical Windows installation. Whether this is true or not - I'm really not about to start counting binaries - the biggest problem with Windows has always been that there is far too much stuff in it. For years, it has been impossible to track and fix all the bugs it contains - so offering more really doesn't seem to be a good thing, even in the flawless, rose-tinted world of the open-source brigade. A more objective criticism would be that Microsoft is every bit as guilty of reusing potentially flawed code as any open source developer - it has banged on about the common code in Office for at least a decade, and I'd bet my fishing rod collection that code reuse is very far from unknown in Windows itself. The FUD's there, all right - I just don't think you're looking for it in the right place...
(I'm just glad I'm not using Debian 3.1, which appears to be among the weightier operating systems out there. I don't believe that even you can claim that 230 million lines of source code are likely to be bug-free.)
I'm certainly not claiming that open source is free from bugs (and never have) - I curse Firefox for its memory leak every hour. But that is irrelevant: all I'm talking about here is the methodology someone has used in a blog posting, not the intrinsic security or otherwise of either Windows or GNU/Linux.
I'm no expert on the details of the Red Hat distro, but I do know a bit about the Knoppix distro: this has 5000 separate programs on the DVD version. I doubt the prodigal Windows offers even a tenth of that. So if we were counting vulnerabilities in the way this post suggests, I'm pretty sure Knoppix would come out badly, simply because, even on a purely random basis, it has ten times the chance of harbouring vulnerabilities. I imagine that Red Hat's numbers are similar (the Red Hat site is singularly unhelpful in giving this information, unfortunately).
So what is needed is an apples-for-apples comparison, using an explicit, matched list of software - operating system, Web server, email server, DNS and so on - not just a rough Windows vs. GNU/Linux distro. Even then I'm not sure it would tell us much beyond the fact that all software has vulnerabilities (yes, free software too), but at least the comparison would be a fair one.
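To make concrete what I have in mind, here is a minimal sketch (my own illustration - all the component names and CVE sets are invented) of counting unique vulnerabilities only across a matched list of roles:

```python
# Sketch of an apples-for-apples comparison: count unique vulnerabilities
# only for an explicit, matched list of components on each platform.
# All component names and CVE sets below are invented for illustration.

matched_roles = {
    "web server":   {"windows": "iis",      "linux": "apache-httpd"},
    "dns server":   {"windows": "ms-dns",   "linux": "bind"},
    "email server": {"windows": "exchange", "linux": "postfix"},
}

vulns_by_component = {  # hypothetical CVE ids per component
    "iis": {"CVE-X-1", "CVE-X-2"}, "apache-httpd": {"CVE-X-3"},
    "ms-dns": set(),               "bind": {"CVE-X-4", "CVE-X-5"},
    "exchange": {"CVE-X-6"},       "postfix": set(),
}

for platform in ("windows", "linux"):
    total = set()
    for components in matched_roles.values():
        total |= vulns_by_component[components[platform]]
    print(f"{platform}: {len(total)} unique vulnerabilities in the matched set")
```

The point of fixing the role list first is that neither side gets credit or blame for software the other simply doesn't ship.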
The thing that cracks me up is how "selective" your reading of the article seems to be. The paragraph you quote was there primarily to highlight a particular problem where Red Hat fixes one issue in different packages on different days.
However, all of the counts and charts only reflect *unique* vulnerabilities - specifically, if a vuln affects 12 different packages and is fixed on 3 different days, it is still only counted one time.
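Concretely, the counting rule works like this (a minimal sketch of my own with made-up advisory records, not our actual tooling):

```python
# Minimal sketch of the unique-vulnerability counting rule: one CVE fixed
# in several packages on several days still counts exactly once.
# The advisory records below are made up for illustration.

advisories = [
    {"cve": "CVE-2005-0001", "package": "firefox",     "fixed": "2005-03-01"},
    {"cve": "CVE-2005-0001", "package": "mozilla",     "fixed": "2005-03-05"},
    {"cve": "CVE-2005-0001", "package": "thunderbird", "fixed": "2005-03-09"},
    {"cve": "CVE-2005-0002", "package": "openssl",     "fixed": "2005-04-02"},
]

unique_vulns = {a["cve"] for a in advisories}
print(f"{len(advisories)} package fixes, {len(unique_vulns)} unique vulnerabilities")
# -> 4 package fixes, 2 unique vulnerabilities
```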
Also, go ahead and read the follow-up, "Apples, Orange and Vuln Stats", which discusses some of the different viewpoints with regard to differences in package composition.
Jeff (http://blogs.technet.com/security)
Thanks for your response.
It's good that you only counted each vulnerability once, but that wasn't apparent to me from the article (did you state that somewhere?). I'm happy to withdraw that criticism.
As for my other point, I'll read the follow-up.
OK, I've read it now - it's an interesting discussion. But I'm afraid I still think the idea of counting all vulnerabilities in a product set is inherently biased against free software.
One of the latter's great strengths is that it can offer a huge number of programs. As I said in a reply above, the Knoppix DVD has 5000 of them.
The point is, you wouldn't install all of these - probably only a tiny fraction - so counting all their vulnerabilities is really quite meaningless.
The points you make about these kinds of comparisons certainly have merit, so I think the best thing we can say about this whole exercise is that it is not ultimately very profitable for either side.
In fact, you could argue it's slightly immature for both sides to get fixated on "beating" the opposition in this crude, quantitative way. The problem is, security doesn't have such a simple metric. Far better for everyone to do a better job for the customer.
Glyn
I think I philosophically agree with you - it's one of those "in a perfect world..." sentiments, though, and unfortunately we don't live in one.
The thing that fires me up is the bad assumptions and unsupported assertions (not by you, but in general). It makes me want to check - and I'm open to investigating any reasonable methodology that measures a vendor's success at improving security quality.
That brings me to my second comment. I strongly agree that vendors should just spend more time making things better for customers. But 1) how do you tell they are trying, and 2) how do you measure their progress/success?
Take Oracle, for example. Since early in the Unbreakable campaign, they've described development processes that sound very similar to the changes at Microsoft a la the SDL (security development lifecycle). At MSFT, I can look at vulns and severity of vulns pre- and post-SDL and see improvement. If I look at the Oracle 9 and 10g releases, I see a worsening trend. The metrics tell me they're not walking the walk, just talking the talk.
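The check itself is just simple arithmetic, something like this sketch (the labels and counts are invented placeholders, not actual Microsoft or Oracle figures):

```python
# Sketch of the pre/post process-change trend check described above.
# Release labels and vulnerability counts are invented for illustration.

releases = [
    ("release A (pre-change)",  30),   # disclosed vulns in the older release
    ("release B (post-change)", 45),   # disclosed vulns in the newer release
]

(_, before), (_, after) = releases
change = (after - before) / before * 100
trend = "worsening" if after > before else "improving"
print(f"{before} -> {after} disclosed vulns ({change:+.0f}%): {trend}")
```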
I won't claim my metrics are the be-all and end-all, and using vulns as a proxy for security quality does not address the full threat equation (i.e., attackers), but there needs to be some way to check/define progress on what the vendors can control.
Also, I understand the good dev philosophies followed by Linux/OSS - modularity, (potential) code review, (typically) better use of least privilege. Those by themselves do not guarantee fewer overall vulns or good security, though. However, if you take even the minimum 250 RH packages in a deployment, they vary widely in terms of testing, code review, etc.
This gets me back to where I started: the need for metrics. At the end of the day, how many publicly disclosed vulns end up in the shipping code, potentially raising user risk?
I think the Linux distro vendors have had the opportunity to take the perception of superior security and make it real, but instead they hold back from admitting the issue applies to them and are frittering away the opportunity...
Oh, and don't be shy about criticizing directly on my blog ;-) I'm not going for FUD - I want repeatable analysis that drives both MSFT and Linux/OSS to do better on security.
Best regards ~ Jeff
Questioning assumptions sounds good to me; and you're right, there's a lot of complacency about security in the open source world - and complacency is clearly bad.
Perhaps you've answered the question with your own comments: the thing that's really relevant - and totally valid - is comparing a single program over time. This is the true measure of how the developers are doing - not some mythical locking of antlers with the "other side".
And thanks for taking the time to respond to my comments, which were meant to be slightly provocative (this is a blog, after all...), which is partly why I wrote them here rather than on your blog, where they would have seemed overly aggressive.
Like you, my aim, insofar as I have one, is to help "both MSFT and Linux/OSS to do better on security".
And I have to say that I'm heartened by the dialogue that has ensued - it reminds me why I while away so many hours on all this stuff...