The Myth of Open Source Security Revisited v2.0

By Carnage4Life in Technology
Tue Feb 12, 2002 at 10:57:49 AM EST
Tags: Software

This article is a follow-up to an article entitled The Myth of Open Source Security Revisited. The original article tackled the common misconception amongst users of Open Source Software (OSS) that OSS is a panacea when it comes to creating secure software. It presented anecdotal evidence, taken from an article written by John Viega, the original author of GNU Mailman, to illustrate its point. This article follows up that anecdotal evidence with an analysis of similar software applications, their development methodologies and the frequency with which security vulnerabilities are discovered in them.

The purpose of this article is to expose the fallacy of the belief in the "inherent security" of Open Source software and instead to point to a truer means of ensuring that the security of a piece of software is high.


Apples, Oranges, Penguins and Daemons

When performing experiments to confirm a hypothesis about the effect of a particular variable on an event or observable occurrence, it is common practice to utilize control groups. In an attempt to establish cause and effect in such experiments, one tries to hold constant all variables that may affect the outcome except for the variable the experiment is interested in. Comparisons of the security of software created by Open Source processes and software produced in a proprietary manner have typically involved several variables besides development methodology.

A number of articles have been written that compare the security of Open Source development to proprietary development by comparing security vulnerabilities in Microsoft products to those in Open Source products. Noted Open Source pundit Eric Raymond wrote an article on NewsForge in which he compares Microsoft Windows and IIS to Linux, BSD and Apache. In the article, Eric Raymond states that Open Source development implies that "security holes will be infrequent, the compromises they cause will be relatively minor, and fixes will be rapidly developed and deployed." However, upon investigation it is disputable that Linux distributions have less frequent or more minor security vulnerabilities when compared to recent versions of Windows. In fact the belief in the inherent security of Open Source software over proprietary software seems to be the product of a single comparison, Apache versus Microsoft IIS.

There are a number of variables involved when one compares the security of software such as Microsoft Windows operating systems to Open Source UNIX-like operating systems, including the disparity in their market share, the requirements and dispensations of their user bases, and the differences in system design. To better compare the impact of source code licensing on the security of the software, it is wise to reduce the number of variables that will skew the conclusion. To this end it is better to compare software with similar system designs and user bases than to compare applications that are significantly distinct. The following section analyzes the frequency of the discovery of security vulnerabilities in UNIX-like operating systems including HP-UX, FreeBSD, Red Hat Linux, OpenBSD, Solaris, Mandrake Linux, AIX and Debian GNU/Linux.

Security Vulnerability Face-Off

Below is a listing of UNIX and UNIX-like operating systems with the number of security vulnerabilities that were discovered in them in 2001 according to the Security Focus Vulnerability Archive.
  • AIX: 10 vulnerabilities
  • Debian GNU/Linux: 13 vulnerabilities + 1 Linux kernel vulnerability
  • FreeBSD: 24 vulnerabilities
  • HP-UX: 25 vulnerabilities
  • Mandrake Linux: 17 vulnerabilities + 13 Linux kernel vulnerabilities
  • OpenBSD: 13 vulnerabilities
  • Red Hat Linux: 28 vulnerabilities + 13 Linux kernel vulnerabilities
  • Solaris: 38 vulnerabilities
From the above listing it is clear that source licensing is not a primary factor in determining how prone to security flaws a software application will be. Specifically proprietary and Open Source UNIX family operating systems are represented on both the high and low ends of the frequency distribution.

Factors that have been known to influence the security and quality of a software application are practices such as code auditing (peer review), security-minded architecture design, strict software development practices that restrict certain dangerous programming constructs (e.g. using the str* or scanf* family of functions in C), and validation & verification of the design and implementation of the software. It is having processes like these in place, and not the availability of its source code to the public, that makes OpenBSD the Open Source UNIX operating system with the best security record.

The Road To Secure Software

Exploitable security vulnerabilities in a software application are typically evidence of bugs in the design or implementation of the application. Thus the process of writing secure software is an extension of the process behind writing robust, high quality software. Over the years a number of methodologies have been developed to tackle the problem of producing high quality software in a repeatable manner within time and budgetary constraints. The most successful methodologies have typically involved using the following software quality assurance, validation and verification techniques: formal methods, code audits, design reviews, extensive testing and codified best practices.
  1. Formal Methods: One can use formal proofs based on mathematical methods and rigor to verify the correctness of software algorithms. Tools for specifying software using formal techniques exist, such as VDM and Z. Z (pronounced 'zed') is a formal specification notation based on set theory and first order predicate logic. VDM stands for "The Vienna Development Method", which consists of a specification language called VDM-SL; rules for data and operation refinement, which allow one to establish links between abstract requirements specifications and detailed design specifications down to the level of code; and a proof theory in which rigorous arguments can be conducted about the properties of specified systems and the correctness of design decisions. The previous descriptions were taken from the Z FAQ and the VDM FAQ respectively. A comparison of both specification languages is available in the paper Understanding the differences between VDM and Z by I.J. Hayes et al. A toy example of the style of specification involved follows.
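
    For a flavor of the notation (a toy Hoare-style sketch of this editor's own, not actual Z or VDM-SL syntax), an absolute-value operation can be specified by a precondition and a postcondition; the proof obligation is then to show that every execution started in a state satisfying the former terminates in a state satisfying the latter:

        \{\, x \in \mathbb{Z} \,\} \quad r := \mathrm{abs}(x) \quad \{\, r \ge 0 \,\wedge\, (r = x \,\vee\, r = -x) \,\}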

  2. Code Audits: Reviews of source code by developers other than the author of the code are a good way to catch errors that may have been overlooked by the original developer. Source code audits can vary from informal reviews with little structure to formal code inspections or walkthroughs. Informal reviews typically involve the developer sending reviewers the source code or descriptions of the software for feedback on any bugs or design issues. A walkthrough involves the detailed examination of the source code of the software in question by one or more reviewers. An inspection is a formal process in which a detailed examination of the source code is directed by reviewers who act in certain roles: a code inspection is directed by a "moderator", the source code is read by a "reader" and issues are documented by a "scribe".

  3. Testing: The purpose of testing is to find failures. Unfortunately, no known software testing method can discover all possible failures that may occur in a faulty application and metrics to establish such details have not been forthcoming. Thus a correlation between the quality of a software application and the amount of testing it has endured is practically non-existent.

    There are various categories of tests including unit, component, system, integration, regression, black-box, and white-box tests. There is some overlap among the aforementioned testing categories.

    Unit testing involves testing small pieces of functionality of the application such as methods, functions or subroutines. In unit testing it is usual for other components that the software unit interacts with to be replaced with stubs or dummy methods; a small illustrative sketch follows. Component tests are similar to unit tests with the exception that dummy and stub methods are replaced with the actual working versions. Integration testing involves testing related components that communicate with each other, while system tests involve testing the entire system after it has been built. System testing is necessary even if extensive unit or component testing has occurred, because it is possible for separate subroutines to work individually but fail when invoked sequentially due to side effects or some error in programmer logic. Regression testing involves the process of ensuring that modifications to a software module, component or system have not introduced errors into the software. A lack of sufficient regression testing is one of the reasons why certain software patches break components that worked prior to installation of the patch.
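
    As a minimal sketch of the stub idea (an illustration in C; every function name here is hypothetical, invented for this example rather than taken from any real project), consider a unit under test that formats a greeting and depends on a user lookup. In production the lookup might query a database; the unit test instead links against a stub that returns a fixed, known value:

        #include <assert.h>
        #include <stdio.h>
        #include <string.h>

        /* Dependency of the unit under test; the production version
           might query a database.  The test provides a stub instead. */
        const char *lookup_username(int uid);

        /* Unit under test: writes "Hello, <name>!" into buf. */
        static void greet(int uid, char *buf, size_t len)
        {
            snprintf(buf, len, "Hello, %s!", lookup_username(uid));
        }

        /* Stub standing in for the real lookup component. */
        const char *lookup_username(int uid)
        {
            (void)uid;              /* ignore the argument; always succeed */
            return "alice";
        }

        int main(void)
        {
            char buf[64];
            greet(42, buf, sizeof buf);
            assert(strcmp(buf, "Hello, alice!") == 0);  /* check behavior */
            puts("unit test passed");
            return 0;
        }

    Replacing the stub with the real lookup component would turn this same test into a component test.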

    Black-box testing, also called functional testing or specification testing, tests the behavior of the component or system without requiring knowledge of the internal structure of the software. Black-box testing is typically used to test that software meets its functional requirements. White-box testing, also called structural or clear-box testing, involves tests that utilize knowledge of the internal structure of the software. White-box testing is useful in ensuring that certain statements in the program are exercised and errors discovered. Code coverage tools aid in discovering what percentage of a system is being exercised by the tests.

    More information on testing can be found in the comp.software.testing FAQ.

  4. Design Reviews: The architecture of a software application can be reviewed in a formal process called a design review. In a design review the developers, domain experts and users verify that the design of the system meets the requirements and that it contains no significant flaws of omission or commission before implementation occurs.

  5. Codified Best Practices: Some programming languages have libraries or language features that are prone to abuse and are thus prohibited in certain disciplined software projects. Functions like strcpy, gets, and scanf in C are examples of library functions that are poorly designed and allow malicious individuals to use buffer overflows or format string attacks to exploit the security vulnerabilities exposed by using these functions; a short sketch of the hazard and its safer alternatives follows. A number of platforms explicitly disallow gets, especially since alternatives exist. Programming guidelines, such as those written by Peter Galvin in a Unix Insider article on designing secure software, are used by development teams to reduce the likelihood of security vulnerabilities in software applications.
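
    As a minimal illustration (this editor's sketch, with arbitrary buffer sizes), the program below contrasts the banned calls with size-aware alternatives; OpenBSD's strlcpy and strlcat (see reference 4) serve the same purpose where available:

        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            char name[16];

            /* Dangerous: gets() cannot know the buffer size, and strcpy()
               copies until a NUL regardless of room, so input longer than
               15 characters overflows 'name' and corrupts adjacent memory.
               gets() was considered bad enough to be removed in C11.
                   gets(name);
                   strcpy(name, untrusted_input);                        */

            /* Safer: both calls are told the destination size and
               cannot write past it. */
            if (fgets(name, sizeof name, stdin) == NULL)
                return 1;
            name[strcspn(name, "\n")] = '\0';  /* strip trailing newline */

            char greeting[32];
            snprintf(greeting, sizeof greeting, "Hello, %s!", name);
            puts(greeting);
            return 0;
        }
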
Projects such as the OpenBSD project that utilize most of the aforementioned techniques in developing software typically have a low incidence of security vulnerabilities.

Issues Preventing Development of Secure Open Source Software

One of the assumptions that is typically made about Open Source software is that the availability of source code translates to "peer review" of the software application. However, the anecdotal experience of a number of Open Source developers including John Viega belies this assumption.

The term "peer review" implies an extensive review of the source code of an application by competent parties. Many Open Source projects do not get peer reviewed for a number of reasons including
  • the complexity of the code, combined with a lack of documentation, makes it difficult for casual users to understand the code well enough to give a proper review;

  • developers making improvements to the application typically focus only on the parts of the application that affect the feature to be added, instead of the whole system;

  • developers' ignorance of security concerns;

  • complacency in the belief that since the source is available, it is being reviewed by others.

Also, the lack of interest in unglamorous tasks like documentation and testing amongst Open Source contributors adversely affects the quality of the software. However, all of these issues can be and are solved in projects with a disciplined software development process, clearly defined roles for the contributors and a semi-structured leadership hierarchy.

Benefits of Open Source to Security-Conscious Users

Despite the fact that source licensing and source code availability are not indicators of the security of a software application, there is still a significant benefit of Open Source to some users concerned about security. Open Source allows experts to audit their software options before making a choice and, in some cases, to make improvements without waiting for fixes from the vendor or source code maintainer.

One should note that there are constraints on the feasibility of users auditing the software, based on the complexity and size of the code base. For instance, it is unlikely that a user who chooses Linux as a web server for a personal homepage will scrutinize the TCP/IP stack code.

References
  1. Frankl, Phyllis et al. Choosing a Testing Method to Deliver Reliability. Proceedings of the 19th International Conference on Software Engineering, pp. 68-78, ACM Press, May 1997. <http://citeseer.nj.nec.com/frankl97choosing.html>

  2. Hamlet, Dick. Software Quality, Software Process, and Software Testing. 1994. <http://citeseer.nj.nec.com/hamlet94software.html>

  3. Hayes, I.J., C.B. Jones and J.E. Nicholls. Understanding the differences between VDM and Z. Technical Report UMCS-93-8-1, University of Manchester, Computer Science Dept., 1993. <http://citeseer.nj.nec.com/hayes93understanding.html>

  4. Miller, Todd C. and Theo de Raadt. strlcpy and strlcat - consistent, safe, string copy and concatenation. Proceedings of the 1999 USENIX Annual Technical Conference, FREENIX Track, June 1999. <http://www.usenix.org/events/usenix99/full_papers/millert/millert_html/>

  5. Viega, John. The Myth of Open Source Security Earthweb.com. <http://www.earthweb.com/article/0,,10455_626641,00.html>

The Myth of Open Source Security Revisited v2.0 | 83 comments (79 topical, 4 editorial, 0 hidden)
High level languages (4.36 / 11) (#1)
by Paul Johnson on Tue Feb 12, 2002 at 06:35:18 AM EST

One good option for security not mentioned in the article is the use of high level languages.

Languages like Eiffel, Python, Haskell and even Perl are high level because they don't require the user to worry about minor details like memory management. This has two major advantages:

  • Higher productivity because less programmer time is spent thinking about details that are better handled by machines.
  • Fewer bugs because automatic processes don't make mistakes (well, nowhere near as many). This particularly applies to security bugs because these bugs are not detected by testing functionality.
Fewer bugs means fewer security bugs. I've not done any statistics, but my broad impression is that somewhere around 50% of the security reports I see refer to buffer overruns. High level languages make buffer overruns impossible by giving you real string and array handling instead of making you wrangle chunks of memory by hand.

So why are people still programming in C and C++? Partly it's herd instinct: for any individual company or coder the costs of going against the herd are probably higher than the costs of going with it, even if the herd is collectively headed in the wrong direction. And partly it's an over-concern with low-level efficiency. Fred Brooks wrote about this trade-off between programmer time and machine cycles in The Mythical Man-Month. He argued that most of the time the programmer costs more than the machine, and so it is programmer time that should be optimised.

High level languages are not a panacea for security, of course. Security problems pop up at all levels of software. But that's no excuse for not tackling some of them.

Paul.
You are lost in a twisty maze of little standards, all different.

What the Mailman article said (4.00 / 3) (#4)
by mlinksva on Tue Feb 12, 2002 at 07:03:37 AM EST

The article referenced above by the author of Mailman says the same thing:
The open source movement hasn't made the problem of buffer overflows go away. But eventually, newer programming languages may; unlike C, modern programming languages like Java or Python never have buffer overflow problems, because they do automatic bounds checking on array accesses. As with any technology, fixing the root of the problem is far more effective than any ad hoc solution.
Tools like splint may help if you need to use C. I just heard about splint via the EROS mailing list. FWIW EROS' "capability security" model attempts to fix the root (pun intended) of some other endemic security problems.
--
imagoodbitizen adobe unisys badcitizens
[ Parent ]
Who watches the watchers? (4.75 / 4) (#6)
by treefrog on Tue Feb 12, 2002 at 07:43:45 AM EST

Or in this context, what guarantees has one got that the libraries are well written without any security (or other) flaws?

You still need testing, and you may still want to apply formal methods to verify your libraries, or your compiler, or your interpreter.

But of course, once this has been done, then you can use them with more confidence.

The choice of language is of course determined by many factors, including legacy code, interfacing, expertise within the development, testing and support organisations, library availability and execution speed. In some cases the correct decision is to use a low level language, in others a high level language is better. As Paul points out, everything is a trade-off.

regards, treefrog


Twin fin swallowtail fish. You don't see many of those these days - rare as gold dust Customs officer to Treefrog
[ Parent ]

i trust the perl team (4.50 / 4) (#24)
by clark9000 on Tue Feb 12, 2002 at 10:21:46 AM EST

Or in this context, what guarantees has one got that the libraries are well written without any security (or other) flaws?

There is no guarantee, but I agree with the original poster for a few reasons. One is, if there is a security problem in perl (or python or whatever), as opposed to my perl code, it will probably be found a lot faster. Then all I need to do is install the corrected version of perl, and chances are I won't need to make changes to my code (of course it's possible I will). The other thing is the division of labor. As the typical developer, I try to be security conscious, but I'm also worried about getting my app to work, whatever it is. I could be wrong on this but I would think the perl developers are more concerned with low-level issues, such as, say, memory management. While as a C programmer I have to know about memory management, the principal work of my application isn't memory management--it's counting widgets or whatever. Meanwhile, a big job of the perl interpreter is managing memory. If it can't manage memory correctly, as compared to its body of functionality, this is a big problem. Whereas in my little program, if it counts widgets, but doesn't quite handle memory correctly, it's still a problem but at least psychologically it seems like a smaller problem.

Everything is a tradeoff, but I would say that if one of your main criteria is not to have any buffer-overflow problems, I'd go with a scripting language.

Anyway, all that said, certain scripting languages introduce new types of security problems, such as the way PHP automatically puts all GET, POST and cookie variables into the global namespace. This is a convenience you won't get in, say, C or Java, but you have to be conscious of it, otherwise you can screw yourself.
_____
Much madness is divinest sense
To a discerning eye;
Much sense the starkest madness.

-- E. Dickinson
[ Parent ]
Good points were made here (none / 0) (#63)
by Randall Burns on Wed Feb 13, 2002 at 01:45:39 AM EST

The major reason why I think folks tend not to use high-level languages is the lack of standardization in this area. The most commonly used higher level language is Perl. Javascript has aspects of a high level language (and is supported by a solid standardization process) but has some serious limitations (even if the folks at TPI are addressing many of those limitations). There have been literally dozens of commercial and non-commercial scripting languages. The end result is that many managers choose languages like C, C++ or Java that they feel will be around in a few years.

RJB

[ Parent ]

Formal methods (3.62 / 8) (#2)
by Estanislao Martínez on Tue Feb 12, 2002 at 06:53:40 AM EST

Well, I'm asking out of ignorance mostly, but how helpful really are formal methods? I understand that they just help the implementor be confident that his program meets a specification. But which proportion of security problems arises because of faulty specifications in the first place, which no amount of formal proof will correct? (Well, unless the specification is inconsistent... but this is not the scenario I have in mind.)

My guess is also that they should be much easier to apply for proving the correctness of lower-level functions than a whole, complex system, right?

--em

correctness (4.60 / 5) (#12)
by wiredog on Tue Feb 12, 2002 at 08:30:00 AM EST

In general, if the low level functions are correct, then so are the high level components that use them. Formal methods, especially code review, can be extremely useful. Now, you're not going to do code review on all of a 100,000 line program, but you will do it on the 200 line components that go into it. After you review a component you check it into the library and you don't touch it again unless you also do a code review on it, and then on any component that depends on it.

The aim, at least for me, isn't complete avoidance of failure, because that isn't going to happen, it's to have fail-safe code. Every function is in a try/catch block. Every array is checked for allocation/deallocation and buffer overflow.

Yes, this stretches out the development time. Yes, it's a hassle. Yes, I haven't had "segfault, core dumped", or equivalent, caused by one of my programs in years.
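
A rough sketch of that fail-safe discipline (the try/catch wording suggests C++; this is the same checking pattern expressed in C, with all names invented for illustration): every allocation is verified and every index is validated before use, so bad input degrades into an error code rather than a crash or an overflow:

    #include <stdio.h>
    #include <stdlib.h>

    /* Fail-safe element access: validate the pointer and the index
       before touching memory, and report failure via a return code. */
    static int get_element(const int *arr, size_t len, size_t i, int *out)
    {
        if (arr == NULL || i >= len)    /* bounds check, not trust */
            return -1;
        *out = arr[i];
        return 0;
    }

    int main(void)
    {
        size_t n = 8;
        int *data = malloc(n * sizeof *data);
        if (data == NULL) {             /* allocation checked */
            fputs("out of memory\n", stderr);
            return 1;
        }
        for (size_t i = 0; i < n; i++)
            data[i] = (int)i * 10;

        int v;
        if (get_element(data, n, 12, &v) != 0)  /* fails safely */
            fputs("index out of range\n", stderr);

        free(data);
        return 0;
    }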

Peoples Front To Reunite Gondwanaland: "Stop the Laurasian Separatist Movement!"
[ Parent ]

RE: Formal methods (4.40 / 5) (#20)
by gazbo on Tue Feb 12, 2002 at 09:39:43 AM EST

If by formal methods you are referring to Z and VDM as mentioned in the article, I can tell you they are a pain in the arse.

Some systems have been successfully developed using these techniques (IIRC the French Metro system's software - the safety critical parts - used either Z or VDM, but I'm stretching my memory here and so could easily be utterly wrong) however there is a trade off between the cost of a bug, and the cost of using rigorous methodologies.

In a safety critical system, it makes sense to spend the extra time doing this, as the cost of failure could be terrible. In a server application, you would be talking about a huge increase in development time (=cost) to cut out a few bugs, even fewer of which will be security threats, and even fewer of which will be exploited.

Also, writing (and even reading) Z or VDM is very different to coding. That lead developer at your company whose experience is purely commercial may be a fantastic coder, but as soon as you want to use VDM, you'd get better mileage from a mathematician. I've written a (toy) application using VDM and I assure you I am not exaggerating; we are talking pure mathematics right up until the last step. The only benefit a programmer brings is the ability to pull the reification process in the right direction to produce efficient, implementable code.

In their current incarnations, formal methods are not suitable for mainstream use - the benefits are so much lower than the price.

-----
Topless, revealing, nude pics and vids of Zora Suleman! Upskirt and down blouse! Cleavage!
Hardcore ZORA SULEMAN pics!

[ Parent ]

excellent (3.42 / 7) (#3)
by treefrog on Tue Feb 12, 2002 at 07:01:03 AM EST

Good topic. Glad to see a bit of an intro into the different sorts of testing, and also into formal methods in software. Don't be afraid of them. If you are pragmatic, then you will recognise the formal methodology as a way of working out what should go into ASSERT statements (in C/C++).

best regards, treefrog


Twin fin swallowtail fish. You don't see many of those these days - rare as gold dust Customs officer to Treefrog

Testing (4.42 / 7) (#5)
by Bad Harmony on Tue Feb 12, 2002 at 07:04:01 AM EST

Thus a correlation between the quality of a software application and the amount of testing it has endured is practically non-existent.

I have to disagree with this. There are statistical models that can be used to predict software reliability based on test data. This can be used to measure the current reliability of a system and predict how much testing will be needed to meet a system reliability requirement. These techniques have been used for high-reliability systems such as telephone switch software. See this page for an introduction to the subject.
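
To give one concrete example of such a model (this editor's addition, and not necessarily the model used for telephone switches): in the Goel-Okumoto reliability growth model, the expected number of failures observed by test time t and the probability of surviving a further interval x without failure are

    m(t) = a\,(1 - e^{-bt}), \qquad R(x \mid t) = e^{-[m(t+x) - m(t)]}

where a is the expected total number of faults and b is the per-fault detection rate; fitting a and b to accumulated test data yields exactly the kind of reliability prediction described above.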

54º40' or Fight!

This qualifies as FUD, (4.68 / 16) (#7)
by DeHans on Tue Feb 12, 2002 at 07:49:42 AM EST

at least everything you say up until "The Road To Secure Software".

You state:
However, upon investigation it is disputable that Linux distributions have less frequent or more minor security vulnerabilities when compared to recent versions of Windows.

Excuse me?? Upon investigation of the SecurityFocus page you linked to in this paragraph, I came to the conclusion that it is impossible to come to any conclusion based on these numbers:

Quote 1:
There is a distinct difference in the way that vulnerabilities are counted for Microsoft Windows and other operating systems. For instance, applications for Linux and BSD are often grouped in as subcomponents with the operating systems that they are shipped with. For Windows, applications and subcomponents such as Explorer often have their own packages that are considered vulnerable or not vulnerable outside of Windows and therefore may not be included in the count. This may skew numbers.
Quote 2:
The numbers presented below should not be considered a metric by which an accurate comparison of the vulnerability of one operating system versus another can be made.
And yet, after you "investigated" the numbers, you think the claim is "disputable". That is your right of course, but *not* based on the numbers from SecurityFocus.

And then you go on to say:
From the above listing it is clear that source licensing is not a primary factor in determining how prone to security flaws a software application will be. Specifically proprietary and Open Source UNIX family operating systems are represented on both the high and low ends of the frequency distribution.

No, from the above listing nothing becomes clear. You have just quoted a number of reports without truly determining what the numbers mean.

Take OpenBSD vs. FreeBSD for instance. According to your list 13 vulnerabilities were found in OpenBSD, and 24 in FreeBSD in 2001.
However, when looking at the details we find that 7 vulnerabilities occurred in *both* OS's, so the list should read:
  • 6 vulnerabilities in OpenBSD
  • 17 vulnerabilities in FreeBSD
  • 7 vulnerabilities in OpenBSD/FreeBSD

Repeat for all other OS's. It is inherent in Open Source that good code gets reused. Therefore a vulnerability in a single piece of code will likely result in several security advisories, one for each distribution in which the code is used.

Apart from that there are fundamental differences between different vulnerabilities. Taking OpenBSD into account once again: of the 13 vulnerabilities, 5 were DoS attacks, 3 local exploits, 2 remote exploits and 3 other attacks. I don't like a DoS, but I prefer it to a remote exploit!!! Of the 13 vulnerabilities listed, only 6 are currently known to be exploitable!! The other 7 are exploitable in theory, but no one has seen or coded an exploit yet. And only 6 of the advisories dealt with default installations; the rest dealt with optional components. All in all, 13 doesn't say much.

I agree with you that to create secure code, a drastic change in mindset is needed, and I even agree with your solutions, but the first part is just FUD imnsho.

Dictionary... (3.00 / 5) (#9)
by Carnage4Life on Tue Feb 12, 2002 at 08:16:40 AM EST

You state:
However, upon investigation it is disputable that Linux distributions have less frequent or more minor security vulnerabilities when compared to recent versions of Windows.
Excuse me?? Upon investigation of the SecurityFocus page you linked to in this paragraph, I came to the conclusion that it is impossible to come to any conclusion based on these numbers:


I can only assume that English isn't your first language since you completely failed to understand the meaning of the word DISPUTABLE in that sentence.

Take OpenBSD vs. FreeBSD for instance. According to your list 13 vulnerabilities were found in OpenBSD, and 24 in FreeBSD in 2001. However, when looking at the details we find that 7 vulnerabilities occurred in *both* OS's, so the list should read:

And some advisories were for multiple Unices and Linux distros. I wasn't about to break them all apart like that as long as the fundamental idea remained the same.

Apart from that there are fundamental differences between different vulnerabilities. Taking OpenBSD into account once again: of the 13 vulnerabilities, 5 were DoS attacks, 3 local exploits, 2 remote exploits and 3 other attacks. I don't like a DoS, but I prefer it to a remote exploit!!! Of the 13 vulnerabilities listed, only 6 are currently known to be exploitable!! The other 7 are exploitable in theory, but no one has seen or coded an exploit yet. And only 6 of the advisories dealt with default installations; the rest dealt with optional components. All in all, 13 doesn't say much.

"Exploitable in theory" is a very dubious term. No one has written an Outlook virus that deletes all the .doc, .xls, .cpp, .java and .html files on a user's harddrive or utilized one of the recently unpatched MSIE bugs to do something truly malicious but that somehow doesn't lessen their impact or imply that doing so is theoretical.

I do concede that it would have been beneficial to group the bugs by categories such as remote vs. local, patched vs. unpatched, etc. but I was pressed for time when writing this and the BugTraq site was extremely slow to respond to requests at the time. It would have taken hours to do that which I did not have and do not have now.

[ Parent ]
Yes, I'm a non-native English speaker, (4.83 / 6) (#11)
by DeHans on Tue Feb 12, 2002 at 08:25:41 AM EST

but disputable means (as far as I understand it :-) that it is "open for discussion". But there can be no discussion based on the numbers from SecurityFocus. The numbers are so skewed that any discussion is fruitless, because its basis (the numbers) is itself disputable. Am I still making sense?

[ Parent ]
Doh (4.28 / 7) (#17)
by DeHans on Tue Feb 12, 2002 at 08:59:11 AM EST

Should have read your whole comment before replying:
And some advisories were for multiple Unices and Linux distros. I wasn't about to break them all apart like that as long as the fundamental idea remained the same.
Imho it is very important; it is one of the aspects where Open Source outshines closed source. Exactly because code is reused in different projects, the number of eyes able to watch for security problems automatically increases: FreeBSD users benefit from the code audits of OpenBSD, etc. Looking at the numbers this way gives a completely different view of the security issues.
"Exploitable in theory" is a very dubious term. No one has written an Outlook virus that deletes all the .doc, .xls, .cpp, .java and .html files on a user's harddrive or utilized one of the recently unpatched MSIE bugs to do something truly malicious but that somehow doesn't lessen their impact or imply that doing so is theoretical.
But there is a great difference between "exploitable but not yet found in the wild" and "may be exploitable". Take for instance the most common security issue: buffer overflows. If a buffer overflow is found, a security advisory is in order (preferably after the patch). At best, only the application in which the overflow is found can be crashed. Depending on what process the overflow occurs in and how the overflow occurs, however, everything from crashing the box up to 0wning the box can happen. So if OS A has 14 remotely exploitable buffer overflows, and OS B has 26 buffer overflows for which no known exploits exist, I'd posit that OS B is more secure than OS A.
It would have taken hours to do that which I did not have and do not have now.
Which is *exactly* why I called it FUD. You haven't done enough research to back up your claims with numbers, and you draw conclusions from the numbers you do have which cannot be substantiated, as the numbers are skewed, as admitted by the *source* of the numbers.

[ Parent ]
The numbers (4.66 / 3) (#39)
by Miniluv on Tue Feb 12, 2002 at 03:13:28 PM EST

Let's put the numbers aside for a moment. First we need to create a level playing field, or as close to that as possible, before comparing numbers. We need to understand what vulnerabilities the term "Windows" encompasses. Is it out-of-the-box Windows, Windows at the current Service Pack version plus all subsequent hotfixes, or what? Is it Windows plus any applications that'll run on Win32? How about Linux? Is it just the kernel, the kernel plus "base" userland, the full RedHat Workstation installation, what?

These numbers appear every year. Every year Slashdot posts that absolutely amazing statistic that Linux is "less" secure than Windows or Solaris or AIX or OSF/1 or VMS or DOS, depending on which special interest web site chose to massage the numbers.

Ultimately, I would say that the number of vulnerabilities shows absolutely nothing about security. Do we want to note that OpenBSD team members claim to have closed "hundreds" of undisclosed holes during 2001 without really mentioning them, because they were believed to be unexploitable, or had not yet been exploited and thus could be silently closed? How about the same being true for hundreds of other *nix-agnostic open source apps?

This is a good article that attempted to base a premise on minimal research into questionable figures. The meat of the article was excellent and totally non-dependent on your hand waving. It does need to be recognized, however, that you can, with enough time and determination, find a statistical model for anything that'll tell you exactly what you want to hear.

Some things are holy, and the sauna is one of them
[ Parent ]

Examples need to support the argument!! (4.00 / 4) (#13)
by agentk on Tue Feb 12, 2002 at 08:30:07 AM EST

The author's goal is good: to bring some more accurate methods to analyzing security. The problem is that security holes discovered is not the same as security holes present, or a measure of likelihood of data loss, or other damage (different types of bugs allow different types of attacks).

This is the primary fallacy of 'security analysis' stats. As the author says near the beginning, there are many variables and too many unknowns will support possibly erroneous conclusions.

By the numbers there, I should switch all my servers to AIX as fast as I can -- but could this just mean that not as much effort is going into finding the holes in AIX as opposed to *BSD, SunOS, Linux or Windows? What percentage of those holes are patched, and how soon after they are discovered?


These need to be taken into account, perhaps in the more scientific approach that the author seems to advocate -- but the examples need to support your argument.

[ Parent ]
"good" (3.33 / 3) (#14)
by streetlawyer on Tue Feb 12, 2002 at 08:38:29 AM EST

It is inherent in Open Source that good code gets reused. Therefore a vulnerability in a single piece of code will likely result in several security advisories, one for each distribution in which the code is used.

This is clearly some new sense of the word "good" that I am unfamiliar with.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]

Aah, (3.00 / 2) (#18)
by DeHans on Tue Feb 12, 2002 at 09:11:41 AM EST

but then you have never read many Dutchisms :-).

"Goede code" translates nicely to "good code", but perhaps some of the meaning is lost in the translation. How would you describe "code which is readable, performs as specified and is not bloated" other than "good code"?

Note that I do not state that reused code is automatically "good code" (think BIND :-).

[ Parent ]
I think you misunderstood (3.00 / 1) (#27)
by vrt3 on Tue Feb 12, 2002 at 11:32:50 AM EST

You talked about reusing good code, and about vulnerabilities in that same code. Code with vulnerabilities can not be good code, not even in Dutch. Toch niet in mijn woordenboek ;-) (Not in my dictionary, anyway.)
When a man wants to murder a tiger, it's called sport; when the tiger wants to murder him it's called ferocity. -- George Bernard Shaw
[ Parent ]
i think you both misunderstood (4.00 / 1) (#32)
by tfogal on Tue Feb 12, 2002 at 12:22:35 PM EST

i think the reference was being made to HHGG, i believe when Arthur is first on a spaceship (which he got on to avoid being blown up with the Earth).. correct me if i'm wrong.

Anyhow, Ford tells him something to the effect of 'everything is alright, everything's good', and Arthur retorts w/ streetlawyer's comment..


I think kuro5hin will gradually turn into a forum for the discussion of professional wrestling. --
[ Parent ]
two points (4.83 / 6) (#15)
by streetlawyer on Tue Feb 12, 2002 at 08:44:07 AM EST

1) The Open Source gang could help their credibility by ceasing to use and desisting from using the term "peer review" in a misleading sense. Peer review is review by peers; it comes from academic practice of systematically having other experts look at a piece of research, with those referees typically promising to examine it with a set degree of diligence. Academic journals do not typically hand out unpublished articles to the general public in the hope that one of them will provide a credibility check (not least because that would involve them in the logical conundrum of publishing pre-publication material). Might I suggest "General Review" as a term which does not invest the Open Source process with spurious borrowed credibility.

2) Who cares about Open Source security these days? Even Slashdot, canonical home of the Open Source zealot, doesn't follow the full disclosure principle with regard to vulnerabilities in its own code.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever

With all due respect, (4.50 / 6) (#21)
by DeHans on Tue Feb 12, 2002 at 09:56:47 AM EST

but full disclosure isn't an issue on Open Source projects. I can do a diff to see what code has been changed, thereby determining exactly what the vulnerability was. If I do not have the skills to do that, I can hire someone to do it for me.

I do not have that option, however, with closed source products. I, as an admin, am fully at the mercy of the vendor. If the advisory is very limited, I may not even be able to test if I am vulnerable!!!

[ Parent ]
Uhm (5.00 / 1) (#40)
by Miniluv on Tue Feb 12, 2002 at 03:17:12 PM EST

Full disclosure is about warning users that a hole exists and where, if anywhere, it is fixed. It has nothing to do with whether I can go watch the website, or sourceforge site or freshmeat project, for every piece of software on my network and then diff the code looking for format string fixes, buffer overflow fixes, and so on. Nor do I have the time to read the changelog on every such project looking for "security" somewhere in it.

Instead I subscribe to bugtraq and keep an eye out for software I use. Then I compare versions, maybe modify the proof of concept to use as a testing tool, and patch as necessary. Full disclosure is a must for source available and source unavailable projects.

Some things are holy, and the sauna is one of them
[ Parent ]

Good link (4.00 / 2) (#25)
by darkbrown on Tue Feb 12, 2002 at 10:45:11 AM EST

I found this quote from Jamie McCarthy in the follow-up section hilarious.

The following patch for 2.0.x sites has not been tested, but should work and is recommended to be applied immediately until a new 2.0.x version can be released:



[ Parent ]
No, the term is used correctly (none / 0) (#76)
by BrentN on Wed Feb 13, 2002 at 04:54:34 PM EST

The Open Source and Free Software communities are correct in using the term "peer review". While many people look at the code, the people who submit changes and comments are typically peers - people who are at least somewhat experienced in writing and maintaining the same type of code.

I can speak from experience that in most large academic journals, the "peer" reviewers either (a) don't understand your work or (b) are your direct competitors and will not 'pass' your research because they don't want to be scooped.

So, I don't think it's fair to say that the OSS/FSF guys are 'borrowing' credibility. In many cases, their peer review process is more credible.

[ Parent ]

Logic (4.83 / 6) (#16)
by blues is dead on Tue Feb 12, 2002 at 08:45:55 AM EST

Sentence 1: Unfortunately, no known software testing method can discover all possible failures that may occur in a faulty application and metrics to establish such details have not been forthcoming.

Sentence 2: Thus a correlation between the quality of a software application and the amount of testing it has endured is practically non-existent.

Sentence 2 does not follow from 1. Testing clearly doesn't need to be perfect, to still say that a tested application is likely to have more quality than its untested counterpart.

I think Carnage4Life is trying to argue that developers should think of quality as something in the source code, hidden from the user. But then he's just ignoring the quality that the user sees, which is pretty much the point of software, isn't it?

Maybe there is some deeper truth in what Carnage4Life claims, but he needs to put his finger on it.

Attitude (4.83 / 6) (#23)
by CaptainZapp on Tue Feb 12, 2002 at 10:10:39 AM EST

Well, what I'm completely missing in your summary is a bad attitude problem on Microsoft's side.

While a Linux distributor is usually rather quick to admit vulnerabilities and deliver appropriate patches, Microsoft still seems to treat security issues primarily as a public relations problem.

Frankly: a company in the process of rolling out a new key product and knowingly hiding a disastrous security vulnerability from the public for 5 weeks in order not to endanger the rollout (or is there any other explanation for such behavior?) has totally, utterly and for the future to come lost my trust.

The attempt to demonize full disclosure (which actually evolved due to the likes of MS et al ignoring advisories) might be convenient, but it doesn't lend them a lot of credibility.

looks decent, +1 FP from me (3.83 / 6) (#26)
by pb on Tue Feb 12, 2002 at 10:48:36 AM EST


The only beef I have, from my skimming of the article, is that counting security vulnerabilities is a truly meaningless statistic in assessing the security of an OS. A more interesting statistic would be how quickly said vulnerabilities get fixed and the fixes delivered to the users.

At least then you can assess how secure a server with a competent administrator who applies patches is on different platforms, which is about as secure as anyone can expect these days. Just assume that the admin patches his system once a day, on weekdays, in the morning, and see how long the exploits would have gone unpatched, and how dangerous they are, and you can get a rough percentage of vulnerability.

Editorial:
The underlines in the references look broken; editors, please fix?
---
"See what the drooling, ravening, flesh-eating hordes^W^W^W^WKuro5hin.org readers have to say."
-- pwhysall
Lies, damn lies and statistics. (4.62 / 8) (#28)
by Tezcatlipoca on Tue Feb 12, 2002 at 11:34:03 AM EST

Look, to say "an OS had 50 vulnerabilities last year" is to say absolutely nothing.

You have to somehow weigh those vulnerabilities, measure them in an objective, independent manner, to arrive at any kind of conclusion.

Vulnerabilities are not voters. It is not a case of "one vulnerability, one vote" against any given OS.

I agree with your main thesis, but the way you tried to make a case for it was quite frankly pretty lousy.

Also, although most people who have been around here for at least a couple of weeks will surely know who you are, it would not have hurt to disclose where you work, because I think it is pretty pertinent to this matter.


---
"At eighteen our convictions are hills from which we look;
at forty-five they are caves in which we hide." F. Scott Fitzgerald.
OpenBSD? (4.00 / 4) (#29)
by tfogal on Tue Feb 12, 2002 at 12:10:08 PM EST

I don't really follow OpenBSD, I don't even run it, but from visiting sites and through word of mouth, I find it VERY hard to believe it had 13 vulnerabilities, of any severity.

OpenBSD's developers regularly audit the code. They try to break into their own systems, look for insecure strcpy()s, and all such other security stuff that I would know little about firsthand :P.

Anyhow, OpenBSD has the reputation of being _THE_ secure UNIX. From what I remember, they prided themselves on not having had a vulnerability in the default distribution for something like four years.

Like many other people, I would think, I agree that there is no basis for the claim that open source software is inherently more secure than closed source. I think you need to go a little more in depth in such an article. For one, I'd like to see the vulnerabilities spelled out, so we can assess for ourselves how severe a problem each one is. Also, I think you need to compare more software from more vendors, particularly on the proprietary side of the issue. As it stands, this article sounds a lot like "C'mon guys, stop picking on Microsoft... Open source isn't perfect!".

I will look forward to a resubmission with more depth...

I think kuro5hin will gradually turn into a forum for the discussion of professional wrestling. --
OpenBSD (none / 0) (#52)
by JatTDB on Tue Feb 12, 2002 at 07:31:24 PM EST

Actually, I believe the claim is "no remote root vulnerabilities", not just "no vulnerabilities". There have been a number of vulnerabilities that can be exploited from user accounts. Most have not been severe, and at least some were purely theoretical holes...no known exploit.

[ Parent ]
Oh goody! (3.69 / 13) (#30)
by quartz on Tue Feb 12, 2002 at 12:10:28 PM EST

I thought Open Source software was secure, but now that it's been proven that's just a myth, I think I'm gonna install Windows on all the computers in my IT department. I mean, the article is right: who cares if I have to wait months for a gaping hole in Windows to be closed, who cares if I'm not allowed to begin my emails with the word "begin", who cares if my software vendor thinks I'm a criminal if I disclose a bug, who cares if my OS is the world's most popular platform among virus writers, who cares if my company's internal documents are flying all over the Internet because my trusty, secure OS made it so easy to write something like SIRCAM, who cares if I have no friggin' clue what goes on behind the pretty interface and that I have to take the word of Steve "monkeyboy" Ballmer for it, who cares if I have to basically take my business offline whenever the BSA feels like auditing me, who cares if I have to give up my firstborn if I violate the 200 page EULA, but Microsoft is in no way responsible if their buggy database engine decides to "forget" some of my data, who cares if I have to upgrade whenever my vendor feels like it, or pay through the nose for every damn piece of software I use, who cares if my secure OS is the laughing stock of every damn script kiddie on the planet, I mean who the hell cares about those things?

What REALLY matters is that it's been SCIENTIFICALLY proven that open source software has the same exact NUMBER of security holes as proprietary software. Yeah, that's what matters. And that proprietary software is more carefully developed and scrutinized than open source software. And that's a fact, because my vendor said so. And if you can't trust a big fat greedy monopoly, then who can you trust?



--
Fuck 'em if they can't take a joke, and fuck 'em even if they can.
Ibid. (4.00 / 1) (#44)
by hillct on Tue Feb 12, 2002 at 04:20:10 PM EST

Yah, What he said.

I was going to wax poetic along the same lines, but readers would be better served by re-reading the above comment.

That's all folks.
--CTH


--Got Lists? | Top 31 Signs Your Spouse Is A Spy
[ Parent ]
WTF? (3.00 / 5) (#45)
by Carnage4Life on Tue Feb 12, 2002 at 04:22:53 PM EST

What exactly was the point of your virulent anti-Microsoft tirade? I write an article about how OpenBSD is more secure than other Open Source projects because they follow certain practices and you respond with ranting about Microsoft. I don't get it.

Maybe it's all part of some secret Microsoft plot to get all the Linux users to switch to OpenBSD, only for Theo de Raadt to take off his mask Scooby Doo style and reveal himself to be Steve Ballmer.

;)

[ Parent ]
Who knows? (5.00 / 3) (#48)
by quartz on Tue Feb 12, 2002 at 04:58:46 PM EST

What exactly was the point of your virulent anti-Microsoft tirade?

Frankly, I don't know. I know I was trying to make a point, but for the life of me I can't remember what it was. Maybe I was trying to launch a personal attack directed at you. Or perhaps I completely missed the point of your article. Yeah, that must be it. Your article was about OpenBSD versus Open Source, and silly me, I thought it was about Windows vs. Open Source. What on Earth could have possessed me to think that? Could it be this?

However, upon investigation it is disputable that Linux distributions have less frequent or more minor security vulnerabilities when compared to recent versions of Windows.

Maybe. Or this?

In fact the belief in the inherent security of Open Source software over proprietary software seems to be the product of a single comparison, Apache versus Microsoft IIS.

Possibly. Or maybe I was trying to point out that "inherent security" is quite a broad concept, and that a simple look at the number of vulnerabilities combined with a couple of unwarranted assumptions about OSS developers being ignorant and complacent does not in fact mean much towards establishing security merits of any kind of software.

Or maybe I was trying to show that if you look at the issue from a more pragmatic point of view you could discover that source licensing and source code availability are, in fact, indicators of the security of a software application, if only because a company is always more interested in its self-image than in the security of its users' data, whereas for a group of OSS developers the security of users' data is their self-image.

Or maybe I just felt the need to pointlessly rant against Microsoft, because even though I stopped using their products five years ago, I have a repressed urge to play Age of Empires. Who knows? There's a lot of points that I could have been trying to make, too bad I can't remember which one it was.



--
Fuck 'em if they can't take a joke, and fuck 'em even if they can.
[ Parent ]
Who cares (none / 0) (#68)
by salsaman on Wed Feb 13, 2002 at 08:54:54 AM EST

Anti-M$ rants are fun, and good for relieving stress :-)

[ Parent ]
Ruh-roh (5.00 / 1) (#74)
by Eccles on Wed Feb 13, 2002 at 03:53:02 PM EST

"And I would have gotten away with it too, if it hadn't been for you meddling K5'ers!"

[ Parent ]
Testing Methods (4.50 / 4) (#31)
by rusty on Tue Feb 12, 2002 at 12:17:19 PM EST

You forgot a testing method. I call it "The Million Monkeys Methodology" (or the 3M for short). This testing technique involves coding new software, doing some basic tests ("Does it compile? Good! Ship it!") and then releasing it for the instant use of tens, or perhaps hundreds of thousands of users. Formally, this testing methodology relies on Brownian user action to randomly explore every possible interaction path within the code. Watch carefully, and in no time (well, n^O time, at least) any bugs or vulnerabilities in the code are bound to appear.

____
Not the real rusty
Never heard. (4.28 / 7) (#33)
by Rainy on Tue Feb 12, 2002 at 01:09:43 PM EST

I never heard OS being a panacea for software security, except when someone's erecting a strawman, like say this story. The claim that I *have* heard is that widely used OS software tends to be more secure than widely used closed software. And I believe experience shows this claim to be true.
--
Rainy "Collect all zero" Day
An example from the article.... (none / 0) (#43)
by Carnage4Life on Tue Feb 12, 2002 at 04:18:11 PM EST

I never heard OS being a panacea for software security, except when someone's erecting a strawman, like say this story.

Or when the person is Eric Raymond writing a Newsforge article. The link is in the article.

The claim that I *have* heard is that widely used OS software tends to be more secure than widely used closed software.

You mean like sendmail, wu-ftpd and BIND?

[ Parent ]
Where exactly in the article? (none / 0) (#75)
by Rainy on Wed Feb 13, 2002 at 04:26:22 PM EST

I scanned through it but I don't see him using that word or implying that OS *cures* insecurity. Instead, as I said, his point is that it makes things more secure than closed source. A panacea is something that absolutely cures an illness, no ifs, no buts.
--
Rainy "Collect all zero" Day
[ Parent ]
open source IS inherently more secure (2.25 / 4) (#34)
by Ender Ryan on Tue Feb 12, 2002 at 01:52:15 PM EST

IMHO, open source IS inherently more secure because you are not at the mercy of a company that does not have your best interests at heart. With open source, you are at the mercy of yourself. You can test the software yourself, you know who developed the software so you know the track record of the developers, etc. You can do anything with it to be sure it's not a liability. With closed software it's a pure crapshoot. The only thing you know is the track record of the company, but you don't know if they have new developers who suck, or if the marketing department demanded that a product be released next week, etc.


-
Exposing vast conspiracies! Experts at everything even outside our expertise! Liberators of the world from the oppression of the evil USian Empire!

We are Kuro5hin!


Not quite (5.00 / 2) (#61)
by bugmaster on Wed Feb 13, 2002 at 01:15:45 AM EST

With open source, you are at the mercy of yourself. You can test the software yourself, you know who developed the software so you know the track record of the developers, etc.
In theory, this is true. However, in practice, the situation is a bit more complex. For example, there is no way I am ever going to be able to sift through the entire Linux codebase just to see if there are any potential security risks there. I don't have the time or the expertise to do that. And since Linux is open-source, it would be very difficult to know the track record of all the hundreds of developers who contributed to it.

I guess what I am saying is that just because the source code is theoretically available for review does not automatically imply that everyone will review it. I think this is the assumption that many open-source advocates make implicitly, however.
>|<*:=
[ Parent ]

Review (3.00 / 1) (#66)
by salsaman on Wed Feb 13, 2002 at 08:49:27 AM EST

...just because the source code is theoretically available for review does not automatically imply that everyone will review it...

True, but it also means that *someone* will review it, as opposed to closed source s/w, where *no one* will review it.

[ Parent ]

Hardly (5.00 / 1) (#72)
by ucblockhead on Wed Feb 13, 2002 at 03:10:00 PM EST

No, it means someone might have reviewed it. Or maybe not.

Whereas some companies producing closed-source software have code reviews. So in that case, someone might have reviewed it. Or maybe not.

Which makes the whole thing a wash, really.

It is interesting to note that BSD, which has a reputation for very good security, also incorporates a more explicit code review process than the Linux "Linus looks at stuff and it's all public so maybe someone else will, too" code review process.
-----------------------
This is k5. We're all tools - duxup
[ Parent ]

Open source is not secure (4.00 / 1) (#35)
by pauldamer on Tue Feb 12, 2002 at 02:06:11 PM EST

It's just that closed source software is inherently insecure.

Because closed source software is not transparent, you can never know whether it is secure until you have been compromised. Also, you have to wait for the publisher to distribute a fix before you are secure again.

Open source, on the other hand, is open to scrutiny. This doesn't mean that it is more secure. It just means that you have a better chance of finding a security hole before you get cracked, and that stakeholders in the software are able to create and release fixes once a hole is found.


Hypothesis and experiment (none / 0) (#50)
by svampa on Tue Feb 12, 2002 at 07:10:12 PM EST

From a practical point of view, does the "better chance of finding a security hole before you get cracked" make any difference? Is the end result actually more secure software?

That's the question



[ Parent ]
Panacea? (3.33 / 6) (#36)
by der on Tue Feb 12, 2002 at 02:24:35 PM EST

The original article tackled the common misconception amongst users of Open Source Software(OSS) that OSS is a panacea when it comes to creating secure software.

Uhm, what? The only people I've heard claim that open source is a 'panacea when it comes to creating secure software' are trolls on slashdot. There's a big difference between being a 'panacea' and being a "development model with some security advantages". You're trying to counter an argument that hardly exists, and sure as hell isn't a "common misconception".

I hate to sound nasty, but my BS-o-meter is tipping 10 on this one. This whole story reads like one big elaborate troll to me.



*shock* *horror* (2.25 / 4) (#37)
by quartz on Tue Feb 12, 2002 at 02:46:51 PM EST

What? An article on open source software security, written by a Microsoft employee -- a troll? Why, that's preposterous! You should be ashamed of yourself for throwing unwarranted accusations around like that.



--
Fuck 'em if they can't take a joke, and fuck 'em even if they can.
[ Parent ]
Aaaahhh (2.75 / 4) (#38)
by der on Tue Feb 12, 2002 at 03:03:54 PM EST

This I didn't know. My question is how does this crap get voted up? Apparently if a troll has lists in it with pretty little academic-looking references at the bottom it must be an intelligent article, and therefore needs to be voted +1FP.

On a side note, comparing vulnerabilities reported for various Linux distros and using said data to post arguments about 'Open Source security' screams "I don't know what the fuck I'm talking about."



[ Parent ]
The sad part (5.00 / 1) (#41)
by Miniluv on Tue Feb 12, 2002 at 03:21:59 PM EST

Aside from the numbers being thrown around as meaningful, the article isn't bad. Granted, it's obvious C4L isn't fully acquainted with all of the various testing methodologies, seeing as he barely touched on some and fully expounded on other, less useful ones. However, he raises a good point. Again and again and again. It must be fun, being the millionth person to point out, to the world, that Open Source doesn't cure cancer, get you laid, or anything else exceptional either.

Some things are holy, and the sauna is one of them
[ Parent ]
HAH. (none / 0) (#77)
by regeya on Wed Feb 13, 2002 at 10:02:35 PM EST

It must be fun, being the millionth person to point out, to the world, that Open Source doesn't cure cancer, get you laid, or anything else exceptional either.

It's also fun to use ESR as the strawman, for the millionth time.

I suppose the main attraction of open software, to me, is that if something does go wrong, and I have the ability to fix it (which I probably don't ;-)), I can. Plus, the existence of Open and Free software, and its allegedly superior response to risks, has helped make closed-source vendors feel more of an obligation to a.) fix things, b.) fix them relatively quickly, and c.) not hide the problem. That's got to count for something.

Oops. I suppose I'm the millionth-and-one person to bring it up . . . then again, I've not heard anyone make quite that argument. Hrm.


[ yokelpunk | kuro5hin diary ]
[ Parent ]

Why I love source available (none / 0) (#80)
by Miniluv on Thu Feb 14, 2002 at 03:14:26 PM EST

I love software to which I can get the source because then I can fix it. Beyond that, I can also learn from it, both their good ideas and their bad ones.

Security problems are not the only ones which do, in fact, become shallower with source available software. Functional bugs tend to get fixed faster, poor design seems to get noticed faster and resolved more expediently, etc.

The root cause of this isn't entirely the fact that I can peruse the source, but it is certainly a contributing factor. I think the reason "open source" or "free software" seems to have a higher quality quotient is partly that the people who write it care more about it. I don't think the people on the WinNT design team lost sleep wondering how their beloved OS would do. I do think Linus has lost sleep in the past puzzling out an answer to a vexing performance problem. If you've ever read ESR's emails on the linux kernel list, you can tell he truly loves CML2 and wants to see people benefit from it.

I think that's the real turning point for free or open source software, people can really get attached to it because of the enabling nature of having the source, and getting to know it.

Some things are holy, and the sauna is one of them
[ Parent ]

LOL (5.00 / 1) (#47)
by Carnage4Life on Tue Feb 12, 2002 at 04:50:09 PM EST

This I didn't know. My question is how does this crap get voted up? Apparently if a troll has lists in it with pretty little academic-looking references at the bottom it must be an intelligent article, and therefore needs to be voted +1FP.

It's a troll because it doesn't agree with your preconceived notions of what is right and what's wrong, or because I work for a proprietary software company.

*chuckle*

You must be new here.

[ Parent ]
I was expecting this one. (1.50 / 2) (#51)
by der on Tue Feb 12, 2002 at 07:28:31 PM EST

No, it's a troll because of the quote I posted that the entire article is based on. You know, that completely false one?

If I sincerely thought your article had merit, I'd debate the points, not the quality of the article itself.



[ Parent ]
simple. (4.50 / 2) (#60)
by regeya on Wed Feb 13, 2002 at 01:11:59 AM EST

My question is how does this crap get voted up?

It gets voted up mainly because it gives a viewpoint that people think is contrary to whatever viewpoint the average Slashdot reader would have.

That's all.

I agree; it really does scream "I don't know what the fuck I'm talking about," which is sad because Carnage4life does know what the fuck he's talking about. Programming is his forte; sifting through statistics, apparently, is not one of his strong points.

Then again, I don't know what the fuck I'm talking about either. ;-)


[ yokelpunk | kuro5hin diary ]
[ Parent ]

Trolls (5.00 / 3) (#46)
by ucblockhead on Tue Feb 12, 2002 at 04:39:04 PM EST

A "troll" is someone who says something he doesn't believe in order to provoke a response.

A person who holds a minority opinion is not a troll.
-----------------------
This is k5. We're all tools - duxup
[ Parent ]

Exactly (3.00 / 1) (#49)
by gauze on Tue Feb 12, 2002 at 05:14:09 PM EST

Yes, correct. I think that's what the poster was saying. Maybe I'm assuming, but that's how I took it.

There's nothing wrong with a PC that a little UNIX won't cure.
[ Parent ]
You know.. (2.00 / 2) (#53)
by der on Tue Feb 12, 2002 at 07:41:49 PM EST

I thought that I could avoid the paragraph-long disclaimer about how I was saying the article was a troll because it was full of misinformation, exaggerations, and bullshit, and not just because I disagree.

Obviously I was wrong, and I have to spell every stupid little thing out to avoid slashdot-style whinefests.



[ Parent ]
You'd still be wrong. (5.00 / 4) (#54)
by ucblockhead on Tue Feb 12, 2002 at 07:59:40 PM EST

A "troll" is not about misinformation, exageration and bullshit. It is about deliberately provoking a response.

I can tell you for a fact that Carnage4life is not a troll. He believes what he says. He is a positive contributor here. I can say with a high degree of confidence that he wrote this story because it is his opinion. It is what he actually believes. As such, it is not a "troll". It might be "FUD". It might even be "Flamebait". But it is not a "troll". If what he says is misinformation or bullshit, then please point out the misinformation or call him on the bullshit. Don't just throw your hands up in the air and shout out rude names.

This yelling of "troll" at everything not part of the party line is just an excuse not to think and not to actually deal with what is said. Sometimes it seems as if there are a lot of people that thing a "troll" is a post attacking open source.

So please go read the jargon file and learn what the damn slang means before throwing it around.
-----------------------
This is k5. We're all tools - duxup
[ Parent ]

Argggh (2.00 / 3) (#55)
by der on Tue Feb 12, 2002 at 08:31:15 PM EST

My first post DID point out the statements and ideas I thought were bullshit, then I started taking shit for screaming 'troll!'. I didn't just come out shooting, screaming 'troll' without any reasoning behind it. Whatever, I'm just in a fucking bad mood today, I don't need it getting me a bad rep on K5.

I apologize for screaming troll, and C4L I have nothing personally against you, but I still think this article is full of shit. :)

Methinks it's time for sleep.



[ Parent ]
specific reference (4.50 / 2) (#56)
by Pink Daisy on Tue Feb 12, 2002 at 10:01:10 PM EST

His specific reference to this view was Eric Raymond's article in NewsForge. While Mr. Raymond does produce lots of material that gets put to good use by slashdot trolls, he is hardly one himself. I'd also guess that this belief is held by many more people than just him; that is certainly the way he presents it.

[ Parent ]
Keep in mind . . . (4.00 / 1) (#59)
by regeya on Wed Feb 13, 2002 at 12:49:05 AM EST

kuro5hin is the official Anti-Slashdot and Anti-ESR/RMS site.


[ yokelpunk | kuro5hin diary ]
[ Parent ]

holier-than-thou (5.00 / 3) (#42)
by Pink Daisy on Tue Feb 12, 2002 at 04:05:32 PM EST

Hmm, I'm close to being convinced by the argument that open source software (and Linux in particular) is no better than closed competition (MS Windows 2000 in particular) in having security vulnerabilities. The numbers game is particularly impressive since Microsoft has a much larger user base and a lot more lines of code. It is an informative counterexample to Eric Raymond's claim that "security holes will be infrequent, the compromises they cause will be relatively minor." There is still the other side to the argument, though: that open source software will have patches available on a much more timely basis. Security depends not only on what holes exist, but also on the timeline for discovery, exploits and fixes. Without evidence I can't really tell; however, I'd guess that open source software does have an advantage there.

Of course, the main point of the article isn't to say which is more or less secure! It's merely to point out that no one can afford to ignore the issues that lead to security holes. Experts from Theo of OpenBSD to operating systems researchers in academia almost all say that a focus on correctness is a very good way of preventing security holes, among other bugs.

That is my view as well. One person who has made an important contribution is Dawson Engler at Stanford. His work on metacompilation has found hundreds of bugs in Linux and OpenBSD. Having the source available is a big advantage for a researcher studying the problem, but the results are universally applicable.

More users works both ways (none / 0) (#57)
by wurp on Tue Feb 12, 2002 at 10:30:02 PM EST

For open source products, I would expect a wider user base to radically _increase_ security for the product's users. A significant number of the users will have high security requirements and the resources to ensure that they meet those requirements. So everyone who uses the software can benefit from each and every security-conscious user who patches holes.
---
Buy my stuff
[ Parent ]
Someone has misunderstood something. (none / 0) (#64)
by arcade on Wed Feb 13, 2002 at 03:37:30 AM EST

I'm close to being convinced by the argument that open source software (and Linux in particular) is no better than closed competition (MS Windows 2000 in particular) in having security vulnerabilities

Of course there are vulnerabilities in open source software products. The point is that the vulnerabilities are there for everybody to see. All those vulnerabilities listed at securityfocus are listed because someone *has* actually reviewed the code, read it, and FOUND vulnerabilities.

If you did not find any vulnerabilities in open source products, then obviously, no one had taken the time to review them.

That you find so many bugs in _CLOSED_ source software, now THAT is worrying. You can only start imagining how many nasty undiscovered bugs are inside those products.

--
arcade
[ Parent ]

See beyond mere trees (5.00 / 3) (#58)
by tmoertel on Tue Feb 12, 2002 at 11:46:36 PM EST

Let's cut back to the big picture. Pick any desirable characteristic of software -- resource efficiency, robustness, quality, and, yes, even security -- and guess what? The process by which the software was created largely determines how much of that characteristic the software exhibits. Good work, good code. Crappy work, crappy code. Not exactly a news flash.

Now -- and here's the important part -- take any software, developed by any process, and then consider any desirable characteristic. Do you get more of that characteristic by letting everybody see the source or by keeping it hidden away?

That's the argument for open source.

See the forest.

--
My blog | LectroTest

[ Disagree? Reply. ]


money.. (none / 0) (#65)
by kimpton on Wed Feb 13, 2002 at 07:35:13 AM EST

If you have closed source software, you're probably trying to make a profit out of it. If you're making money, you can probably afford to pay good developers to write good code *and* pay good money to people to make sure you get more of the desirable characteristics.

Open sourcing software doesn't guarantee you'll get more of the desirable software characteristics, because you can't guarantee everybody will review all the code.

[ Parent ]
This would be nice... (none / 0) (#78)
by AngelKnight on Wed Feb 13, 2002 at 10:20:10 PM EST

...But when was the last time you went to an interview for a development position, and:

  • The interviewer directly asked you, point blank, how many security flaws were discovered to be partially or wholly your fault in design or implementation in your last three jobs?
  • (If hypothetically asked) How many times did you actually know the answer to this?
  • (If, hypothetically, you answered) How many times was your answer verified?
It would be nice if security could be purchased in this way. But my personal belief is that it cannot.

[ Parent ]
Disciplined Process helps-so does Risk Analysis (none / 0) (#62)
by Randall Burns on Wed Feb 13, 2002 at 01:30:36 AM EST

The article made a good point that disciplined software development processes can help with security issues (this is something the SEI folks have been talking about for years). Now, even if having a sound software development process in place is the most important single factor in improving security, this doesn't mean that Open Source products don't have a significant advantage. It can be argued that the Open Source movement forced Microsoft and other commercial software vendors to adopt more disciplined software development processes to catch up in the area of security.

Ultimately, security issues are a form of project risk. The SEI folks have a significant literature on project risk analysis. One major technique for doing risk analysis is the Delphi study. The insurance industry has used betting pools for assessing risk for years (Lloyd's of London consists largely of a network of betting pools). Ideosphere has used reputational betting pools on some major computer- and security-related issues.

I would like to see some careful creation of claims in this security area, and some more careful analysis that looks at how different systems are being used.

RJB

Analysis is sorely lacking (5.00 / 3) (#67)
by Secret Coward on Wed Feb 13, 2002 at 08:54:37 AM EST

Comparing numbers of discovered security holes says nothing about security. Many of the holes were undoubtedly found because the software was open source. If it had been closed source, the numbers would have been smaller, but the security holes would still exist. These numbers could just as easily reinforce ESR's claims that open source allows white-hatters to discover and fix security holes, and that Solaris has an order of magnitude more holes than have been discovered.

The analysis needs an operational definition of 'security'. The article seems to use "number of bugs posted on Bugtraq". As others have pointed out, this is not a good definition. Other definitions may include:

  • number of security holes exploited
  • cost of correcting security holes
  • risk of being exploited, based on the popularity of your platform and the vulnerabilities discovered
  • how long the code was in the wild before a patch was made available

For any statistical analysis to be conclusive, you must have random assignment and you must make your prediction before you conduct the study. If you conduct the study before you make your prediction, you fall prey to the Texas Sharpshooter Phenomenon.

What is the Texas Sharpshooter Phenomenon? Suppose a gunslinger shoots holes in the side of a barn. You then divide the side of the barn up into a grid. One grid square will probably have more holes than the other squares. Does that mean this square is inherently more likely to be shot by a lunatic gunslinger? No! If you were to start over with a solid barn wall, you may find a different square ends up with the most holes. If you count the number of vulnerabilities in eight flavors of UNIX, some will have more vulnerabilities than others. If you count vulnerabilities the next year, you may find strikingly different numbers.
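To make this concrete, here is a minimal simulation sketch (the grid size and shot count are arbitrary numbers I picked for illustration). It fires uniformly random shots at a grid and reports the square with the most hits; run it a few times and the "worst" square moves around, even though every square is equally likely to be hit.

    /* Texas Sharpshooter sketch: every square is equally likely to be
     * hit, yet some square always ends up looking "worst". */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define GRID  4     /* 4x4 barn wall */
    #define SHOTS 100

    int main(void)
    {
        int hits[GRID * GRID] = {0};
        int i, best = 0;

        srand((unsigned) time(NULL));
        for (i = 0; i < SHOTS; i++)
            hits[rand() % (GRID * GRID)]++;   /* uniform random shots */

        for (i = 1; i < GRID * GRID; i++)     /* find the busiest square */
            if (hits[i] > hits[best])
                best = i;

        printf("square (%d,%d) looks worst: %d of %d hits\n",
               best / GRID, best % GRID, hits[best], SHOTS);
        return 0;
    }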

To put this another way, statistics measure the likelihood that something took place for a reason, rather than by chance. If I roll four pairs of dice, each pair a different color, and the blue dice roll the highest number possible, can I claim the blue dice are best? No, I just selected the dice that randomly rolled outside the norm. Likewise, if all the dice rolled similar numbers, it is still possible that some of the dice are biased.

If someone wants to compare the security of closed and open source software, here is my recommendation. Take 40 software developers. Randomly divide them into ten groups. Have five groups develop five simple closed source applications. Have the other five groups develop the same five simple applications and place their projects on sourceforge. Have the developers maintain these systems for one year. After one year, have experts find security holes in the ten projects. With recent funding proposals, someone may actually find the money to do this :-0

Best practices are the way (none / 0) (#69)
by Secret Coward on Wed Feb 13, 2002 at 08:56:55 AM EST

While formal proofs may force developers to be more careful in their coding, they do not prove the software secure. Someone could prove that fib(n) returns the Fibonacci sequence up to integer N, but for all we know, the function may also download a trojan from blackhat-trojan.com, or read 'n' as a string with scanf(). Formal methods may be the way to reliable software, but they are not the way to secure software.
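To make the point concrete, here is a contrived sketch (everything in it is invented for illustration): the arithmetic in fib() below is exactly the kind of thing a formal proof can verify, yet the program as a whole is still exploitable because of how it reads its input.

    /* The loop in fib() could be formally proven to return the nth
     * Fibonacci number. The unbounded scanf("%s") in main() is a
     * classic buffer overrun, and no proof about fib()'s return
     * value says anything about it. */
    #include <stdio.h>
    #include <stdlib.h>

    static unsigned long fib(unsigned n)
    {
        unsigned long a = 0, b = 1, t;
        while (n-- > 0) {
            t = a + b;
            a = b;
            b = t;
        }
        return a;
    }

    int main(void)
    {
        char buf[8];
        scanf("%s", buf);   /* deliberately unsafe: no width limit */
        printf("%lu\n", fib((unsigned) atoi(buf)));
        return 0;
    }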

From the list, the best approach to software security is a combination of best practices, design reviews, and code audits. Of course, before we develop best practices, we first need to determine where security holes come from. A best practice for buffer overruns would prevent half of our security holes. A best practice of documenting and distributing a project's design would prevent some more. A good guide to developing secure systems would also help.
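As one sketch of what such a best practice might look like (again, my own example), the unbounded read above becomes harmless once the buffer size is passed along: fgets() truncates long input instead of writing past the end of the buffer.

    /* One buffer-overrun best practice: always pass the buffer size.
     * fgets() cannot write past buf, and strtol() parses the number
     * without any unbounded copying. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        char buf[32];

        if (fgets(buf, sizeof buf, stdin) == NULL)
            return 1;
        printf("n = %ld\n", strtol(buf, NULL, 10));
        return 0;
    }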

Formal proofs (5.00 / 1) (#70)
by epepke on Wed Feb 13, 2002 at 02:25:59 PM EST

There are a lot of ways that formal proofs can be useful in security. Many problems in security reduce to graph theory, and so all sorts of graph algorithms can be useful. Another related set of problems has to do with keeping track of contaminated versus uncontaminated data, and these can make use of formal proofs as well.
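As a toy example of the graph-theory reduction (the scenario is invented, but the technique is standard): model "who can obtain access to what" as a directed graph, and the question "can a remote user ever become root?" is plain reachability.

    /* Toy attack-graph reachability: nodes are principals/resources,
     * an edge means "can obtain access to". A depth-first search
     * answers whether the remote user can ever reach root. */
    #include <stdio.h>

    #define N 4   /* 0=remote user, 1=web server, 2=db account, 3=root */

    static const int edge[N][N] = {
        {0, 1, 0, 0},   /* remote user drives the web server      */
        {0, 0, 1, 0},   /* web server uses the db account         */
        {0, 0, 0, 0},   /* in this toy graph, db can't reach root */
        {0, 0, 0, 0},
    };

    static int reachable(int from, int to, int seen[N])
    {
        int i;
        if (from == to)
            return 1;
        seen[from] = 1;
        for (i = 0; i < N; i++)
            if (edge[from][i] && !seen[i] && reachable(i, to, seen))
                return 1;
        return 0;
    }

    int main(void)
    {
        int seen[N] = {0};
        printf("remote user -> root: %s\n",
               reachable(0, 3, seen) ? "REACHABLE" : "safe");
        return 0;
    }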

Code audits and design reviews are fine, but the only serious way of building security into a system is to design it in from day 0. Have a strong idea of your security requirements at the beginning of the project (it's probably too optimistic to hope for a security document). Just due to this discussion, I've decided to practice what I preach: I'm starting a Scheme implementation, and I'm putting a contaminated bit right into the S-expressions.

I find myself disagreeing both with this debunking and the original Cathedral and the Bazaar ideas. Most security problems exist because people don't think about security. However, given that this is true, and you're stuck with post facto kludges, then it is better to have more eyes. It's a bit like organic gardening; once you get used to using manure, you become interested in its quality.
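Roughly, what I have in mind is something like the sketch below (details invented for this post; the real implementation will differ): every cell carries a taint flag, anything read from the outside world starts tainted, and taint propagates automatically through cons, so security-sensitive primitives can refuse tainted arguments.

    /* Sketch of a "contaminated bit" in an S-expression cell. */
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct sexp {
        struct sexp *car, *cdr;
        int tainted;              /* 1 if derived from external input */
    } sexp;

    static sexp *cons(sexp *car, sexp *cdr)
    {
        sexp *s = calloc(1, sizeof *s);
        s->car = car;
        s->cdr = cdr;
        /* contamination is contagious: any tainted part taints the whole */
        s->tainted = (car && car->tainted) || (cdr && cdr->tainted);
        return s;
    }

    static sexp *read_external(void)  /* anything read in starts tainted */
    {
        sexp *s = calloc(1, sizeof *s);
        s->tainted = 1;
        return s;
    }

    int main(void)
    {
        sexp *clean = cons(NULL, NULL);
        sexp *dirty = cons(read_external(), clean);
        printf("clean: %d, dirty: %d\n", clean->tainted, dirty->tainted);
        return 0;
    }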


The truth may be out there, but lies are inside your head.--Terry Pratchett


[ Parent ]
security is dead (4.00 / 1) (#71)
by QuantumG on Wed Feb 13, 2002 at 02:29:19 PM EST

Ever since the antisec movement took off, software security has been dead. The vast majority of people who look for exploits don't disclose them anymore; they trade them or hide them. How does this affect open source software? Well, apparently it is easier to find exploits in open source software; however, there are a lot more to find in closed source software. Also, if you are out looking for something to trade, you are more likely to go for something closed source because it is worth more (and is less likely to have already been discovered) due to the higher level of skill needed. Making more secure software is the only way to improve software security today, and having the source publicly available results in negligible benefit.

Gun fire is the sound of freedom.
The math of risk (4.50 / 2) (#73)
by fajoli on Wed Feb 13, 2002 at 03:21:56 PM EST

I am not an expert on risk analysis, but I will give my opinion anyway.

It would seem that vulnerabilities are only part of the equation in evaluating risk. Using them alone as the basis of a decision or conclusion would be going off a bit half-cocked.

In simple terms, risk is the potential to lose something of value. In pseudo math, I think it would go something like this:

risk = (value_of_data * vulnerabilities * life_of_vulnerability * ease_of_exploitation) / difficulty_of_discovery

If the window of vulnerability is closed quickly after discovery, risk drops (open source). If the vulnerabilities are hard to find, risk drops (closed source). Given that the number of vulnerabilities remaining in any software is unknown (closed or open), it would seem that open versus closed source is only a matter of philosophy.

In the closed source case, one has to have confidence in the superiority of their software provider's technical ability over the black hat. In the open source case, one has to have the resources to close windows quickly (no pun intended).

The number of vulnerabilities hardly makes a case either way, since there is no way to accurately compare two systems of varying content and application. We might as well talk about the relative merits of green corduroy trousers and scuba suits.
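To see why it can come out a wash, plug some numbers into the pseudo-math above (the figures are entirely made up, in arbitrary units): give open source a short window of vulnerability but easy discovery, and closed source a long window but hard discovery.

    /* Worked example of the risk pseudo-math with invented numbers. */
    #include <stdio.h>

    static double risk(double value, double vulns, double life,
                       double ease, double difficulty)
    {
        return value * vulns * life * ease / difficulty;
    }

    int main(void)
    {
        double open_src   = risk(100, 10, 2, 1, 1);  /* fast fixes, easy finds */
        double closed_src = risk(100, 10, 8, 1, 4);  /* slow fixes, hard finds */

        printf("open:   %.0f\nclosed: %.0f\n", open_src, closed_src);
        return 0;   /* both come out at 2000: a matter of philosophy */
    }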


Open Source Security (4.00 / 1) (#79)
by mewse on Thu Feb 14, 2002 at 04:59:58 AM EST

A simple question for folks here. Which is more secure: the wu-ftpd FTP server, which I disabled when a security hole was announced, or the IIS web server, which I left running because the company refused to announce the security hole until several months later?

There's a whole lot more to 'security' than just source code.

mewse



OpenBSD (none / 0) (#81)
by oddis42 on Fri Feb 15, 2002 at 04:58:31 PM EST

I really thought OpenBSD had a better track record than this. Guess I'm blinded by their website :)
Norwegian? Visit Linews
Problems (none / 0) (#82)
by Ashcrow on Fri Feb 15, 2002 at 05:14:53 PM EST

Correct me if I am wrong (and I know that if I am wrong I will sure hear about it ;-)), but aren't code audits, design reviews and testing just about the same as the lots-of-eyes idea, except with a lot fewer people reviewing the code?

Also, the numbers are not very fair. If you think about it, Red Hat or Mandrake bundles a large number of programs with its distribution, a lot more than you would get with AIX or Solaris. If there is anything I learned working in security, it is that every program, no matter how well programmed and protected, can still be used to gain access. There is always a way.




----------
"Are you slow? The alleged lie that you might have heard me saying, allegedly moments ago? That's a parasite that lives in my neck."
Respect for Dare O., but maybe misunderstanding. (none / 0) (#83)
by Futurepower on Sun Feb 17, 2002 at 04:26:38 AM EST


I have respect for Dare Obasanjo, who wrote the article.

However, I think there is a misunderstanding. The most important comparison is not between errors in Open Source and closed source software. The most important comparison is between the neighborliness and friendliness and love inherent in the Open Source/GNU system of making contributions, and the aggression coming from some closed source companies.

Defending yourself against that aggression is a large part of the price of using closed source software. Closed source companies often leave bugs in software so they will have something to fix in a new version, for which they will charge. They often build in features that are good for them, but not for the customer. For example, Microsoft's Internet Explorer tries to push users toward Microsoft's commercial MSN site.

It is possible that secret agencies of the U.S. government find ways to build deliberate security bugs into closed source software. Some very well-funded elements of the U.S. government are absolutely opposed to privacy. There are a few links to information about this in my article about corruption in the U.S. government: What should be the Response to Violence?

The errors in software are not the biggest security risk. The biggest risk is deliberate aggression, or errors combined with deliberate aggression.

