The Myth of Open Source Security Revisited

By Carnage4Life in Op-Ed
Sun Oct 28, 2001 at 12:56:41 PM EST
Tags: Software

It is a common misconception amongst users of Open Source software that Open Source is a panacea when it comes to creating secure software. Although this belief is rarely grounded in fact, it has become a cliché that Open Source enthusiasts and pundits invoke axiomatically whenever discussions of security arise.

The purpose of this article is to expose the fallacy of this kind of thinking and instead point to better means of ensuring that the security of a piece of software is high.


Blind Faith: With Many Eyeballs, All Bugs Are Shallow

In his seminal writing The Cathedral and the Bazaar, Eric Raymond used the statement "Given enough eyeballs, all bugs are shallow" to describe the belief that given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix obvious to someone. Over time the meaning of the original quote has been lost, replaced instead with the dogmatic belief that Open Source is the panacea that solves the problems involving security in software development.

A Critical Perspective: Eyes That Look Do Not Always See

An article entitled The Myth of Open Source Security by John Viega, the original author of GNU Mailman, challenges the popular premise that Open Source and secure software go hand in hand. In the article, Viega acknowledges that with lots of people scrutinizing a program's source code, bugs and security problems are more likely to be found. He then raises the point that the availability of source code does not automatically guarantee that the code has been reviewed by competent parties, for a variety of reasons. Secondly, people who are looking at the source code with the intent of modifying it are not necessarily in the state of mind to perform a comprehensive security audit of the code.

One deterrent to the mass review of certain Open Source projects is a high level of complexity in the code, which can be compounded by a lack of documentation. In such a scenario, it is unlikely that the average user of the software will be able to perform a good review of the code. Another obstacle to good review of Open Source code is that most people only look at the parts of the code that they want to modify, which may be only a small section of the whole. This behavior leads to various "hotspots" in the code that are intensely reviewed because they are the most open to modification, while many other sections that are less likely to be useful during modifications are barely looked at. Viega also dwells on the fact that the majority of software developers are ignorant of security beyond a rudimentary knowledge of good practices (e.g. avoid the strcpy, gets, and strcat functions in C, or that using encryption is good). Unfortunately, security issues are more complex than most developers are aware of, leading those with the best intentions to miss subtle security bugs or unknowingly introduce them into a system after a modification. Finally, the fact that some security bugs are unobvious unless one is completely familiar with several parts of the source tree, and even then certain bugs may only occur when a particular sequence of operations occurs, is a reason to be wary of claims that source availability guarantees the security of an application.
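
To make the "rudimentary knowledge of good practices" point concrete, here is a minimal C sketch (invented for illustration, not taken from any of the projects discussed) of the kind of bug a reviewer skimming a hotspot can easily miss: an unbounded strcpy into a fixed-size buffer, alongside a bounded alternative.

    #include <stdio.h>
    #include <string.h>

    /* Unsafe: a 'name' longer than 15 characters makes strcpy() write past
       the end of 'buf' -- a stack buffer overflow that still compiles and
       "works" for short inputs, so casual review rarely catches it. */
    void greet_unsafe(const char *name)
    {
        char buf[16];
        strcpy(buf, name);                      /* no bounds check */
        printf("hello, %s\n", buf);
    }

    /* Safer: snprintf() (C99) truncates instead of overflowing. */
    void greet_safer(const char *name)
    {
        char buf[16];
        snprintf(buf, sizeof buf, "%s", name);
        printf("hello, %s\n", buf);
    }

    int main(void)
    {
        /* greet_unsafe() is deliberately never called with a long string. */
        greet_safer("a name far longer than sixteen characters");
        return 0;
    }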

The article then goes on to use security flaws in GNU Mailman, the Open Source implementation of Kerberos, and wu-ftpd as examples of how security bugs in Open Source software can remain undiscovered for significant amounts of time, even though the source code is available and has supposedly been peer reviewed by many eyeballs. As Open Source software is increasingly packaged as finished products, it is likely that the complacency of its users will increase, since people may begin to assume that the code has been peer reviewed by their vendor of choice and will thus fail to audit the code.

Seeing The Light

In a recent article on Newsforge, Eric Raymond lambasts Microsoft for the comments of one of its employees, Scott Culp, who suggested that the security community should show restraint in releasing information about vulnerabilities and exploits. ESR then goes on to tout the lack of compromises on Open Source systems and attributes the weakness in Microsoft's software to poor design and a lack of independent peer review. What is of note is that there is nothing specific to the Open Source model of software development that guarantees that a system will be well designed, or that it will be reviewed by competent people who are willing to spend the time to discover security flaws and have the prerequisite background to know what they are looking for.

Besides good design and peer review, I would like to add verifying the software via formal proofs using rigorous mathematical methods, strict development practices, and security audits to the list of effective methods to be used when attempting to build a secure software system. None of these methodologies is innate to either the Open Source or the proprietary development model, although a project that combines these practices with the Bazaar model should fare best.
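
As one concrete (and admittedly toolchain-specific) illustration of a strict development practice, a project can mechanically ban the dangerous C functions mentioned above rather than rely on reviewers to spot them. The sketch below assumes a GCC toolchain; it is an example of the idea, not a complete security policy.

    #include <stdio.h>
    #include <string.h>

    /* Any use of these identifiers after this point is a compile-time error. */
    #pragma GCC poison gets strcpy strcat

    int main(void)
    {
        char buf[32];
        /* strcpy(buf, "hi");   <-- would now fail to compile */
        snprintf(buf, sizeof buf, "%s", "hi");
        printf("%s\n", buf);
        return 0;
    }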

In conclusion, I'd like to share two lessons I've learned from various software engineering tomes:
  1. Testing does not show the absence of bugs.

  2. Testing cannot be used to improve the quality of software but can be used to demonstrate the quality of the software.
The driving force behind Open Source software is the constant cycle of debugging and testing by its users. Unfortunately, these by themselves do not improve the overall quality of a system but are merely indicators of it, especially with regard to security. On the other hand, building security into the system from the start via a security-oriented design framework, security audits, and development practices that eschew dangerous programming habits is a surefire, industry-tested way of improving the overall security of a system.

Poll
What is the best way to ensure that a piece of software is secure?
o Open Source it 14%
o Use formal mathematical methods to prove its correctness 18%
o Security Audits 16%
o Use strict software development practices 16%
o Prevent release of exploit code when vulnerabilities found 0%
o Peer review (non-Open Source) 1%
o Some of the above 31%
o Other (describe below) 1%

Votes: 83
Results | Other Polls

Related Links
o The Cathedral and the Bazaar
o The Myth of Open Source Security
o GNU Mailman
o recent article on Newsforge


The Myth of Open Source Security Revisited | 97 comments (74 topical, 23 editorial, 0 hidden)
Summary of article: (3.93 / 16) (#2)
by Dlugar on Fri Oct 26, 2001 at 09:09:04 PM EST

"Open Source software can be more secure than their closed-source counterparts, because more eyeballs are helpful, but this doesn't mean that it will happen to every project posted on Freshmeat."

Um, yeah, great. I really don't hear many people other than flaming zealots and /. trolls saying that Open Source magically fixes security problems. I don't think either of those two groups frequent Kuro5hin, anyway. I don't mean to be rude, but I honestly don't see how this is anything that people don't already know.

Dlugar

Thanks for not reading the article (3.14 / 7) (#3)
by Carnage4Life on Fri Oct 26, 2001 at 09:19:03 PM EST

"Open Source software can be more secure than their closed-source counterparts, because more eyeballs are helpful, but this doesn't mean that it will happen to every project posted on Freshmeat."

The article doesn't mention minor projects on Freshmeat but instead popular and widely used Open Source software.

Um, yeah, great. I really don't hear many people other than flaming zealots and /. trolls saying that Open Source magically fixes security problems. I don't think either of those two groups frequent Kuro5hin, anyway.

Many software developers have begun to believe that the mantra "Given Enough Eyes All bugs Become Shallow" is some sort of law of nature, and even worse that it is somehow related to how secure a piece of software can be. Interestingly, this mindset is beginning to become popular even amongst members of the security community.

[ Parent ]
No problem. I do it all the time. (4.63 / 11) (#6)
by Dlugar on Fri Oct 26, 2001 at 09:59:45 PM EST

The article doesn't mention minor projects on Freshmeat but instead popular and widely used Open Source software.
Pardon my hyperbole/strawman attack. I'll try again:

In the introduction, you use some pretty strong words, seeming to imply that "Open Source software is automatically secure" has become a mantra among Open Source enthusiasts, in fact to the point that discussions on security almost always contain this mantra and nothing else.

I have never seen such things. Even in ESR's article, which you quote, all he says to that effect is that in the Open Source world, "security holes will be infrequent, the compromises they cause will be relatively minor, and fixes will be rapidly developed and deployed." This is a far cry from your warning that "Open Source security is a myth," or that the "open source == automatically secure" phrase is an axiom taken for granted.

In your second paragraph, you say, "the dogmatic belief that Open Source is the panacea that solves the problems involving security". I see no such dogmatic beliefs--rather real-world expectations that software requires vigorous testing to be secure, but that the easily-patchable nature of Open Source software turns widely-deployed software into [more] secure software more rapidly than widely-deployed closed-source software.

The Viega section is more even-handed, saying simply that "with lots of people scrutinizing a program's source code, bugs and security problems are more likely to be found [but] the availability of source code does not automatically guarantee that the code has been reviewed by competent parties". I enjoyed those paragraphs much more than the preceding ones, but I still had problems with this statement towards the end:
As Open Source software is increasingly packaged as finished products, it is likely that the complacency of its users will increase, since people may begin to assume that the code has been peer reviewed by their vendor of choice and will thus fail to audit the code.
I personally see absolutely no evidence of this. The end user in most desktop and small-server markets doesn't care much at all about security, as Microsoft has demonstrated for us all. They're more interested in usability. Those who do care about security will be testing their Solaris and NT boxes just as hard as they will be their Linux boxes and other machines running Open Source software.

And then down to ESR, and your point seems to be: "there is nothing specific to the Open Source model" in ESR's words. But did ESR ever say there was? No, all he said was that "its [IIS and Windows] source code has never been subjected to independent peer review." And that, I believe, is true. Is that specific to the Open Source model, or the Bazaar model? No. But it is apparent that Microsoft is suffering [at least from a security standpoint] from the lack of it.

Then you leave us with two gems of wisdom that are quite true, but (as far as I can tell) irrelevant to the topic at hand. What does "software testing" have to do with the Open Source model more so than the closed-source one? And then, although the two numbered statements were true, you jump off the metaphorical deep end and say that debugging and testing don't improve the overall quality of the system! It may not be the only way or even the best way, but the testing and debugging cycle most certainly does improve the quality of a package.


So basically, to sum up, we have a rant against a handful of Open Source Zealots with a few meaningful sentences about how good design and security audits are necessary for well-designed, secure code.

Dlugar

[ Parent ]
I say tom-may-to, you say to-mah-to (3.00 / 5) (#7)
by Carnage4Life on Fri Oct 26, 2001 at 10:19:29 PM EST

I have never seen such things. Even in ESR's article, which you quote, all he says to that effect is that in the Open Source world, "security holes will be infrequent, the compromises they cause will be relatively minor, and fixes will be rapidly developed and deployed." This is a far cry from your warning that "Open Source security is a myth," or that the "open source == automatically secure" phrase is an axiom taken for granted.

Saying "in the Open Source world security holes will be infrequent" is the same as saying "open source == automatically more secure". What else could it mean? This is besides the fact that this is not exactly accurate when one considers the frequency and nature of bugs found in a number of Open Source projects like sendmail, rpc.* daemons, BIND, wu-ftpd and others.

In your second paragraph, you say, "the dogmatic belief that Open Source is the panacea that solves the problems involving security". I see no such dogmatic beliefs--rather real-world expectations that software requires vigorous testing to be secure, but that the easily-patchable nature of Open Source software turns widely-deployed software into [more] secure software more rapidly than widely-deployed closed-source software.

And that's the point, easily patchable != more secure. That is the entire point of John Viega's article and mine.

And then down to ESR, and your point seems to be: "there is nothing specific to the Open Source model" in ESR's words. But did ESR ever say there was? No, all he said was that "its [IIS and Windows] source code has never been subjected to independent peer review." And that, I believe, is true. Is that specific to the Open Source model, or the Bazaar model? No. But it is apparent that Microsoft is suffering [at least from a security standpoint] from the lack of it.

You bring up the second most popular Open Source fallacy here, that peer review == Open Source. Independent peer review can occur without having the source available to all and sundry who receive a copy of the binary. More importantly, you've mixed up the design of IIS and Apache with their licensing models. The number of exploits in IIS versus Apache has less to do with availability of the source and more to do with how they were designed.

Then you leave us with two gems of wisdom that are quite true, but (as far as I can tell) irrelevant to the topic at hand. What does "software testing" have to do with the Open Source model more so than the closed-source one?

According to ESR in the Cathedral and the Bazaar, the main benefit of Open Source is getting an army of testers and co-developers to make all bugs shallow. This is why I brought up testing.

And then, although the two numbered statements were true, you jump off the metaphorical deep end and say that debugging and testing don't improve the overall quality of the system! It may not be the only way or even the best way, but the testing and debugging cycle most certainly does improve the quality of a package.

The overall quality of a piece of software is dependent on how well it is designed. Testing can reveal bugs, which can then be fixed, but how effectively and easily these bugs can be fixed without repercussions on the rest of the system is dependent on the quality of the system.

All the testing in the world couldn't change the fact that the original Netscape Navigator code was a shitty codebase, and thus could not improve its quality. This eventually led to the rewrite which became the current Mozilla codebase.

[ Parent ]
Potatoe (3.66 / 3) (#10)
by Dlugar on Fri Oct 26, 2001 at 11:19:34 PM EST

Saying "in the Open Source world security holes will be infrequent" is the same as saying "open source == automatically more secure".
Whoa! One word can make a world of difference, buddy. Please notice your addition of the word more. You will have a much more difficult time making the argument that "open source != automatically more secure" than you would "open source != automatically secure". For example, would you agree that if there were a particular software package that had two separate versions, both undergoing the exact same design, testing, security audits, etc.--but one was open source and the other was not, do you think that the open source version would stand a better chance of being secure? That is what, to me, "in the Open Source world security holes will be infrequent" means.

This is besides the fact that this is not exactly accurate when one considers the frequency and nature of bugs found in a number of Open Source projects like sendmail, rpc.* daemons, BIND, wu-ftpd and others.
Compared to what, exactly? What closed-source projects that do the same things have less frequent and less serious bugs? And are as easily patched?

And that's the point, easily patchable != more secure. That is the entire point of John Viega's article and mine.
No, Viega's article is "Open source doesn't help any if nobody looks at the code." Your article is more "Good design from the ground up and rigorous standards and auditing help more than opening the source." At least, in the few sentences I found meaningful. The rest seemed to be using this meaningful portion as justification for a rant against open source.

You bring up the second most popular Open Source fallacy here, that peer review == Open Source.
What on earth? I specifically did not! I pointed out that ESR did not! I said, very specifically, "Is that specific to the Open Source model, or the Bazaar model? No."

According to ESR in the Cathedral and the Bazaar, the main benefit of Open Source is getting an army of testers and co-developers to make all bugs shallow.
This particular statement is not the "main benefit," but rather one of a dozen and a half or so. Furthermore, the "formal" declaration of this is:
Given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix obvious to someone.
This has nothing to do with either "Testing does not show the absence of bugs." nor "Testing cannot be used to improve the quality of software but can be used to demonstrate the quality of the software."

The problems with this demonstrated by Mailman were that the security issues never became "a problem" in the eyes of developers. If a system were compromised or some such, I'm certain that a patch would be immediately produced and, if the problems were deep enough, people would likely stop using the broken code and perhaps even rewrite a well-designed version from the ground up. A shoddily-written closed-source application of the same type would not likely receive the same treatment.

The overall quality of a piece of software is dependent on how well it is designed.
And in your own example, we see Netscape being redesigned as Mozilla. People were able to see the code, saw that it was pretty lousy, and so they decided to rewrite it.

What about that makes the statement, "Open Source software can be more secure than closed-source counterparts" false?

Dlugar

[ Parent ]
In the Open Source world... (3.33 / 3) (#11)
by Carnage4Life on Fri Oct 26, 2001 at 11:35:41 PM EST

Whoa! One word can make a world of difference, buddy. Please notice your addition of the word more. You will have a much more difficult time making the argument that "open source != automatically more secure" than you would "open source != automatically secure". For example, would you agree that if there were a particular software package that had two separate versions, both undergoing the exact same design, testing, security audits, etc.--but one was open source and the other was not, do you think that the open source version would stand a better chance of being secure? That is what, to me, "in the Open Source world security holes will be infrequent" means.

Ok, fine, let's take out the word "more", which I only put in there to make your original statement sound more reasonable. The phrase "in the Open Source world security holes will be infrequent" implies that the Open Source model leads to secure software. The point of my article, and of Viega's, is that this assumption is false.

Security audits, code review done by competent developers, enforcement of good programming practices, mathematical proofs of software correctness and a well thought out design lead to secure software. Opening the Source, like testing helps along the road to quality software but is not a primary means of getting there.

[ Parent ]
Reasonable (4.50 / 2) (#14)
by Dlugar on Fri Oct 26, 2001 at 11:51:09 PM EST

Security audits, code review done by competent developers, enforcement of good programming practices, mathematical proofs of software correctness and a well thought out design lead to secure software. Opening the Source, like testing helps along the road to quality software but is not a primary means of getting there.
Simpler question: do you think that an open source environment is more likely, less likely, or about the same to have "security audits, code review done by competent developers, enforcement of good programming practices, mathematical proofs of software correctness and a well thought out design"?

To prove your conclusion that the Open Source model does not lead to more secure software, you will have to prove the "less likely" scenario, not this strange and tangentially related concept of "Opening the source doesn't help if no one looks at the code."

Dlugar

p.s. on a quite unrelated note, I meant to ask you: What exactly do you mean by "verifying the software via formal proofs"? Do any software companies actually do this for software of any reasonable length? Do you have any examples that show this makes code more secure? I can only imagine formal proofs that prove the code does what it's supposed to--not anything that would prove it is free from buffer overruns or some such.

[ Parent ]
RE: Reasonable (3.00 / 2) (#16)
by Carnage4Life on Sat Oct 27, 2001 at 12:15:00 AM EST

Simpler question: do you think that an open source environment is more likely, less likely, or about the same to have "security audits, code review done by competent developers, enforcement of good programming practices, mathematical proofs of software correctness and a well thought out design"?

No I don't. I'm not saying it's impossible for these to occur in an Open Source project, but it is more likely that at random company X everybody gets a list of methods they shouldn't use, company Y and department Z perform quarterly code reviews, and a consultant ε is called in for a security audit, than it is for an Open Source project to have such formal procedures.

p.s. on a very quite unrelated note, I meant to ask you: What exactly do you mean by "verifying the software via formal proofs"? Do any software companies actually do this for software of any reasonable length? Do you have any examples that show this makes code more secure?

Algorithms in a system can be proven correct using mathematical methods, which happens all the time in crypto, in trusted systems, and at NASA. You are correct, however, in that the actual implementation cannot be proved correct, since you can't prove that the person writing the code didn't screw up somewhere implementing the algorithm. At best you can attempt to verify that the implementation is correct.
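
One way to read "attempt to verify that the implementation is correct" is differential testing: exercise the implementation against a trivially correct reference that embodies the proven algorithm's specification. A minimal C sketch, with functions invented for illustration:

    #include <assert.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Implementation under test. */
    static int bsearch_idx(const int *a, int n, int key)
    {
        int lo = 0, hi = n - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;   /* avoids the (lo+hi)/2 overflow */
            if (a[mid] == key) return mid;
            if (a[mid] < key) lo = mid + 1; else hi = mid - 1;
        }
        return -1;
    }

    /* Specification: the obviously correct (if slow) reference. */
    static int linear_idx(const int *a, int n, int key)
    {
        for (int i = 0; i < n; i++)
            if (a[i] == key) return i;
        return -1;
    }

    int main(void)
    {
        int a[64];
        for (int trial = 0; trial < 10000; trial++) {
            int n = rand() % 64;
            for (int i = 0; i < n; i++)                 /* sorted random input */
                a[i] = (i == 0 ? 0 : a[i - 1]) + rand() % 3;
            int key = rand() % 200;
            int found = bsearch_idx(a, n, key);
            /* Both must agree on whether the key is present at all. */
            assert((found == -1) == (linear_idx(a, n, key) == -1));
            if (found != -1) assert(a[found] == key);
        }
        puts("implementation agrees with the reference on 10000 random cases");
        return 0;
    }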

[ Parent ]
Have you considered? (3.50 / 6) (#12)
by stuartf on Fri Oct 26, 2001 at 11:37:37 PM EST

No, all he said was that "its [IIS and Windows] source code has never been subjected to independent peer review." And that, I believe, is true.

Have you considered that it might just be possible that the IIS code has been subjected to peer review? Who knows what sort of code reviews Microsoft do? The fact that they haven't let ESR review it, or open sourced it, does not automatically mean that it hasn't been peer reviewed.

[ Parent ]

Independent being the Key Word (4.50 / 2) (#18)
by Dlugar on Sat Oct 27, 2001 at 12:22:04 AM EST

I don't know of any IIS code that Microsoft has released to outside parties. I know they do that with portions of the Windows source code. I imagine if they did it with IIS, the outside world [notably the Gartner group?] would probably know about it in some way or another. Got any links?

Dlugar

[ Parent ]
Your mistake (3.66 / 3) (#50)
by stuartf on Sun Oct 28, 2001 at 02:39:06 AM EST

Your mistake is assuming that because you don't know of it happening, it hasn't happened. There's nothing to say that it has or hasn't happened. Nada. Until you know for sure, it's just unfounded speculation.

[ Parent ]
In the absence of evidence... (4.66 / 3) (#53)
by Macrobat on Sun Oct 28, 2001 at 02:21:22 PM EST

...it's safest to assume the worst. Microsoft tells us they're secure (despite numerous exploits) but we don't really know that. Trusting their word is just as much "unfounded speculation" unless they can prove it.

"Hardly used" will not fetch a better price for your brain.
[ Parent ]

You missed a word (4.66 / 3) (#20)
by Ian Clelland on Sat Oct 27, 2001 at 12:32:02 AM EST

I'm sure that Microsoft does code review all the time. Structured code review, even. Every software company has to do that.

The point was that, especially in the area of security matters, independent peer review is almost a requirement. The source needs to be scrutinised explicitly for security holes, by an outsider, with no financial or other interest in the software itself.

It's just too easy for an internal code review to make mistakes, either honestly ("Oh, that code's good, I know the programmer who did that") or not-so-honestly ("If that gets out, the product won't ship for another year; besides, it's just a small hole...")

[ Parent ]

Testing (3.50 / 2) (#32)
by zephiros on Sat Oct 27, 2001 at 05:46:52 AM EST

And then, although the two numbered statements were true, you jump off the metaphorical deep end and say that debugging and testing don't improve the overall quality of the system! It may not be the only way or even the best way, but the testing and debugging cycle most certainly does improve the quality of a package.

A few things here. First, usage != testing. It would take just a staggering amount of usage to randomly generate an MS Index Server buffer overflow. For mature software, security holes tend to show up in rarely-used edge cases, not daily tasks.

Second, no amount of testing will improve quality in a software system. Implementing an effective testing and defect correction process, however, just might. If this process is broken, it's quite possible to introduce more software defects. Fixes in complex systems run a risk of unanticipated side effects, especially if the fixes are developed in parallel and integrated into the system without proper regression testing. This risk becomes more pronounced as the software becomes more complex and the number of developers/testers grows. There's a cottage industry surrounding formalized risk management strategies for very large scale projects.
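
A minimal sketch of what such a regression process can look like (parse_port() is invented for illustration, not a real API): every past defect report becomes a recorded case that is re-run after each subsequent fix, so a patch in one place cannot silently reintroduce a defect elsewhere.

    #include <assert.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Function that has been "fixed" a couple of times already. */
    static int parse_port(const char *s)
    {
        char *end;
        long v = strtol(s, &end, 10);
        if (end == s || *end != '\0') return -1;   /* bug #12: trailing junk */
        if (v < 1 || v > 65535) return -1;         /* bug #31: out of range  */
        return (int)v;
    }

    int main(void)
    {
        /* Regression cases accumulated from earlier defect reports. */
        assert(parse_port("80")    == 80);
        assert(parse_port("65535") == 65535);
        assert(parse_port("0")     == -1);   /* bug #31 */
        assert(parse_port("70000") == -1);   /* bug #31 */
        assert(parse_port("80abc") == -1);   /* bug #12 */
        assert(parse_port("")      == -1);
        puts("regression suite passed");
        return 0;
    }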

Bottom line: IMO, there's no magic to open source (or closed source, for that matter). The OSS development model simply replaces one flavor of defects with another.
 
Kuro5hin is full of mostly freaks and hostile lunatics - KTB
[ Parent ]

Usage as testing (4.00 / 2) (#59)
by kmself on Sun Oct 28, 2001 at 05:24:58 PM EST

Usage is testing -- for a mode of testing.

What else is beta testing? What you're getting with a user-based testing/debug cycle is a nonsystematic, but probabilistic, exploration of the possible error domain. I've found that even showing a bit of code (or an essay, or documentation) to a few other people starts turning up all sorts of things I'd not have found on my own. In deployment, particularly for network-accessible applications, usage testing grows to include exploit attempts on deployed systems, a pretty broad class of methods.

Usage testing isn't a complete substitute for systematic testing. There are several modes of testing, including:

  1. Desk checking.
  2. Walkthroughs.
  3. User testing.
  4. Regression testing.
  5. Software audits.

Free software tends to incorporate the first three, and sometimes four, methods. Software audits are less common, but there's a mix of factors: while there's less proprietary incentive and funding for audits, there's also more access to auditing capabilities from independent organizations.

The reason usage testing is useful is that it can both feed regression tests ("the plural of anecdote is data") and fit the security hole profile: the window of vulnerability is the period of time between the development of an exploit for a vulnerability and the time at which the vulnerability has been broadly patched. The existence of vulnerabilities is not itself indicative of a security problem. As the great bulk of exploit attempts are script-driven (it's easier to deploy a tool than to be creative and invent one), the effective protection afforded is pretty strong.

--
Karsten M. Self
SCO -- backgrounder on Caldera/SCO vs IBM
Support the EFF!!
There is no K5 cabal.
[ Parent ]

Re: Usage as testing (4.00 / 3) (#63)
by zephiros on Sun Oct 28, 2001 at 07:02:09 PM EST

Usage is testing -- for a mode of testing.

Some usage is testing, however not all usage is inherently "testing." To wit:

1. Bob uses my software.
2. Bob has a copy of the source code.
It does not follow that...
3. Bob is a beta tester, whose input improves the quality of the software.
In order to have an effective testing process, you need to incorporate data from usage. However, the fact that usage is occurring somewhere does not inherently mean that the quality of the software is improving. Which I believe was C4L's point; many eyes are just many eyes. You need to have some sort of structure in place in order to turn that potential into improved software.

Free software tends to incorporate the first three, and sometimes four, methods. Software audits are less common, but there's a mix of factors: while there's less proprietary incentive and funding for audits, there's also more access to auditing capabilities from independent organizations.

I think OpenBSD is an amazing example of competent, thorough source auditing emerging out of the stew of OSS. However, I think we're all aware that a default installation of Joe-Bob's home-brew Gnutella client is probably not as secure as a default installation of OpenBSD. Ultimately, I suspect that source licensing may not be a meaningful metric for determining how secure a piece of software is.
 
Kuro5hin is full of mostly freaks and hostile lunatics - KTB
[ Parent ]

Probability, Feedback (5.00 / 1) (#70)
by kmself on Sun Oct 28, 2001 at 10:09:47 PM EST

You're engaged in some selective reading there.

More users means, probabilistically, that you're exploring more execution paths, input combinations, and operating conditions. Clearly, there's also a feedback process required. Absent this, your user base doesn't do much for you. However, in all likelihood, as your user base expands, and given communications channels (bugtracking systems, discussion boards, mailing lists), you're going to get feedback. Law of large numbers.

I don't disagree regarding code audits. Note though that what OpenBSD accomplishes, for the most part, is identifying practices (mostly buffer overflow conditions) which may lead to exploits. My read of the OpenBSD audit process is that it focuses less on identifying potential exploits in code than on the initiating conditions. The former is very difficult, the latter only reasonably difficult.

Read what I wrote. I think you'll find you're posing a strawman. Again: users are an element of software review. Not the whole hog.

--
Karsten M. Self
SCO -- backgrounder on Caldera/SCO vs IBM
Support the EFF!!
There is no K5 cabal.
[ Parent ]

While you do have a point (4.20 / 10) (#4)
by Zeram on Fri Oct 26, 2001 at 09:50:16 PM EST

I think you also miss part of the point. For example, the client I work for bought several Cisco firewalls about a year ago, and just finally got around to putting them in place about a month ago as part of a network expansion plan. Well guess what, the firmware of the firewall has a very deadly bug that can render it almost useless, but since the client didn't buy a support contract from Cisco, they couldn't get the update until they shelled out some outrageous sum for said contract. Now if the client was using retired boxen with (insert your favorite open source OS here) on them and they found out about an exploit, they could get a fix from someone who already noticed and patched it, or authorized some overtime and had some of their people look for the hole and patch it themselves.

I do not for a second disagree with you that more eyes mean little in the grand scheme of things. However there are many instances (and to give credit where it's due, I can't think of one that involves M$ at the moment) where closed source software vendors will send you patches only if you have a support contract with them. That kind of highway robbery is the best business case I can think of for the strategic deployment of open source. I love Linux, plain as that, but I also know that it has its time and place. It is not the one-size-fits-all product that many of the zealots make it out to be.

However, most of the more mature open source projects are very secure, much more secure upon a base install than their closed source rivals. The perfect example of this is the IIS vs Apache debate. If as an admin you keep up on IIS patches, and you are careful about what you load, IIS can be pretty secure. However Apache, on a default install, has not had a total system access level exploit in over three years. What scares me is that M$ intends to rewrite large portions of IIS for their next version, in the name of increased security. Given their track record so far you can only wonder what kind of exploits that will lead to. In the end though, security has little to do with specific software, and everything to do with good policies, thorough and alert admins, and using the right tools for the right job.
<----^---->
Like Anime? In the Philly metro area? Welcome to the machine...
Missing the point? (2.66 / 3) (#5)
by Carnage4Life on Fri Oct 26, 2001 at 09:56:51 PM EST

Now if the client was using retired boxen with (insert your favorite open source OS here) on them and they found out about an exploit, they could get a fix from someone who already noticed and patched it, or authorized some overtime and had some of their people look for the hole and patch it themselves.

This really is tangential to the point of my article. The point of my article is that the factors that matter in building secure software have little to do with whether the source is available on some FTP site or in the /src directory of the CD. I do agree that it is practically impossible for a vendor to prevent you from patching your system if it is Open Source, although I can think of a number of ways they can make it difficult.

[ Parent ]
you'll have to excuse me... (4.00 / 2) (#33)
by Zeram on Sat Oct 27, 2001 at 07:56:06 AM EST

I wrote that comment late last night just before I went to bed, so it was more rambling than I had wished. The point I was trying to make is that with open source software, if a problem is detected, then all that is required to fix it is a few developers. By hiring developers the chances of getting a fix are pretty good, and as long as you hire decent developers it should be rather timely. Whereas waiting for a third party software vendor could have you waiting for a good long time, or paying as much if not more money than you would have paid for developers. As I said though, security has more to do with being alert and timely than with any particular type of software.
<----^---->
Like Anime? In the Philly metro area? Welcome to the machine...
[ Parent ]
something I dont get... (4.00 / 3) (#13)
by rebelcool on Fri Oct 26, 2001 at 11:45:20 PM EST

the often touted "if it's got a bug, I can fix it!" thing comes up. Then I think about how most software that does anything useful is extremely complicated, and tracking down bugs can be difficult even for a programmer intimate with the software. Much less Joe Programmer with a pile of a million lines of code.

COG. Build your own community. Free, easy, powerful. Demo site
[ Parent ]

Fixing a bug is usually pretty easy... (4.00 / 2) (#26)
by Whyaduck on Sat Oct 27, 2001 at 02:26:01 AM EST

...if you can reproduce it. The hard part is not introducing 10 new bugs with your fix. Do any open source development teams release automated regression tests with their software? If they do, how well maintained are they? Please note, I am in no way, shape or form suggesting that closed source projects all have well developed test suites...believe me, I know they don't. I haven't been working with open source, so I'm just not familiar with the "typical" development process.


Oh Lydia, oh Lydia, say, have you met Lydia?
Lydia The Tattooed Lady.
She has eyes that folks adore so,
and a torso even more so.

[ Parent ]
depends on the nature of the bug. (4.00 / 3) (#38)
by rebelcool on Sat Oct 27, 2001 at 11:44:07 AM EST

sure, a typo might be fairly easy to fix (but again, you need to know where to look..something Joe Q. Programmer isn't going to know off hand in a massive piece of software) but when complicated bugs involving several layers of hardware and software come up... that can be nearly impossible for the seasoned developer to track, much less someone who doesn't know the layout of the code.

COG. Build your own community. Free, easy, powerful. Demo site
[ Parent ]

Congratulations! (3.60 / 5) (#54)
by stuartf on Sun Oct 28, 2001 at 02:23:08 PM EST

You've spotted one of the many errors in the Open Source philosophy. Yes, you can fix bugs if you have the source, but it will usually not be worth your time to do so, as you'll have to become familiar with millions of lines of code. And reading someone else's code is hard work...

[ Parent ]
"Security" is not a reality, it's an abs (4.46 / 13) (#15)
by quartz on Fri Oct 26, 2001 at 11:53:57 PM EST

I don't believe "Given enough eyeballs, all bugs are shallow" is a mantra for any serious developer. I think it's just a statement of fact.

Let me borrow an argument from the arsenal of MS apologists and tell you about Joe User, i.e. myself. As a regular OSS user who has done his homework on network security, I run Apache on Linux and I've never had a security problem with either of them. Why didn't I have security problems? Because 1) the software, in its current stage of development, is pretty secure; 2) it's not completely secure, but whenever someone does find a security hole, a patch is released within 24 hours and I can patch my system before script kiddies have a chance to try out the exploit; and 3) if by any chance I happen to stumble upon a security hole, or I become the victim of someone who did, or maybe the hole is in some obscure piece of software that's less likely to be patched as quickly as Apache or Linux, I have the source code right there in /usr/local/src. I'm fairly proficient in C and C++; if I care enough about that piece of software, I'm going to spend a day or two and write the patch myself.

Meanwhile, my log files are continuously bombarded with junk generated by Windows computers infected with Nimda and Code Red. Does the standard "Windows has viruses because it's more popular" line apply here? No, because Nimda is not a Windows issue, it's an IIS issue, and IIS is far less popular than Apache. There's another cliche that does apply here, and that is: "It's not IIS's fault, since patches had been available for quite some time - it's the user's fault".

My conclusion? A program is as secure as its creators AND its users want it to be. Therefore the most secure software is that software which is developed by security conscious programmers AND used by educated, computer literate, security conscious users. The real problem wrt security is not OSS versus closed source; it's OSS developers and users versus closed source software developers and users, with all their reasons, their philosophies and their motivations. You do the math.



--
Fuck 'em if they can't take a joke, and fuck 'em even if they can.
Nit. (4.25 / 4) (#28)
by pwhysall on Sat Oct 27, 2001 at 03:16:02 AM EST

Nimda *is* a Windows issue, as well as being an IIS issue. It uses filesharing to propagate on the local network, if it can.
--
Peter
K5 Editors
I'm going to wager that the story keeps getting dumped because it is a steaming pile of badly formatted fool-meme.
CheeseBurgerBrown
[ Parent ]
Huh? (3.66 / 3) (#51)
by Eimi on Sun Oct 28, 2001 at 01:38:46 PM EST

How is that a Windows issue? Are you saying that Linux is unable to share files? Now, if you said it was a Windows issue because under Linux the webservers run as "nobody", and therefore don't have rights to overwrite crucial files, you might have a point, but filesharing seems neither here nor there.

[ Parent ]
Hrm. (3.50 / 2) (#64)
by pwhysall on Sun Oct 28, 2001 at 07:18:46 PM EST

Linux does not routinely share its root filesystem.

It is not possible to create hidden shared directories without system permissions.

Both of these facts are true exclusively with respect to Windows.

I was merely pointing out that Nimda is not just a web server virus, but uses multiple vectors of infection that are unique to Windows - the creation of hidden, non-system shares being one of them.
--
Peter
K5 Editors
I'm going to wager that the story keeps getting dumped because it is a steaming pile of badly formatted fool-meme.
CheeseBurgerBrown
[ Parent ]

Bah. I will preview in future :) (4.00 / 1) (#65)
by pwhysall on Sun Oct 28, 2001 at 07:20:52 PM EST

On reading that, it's apparent that it's clear as mud.

My point was that while Windows routinely shares the root filesystem and allows hidden shares to be created without system permissions, Linux does not.

Apologies for any confusion caused.
--
Peter
K5 Editors
I'm going to wager that the story keeps getting dumped because it is a steaming pile of badly formatted fool-meme.
CheeseBurgerBrown
[ Parent ]

Many eyes make shallow bugs?? (none / 0) (#72)
by treefrog on Mon Oct 29, 2001 at 04:58:03 AM EST

I'm not actually sure that many eyes make shallow bugs. In fact I think there is a serious flaw in this argument.

Having the right eyes review code makes for shallow bugs. Not many eyes. The open source movement seems to run on the principle that the more people who look at the code the better, under the assumption that there is more chance that one of these people will be the right person. I'm not at all convinced that this is the case.

A good developer is worth 10 poor developers, and this doesn't stop with open source. Get the right people looking at the code, and mentoring others to work in the same way (possibly by pointing out their checking and testing methodology on mailing lists), and you start to stand a chance. But if you think that making code open source is a panacea for the lack of a good design and testing methodology, then you are doomed from day one...


Best regards

Treefrog
Twin fin swallowtail fish. You don't see many of those these days - rare as gold dust Customs officer to Treefrog
[ Parent ]
Links that didn't make it into the article (4.33 / 9) (#25)
by Carnage4Life on Sat Oct 27, 2001 at 01:41:01 AM EST

  1. Building Secure Software by Eugene Spafford; read the slide on page 13 and compare the number of OS exploits for proprietary versus Open Source operating systems to reach the startling conclusion that there is little correlation between the availability of the source and the security of the platform.

  2. Why Open Source Software Only Seems Secure by Eugene Spafford, presented at the DK SSLUG. This tackles a lot of the claims that Open Source is better at creating secure software than proprietary software, and provides some insight as to why people think so.


Uhmm (3.83 / 6) (#31)
by Neuromancer on Sat Oct 27, 2001 at 04:51:43 AM EST

No offense, but borrowing and paraphrasing from my software engineering notes.

1) Testing improves software...
and
2) Testing improves software...

You test, you find crash conditions, you debug. This is very methodical in nature. Good software testing will tell you exactly what's wrong, and should indicate a course of action for its repair.

I.e., I run a battery of scripts on a function to blackbox test it. I then find that this function crashes on such and such input. I return this to whoever wrote the function, or debug it myself if I have the source, and suddenly it works better.
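
A minimal sketch of that kind of battery (to_celsius()/to_fahrenheit() are invented here, not from any real project): the harness knows nothing about the implementation, only a round-trip property the outputs should satisfy, and it reports every input that violates it.

    #include <math.h>
    #include <stdio.h>

    static double to_celsius(double f)    { return (f - 32.0) * 5.0 / 9.0; }
    static double to_fahrenheit(double c) { return c * 9.0 / 5.0 + 32.0; }

    int main(void)
    {
        /* Edge cases chosen without looking at the implementation. */
        const double inputs[] = { -459.67, -40.0, 0.0, 32.0, 98.6, 212.0, 1e6 };
        int failures = 0;

        for (size_t i = 0; i < sizeof inputs / sizeof inputs[0]; i++) {
            double x = inputs[i];
            double back = to_fahrenheit(to_celsius(x));
            if (fabs(back - x) > 1e-9 * (fabs(x) + 1.0)) { /* relative tolerance */
                printf("FAIL: input %g came back as %g\n", x, back);
                failures++;
            }
        }
        printf("%d failure(s)\n", failures);
        return failures != 0;
    }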

As for peer review, this is also important. Many people will incorporate functions or libraries into programs, and someone else will say, OOOHHH, Don't do that, such and such causes an overflow condition in this piece of software, change it out with something that cannot do this.

Both of these processes are pretty important... Testing ALONE doesn't make better software, but it is not merely a hallmark of the quality of software, it is the only way to ensure quality.

Testing (3.00 / 2) (#52)
by RHSwan on Sun Oct 28, 2001 at 01:40:07 PM EST

The problem with testing is that you can only test for potential problems you know about. You can't test for potential problems you don't know about. Most security problems probably slip by for any combination of the following reasons: 1) Why would anybody do that? 2) I didn't think anybody could do that. 3) I didn't think of that. 4) We didn't change that piece of code, so why test it? (For the programming impaired: new code plus old code can equal a security hole.) Open source, because more people CAN see the source, will potentially catch more of these errors. But that is no guarantee people will look at them. Most people, even dedicated ones, will only test what they have created, and assume the other people have done their job. Closed source, done properly, will have people whose job it is to test the code, preferably people with no emotional tie to the code. They won't catch everything, but their job is to try.

[ Parent ]
Not so (4.00 / 1) (#67)
by Neuromancer on Sun Oct 28, 2001 at 08:45:43 PM EST

Blackbox testing of functions should test ALL possible combinations of inputs. This is done by scripts. A human may miss a possible combination. A script will not.

[ Parent ]
Back in the real world... (3.00 / 1) (#76)
by Shpongle Spore on Mon Oct 29, 2001 at 11:55:12 AM EST

What about the many, many functions that have a virtually infinite set of inputs? You can't just methodically test them all; you have to choose some set of inputs that you think will expose all the potential defects in the code.

To take a real world example, I work on a C compiler for a living. The only way to test it is to run programs through it and see if the compiled programs run correctly. Obviously there's no hope of ever running every possible C program through the compiler to test it.
__
I wish I was in Austin, at the Chili Parlor bar,
drinking 'Mad Dog' margaritas and not caring where you are
[ Parent ]

test pieces.. (3.66 / 3) (#78)
by rebelcool on Mon Oct 29, 2001 at 12:18:02 PM EST

of course, the best way is to test individual pieces to the best of your ability, thus the sum of them all is correct.

COG. Build your own community. Free, easy, powerful. Demo site
[ Parent ]

Not an option. (4.00 / 2) (#82)
by Shpongle Spore on Mon Oct 29, 2001 at 03:41:58 PM EST

Saying "the sum of them all" is very misleading; you can't just take all the bits of a complex piece of software are throw them together like you would add up a column of numbers. If you take a major piece of software and locate all the parts simple enough to be tested in isolation, you would no doubt find there there is far more code devoted to joining those parts together than there in the parts themselves. There is no way to test the relationships between the parts without testing the system as a whole.

How safe would you feel flying in an airplane knowing that all the nuts, bolts, tubes, wires and microchips had been rigorously tested but the finished product had not?
__
I wish I was in Austin, at the Chili Parlor bar,
drinking 'Mad Dog' margaritas and not caring where you are
[ Parent ]

well, duh. (4.00 / 2) (#83)
by rebelcool on Mon Oct 29, 2001 at 03:59:33 PM EST

who said anything about not testing the whole thing? However, you can be reasonably sure that if the plane's wings, avionics, and landing gear (and all the rest of its pieces) work, the plane will work.

COG. Build your own community. Free, easy, powerful. Demo site
[ Parent ]

Oh, right (none / 0) (#71)
by Neuromancer on Mon Oct 29, 2001 at 01:12:54 AM EST

Yeah, what I kinda meant was that I assume that open source programmers tend to take the same precautions.

[ Parent ]
testing is not necessary (3.00 / 3) (#81)
by Wouter Coene on Mon Oct 29, 2001 at 03:11:52 PM EST

Just look at the code:
  • look at function input (parameters, stdin, sockets): is each possible case handled?
  • look at function output (parameters, stdout, sockets): is it correct with regard to the specification?
  • is all this documented?
  • do all function calls in the function follow the interface for the respective functions?
  • is the code readable, so future problems can be easily spotted?
I dare say that applying these guidelines will catch up to 80% of the most obscure bugs. A good programmer can just spot (potential) problems almost instinctively, and this includes security problems.

Overzealous belief in the testing of your software will lead to program code that passes your tests, and only your tests.

Oh, and following the Ten Commandments for C Programmers often helps too.
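
To make the first item on that list concrete, here is a small example (usable_length() is a made-up helper, not from any real codebase) of what "is each possible input case handled?" looks like when the cases are spelled out and documented:

    #include <assert.h>
    #include <stddef.h>

    /* Returns the length of 's' if it fits, terminating NUL included, in a
       buffer of 'cap' bytes; returns -1 otherwise.
       Input cases handled: NULL pointer, zero capacity, string with no
       terminator within 'cap' bytes. */
    static long usable_length(const char *s, size_t cap)
    {
        if (s == NULL || cap == 0)
            return -1;                  /* no string, or no room at all      */
        for (size_t i = 0; i < cap; i++)
            if (s[i] == '\0')
                return (long)i;         /* fits, terminator included         */
        return -1;                      /* not terminated within 'cap' bytes */
    }

    int main(void)
    {
        assert(usable_length("abc", 8) == 3);
        assert(usable_length("abc", 3) == -1);   /* no room for the NUL */
        assert(usable_length(NULL, 8)  == -1);
        return 0;
    }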

[ Parent ]

Dipping my foot into the pool (3.80 / 5) (#37)
by forgotten gentleman on Sat Oct 27, 2001 at 11:23:38 AM EST

I find this an interesting thread. There are some points that stand out.

  • Testing = quality?
    I like Carnage4Life's point that testing does not result in quality; it only finds flaws. That I think has a subtle point, which is that quality comes from design and engineering, not from things you do after the fact.

    But what does testing mean? What if a language doesn't have preconditions and postconditions? Then can testcases provide them?

  • Heart of the matter
    I think the question, phrased in a very rigorous way, is what engineering advantages do open and closed development have?

    One point for open source is that closed management often has a single mandate for what the software should accomplish. If they want fast development times, they will push for that. However, with open development, there is a likelihood that at least one person will find it in their interest to push for security.

    On the flip side, closed companies often act as filters, and can select the right mix of people.

  • Design and engineering
    Security does not have to be immediate. Everything has defects, unless like NASA we are willing to commit very deeply. But if a group releases software with a good enough design that security fixes are quick, then is that not secure software?

    This point relates to the other two points I just mentioned. That is why I sympathize with Carnage4Life's position. Testing has no effect on quality and only indicates it, because you may find 100 bugs a day, or just one.

    I do not think this is an open and shut case though, because unit testing may lead to "design by contract," which is about preconditions and postconditions. Also, one heart of the matter is that open developers have proven far quicker and more grateful at closing security holes that hackers point out, than closed companies have been.

    Responding to one minor point... (4.00 / 1) (#56)
    by magney on Sun Oct 28, 2001 at 03:33:37 PM EST

    But if a group releases software with a good enough design that security fixes are quick, then is that not secure software?
    Not if the users of the software don't promptly apply the security fixes. The patches that protected against Code Red and Nimda were months old.

    Mind you, I'm not saying that Microsoft is especially quick about releasing security fixes. They aren't, from past history. But in this case it wouldn't have mattered how quick they'd been. As those cases show, quick release of security patches is only one piece of a very large security puzzle.

    Do I look like I speak for my employer?
    [ Parent ]

    Testing & quality: Not equal, but joined @ the (4.00 / 1) (#68)
    by Mr.Surly on Sun Oct 28, 2001 at 08:47:24 PM EST

    Truly, testing is not equal to quality, and a lot of quality comes from design and engineering.

    However, testing certainly implies quality, because after all: Why test if not for the sake of fixing flaws found? Just to make a list of bugs that you have no intention of fixing? I would assert that testing and quality go so closely hand-in-hand that the "subtle" difference might as well be ignored. Two scenarios:

    1) Would you use software in a mission-critical application that had excellent design and engineering, but no testing?

    2) Flip side: Would you use software in a mission critical application that had rather sloppy original design and programming, but had been tested / fixed / patched so much that it is considered stable? (side note: Apache server software was named as a joke: "A PAtCHy server")

    I, personally, wouldn't want to use either one. Clearly scenario number one is out of the question. Allowing for my own bias, I'd say that number two resulted in Windows being what it is today ;^)

    [ Parent ]
    Hey Dare, (3.37 / 24) (#45)
    by Eloquence on Sat Oct 27, 2001 at 05:41:58 PM EST

    .. is that how you pay off your Microsoft scholarship? You're doing a pretty good job so far. But in science and, to some degree, journalism, there's something that's called "full disclosure", and I think many people would appreciate it if you disclosed your job and scholarship relation to Microsoft in articles which relate to Microsoft / Open Source. Right below the article. Thanks.

    Regarding the article itself, this is exactly what people refer to when they use the phrase "Fear, Uncertainty, Doubt" (FUD). Instead of making any clear point that Open Source is inferior, you create a strawman by misinterpreting ESR (changing "given enough eyeballs.." to "given many eyeballs" in your argument) and promptly knock it down. Wow, great work. Open source does not automatically a secure program make! Who would have ever known?

    Now, the real purpose of the article is of course to create emotional dissonance with regard to the open-source development process, and emotional ambivalence with regard to proprietary software. The article does not function on the basis of rational argument, but rather focuses on targeting people's emotional belief that open source software is superior. And the facts? You have shown nothing, despite your reference to "rigorous mathematical methods". You have not even made any claim worth refuting. Your defense of Culp does not even merit response. To quote your user info, "thanks for playing" anyway. I hope you can sleep well at night.
    --
    Copyright law is bad: infoAnarchy Pleasure is good: Origins of Violence
    spread the word!

    Full Disclosure? (3.83 / 6) (#46)
    by Carnage4Life on Sat Oct 27, 2001 at 06:06:05 PM EST

    But in science and, to some degree, journalism, there's something that's called "full disclosure", and I think many people would appreciate it if you disclosed your job and scholarship relation to Microsoft in articles which relate to Microsoft / Open Source. Right below the article. Thanks.

    What job? I'm currently in school and I worked for MSFT over the summer. I fail to see how working at Microsoft affects my opinions on Open Source which are the same as before I went there.

    Anyway I've always been upfront about my Microsoft affiliations, heck I've posted multiple diaries about my MSFT internship. While I worked there I posted the fact that I was an employee in my user info here and on Slashdot, and even had my MSFT email on my posts on Slashdot until someone at work pointed out that this would give the impression that I was making official comments instead of merely stating my own opinions.

    If I really was trying to hide that I worked for MSFT, why would I make a link to my site available? Even better why would I use this account which is probably known by people on K5, Slashdot, my school and at MSFT?

    Your paranoia amuses me.

    Instead of making any clear point that Open Source is inferior

    Wow, I'm beginning to grow tired of saying this but "The article isn't about Open Source being inferior to anything". Here, go read Kasreyn's post and try reading the article objectively instead of searching for attacks on the Open Source worldview which do not exist.

    [ Parent ]
    Amusement (3.75 / 4) (#55)
    by PresJPolk on Sun Oct 28, 2001 at 03:29:55 PM EST

    Your paranoia amuses me.

    Your arrogance, assuming that everyone will know who you are and have read all your diaries, amuses me.

    Also, the way you seem to take every opportunity to bash free software, in the same way others seem obsessed with bashing Microsoft, is good for a chuckle here and there.



    [ Parent ]
    K5 is a community (4.33 / 3) (#58)
    by Carnage4Life on Sun Oct 28, 2001 at 05:00:08 PM EST

    Your arrogance, assuming that everyone will know who you are and have read all your diaries, amuses me.

    K5 is a community and most of us read each other's diaries and know about the goings-on in the lives of other K5ers. I know that Phage has recently given up a long lost love, that spiralx has been laid off, that flowergrrl is having a baby for codemonkey_uk, that marlowe thinks he's a super patriot and that rusty just got married, just from reading the diaries.

    It isn't arrogance that makes me think that people would have read my diary, but the fact that K5 is a community where people tend to know things about each other based on their writings [and clicking the homepage links on their comments].

    Also, the way you seem to take every opportunity to bash free software, in the same way others seem obsessed with bashing Microsoft, is good for a chuckle here and there.


    I don't bash free software, I sometimes critique some of the notions of its more fanatical adherents, but that's just the way I am. I did the same thing at Microsoft while I was there (critiquing practices I disagreed with).

    [ Parent ]
    Community (4.50 / 6) (#62)
    by PresJPolk on Sun Oct 28, 2001 at 06:18:21 PM EST

    Ever taken a look at the number of people listed as online at any time? A community of that many faceless entities might be hard for one to keep straight.

    Also, while I wouldn't expect you to go out of your way to mention your MS ties in random comments on this site, I do expect such disclaimers for submissions. While I don't see writings for this site as journalism, I do think writers here have some responsibility to their readers around here.

    I don't think your article is part of some "astroturf" campaign. I doubt there was even anything dishonest about it. Then again, I remembered your MS ties when I saw your nick in the article summary. Someone who didn't recall your mentions of MS in the past, maybe someone new to the site, who then finds them out later, might have a different reaction.

    [ Parent ]
    Extensive mirth (3.60 / 5) (#79)
    by Anonymous 242 on Mon Oct 29, 2001 at 01:17:14 PM EST

    So what you are saying is that in a forum that requires one click to get to Carnegie's user info page and a second click to get to his diaries, that he must still detail every possible conflict of interest?

    Puh-lease.

    Carnegie isn't expecting everyone to know him personally, but it isn't like he's hiding anything or that information regarding possible conflicts of interest isn't easily available.

    Next thing you know if I write an article about the existence of God and fail to mention that I'm an Orthodox Christian, I'll be trying to pull the wool over the eyes of all those free thinker types.

    Whatever,

    Lee Irenæus Malatesta

    [ Parent ]

    That's ridiculous (4.00 / 4) (#69)
    by rebelcool on Sun Oct 28, 2001 at 10:08:34 PM EST

    So should I put a disclaimer on any of my posts that say "I work for a company that does not make open source software"?

    Let's make anyone who dabbles in open source put disclaimers on any posts which bash Microsoft.

    Then again, Carnage IS on the k5 blacklist. To the stake!

    COG. Build your own community. Free, easy, powerful. Demo site
    [ Parent ]

    Proof of security? (3.50 / 2) (#57)
    by Otter on Sun Oct 28, 2001 at 04:03:49 PM EST

    Besides good design and peer review I would like to add; verifying the software via formal proofs using rigorous Mathematical methods...

    Could someone knowledgeable explain how this works, preferably on a level comprehensible to someone who got A's in freshman calculus but has forgotten all of it? Are such practices in actual use? In the making of any software I might have used or interacted with?

    analysis of algorithms... (4.50 / 4) (#61)
    by rebelcool on Sun Oct 28, 2001 at 06:07:39 PM EST

    Don't think calculus proofs, think logic proofs (though they are similar; calculus is merely a subset of logic).

    And no, it's rather difficult to explain. I spent a semester learning logic proofs in general, and I have yet another semester or two to go in applying them to computer science.

    I'm sure plenty of software you use has proven algorithms... sorting algorithms are a good example.
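
    To make that concrete, here is a minimal sketch (my own, not from any course or textbook) of what a "proven" sorting algorithm looks like in practice: insertion sort with the loop invariant that a correctness proof would establish written out as comments.

        def insertion_sort(a):
            """Sort the list a in place; the comments state the invariant a proof would use."""
            # Invariant: before each iteration of the outer loop, a[0:i] is sorted
            # and is a permutation of the original first i elements.
            for i in range(1, len(a)):
                key = a[i]
                j = i - 1
                # Inner loop: shift elements greater than key one slot right
                # to open a gap for key; a[0:j+1] stays sorted throughout.
                while j >= 0 and a[j] > key:
                    a[j + 1] = a[j]
                    j -= 1
                a[j + 1] = key
                # Invariant restored: a[0:i+1] is now sorted.
            return a

        if __name__ == "__main__":
            assert insertion_sort([3, 1, 2]) == [1, 2, 3]

    Showing that the invariant holds at the start, is preserved by every iteration, and implies sortedness when the loop exits is exactly the kind of induction argument such a proof consists of.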

    They're most often used in the real-time industry for medical devices, aeronautics and what not when lives are on the line and an algorithm must absolutely, positively work all of the time.

    COG. Build your own community. Free, easy, powerful. Demo site
    [ Parent ]

    software validation (5.00 / 1) (#73)
    by jsilence on Mon Oct 29, 2001 at 10:07:38 AM EST

    Formal software validation can be done (and is being done) with specific mathematical methods.

    One approach is to model your software as a statechart (one of the UML diagram types), convert it to a binary decision diagram, formulate a statement that shall be proven in temporal logic and have a validation program validate that statement against the BDD.

    David Harel did fundamental research and 'invented' statecharts in the 1980s. To convert your software to a statechart it needs to be suited to being expressed in states. Digital watches for instance are good candidates, but also the TCP/IP stack can be modeled like this.

    Logically this statechart can then be translated into an occurrence tree. This tree has one node for every possible overall state that the complete system can have. Naturally these trees grow exponentially with the amount of possible states in the statechart. To alleviate the calculation and memory problem the occurrence trees can be expressed as binary decision diagrams. This is a lossless compression where redundant nodes are eliminated. Of course you don't convert to and from the whole occurrence tree since that's the beast you want to evade.

    Temporal logic makes statements about boolean variables over a forking time tree. Think of this like in the one episode of Voyager where this captain from the future manipulates timelines to rescue his wife. At every decision point a new timeline spawns off. It is possible to formulate statements like: There is at least one timeline where x is always true. Or: There is no timeline where y is always false. Or: In every timeline z will eventually become true.

    A research group in Oldenburg/Germany has developed a software validation system which allows them to validate these kinds of statements against a statechart (converted to a BDD and then to some other representation).

    As a demonstration they have modeled a tamagotchi and have validated that there is at least one way to keep a tamagotchi alive. They have also proven that it 'dies' for sure when you don't press any button.

    'Tamagotchis?' you say? 'Toys?'

    Well the group has also been working together with BMW and they used the system to validate the central locking software for cars. As far as I remember one of the other industry partners is British Airways.
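
    To give a feel for what such a check amounts to, here is a toy sketch of my own (nothing like the real BDD-based tool, and the state and event names are invented): once the statechart is an explicit set of states and transitions, the two tamagotchi claims above become small graph computations.

        # Toy "model checker" over an explicit state graph. Purely illustrative;
        # real tools work symbolically on BDDs rather than enumerating states.
        TRANSITIONS = {
            ("happy", "ignore"): "hungry",
            ("happy", "feed"): "happy",
            ("hungry", "ignore"): "sick",
            ("hungry", "feed"): "happy",
            ("sick", "ignore"): "dead",
            ("sick", "feed"): "hungry",
        }
        STATES = {"happy", "hungry", "sick", "dead"}

        def successors(state, actions):
            return [TRANSITIONS[(state, a)] for a in actions if (state, a) in TRANSITIONS]

        def can_stay_alive(start):
            """'There is at least one timeline where the pet is never dead':
            in a finite graph, an infinite run through non-dead states."""
            alive = {s for s in STATES if s != "dead"}
            changed = True
            while changed:  # drop states that cannot continue an alive run
                changed = False
                for s in list(alive):
                    if not any(t in alive for t in successors(s, ["ignore", "feed"])):
                        alive.discard(s)
                        changed = True
            return start in alive

        def dies_if_ignored(start):
            """'In every timeline without button presses, dead eventually holds.'"""
            s, seen = start, set()
            while s != "dead":
                if s in seen:            # looping forever without dying
                    return False
                seen.add(s)
                nxt = successors(s, ["ignore"])
                if not nxt:
                    return False
                s = nxt[0]               # 'ignore' is deterministic here
            return True

        print(can_stay_alive("happy"))   # True: keep feeding it
        print(dies_if_ignored("happy"))  # True: never pressing a button kills it

    The real systems answer the same kind of questions, but over state spaces far too large to enumerate one state at a time, which is where the BDD compression earns its keep.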

    The interesting point about statecharts is that there are tools that can generate code from the model, for instance VHDL code that can be burned onto chips. This way you can have validated code in your airbag and other safety and security critical systems.
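
    And to illustrate just the code-generation step in miniature (again a toy of my own, nothing like a real VHDL generator; the locking transition table is invented): once the model is an explicit transition table, emitting an executable dispatcher is a mechanical transformation, so the generated code is only as trustworthy as the model it came from.

        # Toy generator: turn a transition table into source code for a step function.
        TABLE = {
            ("locked", "unlock_button"): "unlocked",
            ("unlocked", "lock_button"): "locked",
            ("locked", "crash_sensor"): "unlocked",    # unlock the doors on impact
            ("unlocked", "crash_sensor"): "unlocked",
        }

        def generate_step_function(table):
            lines = ["def step(state, event):"]
            for (state, event), target in table.items():
                lines.append(f"    if state == {state!r} and event == {event!r}:")
                lines.append(f"        return {target!r}")
            lines.append("    return state  # undefined event: stay in the current state")
            return "\n".join(lines)

        source = generate_step_function(TABLE)
        namespace = {}
        exec(source, namespace)          # 'burn' the generated code into a function
        assert namespace["step"]("locked", "crash_sensor") == "unlocked"
        print(source)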

    As a chemical engineer I worked in a research group that tried to model chemical plants as statecharts, with the goal of making a half-automated security check. As far as I know this is an ongoing effort.

    -rolf


    Quantum mechanics: The dreams stuff is made of.
    [ Parent ]
    Anything is possible, given enough time (5.00 / 1) (#80)
    by Salamander on Mon Oct 29, 2001 at 01:21:34 PM EST

    One approach is to model your software as a statechart (one of the UML diagram types), convert it to a binary decision diagram, formulate a statement that shall be proven in temporal logic and have a validation program validate that statement against the BDD.

    You make it sound so easy, but in fact there are a few significant obstacles:

    • Tools to create BDDs aren't exactly ubiquitous, and none of them work with the languages people actually implement in. As a result, you end up with two independent expressions of your algorithm or protocol - one in the validator's preferred language/notation and one in your actual implementation language. In other words, you have to write your code twice, and the two versions have to be exactly the same despite being expressed in different languages, because if there's the slightest difference then a validation of one is meaningless with regard to the other.
    • Expressing an adequate set of truth conditions in temporal logic is a very difficult and specialized exercise. For many people it will be more difficult than writing the code was.
    • BDDs grow exponentially. So does the time required to walk a BDD and check truth conditions. Even fairly simple programs become impossible to validate within reasonable space and time limits.

    This doesn't mean that one should abandon the whole idea of program validation. Rather, what people should do is satisfy themselves with something less than full-blown formal validation of entire programs but more than seat-of-the-pants code inspection. Verifying critical parts of programs with a tool such as SPIN or Murphi is OK. Using a tool like MC to automate checks for specific common causes of error is OK too. Using tools like these will allow you to produce much more robust code than you could do otherwise, while still allowing you to ship your code this century.
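
    As a deliberately crude illustration of the "automated checks for specific common causes of error" end of that spectrum (this is not MC, just the idea in miniature; real checkers work on the parsed program rather than raw text):

        import re
        import sys

        # Flag C library calls that are common sources of buffer overflows.
        RISKY_CALLS = {
            "gets":    "no bounds check at all",
            "strcpy":  "destination size never checked",
            "sprintf": "prefer snprintf with an explicit size",
        }
        PATTERN = re.compile(r"\b(" + "|".join(RISKY_CALLS) + r")\s*\(")

        def scan(path):
            with open(path, errors="replace") as f:
                for lineno, line in enumerate(f, start=1):
                    for match in PATTERN.finditer(line):
                        name = match.group(1)
                        print(f"{path}:{lineno}: call to {name}() - {RISKY_CALLS[name]}")

        if __name__ == "__main__":
            for filename in sys.argv[1:]:
                scan(filename)

    Even something this shallow catches real mistakes; the value of the heavier tools is that they understand control and data flow, so they can check properties no pattern match ever could.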



    [ Parent ]
    validation is not suitable for every software (5.00 / 2) (#85)
    by jsilence on Tue Oct 30, 2001 at 10:35:26 AM EST

    * Tools to create BDDs aren't exactly ubiquitous, and none of them work with the languages people actually implement in. As a result, you end up with two independent expressions of your algorithm or protocol

    I wrote that your software has to be suitable for being expressed by statecharts and later I wrote that there are tools that can generate the code, thus you have one authoritative source, the model. This is the 'silver bullet' Harel wrote about when CASE tools were hip in the late eighties.

    In other words, you have to write your code twice

    Having to write the code twice would invalidate your formal validation, no matter whether the two versions have exactly the same structure.

    I agree with your other points. Formal software validation cannot become mainstream at this point. Therefore I guess it will only be done for important areas, like power plant software, safety-critical embedded systems or very fundamental software like the TCP/IP stack.

    -rolf


    Quantum mechanics: The dreams stuff is made of.
    [ Parent ]

    Automatic code generation (5.00 / 1) (#91)
    by Salamander on Tue Oct 30, 2001 at 07:58:55 PM EST

    I wrote that your software has to be suitable for being expressed by statecharts and later I wrote that there are tools that can generate the code, thus you have one authoritative source, the model. This is the 'silver bullet' Harel wrote about when CASE tools were hip in the late eighties.

    How much of the software that is being written today do you think can reasonably be expressed in this manner and then automatically converted to code? I mean that question sincerely, BTW. In my experience, such tools tend to fail in one of several ways:

    • The internal state model is limited, requiring that intuitively defined algorithms and data structures be decomposed and generally "dumbed down" to the tool's level.
    • The input language/method is so cumbersome that it drives programmers nuts.
    • The tool is incapable of generating code that's directly usable in a real-world environment (usually because it has embedded dependencies on libraries that would need to be ported).
    • The generated code is mega-repetitive crap that is too big or too slow to use for production.

    Just because I've never seen something doesn't mean it doesn't exist, though. Do you have any suggestions of where I should look if I want to find state-of-the-art tools in this area?



    [ Parent ]
    Big subject (5.00 / 1) (#75)
    by epepke on Mon Oct 29, 2001 at 11:49:02 AM EST

    This is really the sort of thing that a good four-year computer science program is good for.

    There are several kinds of proofs: correctness proofs, termination proofs, and time and space proofs. You need a termination proof of some sort, but fortunately, with most modern languages, these are easy. (Aside: the halting problem shows that there exist programs whose termination is undecidable, but with any luck we don't write those.)

    The basic idea of a correctness proof is like those induction proofs we're all familiar with from high school. You know, the annoying ones where you prove it's true for k = 0 and then that if it's true for k, it's true for k + 1. What you have to do is find some boolean function of all the variables of a program. You choose it so that, at the start of the program, it's uniquely determined by the given information and, at the end of the program, it's equivalent to the desired result. Then you go through and see at each step of the program whether it remains true. If it does, you've proved the program.

    Obviously, some functions may be temporarily false. You really only have to check at branch points and at the end, but it's more convenient to group statements so that you can prove it at more places.
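
    A minimal sketch of what that looks like in code (my own toy example, with runtime assertions standing in for the proof obligations): the chosen function is total == i*(i+1)/2, determined by the givens at the start and equivalent to the desired result at the end.

        def sum_to(n):
            """Return 0 + 1 + ... + n, checking the proof's invariant along the way."""
            total, i = 0, 0
            # Invariant: total == i * (i + 1) // 2 and 0 <= i <= n.
            assert total == i * (i + 1) // 2
            while i < n:
                i += 1
                total += i
                assert total == i * (i + 1) // 2   # re-established after each step
            assert i == n          # loop exit plus the invariant gives the result
            return total

        assert sum_to(10) == 55

    Discharging those asserts by hand, for all n rather than a single run, is the induction argument; in this case the invariant is of the "really trivial" kind described below.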

    The trick is to find this function. It seems obvious that the function will be one of the following:

    1. Really trivial, almost or entirely equivalent to the program itself.
    2. Really ridiculous and arcane.

    That is, in practice, what happens. One of the neat things about correctness proving is that after you do it for a while, you begin to write programs better in the first place, just to save yourself some work. I seldom go through full program proving nowadays, but I do usually put in comments that are enough to reconstruct a proof without much effort.


    The truth may be out there, but lies are inside your head.--Terry Pratchett


    [ Parent ]
    True but disingenuous (4.73 / 15) (#60)
    by epepke on Sun Oct 28, 2001 at 05:30:34 PM EST

    Speaking strictly, the points in this article are true. However, at best the article is disingenuous.

    Whether Open Source is a panacea is not the relevant question. The relevant question is how Open Source stands with respect to other existing methods of software development, specifically the "Trust Us" corporate waterfall method, with its strategic use of prerelease announcements, cost/benefit analysis of releasing buggy software, and dependence upon upgrades that fix bugs for a revenue stream.

    In his seminal writing The Cathedral and the Bazaar, Eric Raymond used the statement "Given enough eyeballs, all bugs are shallow" to describe the belief that given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix obvious to someone. Over time the meaning of the original quote has been lost and instead replaced with the dogmatic belief that Open Source is the panacea that solves the problems involving security in software development.

    Eric Raymond is a polemicist. He's a good polemicist; I like him. Polemics are necessary to counter other corporate polemics for the simple reason that most people who make decisions are not intelligent enough to understand basic logic, reasoning, and mathematics. This may be lamentable, but it is true, and it is in this environment that we exist.

    What is of note is that there is nothing specific to the Open Source model of software development that guarantees that a system will be well designed or that it will be reviewed by competent people willing to spend time to discover security flaws who have the prerequisite background to know what they are looking for.

    This is true. However, it is not the point. There are elements specific to the corporate waterfall method as currently practiced which do guarantee that a system will not be reviewed by such competent people, or at least that such reviews will be substantially ignored due to cost analysis. Open Source lacks this guarantee. Of course, that is not a guarantee that it will be reviewed by competent people, but there will at least be the possibility.

    Besides good design and peer review I would like to add; verifying the software via formal proofs using rigorous Mathematical methods, strict development practices and security audits to the list of effective methods to be used when attempting to build a secure software system.

    I, personally, am a big fan of rigorous algorithm proving. However, that is because I am forty years old. I was very proud of being a part of a development team in 1983 which developed a fairly complex communications package with no known bugs. One bug was discovered in the field, and we fixed it. But that was two decades ago, and it was in academia, and such things do not fly now.

    Instead, the dominant IT mentality seems to be the following: Hyulk, hyulk. All programs have bugs. Get with the times, you luser! At the same time, the corporate mentality is the following: ROI! ROI! Make the stockholders happy! Leverage the buyout! Leverage the buyout! Let's play golf! Where's my power tie?

    As a result, not only would most people in the software business not know an algorithm proof if it bit them on their private parts, they have actual, effective contempt for those who do.

    I like Open Source projects because I can prove algorithms, do predicate calculus analysis, etc. without some sphinctroid suit breathing down my neck and asking me why I'm not making them money. You may mock Open Source because it doesn't 100% guarantee this, but the big advantage is that, unlike proprietary software, it doesn't 100% prevent it. Some is better than none, and it is a fool who claims it isn't.


    The truth may be out there, but lies are inside your head.--Terry Pratchett


    Agree Completely -Thanks - Nice Answer (4.00 / 2) (#66)
    by mami on Sun Oct 28, 2001 at 07:21:51 PM EST

    I like Open Source projects because I can prove algorithms ... You may mock Open Source because it doesn't 100% guarantee this, but the big advantage is that, unlike proprietary software, it doesn't 100% prevent it.

    It's nice to read some really good answers here at times. I've never seen it formulated so concisely and to the point. It's the only reason why I'll stick to Open Source, out of principle and no matter what.

    [ Parent ]

    Software companies still use waterfall? (4.33 / 3) (#84)
    by PhoenixSEC on Mon Oct 29, 2001 at 05:09:47 PM EST

    Perhaps I'm missing something here, but there is an important piece of information missing about the waterfall methodology... It does not work.

    People know that it does not work.

    You seem to have the same contempt for proprietary software you accuse the poster of having for open source. Working for a company does not equate with bad software.

    A project being open or closed source does not guarantee good development practices, but neither one prevents it either. They are, in fact, separate issues. If my company were to post our source code for download and invite people to submit changes, it would be 'open source.' But since we would still follow the same process we use in-house today, the result would be efficient and stable software that does what it is supposed to; just like our proprietary software.

    On a side note, if the company you are (or were) working for does not allow you to develop software correctly, perhaps you should allow them the pleasure of finding a replacement for you =).

    [ Parent ]
    Not *that* waterfall (5.00 / 1) (#86)
    by tmoertel on Tue Oct 30, 2001 at 11:39:31 AM EST

    The method he referred to was the "Trust Us" corporate waterfall method -- a method of running a business, not a software-development method. He means that most software companies give money the top priority. All other matters, including software quality and security robustness, are secondary concerns and not allowed to get in the way of chasing the truly big dollars.

    That's the corporate waterfall method.

    --
    My blog | LectroTest

    [ Disagree? Reply. ]


    [ Parent ]
    Thank you (5.00 / 2) (#89)
    by epepke on Tue Oct 30, 2001 at 04:24:49 PM EST

    It is delightful to be understood. Yes, it's the corporate waterfall method. I tend not to think of software development methods because I am far from convinced that any software development methodology really means anything.

    My claim is actually a bit stronger, though, and perhaps I can address some of the other gentleman's statements. The goal of a software development company is to make money off of software. I've been in the business for a quarter century, and the goal used to be different: to make money off of good software.

    Over the past ten years, it has been discovered that you can make more money faster and more reliably off of mediocre software than you can off good software. Good software just works; you install it, sell it, and that's it. Mediocre software is more effective. First, you persuade your customers that the software that isn't done yet will solve all their problems. They buy it, because it looks good. Then you sell them something that works well enough not to be punished but not well enough to satisfy them. Call it Crap 1.0. You give them Crap Support, which they pay for and like because it gives them someone to yell at. You make Crap Support slow, which they don't mind because it gives them a chance to yell at their subordinates who are designated to call Crap Support. Then you sell them Crap 1.1, which works a little better but still not quite good enough, and so on for Crap 1.2, on up to Crap 2.0, which starts the cycle again. You offer them Crap Subscription Maintenance, which they like because it's cheaper than buying new Crap, and they feel it's a custom business solution.

    At the end of the process, the customer doesn't remember what Crap it was, only how much better it got. When someone asks them what kind of software to buy, they say, "Well, Crap had some problems in the first release, but doesn't everything? I think they've worked really hard to improve it, and it's now Mature Crap." Besides, they've invested a lot of time and effort in Crap, and they don't want to admit that they threw away that money. They act as if it were business as usual. And so, the Crap spreads by word of mouth.

    Now, for the irony-impaired, I have to say that I don't approve of this. However, my comments are not particularly based on contempt. I'm just telling it like it is. I don't think it is ethical, and I think that the overwhelming majority of people in development and at least a simple majority of people who go into the business of selling software also consider it unethical. However, all you need is one company who avoids those pesky ethics, and all those more ethical companies wind up as red stains on Wall Street. Those who stay in business do so by at least partially emulating the successful companies.

    There are, of course, exceptions to the rule. Game companies, who only need a big rush of sales for the Christmas season and are constantly putting out new games. Companies that provide control software that has to work right or things break and people die. Companies that do not make money off a software revenue stream, such as the one I work for (we rent hotel rooms). But face it. When you go to the store and see any business-related software in shrink-wrapped packages, what is it? It's Crap, because that's what the marketplace prefers. Even the military buys Crap that brings their boats to a stop if you type in a zero (http://www.info-sec.com/OSsec/OSsec_080498g_j.shtml).

    Personally, I would be glad if this were somehow magically to change, but I am not going to hold my breath. I don't hold out a lot of hope for Open Source as a world-changing force either, but at least it offers the possibility of evading the Crap-selection process, so it is still somewhat fun. (This will probably change as more and more companies make money off of support for Open Source software and they see the revenue-stream advantages of slinging Crap.)

    People are going to tell me I'm wrong, because they don't like it. Well, I don't like it either, but I've been campaigning against Crap for a long time now, and the only reason a campaign is necessary is that it is against the swell of the marketplace. Don't shoot the messenger. If you can think of a way to avoid the process, well you know, we'd all love to see the plan.


    The truth may be out there, but lies are inside your head.--Terry Pratchett


    [ Parent ]
    Zoom... (none / 0) (#90)
    by PhoenixSEC on Tue Oct 30, 2001 at 06:54:00 PM EST

    Missed that one, apologies to the original poster... and I'll rephrase.

    The business model described does run contrary to good software development - all business models do.

    I would even venture that an open source business model does.

    For any open source business model I can think of (save the street-performer style, which is to my knowledge still unproven), the software suffers. Remember, I'm talking business models. Let me explain... open source businesses do not make money on their software (or make very little, if you count the paid-for versions, packaging, etc.). In order to pay quality developers to create software, they need to have other things make money, e.g. support. Two issues arise from this: 1) software takes a back seat because, regardless of what a company says, one of their goals is to make money, and 2) the software must be created in such a way that other services are used. To carry our example of charging for support through, if you release a product that is flawless in both design and implementation, you won't get many support calls.

    Now, as I mentioned, this is for open-source businesses, not projects. An open-source project has an entirely different set of issues that work both for and against it.

    So, in summary, I'd like to say: 1) corporate waterfall, much like the waterfall methodology, does not work; 2) software people know this (as demonstrated by many people here); 3) open source businesses are no better off.


    p.s., I like the sig.

    Oh, and sorry if the response sounds a little unpolished; I'm late for dinner...

    [ Parent ]
    Losing faith in the corporate method lately (5.00 / 2) (#92)
    by RichardJC on Thu Nov 01, 2001 at 02:47:37 AM EST

    I've started to lose faith in the corporate method recently - not because of all the third party buggy software I've seen, but having encountered a scenario that probably pervades a lot of corporate software shops.

    The manager responsible for development is not the manager responsible for maintenance.

    The manager responsible for development earns more points/whatever for getting development done as quickly as possible.

    Problems in maintenance do not impact the points scored by the development manager.

    This leads to rushed development full of quick hacks. It can be hard to fight to do things properly, and refactoring is not seen as productive, even if it does save time later (in maintenance, if not in later development). This cannot lead to good software.

    [ Parent ]
    Whoever said "panacea"?? (4.00 / 3) (#74)
    by Rainy on Mon Oct 29, 2001 at 11:09:56 AM EST

    To the best of my knowledge, nobody ever claimed OS to be a panacea to security concerns. It *was* claimed that OS is more secure than closed-source.

    So, good job destroying that strawman.
    --
    Rainy "Collect all zero" Day

    Um... (4.66 / 3) (#77)
    by trhurler on Mon Oct 29, 2001 at 11:58:23 AM EST

    Yeah. Have you ever actually done any code verification proofs? Put simply, if I did them for everything I work on, my first project with the company I've been with for two years (which took three months to complete and has seen no significant bugs in the two years since) would still be a work in progress. It would also have to be written in a different language (meaning a total rewrite, rather than an enhancement of a solid existing product), because code proofs of compiled languages prove nothing unless the compiler itself was built the same way, along with the libraries, the linker, and all other tools, and so on.

    File this suggestion under "I don't actually do what I'm suggesting, but I know how to solve the world's problems!"

    Other than that, though, I like the story. It has some Captain Obvious moments, but there are a frightening number of "Open Source rocks and we're taking over the planet and all your base are belong to us and hey have you seen the latest screenshots from <insert overhyped game here>?! Yeah, man! I hope they port it to Linux!" kinds of people who need to wake up and smell the fecal matter.

    --
    'God dammit, your posts make me hard.' --LilDebbie

    Cost of defects in Software... (5.00 / 2) (#87)
    by orichter on Tue Oct 30, 2001 at 02:21:41 PM EST

    Many years ago IBM did a study of software defects. Their conclusion: design is the best phase in which to put in all the effort to create secure, efficient, and quality software.

    Here is the cost pyramid they came up with (with reference to the average cost at the time for a single bug found at any of the following phases):

    $1 to fix a problem in the design phase.
    $10 to fix a problem in the coding phase.
    $100 to fix a problem in the testing phase.
    $1000 to fix a problem in the implementation phase.

    The issue here is to design-in the security from the start during the design phase. It cannot be added during coding without bugs, it cannot be tested for properly without bugs, it cannot be added after the fact without bugs. If the security is to be added it must be there before the code is created.

    Looking at code during the build phase while it is in the code tree is too late and cannot possibly catch all bugs. Some bugs (or insecure methods and designs) have already been designed-in or overlooked before any coding was done.

    Therefore, if one looks at the weekly list of open source software that has had to have bug fixes for security problems, or has just been found vulnerable, one sees just as many listed for the various and sundry components of Linux or other open source programs as for MS software.

    Why?

    Because most often security has been added as an afterthought and cannot be bug free or totally secure.

    Yes, IBM has done a lot of (hundreds of millions of dollars worth of) research into building secure designs, building efficient and quality code, and building bug free code. Their research is available to those who do not wish to re-invent the wheel, at their website www.ibm.com. There are many papers discussing methodologies that have been proven over the years to lead to secure, efficient, quality software.

    Yes, I am trying to teach all of those who believe that you can just look at code to fix it. Sometimes, it has just been designed wrong and needs to be designed first correctly, then coded and tested.

    Which leads nicely to one potential benefit of OSS (5.00 / 1) (#88)
    by simon farnz on Tue Oct 30, 2001 at 02:32:07 PM EST

    Given careful design such as modularization, an OSS program like Apache or Linux allows someone concerned by security to replace badly designed code with well designed code. Don't forget that provided you don't distribute the result, none of the major OSS licences require you to release the changes (GPL, BSD, MIT, QPL, MPL etc).

    OTOH, this rip and replace is not possible for the end user of closed software; it occurs more than most people think it does during the creation of closed code though.
    --
    If guns are outlawed, only outlaws have guns
    [ Parent ]

    Re: Cost of defects in Software... (2.00 / 1) (#93)
    by tslettebo on Thu Nov 01, 2001 at 11:08:06 AM EST

    >Many years ago IBM did a study of software
    >defects. Their conclusion: design is the best
    >phase in which to put in all the effort to
    >create secure, efficient, and quality software.

    >Here is the cost pyramid they came up with (with
    >reference to the average cost at the time for a
    >single bug found at any of the following phases):

    >$1 to fix a problem in the design phase.
    >$10 to fix a problem in the coding phase.
    >$100 to fix a problem in the testing phase.
    >$1000 to fix a problem in the implementation phase.

    This is a myth, and it does not hold.

    In XP, you can make that curve flat.

    One article that mentions this is this one.

    To quote:

    "The fundamental assumption underlying XP is that it is possible to flatten the change curve enough to make evolutionary design work."

    Terje


    [ Parent ]
    Don't counter facts with opinions and conjecture (none / 0) (#94)
    by Carnage4Life on Thu Nov 01, 2001 at 01:37:31 PM EST

    This is a myth, and it does not hold.

    Really, why do you say so, I wonder?

    One article that mentions this, is this one.

    To quote:

    "The fundamental assumption underlying XP is that it is possible to flatten the change curve enough to make evolutionary design work."


    Okay, let me get this straight. The cost pyramid is a myth even though IBM did extensive research to obtain it while the article you linked to is true based on the assumptions and opinions of the author.

    I read the article and what I got from it is that if you have good developers with similar backgrounds and knowledge bases you can make XP work. Well, guess what? With good programmers that possess good backgrounds you can make almost any software development model work.

    [ Parent ]
    eXtreme Assumptions (none / 0) (#95)
    by forgotten gentleman on Thu Nov 01, 2001 at 07:05:57 PM EST

    I thought of writing something similar about XP. It is designed to allow fundamental design decisions to occur during implementation.

    XP stacks the deck. It is definitely not meant for large teams; the XP team invented refactoring to allow one sane design to be converted into another; pair programming decreases the chance for an insane design decision; etc. Also, they argue that programming tools have progressed (partly as a result of the IBM study, which is old) to counteract that logarithmic cost scale.

    I understand when people think XP is for lining Kent Beck's pockets. It definitely relies on a lot of hype and desperate CIOs, like Java did. However, past the hype may lie some decent analysis.

    [ Parent ]
    Statistical Distribution (none / 0) (#97)
    by jolly st nick on Fri Feb 15, 2002 at 04:08:08 PM EST

    >>This is a myth, and it does not hold.
    >Really, why do you say so I wonder?

    Just your common sense and experience should tell you that you can add critical features late in the game. It happens all the time and it's not necessarily expensive. I think these kinds of numbers are misleading, because the "average" case will include some very costly outlying points. For example, suppose I'm about to field a system, and find out there are ten changes the customer wants. At this stage, nine of them cost $100 to make, but one costs $1,000,000. The average cost is pretty much going to be $100,000 per change, but the median and mode will be $100. In fact, I'd say that in real world situations, while late phase changes are more expensive, they don't cost that much more on average because you just give up on the ones that are too expensive.
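
    Just to spell out the arithmetic in that hypothetical (the numbers are the ones above, nothing more):

        from statistics import mean, median

        # Nine cheap late-phase changes and one ruinous one, as in the example.
        costs = [100] * 9 + [1_000_000]
        print(mean(costs))    # 100090.0 -> "pretty much $100,000" per change on average
        print(median(costs))  # 100      -> but the typical change is still cheap

    The mean is dragged up by the single outlier while the median stays put, which is exactly why the averaged cost-per-phase figures can mislead.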

    Of course, you want to know about these requirements in advance, because if the million dollar change is a show stopper for your project, you're cooked. But often it's not.

    A lot depends on the kind of problem you are working on and the methodologies that you are using too. This kind of cost phase relationship comes from "real" engineering, like building bridges. It's very expensive to move a bridge abutment once the concrete has set. Some software projects work this way. Others can't. If late phase changes were always impossibly expensive, then software simply couldn't serve business at all, because in business strategies and needs change year to year, quarter to quarter.

    I would go further and state the heretical notion that sometimes you're better off discovering a requirement later than earlier. Systems affect the operations they are installed in, and there are unpredictable, non-linear kinds of changes. Trying to envision everything that might happen is impossible, and can allow important business value to be hidden in mounds of hypothetical requirements. Naturally, what you want to know is what the real requirements are in advance, but if you can't, you're not necessarily cooked. You have to focus on delivering real business value at each stage of the game, learning and growing the system with the evolving enterprise.

    No matter what happens, there will always be the unexpected. That is why the craft of programming is so important, equal to engineering on any project and more important on many. The craft of programming is to build systems that will adapt to the unexpected.

    I think the original point was a good one, but somewhat inaccurately couched. The cost of security lapses may be catastrophic, even if the repair costs are low. Witness Code Red, which probably wasn't very costly to fix from an engineering standpoint, but did huge economic damage and damage to Microsoft's reputation. If you factor in these costs, the cost of making a security mistake is astronomical. Furthermore, if you need to change your software in an expensive way, it means that entire families of security vulnerabilities may inhabit your software for a long time. Executing only signed software from trusted parties is an example. This should have been built into all systems that can receive executable content from the Internet from the outset.



    [ Parent ]


    Open sourcing happens AFTER the design process (5.00 / 1) (#96)
    by morven on Fri Nov 02, 2001 at 06:28:57 AM EST

    One thing that might influence things is that most projects are not open sourced until AFTER the design work is already done -- and subsequent design work is not done by 'the community', but generally by either the project leader a la Linus or the project team a la Apache. In other words, there is little difference between commercial and open source software in the design phase. Since the design phase is commonly accepted as having the largest effect on the security of the finished system, this may be part of why open source projects are not ipso facto more secure than closed source -- just quicker at fixing known bugs.

    The Myth of Open Source Security Revisited | 97 comments (74 topical, 23 editorial, 0 hidden)