Open Source Is More Secure

Moved from WillOpenSourceUndermineTheAmericanEconomy:

The Economist (http://economist.com/business/displayStory.cfm?Story_ID=620445) published the following observation about the security of open source software: "Most important, it tends to be more robust and secure, because the source code can be scrutinised by anyone, which makes it more likely that programming errors and security holes will be found."

The point they are making is that for software systems, security has everything to do with peer review. Peer review is how all scientific and engineering communities improve the quality of their work. BruceSchneier wrote: "Good cryptographers know that nothing substitutes for extensive peer review and years of analysis." Schneier's own algorithms (like Blowfish and Twofish) are open for peer review.
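To make this concrete, here is a minimal sketch in C of relying on a peer-reviewed cipher rather than inventing one. It assumes an OpenSSL installation that still exposes the legacy <openssl/blowfish.h> interface (deprecated in OpenSSL 3.x) and is compiled with -lcrypto; both the Blowfish algorithm and the library's source are open to exactly the scrutiny Schneier describes.

 #include <stdio.h>
 #include <openssl/blowfish.h>   /* legacy Blowfish API; link with -lcrypto */

 int main(void)
 {
     /* Blowfish accepts a variable-length key; 16 bytes here. */
     const unsigned char key_bytes[16] = "0123456789abcdef";
     BF_KEY key;
     BF_set_key(&key, sizeof key_bytes, key_bytes);

     /* One 8-byte block in ECB mode -- small enough for a demo,
        not a mode you would use for real data. */
     unsigned char plain[8] = "attack!";
     unsigned char cipher[8], recovered[8];

     BF_ecb_encrypt(plain, cipher, &key, BF_ENCRYPT);
     BF_ecb_encrypt(cipher, recovered, &key, BF_DECRYPT);

     printf("round trip: %s\n", recovered);   /* prints "attack!" */
     return 0;
 }

The point is not the three library calls; it is that anyone can read both the specification and the implementation behind them, which is what makes trusting them reasonable.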

Several security experts recently discussed and refined a summary statement of rationale on OpenSourceSecurityStrategy, for circulation at the "Open Source Lab", featured at the 2003 Government Technology Conference and Exhibition http://www.gtecweek.com in Ottawa, Canada.

Following are some additional points...

Cryptography and system security are two totally separate issues. In cryptography, security can be enforced algorithmically. In system software, you must anticipate what the people who write applications will do, which is always going to be a losing battle against new security holes. Arguably the most secure OS is MVS, which, along with having really tight system security, is also rather unappealing to hackers and is fading rapidly into obscurity. The point I was trying to make is that the article implies that having security holes found means that the platform is more secure. If no one finds the security hole, then is there a security hole? If someone does find a security hole and broadcasts it, but most system admins don't care, then there is actually a loss of security for the platform as a whole. --TimBurns
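To illustrate the "anticipate what application writers will do" problem, here is a hedged sketch in C (the handler names are hypothetical). The unsafe version trusts its caller to stay under the buffer size, and that is exactly the assumption an attacker gets to violate; the guarded version assumes hostile input from the start.

 #include <stdio.h>
 #include <string.h>

 /* Hypothetical request handler: the fixed-size buffer assumes callers
    never pass more than 63 bytes -- an assumption the attacker, not the
    author, controls. */
 void handle_request_unsafe(const char *name)
 {
     char buf[64];
     strcpy(buf, name);                  /* overflows if name is 64 bytes or more */
     printf("hello, %s\n", buf);
 }

 /* The guarded version bounds the copy and terminates explicitly. */
 void handle_request_safe(const char *name)
 {
     char buf[64];
     strncpy(buf, name, sizeof buf - 1);
     buf[sizeof buf - 1] = '\0';         /* strncpy does not always NUL-terminate */
     printf("hello, %s\n", buf);
 }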

The problem with that point of view is that it ignores the fact that the black-hat crackers are motivated to find security flaws, either for tangible gain or just to prove how "133+" they are. MSFT software is a huge target for folks who just want to find the holes for that reason, and they continue to bang away at it and at every new version and extension. But they have no interest in or motivation for notifying you, the IT manager, that there is a new exploit for the software you are running; they'll just use the exploit to make your life more hellish than it already is. So the white-hats, those hackers who get paid to sift through closed-source systems to find the holes and then tell the people who care about securing them, are your only defense, and at any given time you never know whether the white-hats have found all the exploits the black-hats know about and are actively exploiting.

On the other hand, with OpenSource, the white hats and the black hats are on an equal footing from the start. In fact, any developer who is familiar with how to write secure code can satisfy herself that the program is clean, without needing to get into the black arts of decompiling, reverse-engineering and the other convolutions necessary to root out the holes in closed-source executables. -- StevenNewton
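For example, one class of hole that is easy to spot with the source in front of you, and much harder to find in a stripped binary, is the format string bug. The sketch below (C, with a hypothetical logging helper) shows the kind of one-line flaw and one-line fix a source-level audit turns up.

 #include <syslog.h>

 /* Hypothetical logging helper.  With the source in hand, a reviewer can
    see that user-supplied text is being used as the format string. */
 void log_user_unsafe(const char *user_input)
 {
     syslog(LOG_INFO, user_input);       /* any %s, %n, ... in the input is interpreted */
 }

 /* The fix: make the format string a constant. */
 void log_user_safe(const char *user_input)
 {
     syslog(LOG_INFO, "%s", user_input);
 }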

(Is this related to TestsCantProveTheAbsenceOfBugs?)

OpenBsd has an extremely good security record, thanks to a combination of OpenSource and systematic code reviews. Just enabling peer review isn't enough; you have to make sure that someone actually does it.
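One concrete product of that sustained auditing is the strlcpy/strlcat family, which grew out of OpenBsd developers fixing the same unbounded-copy bugs over and over. A small sketch, assuming a platform that provides strlcpy (OpenBSD declares it in <string.h>; elsewhere it may require libbsd or a recent glibc):

 #include <stdio.h>
 #include <string.h>   /* strlcpy: native on OpenBSD, may need libbsd elsewhere */

 int main(void)
 {
     char buf[8];

     /* strlcpy always NUL-terminates and returns the length it tried to
        copy, so truncation is trivial to detect. */
     size_t needed = strlcpy(buf, "a rather long host name", sizeof buf);
     if (needed >= sizeof buf)
         fprintf(stderr, "input truncated (%zu bytes needed)\n", needed);

     printf("copied: %s\n", buf);
     return 0;
 }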

Indeed, the "more eyeballs make all bugs shallow" take on peer review is only partly true for security. It depends on what your peers are looking for. In open source projects the code is seen by many people, but most of them look only at a small part of it, or only with an eye to hacking in a new feature. Very little of the 'peer review' is security-related (or of high quality). -- PieterVerbaarschott

It is incorrect IMHO to try to analyze security out of its social context. Most systems can be hacked in some manner (look at military history). The claim that "security by obscurity" is no security at all doesn't hold water; it all depends on who the hacker is.

Also, AFAIK there is no OS or server software in common use, open source or not, that has not had several security holes due to coding errors. And since a miss is as good as a mile (i.e., it only takes one bug to be compromised), all systems are potentially insecure. It would be unwise to rely on any single system for security. -- PeterForeman


See also OpenSourceIsLessSecure

CategorySecurity

