Security Is Hard

Upon reflection, I think the subtext of my previous remarks was that "SecurityIsHard, and therefore unsuited to XP." In effect, I was trying to use XP as a SilverBullet for security, and failing. I'd therefore like to revise my previous subtext to: "SecurityIsHard, and therefore XP is as good an approach as any." -- JohnBrewer

The UserAntiStory page contains a common misconception about security. You can never have absolute security. Security experts generally measure the security of a system in terms of the cost to penetrate the system. At some point, the cost to penetrate becomes so high that bad people look elsewhere for a target.

You can certainly make one of your NonFunctionalRequirements "A successful penetration of this system should cost at least $10^9", which should be enough for most purposes, though maybe not for Visa International's master key. Note that you'll probably never actually know whether the requirement has been met, which is the point of BruceSchneier's article (http://www.counterpane.com/crypto-gram-9911.html#WhyComputersareInsecure) in the first place.

But never say never. Even if your computer system is relatively secure, you can always strong-arm someone who knows the password (which is why banks split safe combinations between several bank officers), or fire an anti-tank gun at the vault (a la Thunderbolt and Lightfoot).

To repeat: SecurityIsHard. Regardless of methodology.

-- JohnBrewer


To paraphrase: SecurityIsHard given an adversary with unlimited resources. A recent BruceSchneier article in Dr Dobbs (http://www.ddj.com/articles/1999/9912/9912a/9912a.htm#rf1) ( BrokenLink 20070504 ) looks at modelling attack trees, which are a good way to quantify how 'attackable' a system is. I think this is a more reasonable approach than the blanket statement that SecurityIsHard.
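
A minimal sketch of the idea, not taken from the article itself (the node structure and dollar costs below are invented for illustration): each leaf attack gets a cost, an OR node costs as much as its cheapest child, an AND node costs the sum of its children, and the cheapest path to the root is a rough measure of how attackable the system is.

 # Hedged sketch of attack-tree cost evaluation: OR = cheapest child,
 # AND = sum of children. Node names and costs are invented examples.
 def attack_cost(node):
     """Return the cheapest cost for an attacker to achieve this node's goal."""
     if "cost" in node:                      # leaf attack
         return node["cost"]
     child_costs = [attack_cost(c) for c in node["children"]]
     return sum(child_costs) if node["type"] == "AND" else min(child_costs)

 open_safe = {
     "type": "OR",
     "children": [
         {"cost": 100000},                                   # pick the lock
         {"type": "AND", "children": [{"cost": 20000},       # bribe an employee...
                                      {"cost": 5000}]},      # ...and defeat the alarm
     ],
 }

 print(attack_cost(open_safe))   # -> 25000: bribery is the weakest point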

Thanks for the pointer to the Schneier article. I've been meaning to read it for some time. It lays out the issues much better than I was able to above. Time to refactor, I guess.

BTW, adversaries don't have to have unlimited resources to make your life difficult. Remember when a couple of grad students were able to crack Netscape's SSL implementation because Netscape's random number generator wasn't? (http://www.ddj.com/articles/1996/9601/9601h/9601h.htm) Even with tools like attack trees, SecurityIsHard. -- JohnBrewer
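
For the flavour of that break, here is a deliberately toy illustration (not Netscape's actual code): if a session key is derived only from something the attacker can narrow down, like a coarse timestamp, the effective keyspace collapses and brute force is cheap.

 # Toy illustration of a predictable seed. random.Random is NOT
 # cryptographically secure; the point is that the seed space is tiny.
 import random, time

 def make_key(seed):
     rng = random.Random(seed)
     return rng.getrandbits(128)

 connection_time = int(time.time())
 session_key = make_key(connection_time)

 # The attacker only has to try a small window of plausible seeds.
 recovered = next(s for s in range(connection_time - 3600, connection_time + 1)
                  if make_key(s) == session_key)
 assert recovered == connection_time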


But does that mean that stating security requirements is hard?

Does it mean that, even with a reasonable architecture and a security kernel, everyone has to worry about security every day?

So does implementing security have to be hard for everyone, or might it be hard only for a single security team?

Turning the last, anonymous comment into a question.

I'd put the answer as yes, counter to the previous poster. I'd say part of the reason that SecurityIsHard is that it is different. Security fails at the weakest point, and often that is a human factor, not an algorithm. The simplest security involves the metaphor of creating an inside and an outside within the system one wishes to secure - the analogue of establishing a physical perimeter. Sounds really easy until you realize cyberspace has no inherent geometry. Everything is connected to everything else. You can limit the connections, but anyone can re-establish a connection anywhere. Look how email tunnels from the Internet to the desktop. This is also why you can't add security as just another application feature.

And you need an inside - you can't function if you "trust no-one". If the hard disk doesn't trust the CPU, things don't work.

Everyone has to worry about things like not giving away their passwords. Everyone has to worry about whether someone has walked through the office behind someone else - physical security is information security too. Maybe opaque biometric devices could eliminate this, but I doubt it.

So, to an extent, everyone does have to worry about security every day.

-- JoeSolbrig?

Let's try not to get carried away here, folks. Security can be measured a bunch of different ways. One aspect of security for the products I work on is safety against carelessness by a harried nurse or tampering by some pain-addled patient trying to get an extra hit of pain killer. Physical security includes locking drugs in some kind of enclosure -- syringe chambers for syringe pumps or bag lockers for high volume pumps. Programming security includes stuff like passwords, timing delays, remote therapy approval, etc. Usage error security includes detection of free flow, motor runaway, syphoning, etc. This kind of security is to protect a patient's life, so it has to be right. However, the instruments are not accessible from the entire world, they don't involve millions of dollars, and have very few actual failure modes. Maybe I cheat.

In answer to the initial question - yes, stating security requirements is extremely hard. You can give design approaches to prevent known security problems and you can test an application for security problems, but predicting what and where problems may occur in an as-yet-unbuilt system is near impossible.


Perhaps it's useful to make a distinction between high-level and low-level security requirements. A high-level requirement might be something like "It should not be possible to steal other users' credit card numbers without spending $1m." A low-level requirement would be something like "Credit card numbers are stored in an encrypted form in the database." The high-level requirement is something that makes sense to the client; the low-level one makes sense to the developer.
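
As a sketch only, here is one way that low-level requirement might look in code, assuming the third-party Python 'cryptography' package and a sqlite3-style database handle; the table and column names are invented for illustration.

 # Sketch of "credit card numbers are stored encrypted". Key management --
 # arguably the hard part -- is waved away: the key is just held in memory.
 import sqlite3
 from cryptography.fernet import Fernet

 key = Fernet.generate_key()   # in real life the key lives in a key store, not the code
 cipher = Fernet(key)

 def store_card(db, user_id, card_number):
     token = cipher.encrypt(card_number.encode())
     db.execute("INSERT INTO cards (user_id, card_token) VALUES (?, ?)", (user_id, token))

 def load_card(db, user_id):
     row = db.execute("SELECT card_token FROM cards WHERE user_id = ?", (user_id,)).fetchone()
     return cipher.decrypt(row[0]).decode()

 db = sqlite3.connect(":memory:")
 db.execute("CREATE TABLE cards (user_id INTEGER, card_token BLOB)")
 store_card(db, 1, "4111111111111111")
 print(load_card(db, 1))

Note that this satisfies the letter of the low-level requirement while saying nothing about who holds the key, which is exactly the sort of gap the comments below point at.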

Translating between the two would be the work of somebody experienced in security: performing an audit of some sort and making specific technical recommendations as to how the current code falls short of the high-level requirement.

No one experienced in security will ever sign off on anything that simply says "it will take $1m to break this system." What if they spend that $1m bribing employees at the client? What if they spend that $1m bribing me, or my teammates?

Also, proving that something is encrypted is very difficult. You can do a quick test that it isn't plaintext. But so what?
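
The kind of "quick test that it isn't plaintext" meant above might be no more than a crude heuristic like the following sketch; XOR-obfuscated or base64-encoded data can still pass it, which is exactly the "so what?".

 # Crude heuristic only: readable text is mostly printable bytes with low entropy.
 import math
 from collections import Counter

 def looks_like_plaintext(data):
     printable = sum(32 <= b < 127 for b in data) / len(data)
     counts = Counter(data)
     entropy = -sum(c / len(data) * math.log2(c / len(data)) for c in counts.values())
     return printable > 0.95 and entropy < 5.0

 print(looks_like_plaintext(b"4111 1111 1111 1111 exp 01/29"))   # True: clearly not encrypted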

But in this context, perhaps it's still possible to have all the other types of negotiation that we want in XP: "This low-level requirement will take X days to implement - yes or no?"


Perhaps the interesting question here is how to approach building an application that has explicit security requirements using XP. How would you avoid the need to spec out a fairly complete security model/architecture up front? --

There's an article in IEEE Computer about JLint that talked about access restrictions, which is completely contrary to MethodsShouldBePublic, presumably an XP-influenced view. I've been wondering about this but admittedly know too little about security. -- JasonYip


Security should be hard; that keeps JobSecurity a little easier.


Having had experience in the PKI sector, I'd say the biggest difficulty with security is how difficult it is to quantify to the customer. I've never run into even a bank that had someone with cracking skills or knowledge in the loop for purchasing security software. All security company marketing people say "Our products are coded to the industry's best-practice standards and have been security audited." What they mean depends entirely on the morality of the company officers, a highly variable quality in upper management. Sometimes it means "we have several cryptanalysts on staff who look at every release, and we also pay grey-hat auditors to look at each release." Other times it means "the cryptographer who owns a substantial portion of our stock looked at it three major versions ago and said it was secure, so it still is." The company that does the latter has an extreme competitive advantage.

In terms of the actual programming practices, the best thing is to understand the common attacks and treat guarding against them as coding and design standards. That really amounts to knowing what classes of messages are risky to send to an object, and why. What sort of message handling is suspect and can be used on different platforms to breach the system? Additionally, the system needs to be as simple as possible, so that someone who specializes in breaking systems, not building them, really can go over it well. Any methodology is going to be as good as another at this, since with all of them the security really happens through good principles and standards in the implementation. But BigDesignUpFront can be suspect, as it always seems to result in needlessly complicated systems and terrible crunches implementing as time runs out, and programmers who know security well can make serious errors under pressure and sleep deprivation. -- PeterDoak
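
One concrete instance of the kind of coding standard meant here (my example, not PeterDoak's): treat any message that splices untrusted input into an interpreter - SQL, shell, HTML - as suspect, and standardize on parameterized calls instead.

 # Example coding standard: never build SQL by string concatenation from user input.
 # The risky message is "execute this string I assembled"; the safe one passes the
 # untrusted value as a parameter so the driver cannot mistake it for SQL.
 import sqlite3

 def find_user(conn, name):
     # Risky: find_user(conn, "x' OR '1'='1") would return every row.
     # cur = conn.execute("SELECT * FROM users WHERE name = '" + name + "'")
     cur = conn.execute("SELECT * FROM users WHERE name = ?", (name,))
     return cur.fetchall()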


How much knowledge of basic computer system security techniques should the average programmer have? I'm not an expert by any means, but I often find that I'm the most security-knowledgeable person in whatever room I'm in. Otherwise-bright people see nothing wrong with sending unencrypted passwords over the network ("As long as our software doesn't display the password, it's secure."), storing passwords in a plain text file on the server ("Server administrators can be trusted, and users won't know where the file is."), or using ridiculously simple or ad-hoc encryption algorithms ("Just XOR every byte with 10101010, and no one will be able to read the file.").
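
For what it's worth, here is the sort of baseline I'd hope that average programmer could recognize; a sketch using only the Python standard library, with parameter choices that are illustrative rather than a recommendation.

 # Minimal baseline the anecdotes above get wrong: never store or compare raw
 # passwords; store a random salt plus a slow salted hash, and compare in
 # constant time.
 import hashlib, hmac, os

 def hash_password(password, salt=None):
     salt = salt or os.urandom(16)
     digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200000)
     return salt, digest

 def check_password(password, salt, expected):
     candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200000)
     return hmac.compare_digest(candidate, expected)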

Back when I was in school, the only thing I learned about security was which of the teacher's desk drawers had the piece of paper with all the passwords written on it. The current crop of CS grads is a little smarter, because they have to figure out how to access music and porn through the university's firewall, but they still don't know much.

I'd like to think that the people who do e-commerce and banking systems know a little more about security than the average application programmer, but I wonder.

So, if the average is low, how high should it be? What is the minimum set of knowledge and skills that an average application programmer should possess? Or is security so hard that it should be exclusively the realm of experts? -- KrisJohnson

As one of the so-called 'security experts' I'd like to point out that the greatest threat is not physical security, it is people who have legitimate access. Furthermore, excessive physical security tends to make a system very fragile since it necessarily creates single points of access, resulting in single points of failure. For example, if a secure LDAP server is rebooted, it often requires someone to enter a 'master' key at the console so the database can be decrypted. -- RichardHenderson

Which is why it's important to approach security as a risk management activity. Unless you've got a good understanding of the threats, vulnerabilities, risks and impacts, you could be spending all your time on securing the wrong thing (shades of PrematureOptimization, except that you can't run the system and wait to see where the security is weak). The CIA (Confidentiality, Integrity, Availability) definition of information security would mean that a system so "secure" from one perspective that it risked being unavailable isn't in fact secure. -- PaulHudson (guess who's looking at BS7799 right now? :)


I'm curious - what is Visa International's master key set to at present?

The master key is 12345 ;-D


Old stuff that made the news

Warship outsmarted by software at http://www.gcn.com/17_17/news/33727-1.html

See also: AirplaneSecurityProblem?


CategorySecurity

