Computer Security Is A Labor Race

Rather than ComputerSecurityIsImpossible, I think computer security is ultimately a labor race. You need to spend roughly the same amount of labor as the attackers to keep them out. If there are a thousand hired hackers trying to get into your computer network and you only have one security expert checking and monitoring things, you are doomed. You'd need at least a similar order of magnitude of hired effort to keep them out, or at least to reduce the risk to an acceptable level.

Bots don't really change the equation because both sides have access to tools.

--top

I'd agree that security, in general, is something of a labor race. The "roughly the same amount of labor" part of the equation is wrong, though. The best security algorithms out there give you an exponential increase in the labor cost of the attacker for a linear increase in the labor cost of the defender; a typical example is public/private key or shared-key encryption. And some of the worst security equations in common use give cube-root or fourth-root improvements to security for their cost (e.g. to double your protection, you need 8 to 16 times as many resources - this would apply, for example, to protecting against neutron-radiation sources).
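
To make that asymmetry concrete, here is a minimal sketch (illustrative numbers only; it assumes exhaustive key search is the attacker's only option, i.e. the cipher has no analytic shortcut): each extra key bit costs the defender roughly a constant amount of work but roughly doubles the attacker's work.

 # Illustrative sketch, not a benchmark: defender vs. attacker cost as a
 # function of symmetric key length, assuming brute-force search is the
 # attacker's best option.
 def defender_cost(key_bits):
     return key_bits            # encryption/decryption work grows roughly linearly
 
 def attacker_cost(key_bits):
     return 2 ** key_bits       # exhaustive search tries on the order of 2^bits keys
 
 for bits in (40, 56, 80, 128, 256):
     print(f"{bits:>3}-bit key: defender ~{defender_cost(bits)} units, "
           f"attacker ~{attacker_cost(bits):.2e} trials")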

That said, so long as security is correctly implemented at all levels, computers can maintain an exponential-to-linear cost for all unauthorized resource access. If all this were done correctly, there is nothing a 'security expert' could do to help, even by hanging around and monitoring things. Of course, there are real issues in programming security (capability security isn't part of language primitives yet), so it is hard to do correctly. There are also real issues in the PEBKAC sense - if you have a thousand human users, you have a thousand vulnerabilities. For them, the best bet in maintaining security is a combination of education, incentives (benefits or punishments for maintaining or breaching security), smart-cards, biometrics, and passwords.


I think Top's point is not about theoretical security complexity but rather about the reality of lots of bad, or at least not provably correct, code. If you have N pieces of code and n of them are not proven correct, each containing a security flaw with probability x%, then meticulous checking may find a flaw on either side with roughly equal probability. --AnonymousDonor
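
To put rough numbers on that (hypothetical probabilities, purely for illustration): if each unverified module independently carries a flaw with probability p, the chance that at least one exploitable flaw exists grows quickly with n, regardless of which side is doing the checking.

 # Hypothetical numbers, purely illustrative: probability that a system of n
 # unverified modules contains at least one flaw, if each module is flawed
 # independently with probability p.
 def p_any_flaw(n, p):
     return 1 - (1 - p) ** n
 
 for n in (10, 100, 1000):
     print(f"n={n:>4}, p=1%: P(at least one flaw) = {p_any_flaw(n, 0.01):.3f}")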

Re: "so long as security is correctly implemented at all levels" - Yes, but that is not going to happen unless you have staff to enforce and inspect. On rare occasions a committed individual or small team may be able to pull it off, but I am comparing average company to average hackers, not the best on each side. --top

Heh, I don't disagree. I don't think even "staff to enforce and inspect" will do the trick... not even if you just limit your discussion to secure code rather than secure users. I believe security ultimately needs to be enforced by something far more rigorous: language type-safety systems and system-wide program analysis. In addition, security needs to be made easier to work with, e.g. via integration of capability security model with programming languages, possibly integrated all the way down to the kernel or hardware level (which would ultimately require compilation to higher level code than the typical 'ELF' files, though the kernel could compile it further). At the moment, the tools we programmers have for proper and efficient implementation of security are inadequate, as is integration of security mechanisms and policies.
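
As a rough illustration of the capability style being argued for here (the class and function names below are hypothetical, not from any existing library): authority is handed to a component as explicit, narrow object references rather than being ambient, so the component can touch only what it was given.

 # Toy sketch of object-capability style; hypothetical names, not a real API.
 class AppendOnlyLog:
     """A capability granting append-only access to one file and nothing else."""
     def __init__(self, path):
         self._path = path
     def append(self, line):
         with open(self._path, "a") as f:
             f.write(line + "\n")
 
 def audit_component(log):
     # This component holds only the capability it was handed: it cannot read
     # the log, open other files, or reach the network through it.
     log.append("user 42 exported the quarterly report")
 
 audit_component(AppendOnlyLog("/tmp/audit.log"))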

But I'm considerably more optimistic about security in general than you seem to be. I don't believe this to be merely an issue of numbers or skill - the crackers vs. the hackers. If the tools were adequate to get the job done correctly, the hackers building the secure code have the inherent labor/resource advantage based on use of encryption (for data protection) and signatures (for certificates and rights management). Short of quantum computing making time-linear improvements to the number of qubits, or Moore's law continuing forever and a day, there is simply no keeping up with exponential-cost equations (and Moore's law simply creates a window of security, where you can start effectively decrypting old stuff some number of years after it was encrypted. Also, mathematicians, wonderful yet impractical creatures that they are, have already begun hammering out some quantum key-encryption schemes that can stymie even quantum computers... should it come to that). And I have no doubt that the right tools will be built, alongside good tools for dealing with concurrency and workflow and better tools for dealing with data and pattern recognition. We've come a long way in the ~70 years since computer science became a real field, but we're still a young discipline with plenty of areas to grow into.
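
For the signature half of that advantage, a minimal sketch using Python's standard library (an HMAC shared-key tag rather than a public-key certificate, but the cost asymmetry is the same): creating and checking a tag is cheap for the defender, while forging one without the key is an exponential search. The message format is made up for illustration.

 # Minimal sketch: cheap-to-verify, hard-to-forge authorization tags using
 # the standard-library HMAC.
 import hmac, hashlib, os
 
 key = os.urandom(32)                                  # defender's secret
 grant = b"grant: user=42 right=read:/reports"
 tag = hmac.new(key, grant, hashlib.sha256).digest()   # cheap to create
 
 def verify(message, tag):
     expected = hmac.new(key, message, hashlib.sha256).digest()
     return hmac.compare_digest(expected, tag)         # cheap to check
 
 assert verify(grant, tag)
 assert not verify(b"grant: user=42 right=admin", tag) # tampering detected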

Thus, while I agree that the current realities of computer security are somewhat bleak, I don't believe it to be a permanent condition, or one that should be considered such. Considering the human side of things, with foolishness and inside attacks and social engineering being vulnerabilities in every human authorized to use your system, it might arguably be the case that 'security' in general is a 'labor race'. But calling the field of 'computer security' in particular a 'labor race' with the property that 'you need to spend roughly the same amount of labor as the attackers to keep them out' is essentially to give up on it prematurely. After all, the number of potential attackers of a system essentially grows linearly with the population, whereas the number of people offering labor to defend any given system is divided among the number of systems needing protection, which also grows linearly with population. And while the number of systems grows with population, it is the popular targets that will receive the attention of attackers - and, given that human attention is finite, so is the number of popular targets. This is a balance of equations that severely favors the attackers if your statement were to hold true.

It may be better to say that computer security today is, perhaps, a technology race, not so much a labor race. Improved tools for security - even such things as encrypted VPNs and SSL - allow a single security guy to set up a wall that can keep out whole hordes of uncoordinated hackers from a particular vector of attack. The real problem is that at the moment there are too many vectors of attack that are unprotected. But the technology of the defender has a very significant, fundamental advantage, and that technology will improve to cover the undefended vectors. Yet even calling it a 'technology race' seems incorrect; the attackers are all about exploiting the undefended spots - the chinks in the armor, so to speak. It isn't as though the attackers are developing technologies that can actually punch through the armor. Thus all the computer security field really needs to do is (a) make it easier to find and fix exploits before release, and (b) make it easier to avoid exploits in the first place by making security as easy as a few declarations of desired security properties when initiating communications or a few declarations regarding the security-level or privacy of a particular data-item, etc.
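
The "few declarations of desired security properties" point is already visible in how little code a correctly tooled TLS connection takes; a sketch using Python's standard ssl module (example.com is just a placeholder host):

 # Sketch: security as a few declarations. The context object declares the
 # desired properties (verify certificates, check the hostname, use modern
 # protocol versions); the rest is ordinary socket code.
 import socket, ssl
 
 context = ssl.create_default_context()
 with socket.create_connection(("example.com", 443)) as raw:
     with context.wrap_socket(raw, server_hostname="example.com") as tls:
         tls.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
         print(tls.recv(200))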

That sounds rather expensive. I still think that inspection and study of, say, data traffic is still necessary because the best-laid plans of mice and men may still have human error in them. You can't just build what is assumed to be a perfect machine and then never check on it, leaving it by itself to be "perfect". Further, such tight integration makes multi-tool projects more difficult. You can't use the best tool for the job, but must instead choose tools on the basis of being "security-first, productivity-second". I agree there may be a place for that, such as a nuclear weapons lab, but that would make most typical businesses uncompetitive. Plus, a spy employee could simply take out printouts or thumb-drives for much less cost than security-bound software unless you want ass-tight[1] physical security that makes the airport look lax.

[1] Both figuratively and literally.

--top

RE: You can't just build what is assumed to be a perfect machine and then never check on it, leaving it by itself to be "perfect". - And why not? I mean, of course you ought to test the machine (or a prototype thereof) and put it through its paces. But it isn't at all unusual to write software, get it working to the point one can reasonably assume it is 'perfect' for its task (as in: meets all requirements without wasting significant resources), and never look at it again unless it becomes obvious (from other clues) that it has become broken. Besides, analysis of traffic data doesn't really help all that much in any large multi-user environment. The noise drowns out any signal you might be attempting to acquire, and anybody really determined to avoid notice can control their traffic signal, too (e.g. GrammarVandal would not easily be recognized for his edits if he had a bot that distributed them over the course of a week).

Anyhow, even supposing you wanted to keep some person on the job to repair leaks and do damage control (a perfectly reasonable desire), it's worth noting the really important facts: you only need a couple guys to do it. If the basic computer security is sufficient that leaks and damage-control are rare, you don't "need to spend roughly the same amount of labor as the attackers to keep them out". You might (due to need for security at the point of human vulnerability) need to spend an amount proportional to the number of -users- of your system. But that is not the same as the cost being proportional to the pool or population of potential attackers. You can charge per user in order to recoup the costs.

RE: Further, such tight integration makes multi-tool projects more difficult. - Or, from another perspective, multi-tool projects become very easy so long as you pull them all from the same, integrated toolbox. You don't need to fight the system and re-implement security at each communications boundary. And you can still abstract out the tools that you want to plug into your system for portability reasons. Anyhow, top, think loooong term. Imagine a world where the toolbox already has highly productive tools AND they are secure. Creating libraries tooled towards particular domains that integrate well with one another is non-trivial, but it has been done over and over and over... and experience indicates it would happen again with a language and OS designed for concurrency, security, and workflow.

RE: a spy employee could simply take out printouts or thumb-drives for much less cost than security-bound software unless you want ass-tight[1] physical security that makes the airport look lax. - If the spy has access to the information, certainly. Hell, he could memorize the most important bits and cart his brain out of the system. I never hesitated to note above that computer security is only half the issue when it comes to 'security' in computerized environments - the other half being PEBKAC in nature. Of course, as noted above, it changes the cost function if you don't need to worry about 'unknown' attackers that lack authority to access the system. Where incentives and punishments and deterrents like logging access fail to keep authorized but disgruntled employees in line, you ultimately need the ability to revoke rights to the system in order to prevent further damage. But there are ways to ensure that even taking the hard disk drives or servers home won't help a spy access secure data - mechanisms designed to protect against corrupt root administrators that involve obtaining partial keys from N of M (e.g. 3 of 5) computer systems before a file can be opened.
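
The "N of M partial keys" mechanism mentioned here is essentially threshold secret sharing; a toy sketch of Shamir's scheme over a prime field follows (parameters are illustrative, and a real deployment would use a vetted library rather than hand-rolled code):

 # Toy sketch of N-of-M key splitting (Shamir secret sharing over a prime
 # field). Illustrative only.
 import random
 
 PRIME = 2**127 - 1   # a Mersenne prime, larger than any secret shared here
 
 def make_shares(secret, k, n):
     """Split secret into n shares such that any k of them reconstruct it."""
     coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
     def f(x):
         return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
     return [(x, f(x)) for x in range(1, n + 1)]
 
 def reconstruct(shares):
     """Lagrange interpolation at x = 0 recovers the secret."""
     secret = 0
     for i, (xi, yi) in enumerate(shares):
         num, den = 1, 1
         for j, (xj, _) in enumerate(shares):
             if i != j:
                 num = (num * -xj) % PRIME
                 den = (den * (xi - xj)) % PRIME
         secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
     return secret
 
 shares = make_shares(secret=123456789, k=3, n=5)
 assert reconstruct(shares[:3]) == 123456789    # any 3 of the 5 shares suffice
 assert reconstruct(shares[2:]) == 123456789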

Anyhow, I'm left wondering what you found "expensive". I mean, admittedly the research phase and implementation phase of languages designed to support security will be expensive, but academia is already footing the majority of that bill - probably several million dollars in research, development, and prototyping costs. Corporations like IBM, Sun, and Microsoft will probably foot another big chunk - when it comes to integration - at no significant direct profit to themselves (seeking, rather, a competitive edge and an opportunity to create and control a new market). That shouldn't be a surprise, of course, though integrating the fruits of these labors into business systems isn't going to happen in a hurry. And, overall (accounting for savings for every company that can drop one or two security guys from the payroll), it is almost certainly less expensive - and that's not even accounting for reduced fraud, insurance, and opportunity costs associated with insecure (or even believed-to-be-insecure) computing systems.

RE: (RE: Further, such tight integration makes multi-tool projects more difficult. - Or, from another perspective, multi-tool projects become very easy so long as you pull them all from the same, integrated toolbox.) - I disagree because often the best tools happen by accident (trial and error of the marketplace). If every tool ever made had to conform to a strict security framework, then many good ideas would be left out because most would not target such an environment. The many would suffer for the few. Perhaps this is really a form of QwertySyndrome at work.

I fail to understand your objection. Why do you believe that good ideas would be 'left out' simply because the tools would exist to easily implement and utilize them in a strict security framework? Which ideas do you feel would be 'left out'? Can you think of even one tool involving communications whose effectiveness would be diminished if it were implemented in a secure environment? Can you think of even one tool involving communications where easy access to correct security wouldn't possibly be beneficial?

Though I am not entirely sure, you seem to be operating under the assumption that ready access to strict security limits the tools you can build. But that would be patently false: it is trivial in a system that supports general security policies to declare open access to any particular service. Further, secure and typed systems are still Turing complete - any computation you can implement in an insecure and unsafe language or operating system can also be implemented in a secure and typesafe one.

If there is any "QwertySyndrome at work" here, it is in continuing to use inadequate tools and techniques for implementing security (to the point that it is surprising when a service or OS isn't full of vulnerabilities) simply because it is too much of a hassle to build and learn the tools to do it correctly, and because (much like moving away from the QWERTY keyboard to something designed for typing speed) there would be some large migration costs when it comes time to learn the new tools and re-implement existing services to work within a provable-security environment.


FebruaryZeroEight

CategorySecurity

