Confused Deputy Problem

"The Confused Deputy (or why capabilities might have been invented)" (1988) by NormHardy? (http://www.cap-lore.com/CapTheory/ConfusedDeputy.html)


Programs generally take actions on behalf of other programs or people. In that sense programs are deputies, and they need appropriate permissions for their duties. The ConfusedDeputyProblem occurs when a program that was given permissions for one purpose applies them to some other purpose, contrary to the intent behind the grant, and thereby allows something that it shouldn't.

A classic example involves a program that needed write access to its licensing information so that it could track usage. The problem came when the user accidentally asked the program to remove the contents of a folder that happened to contain that licensing information. The program should not have done so, but it had the necessary permissions and did. A more up-to-date example would be a program that needs read access to the local file system in order to display information, but which can be tricked by the user into reading parts of the file system that it should not, for instance displaying /etc/passwd. Many more variations exist.
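A minimal sketch of the pattern behind these examples, assuming a hypothetical clean-up command and license path (none of these names come from the original story): the deputy applies its own ambient authority to whatever name the caller supplies, and nothing ties the request to the purpose for which the permission was granted.

 import os

 LICENSE_DIR = "/var/lib/exampleapp/license"   # hypothetical folder the deputy may write

 def clean_directory(path):
     """Deputy action: remove the files the *user* asked us to clean out.
     The process has write access so it can maintain its own license records,
     but nothing here checks that `path` has any relation to that purpose --
     the permission is ambient."""
     for name in os.listdir(path):
         os.remove(os.path.join(path, name))

 # The user (perhaps by accident) points the deputy at the license folder;
 # the deputy's own permissions would make the deletion succeed:
 # clean_directory(LICENSE_DIR)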

Generally, people say this happens because the application had security holes or was poorly coded. However, when constant vigilance is needed to keep things from going wrong, it is worth finding a more fundamental cause. In this case, the more fundamental cause is that no direct connection is maintained between what the application does and why it has permission to do it.

Any examples which aren't just cases of file-specifying functionality being carelessly combined, without security checks, with data handling?

{I agree. This seems like a data-entry problem, not a security model problem. Somebody or something did not keep the "access lists" up to date or did not apply them.}

That sounds to me like a "blame the user" response. In any case, it's not clear that there is any possible configuration of the access lists that would solve this problem, in general. Nor is it clear that it should be the deputy's responsibility to perform additional security checks. The system should enforce security checks; application programs shouldn't have to duplicate them.

It isn't an easy problem to keep the access lists up to date. Managing the many-to-many relationships of what can access what is a big job. Companies often try to take shortcuts or skimp on staff, and it bites them in the arse.

Any examples which aren't just cases of file-specifying functionality being carelessly combined, without security checks, with data handling?

Yes. The CrossSiteScriptingAttack?. See http://lists.canonical.org/pipermail/kragen-tol/2000-August/000619.html for more examples.

The DEC example seems like a case of giving out too broad a security clearance. They should not have given wild-carded access, but granted it to something more specific. The Mary example is pure deceit and social engineering, like putting up a phoney ATM in the mall. Besides, machines cannot read intent. Nothing will fix that, barring a brain scan.

The point is that a CapabilitySecurityModel coupled with the PrincipleOfLeastPrivilege would have prevented it: Mary's code should never have been given the capability to delete Jack's site, but because programs were running around with the complete authority of Jack, the action was specified separately from the authority used to perform it.

In that case, treat a given program like a user: it is given a login and a set of appropriate privileges. If a program is given too many rights, that is the fault of the rights giver. In practice it can be a lot of labor to micromanage each privilege for each thing or person, so "wild-carding" may be desired. If something tricks a legitimate user or program into doing bad things, there is no generic way to stop that. Barring an AI breakthrough, computers cannot read intent. A fingerprint reader cannot tell whether the user has a gun to their back. Encapsulation is no solution to the misuse of a legitimate interface to achieve evil goals.

{Rights management of gestalt programs DOES NOT solve this problem. Even if you could maintain perfect many-to-many ACLs for programs, it simply doesn't help the ConfusedDeputyProblem. The problem isn't controlling the rights of the program; it's controlling the rights utilized by the program when performing each particular action. Making fine-grained management of these rights easy for programmers is the solution. At best, even a strong AI could only help you choose which combinations of capabilities to utilize when performing certain actions on behalf of the user.}

No AI breakthrough is needed to stop the vast majority of ways in which programs can currently be "tricked into doing bad things". Treating programs as principals is necessary (and capability models do that), but it is not sufficient. If we attempted to treat programs as principals in an ACL model, the problem of managing the ACLs would immediately become intractable. Capability models simplify the management problem in the following way:

By making authority inseparable from designation, we can make it practical to have very fine-grained authorities, instead of just possible in principle.
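One hedged sketch of what that looks like in ordinary code (the function names are illustrative, not taken from any particular capability system): instead of passing a file name and letting the deputy supply the authority, the caller hands over an already-opened file object, which is both the designation and the authority.

 # Ambient-authority style: the caller names a file, the deputy supplies the authority.
 def log_usage_by_path(path, record):
     with open(path, "a") as f:         # the deputy's own permissions decide what this can reach
         f.write(record + "\n")

 # Capability style: the open file object carries designation and authority together.
 def log_usage(license_file, record):
     license_file.write(record + "\n")  # the deputy can write only where the caller could

 with open("usage.log", "a") as f:      # the caller exercises *its* authority to open the file
     log_usage(f, "compiled project X")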

How would it solve the given examples?

I try to understand systems like EROS by likening them to a restricted Smalltalk. They have a persistent image that gets snapshotted to disk. There are processes/objects in the snapshot. You store data in objects instead of files. Unlike Smalltalk, you cannot find, act upon, or "touch" anything you do not hold a capability for. A capability is like an object+method pair, but opaque. Without capabilities given to you, there are no objects to act on and no methods to execute.

So in the license tracking example, the process that tracks licenses would hand out "update my license DB" capabilities. That's all you can do, period. You can't dig the object reference out and toy with it, so there is no chance for you to delete the license process and its data.

Presumably the "bestow a capability to update my license db" capability would be held by the same process that is starting licensed processes.

This sounds a lot like OO with data hiding, and in that aspect it is.
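A rough sketch of that license-tracking arrangement in plain object terms (the class and method names are invented for illustration, and ordinary Python does not actually make the inner reference impossible to dig out, which a real capability system would): the tracker keeps its storage private and hands each licensed process an object whose only operation is "record one use".

 class _LicenseStore:
     """Held only by the license-tracking process; never handed out directly."""
     def __init__(self):
         self._counts = {}

     def increment(self, product):
         self._counts[product] = self._counts.get(product, 0) + 1

 class UpdateLicenseCapability:
     """All a licensed process receives: it can record a use for one product
     and nothing else -- no reads, no deletes, no other products."""
     def __init__(self, store, product):
         self._store = store
         self._product = product

     def record_use(self):
         self._store.increment(self._product)

 store = _LicenseStore()                        # kept by the process that starts licensed programs
 cap = UpdateLicenseCapability(store, "editor")
 cap.record_use()                               # the whole interface a licensed program sees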

The web examples are hard because the web is defined to run over HTTP and HTML, with other entities that may not be in your security domain. ASSUMING all data going out of the capability-holding server were encrypted with keys that only existed inside another capability-protected security domain, we could talk about using this model on the web.


Does Capability Security assume an ALL-OOP design and a single language? That would be rather limiting. "Can't use it for the web because the web doesn't use the right paradigm/language/tool" is a poor answer. I'm starting to suspect it requires a GodLanguage to work "right".

Capability security DOES NOT assume ObjectOrientedProgramming, nor does it assume a single language. While the ObjectCapabilityModel was originally developed in the context of OOP, the resulting principles and patterns have since proven applicable to various other paradigms (such as FunctionalReactiveProgramming, DataflowProgramming, ComplexEventProcessing). And the same ObjectCapabilityModel principles are already being applied to secure web development, e.g. by creating 'web-keys' - basically URLs containing large random strings. If you do some searches on 'webkey', 'oauth', 'WaterKen?' you can learn about the work done on this subject. You have likely seen various applications of webkeys already... e.g. when a URL for resetting a password is sent to your e-mail.
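A small sketch of the webkey idea (the URL, handler, and resource shape are invented for illustration): the server mints an unguessable token, and knowledge of the resulting URL is itself the capability -- whoever presents it gets the designated resource, with no separate access-control lookup.

 import secrets

 _resources = {}   # token -> the object or action the URL designates

 def mint_webkey(resource, base="https://example.org/cap/"):
     """Create an unguessable URL that both names and authorizes `resource`."""
     token = secrets.token_urlsafe(32)          # roughly 256 bits of randomness
     _resources[token] = resource
     return base + token

 def lookup(token):
     """Presenting the token is the whole authorization check."""
     return _resources.get(token)

 url = mint_webkey({"action": "reset-password", "user": "alice"})
 # e.g. https://example.org/cap/3fQ9...; mailing this URL transfers the capability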

There are significant advantages to be had by supporting ObjectCapabilityModel within a language. You might look at three desirables: security, performance, expressiveness. Remote or multi-process services allow you to achieve security and expressiveness, but sacrifice performance. Sandboxes allow you to achieve security and performance, but sacrifice expressiveness. Ambient authority languages give you performance and expressiveness, but sacrifice security. But languages supporting ObjectCapabilityModel achieve security, performance, AND expressiveness. (They do require you learn some new programming patterns, though.)

Even without ObjectCapabilityLanguage, the disciplines and patterns associated with ObjectCapabilityModel tend to lead to relatively safe and extensible program architectures.


This seems very similar to point 4 in the qmail guarantee at http://cr.yp.to/qmail/guarantee.html. I tend to agree. The confused deputy problem extends well beyond filenames: consider specially crafted input in HTML forms that tampers with SQL queries, or data embedded in SQL records that tampers with the HTML later generated from them. Consider game-cheating programs such as aimbots, interface scripts (yay tradewars), and their ilk. Consider cracks delivered through replacement DLLs, misinformation delivered through non-authoritative clients, or the traditional Solaris font privilege hack. This is a legitimate category.

-- JohnHaugeland?

See SqlStringsAndSecurity for more on that.
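The SQL case has the same shape: when query text and user data are concatenated, the input gets to redesignate what the deputy's database authority is applied to. A hedged sketch using Python's sqlite3 module (the table and column names are made up):

 import sqlite3

 conn = sqlite3.connect(":memory:")
 conn.execute("CREATE TABLE users (name TEXT, email TEXT)")

 def lookup_confused(name):
     # Concatenation lets crafted input rewrite the query the deputy runs with its authority.
     return conn.execute("SELECT email FROM users WHERE name = '" + name + "'").fetchall()

 def lookup_parameterized(name):
     # The placeholder keeps user input as data; it cannot change what the query designates.
     return conn.execute("SELECT email FROM users WHERE name = ?", (name,)).fetchall()

 lookup_parameterized("alice' OR '1'='1")       # matches nothing; the quirk stays inside the value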


See also: CapabilitySecurityModel, AccessControlList, MixingParadigms.


CategorySecurity

