This page is for discussion of the CapabilitySecurityModel, and comparison with other security models such as AccessControlLists.
In a CSM, a process holds opaque capabilities. If you possess a capability, then you have permission to do whatever that capability allows, but you can only take those actions by presenting the capability. The necessary access check takes place only when you acquire a capability. And anyone who wants to create new capabilities can create a capability to anything they can implement.
A capability is an object reference. If you have an object reference, you can do anything you can ask that object to do. Anyone who wants to create a new object can, and that object can do whatever they can implement. This analogy is exact in the sense that an object-oriented application which puts any necessary access checks in its constructors has actually implemented a CapabilitySecurityModel.
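To make the analogy concrete, here is a minimal sketch in Java (hypothetical User and ReadableFile names, not any particular system's API) of "the access check lives in the constructor, and the returned reference is the capability":

  interface User { boolean mayRead(java.nio.file.Path path); }

  // A file object whose reference IS the capability. The only access check
  // happens in the factory method, at the moment the reference is created.
  final class ReadableFile {
      private final java.nio.file.Path path;

      private ReadableFile(java.nio.file.Path path) { this.path = path; }

      // The access check lives here, at acquisition time.
      static ReadableFile open(User requester, java.nio.file.Path path) {
          if (!requester.mayRead(path)) {
              throw new SecurityException("no authority to read " + path);
          }
          return new ReadableFile(path);   // the returned reference is the capability
      }

      // Holding the reference is sufficient; no further check is made.
      byte[] read() throws java.io.IOException {
          return java.nio.file.Files.readAllBytes(path);
      }
  }

Whoever holds a ReadableFile can read; whoever doesn't can't even express the request.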
Any safe programming language where you can't forge references to objects is a perfect example of a CapabilitySecurityModel. Of course, this isn't all that useful a statement because a lot of the important details are hidden behind the definitions of 'safe' and 'forge'.
What are the benefits of capabilities?
First of all, you solve the ConfusedDeputyProblem, because you never lose the connection between why you have a permission and the action that you try to take. Permissioning is not based on global data; the act of presenting a capability maintains the connection between what you are trying to do and why you are trying to do it.
(There is more discussion of the ConfusedDeputyProblem at AccessControlList.)
Another advantage is that if you find yourself wanting to offer precisely controlled access to sensitive things, it is easy to do so. In your constructor you create an object (capability) with methods (permitted actions) that go well beyond what you want, and then return another object that proxies off of the first to only expose the things that you want to allow. In this manner any exact permission that you need can readily be created.
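A minimal sketch of that attenuation pattern (hypothetical FileCap names, same Java style as above):

  // The full-powered capability.
  interface FileCap {
      byte[] read();
      void write(byte[] data);
      void delete();
  }

  // An attenuated facet: it proxies only the one action we want to allow.
  // A holder of ReadOnlyFileCap simply has no way to reach write() or delete().
  final class ReadOnlyFileCap {
      private final FileCap underlying;

      ReadOnlyFileCap(FileCap underlying) { this.underlying = underlying; }

      byte[] read() { return underlying.read(); }
  }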
Unfortunately, commercial operating systems do not use a capability security model. However there is nothing to stop you from doing so when you design an application.
The DotNet framework includes a mechanism whereby one can define "permission" objects, and require that a caller hold certain permissions when calling a method. Is this an example of a CapabilitySecurityModel? (Please, no anti-Microsoft rants here - restrict comments to capability-based security issues.)
No, DotNet doesn't use a capability-based security model, because a caller is not required to explicitly present a permission to perform any action; permissions behave as "ambient authorities" in cap terminology. .NET actually uses a model based on intersections and unions of various policies defined using ACLs. The exact rules are obscure and probably impenetrable to most users, which is a weakness in itself, but more important are flaws in how code is associated with particular "zones" (hence policies), and in the ridiculously permissive starting configuration. For example, in the initial configuration of .NET Framework 1.1, all code in the "Local Machine" zone is granted "FullTrust?". This hardly qualifies as a security model at all (no, this is not an anti-Microsoft rant, and it does answer the question).
According to the designer of BlueAbyss,
There is no such thing as "the" Capability Security Model, there are many. To be a security-complete CSM, a model needs only:
Most CapabilitySecurityModels are extremely low-level primitive affairs. This is the case for the model implemented by ErosOs and EeLanguage, and the one intended for Squeak-E by the E-Rights group. BlueAbyss has a high-level model of capabilities, the GeneralCapabilityModel.
Most capability researchers (especially those working on EROS and E :-) would disagree with this. One of the lessons that has become clear from 30 years of research in this field, is "do not make unnecessary extensions to the ObjectCapabilityModel / ActorsModel / LambdaCalculus". It really doesn't need extension. On the contrary, every extension just makes the model more difficult to understand and analyse.
Some of the abstractions claimed to be unique to this "vastly more high-level specification" (cough) are trivially implementable in any instance of the ObjectCapabilityModel, and have been implemented in EROS and E:
Capabilities are the inversion of ACLs. There are things you can do with capabilities that are provably impossible with ACLs (like resolving the ConfusedDeputyProblem); which proves that ACLs are in no way more generic than caps.
A caps model stores, with each user, a list of every entity that user may legitimately access (what you might call an "ACL", even though it isn't one), while ACLs store, with each entity, a list of every legitimate user who may access it.
Each model is the dual of the other and neither is more generic than the other. However, while Caps can do things which ACLs can't, ACLs claim to be able to do things which are provably impossible to do at all (specifically, stopping a legitimate user from transferring rights to another user).
Why do you say this is provably impossible?
A legitimate user can always act as a proxy, relaying requests from any other user to the protected entities and returning the results of such queries. This can be automated as has been done numerous times on the internet.
As you can see, the only requirement is that the legitimate user have a sustained communications channel (overt or covert) to any user they wish to transfer rights to. In today's world, that is an easy requirement to fulfill. Especially on the internet.
To put it as succinctly as possible, all the proof means is that spying is possible.
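The proxy argument is short enough to write down. Assuming a hypothetical Document whose ACL grants read access only to Alice, Alice can still extend that access to anyone she can talk to:

  // Hypothetical: the ACL grants read access only to "alice".
  interface Document { String read(String authenticatedUser); }

  // Alice's forwarding proxy. Anyone she hands a reference to this object can
  // read the document, no matter what the ACL says; the ACL only ever sees Alice.
  final class AliceProxy {
      private final Document doc;

      AliceProxy(Document doc) { this.doc = doc; }

      String read() { return doc.read("alice"); }   // relay the request as Alice
  }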
As I understand it, the main difference between capabilities and ACLs is that ACLs grant permissions at user (or group) granularity, whereas capabilities grant permissions to each process individually. Therefore it is the stronger model.
There is no theoretical difference in granularity between ACLs and Caps.
Yes there is: for an ACL system to simulate a cap system, it would have to create and destroy *users* as often as the cap system creates and destroys processes/objects. Since creating or destroying a user normally requires administrator privileges in an ACL system, that is a theoretical difference, not just a practical one. If a theoretical model of an ACL system called for a distinct user to be associated with every process/object, it would be an inaccurate model of real-world ACL systems.
There is a world of practical difference since ACLs are such an unwieldy mechanism that fine-grained protection is all but impossible ....
Regardless, the main difference between Caps and ACLs is the theoretical one, not granularity but, the fact that Caps solve the ConfusedDeputyProblem and make no impossible security promises. And they do this by unifying the entities (users, not objects) which hold the rights-lists with the ones that perform operations on those lists (grant and remove privileges).
ACL is as fine-grain as you want. If you need categorization of the access items, then add it. One can add trees or sets or whatever floats their boat. The good thing about it is that it does not impose nor restrict the categorization/taxonomy approach used. It considers that more or less an orthogonal add-on.
Perhaps you can explain why categorization in ACL schemes is limited to broken models like Unix's groups (sets)? Or why nobody bothers even thinking about upgrading to a sufficiently general or powerful model (DAGs)? Or why every implementation of ACLs just so happens to always be coarse-grained?
I don't know why they are coarse-grained in practice. I don't see any theory that limits them. Maybe some sort of YagNi at work?
The pinnacle of ACLs is probably the scheme used in VSTa. It involves a dotted string of numbers to create a hierarchy of users. This scheme has very limited support for multi-user ownership, so the categorization scheme is something between a tree and a full DAG.
I know of no method of attaching a DAG to ACLs. In contrast, every caps scheme starts with a directed graph attached and has to be downgraded to a directed acyclic graph. So ACLs aren't independent of categorization schemes. Of course, neither are capabilities since it takes real work to couple them to a categorization scheme that sucks.
The "resourceRef" in a generic framework for an AccessControlList is domain-defined. In other-words, it is up to the implementer to decide what the managed resources are and what kind of taxonomy/classification/granularity is defined.
And in any CSM scheme, it's up to the users to define or redefine the taxonomy/classification/granularity as they wish, dynamically, at use-time. It's dynamic versus static typing all over again.
Since ACLs are often implemented via databases, they also have the potential to be dynamic. Rarely are accesses managed via compiled code anyhow.
How does one bootstrap a Capability Model?
The system boots and gives you exactly one capability: the capability to find a user's capabilities by that user's password. Given the correct password, you gain access to that user's capabilities, and continue from there.
Is this about right?
That's correct. With the addition that capabilities practically require TransparentPersistence.
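A minimal sketch of that bootstrap step (hypothetical LoginCap and UserRootCap names, not taken from any real system):

  import java.util.Map;

  interface UserRootCap { boolean passwordMatches(String password); }

  // The only thing the freshly booted system hands out.
  final class LoginCap {
      private final Map<String, UserRootCap> rootCaps;   // persisted with the system image

      LoginCap(Map<String, UserRootCap> rootCaps) { this.rootCaps = rootCaps; }

      // Exchange a password for that user's bundle of capabilities.
      UserRootCap login(String user, String password) {
          UserRootCap root = rootCaps.get(user);
          if (root == null || !root.passwordMatches(password)) {
              throw new SecurityException("bad credentials");
          }
          return root;   // from here on, everything is reached through this reference
      }
  }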
Revocation
Could one conceive a hybrid between the CapabilitySecurityModel and AccessControlLists? For example, where users hold a list of magic beans (or something), and entities are manipulable only by those holding one of a certain set of magic beans. This would not be more powerful than either capabilities or ACLs, but might make administration easier - rather than just having a 'frob the hodad' permission which is given to, say, administrators and developers alike, there would be a separate bean for each kind of user, and the hodad would allow itself to be frobbed by anyone who held either. Thus, when it was time to revoke the developers' access to the hodad, the relevant bean would simply be removed from the hodad's list, whereas with capabilities, the bean would have to be removed from each developer individually.
The above hack is entirely useless if you have a clean aggregation mechanism that is PART of the object system. Already, directories in filesystems aggregate access to individual files and subdirectories. Having an access-checking system separate from directories is redundant and bad design.
Further, I don't understand why you would want to make a hybrid model of an insecure model and an ideal model. Capabilities can be contrived to do everything that it is possible to do in security. Why would you need to bastardize them?
In particular, why is it a problem to remove a capability from each developer individually? Certainly this is not a problem if you have bidirectional capabilities with an array of various powers. That way you can find everyone who has a capability to a particular object and revoke them all at once.
The question isn't "is there something better than capabilities?" but "what kind of capabilities are best?"
Got a reference for bidirectional capabilities?
I use them but I don't know who else does. Since I'm not ready to write a paper on the subject, what do you want to know precisely?
Just trying to understand the comment above about revoking capabilities held by an individual. I searched Google and the capability references I know and couldn't find anything about them. If they're a private thing, I'm less interested than if they are part of semi-standard practice.
Revokable capabilities are standard practice. Capabilities have various powers. Often they are divided into limited and unlimited categories, and whoever holds the unlimited kind is able to revoke the limited. Having capabilities be asymmetric bidirectional links is just the most intuitive representation of this. The alternative to revokable caps is self-destructing caps on time delays.
The main problem with the literature is that it's abstract and doesn't provide you any feeling for how things work in practice. Specifically, no allusions are made to the higher-level concepts which emerge from capabilities. I had to play with my own model in order to understand caps. In fact, it took me a long time to realize that I was playing with caps at all.
I see references in the literature to revokable capabilities. I don't see references to revoking capabilities for an individual holder. My understanding was that capabilities are properties of a resource, and not an individual holding a resource.
[[Yes, but you can give out different capabilities for the same resource to different individuals. There have been extensions to some cap systems that allow this to be enforced. (Note that these are extensions built on top of the basic capability model, not by changing the model.)]]
''In other words, if I understand correctly, revoking a capability means that capability ceases to work for all individual holders. So does revoking capabilities for an individual mean that those capabilities are unique to that individual? (And in your implementation, a capability not only knows the resources it protects, but also who holds it?)''
Capabilities are nothing more than protected references to an entity (a service, object, user or other resource), usually held in proxy by a trusted system. Capabilities can be identical or individual. If they're identical, then all the capabilities I give out to a given resource are indistinguishable from each other, so revoking one of them revokes them all. If they're individual, then I can revoke one holder's capability to resourceA without revoking another holder's capability to the same resource.
I now recall another model of revokable caps. Or perhaps just garbage collection. :) Capabilities can be indirect, references to a table, so you don't need to know who holds the cap in order to revoke it.
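That indirection is often called a caretaker; a minimal sketch (hypothetical names, same Java style as the other examples on this page):

  // The grantee holds a facet that forwards through this caretaker; the grantor
  // keeps revoke() and never needs to know who ended up holding the facet.
  final class Caretaker<T> {
      private volatile T target;

      Caretaker(T target) { this.target = target; }

      // Called by the facet that was handed to the grantee.
      T get() {
          T t = target;
          if (t == null) {
              throw new SecurityException("capability revoked");
          }
          return t;
      }

      // Held by the grantor only.
      void revoke() { target = null; }
  }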
In my model, an object gains a limited capability to a resource only after supplying an unlimited capability to itself. The owner of the resource (whoever has an unlimited capability to it) revokes the limited capability by using the unlimited capability to track the object down. Since capabilities are always matched limited-unlimited, I don't talk in terms of them at all but only in terms of links. In terms of links, you can be on the holding (superior, up) end of a link or you can be on the business (inferior, down) end of it.
And that's only scratching the surface. There's what it means to have a limited or unlimited link. Then there's domains. Then there's half-half links. And none of that even touches cryptocaps or other strange schemes. Btw, passwords and other encrypted tokens are another implementation of caps.
This is the point where I want to bitch about the cap literature for being too abstract or too useless or something.
I think you must not be reading the right parts of the literature. A lot of it is very practically oriented, going into design decisions in considerable detail. For example see the notes at <http://www.cap-lore.com/CapTheory/>.
For instance, a great deal is made of rights amplification, whereas I don't even understand what it means, let alone why it would be useful.
[[Suppose you have a capability A to some object. If holding both A and some other capability B gives you greater rights to the object than holding A alone, that is rights amplification.
For example, the code that implements objects of a particular type needs to be able to access their private data. In some cap systems, for each type there is a capability that allows obtaining the private data of any object of that type, given a capability to the object. This is a special case of rights amplification called "sibling communication".
An alternative approach that does not require rights amplification is to implement objects using closures, so that the private data of an object is the environment of its closure. But in that case, when you call a method on a particular object, the method code would only have access to that object, not to other objects of the same type. Sometimes this is what you want, sometimes not.]]
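A small sketch of rights amplification (hypothetical mint-and-purse names, loosely after the classic example in the cap literature, not any particular system's API):

  // Holding a Purse reference alone does not let you touch its balance; holding
  // the mint's private brand together with Purse references does. The extra power
  // gained by holding both is rights amplification, and letting the mint's code
  // reach into any purse it issued is the "sibling communication" described above.
  final class Mint {
      private final Object brand = new Object();   // the amplifying capability, never given out

      Purse makePurse(long initialBalance) { return new Purse(initialBalance, brand); }

      void transfer(Purse from, Purse to, long amount) {
          from.withdraw(brand, amount);
          to.deposit(brand, amount);
      }
  }

  final class Purse {
      private long balance;
      private final Object brand;

      Purse(long balance, Object brand) { this.balance = balance; this.brand = brand; }

      void withdraw(Object presentedBrand, long amount) {
          if (presentedBrand != brand) throw new SecurityException("wrong brand");
          if (amount > balance) throw new IllegalStateException("insufficient funds");
          balance -= amount;
      }

      void deposit(Object presentedBrand, long amount) {
          if (presentedBrand != brand) throw new SecurityException("wrong brand");
          balance += amount;
      }
  }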
But I have a nagging feeling that a large part of the blame rests on their being focused on languages and kernel-space whereas I'm concerned with databases and user-space.
[[Why does this make a significant difference? The concepts are the same. In general, all capability systems can be viewed as virtual machines that restrict the code running on them to performing only operations allowed by the capability model. That's true whether the VM is a language implementation, an OS, or a database, and regardless of whether it uses any protection features of the underlying hardware.]]
Practical Examples and Implementation Guidelines
OK. I've read the articles, papers and WikiPages. I get it. I'm sold. What I'm not really clear on, though, is how I can actually go about using a CapabilitySecurityModel to build an application here and now. Let's say I'm building a typical corporate web application which, as is usually the case, allows admins to assign different rights to take different actions on the various objects in the system to different users, groups, roles, etc. Although there are many possible variations, I basically know how I would build such a system using ACLs.
But how would I go about building such a system using Capabilities? What would the object and/or data model look like? What would the user interface look like? See CapabilityUserInterface. Obviously, I could sit down and figure some or all of this out for myself, but I'd just as soon gain the benefit of others' experience before I start to make a mess of it on my own.
[[rant about the GeneralCapabilityModel snipped, not relevant]]
Reading about the ActorsModel or TheWebCalculus would be a good start.
It appears CSM is a TechniqueWithManyPrerequisites such that a shop pretty much has to toss and burn its existing stack and start over from scratch with something more "monolithic" such that the parts pretty much have to be integrated on a binary level. The rest of us who don't have the authority and budget to play IT God use AccessControlLists instead. --top
CSM can work in a top-down manner, e.g. starting with web applications and scripting languages. But it is true that the ObjectCapabilityModel communities historically favor a bottom-up approach that builds a whole new foundation, thereby becoming a TechniqueWithManyPrerequisites (which is just another way to say NetworkEffects and PathDependence).
Anyone wish to illustrate a top-down example for those unable or unwilling to start at the bottom?
I don't want to spend much time on it. But WaterkenServer? (http://waterken.sourceforge.net/) and CapabilityUrls? (http://www.w3.org/TR/capability-urls/) are pretty decent examples at the application layer. At the language layer, CajaProject? (http://en.wikipedia.org/wiki/Caja_project) was a development towards capability-secure JavaScript. While it hasn't penetrated, it has influenced JavaScript's development to an extent that (with great discipline, or as a target language of a compiler) JavaScript can at least be used in a capability secure manner (cf. Microsoft's F* (EffStar?) project; Fully Abstract Compilation to JavaScript - http://research.microsoft.com/apps/pubs/default.aspx?id=176601). Anyhow, if you're really interested, then stealing ideas and patterns from Waterken is a good place to start. TahoeLafs? (http://tahoe-lafs.org/) is also a good project at the middle layer, providing a capability secure distributed filesystem.
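As a rough sketch of the capability-URL idea at the application layer (hypothetical class names, not Waterken's actual API): each unguessable URL maps to exactly one authorized action, so knowing the URL is the authority and no per-request ACL lookup is needed.

  import java.security.SecureRandom;
  import java.util.Base64;
  import java.util.Map;
  import java.util.concurrent.ConcurrentHashMap;

  final class CapabilityRegistry {
      private final Map<String, Runnable> actions = new ConcurrentHashMap<>();
      private final SecureRandom random = new SecureRandom();

      // Mint a capability URL for a specific, already-authorized action.
      String grant(Runnable action) {
          byte[] bytes = new byte[16];
          random.nextBytes(bytes);
          String token = Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
          actions.put(token, action);
          return "/cap/" + token;
      }

      // Invoked by the web layer when GET /cap/{token} arrives.
      void invoke(String token) {
          Runnable action = actions.get(token);
          if (action == null) throw new SecurityException("unknown or revoked capability");
          action.run();
      }

      // Revocation is just removing the entry.
      void revoke(String token) { actions.remove(token); }
  }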
Most systems, especially the nearly universal Unix and Windows derivatives, have no security model to speak of beyond memory protection and file attributes. Clearly, any new design must have a security model as part of its initial design. There is no excuse for a poor security model, when several good ones exist.
To take one example, the capability model of security is often discussed, but rarely implemented and, I think, poorly understood (indeed, I may be mistaken in the particulars myself). The basic concept (as I understand it) is simple: each user or process has certain rights, which the system checks against on every attempted action. This often incorporates a form of the visibility model: if a user doesn't have rights to access something, it should be invisible to them, or otherwise disabled (e.g., GUI menu items should be grayed out or removed if the user has no access to them or if they are meaningless in the current program state - a practice that should be followed more assiduously in general, anyway).
That having been said, security cannot be seen as a matter of setting a policy and hoping no one finds the inevitable flaws in it. Perfect security is impossible, and static approaches to security are easy to overcome through sheer patience and effort. To provide adequate security - for both denying and permitting access - one must constantly work on modifying the model, as well as the implementation. -- JayOsako
Windows NT has security built in to the lowest level of the kernel. The privileges of a process are defined by bitsets that define which operations they can invoke on each type of OS object (process, thread, page, file, device etc.). Unfortunately, most Windows software is designed to run on both NT and the "Wintendo" operating systems, and so NT's security features are not used as much as they should be. -- AnonymousDonor
Capabilities mean that users hold access rights to objects. With Access Control Lists, it's objects that hold access rights for users. Failure in ACL-based systems comes about because only users can know who should be granted access rights. Separating responsibility for access rights from expertise for same is a catastrophic mistake. There are many examples of ACLs failing, but I don't know of any example of capabilities failing. Capabilities fail only if a user grants access unwisely, but that's not a failure of the mechanism, just a failure of (individual or social) policy, so arguably capabilities are not subject to failure.
The human level of security is where things get interesting. Is your model of user-space a dictatorship or a democracy? Is it totalitarian or anarchist? Is it individualist or communitarian? Who holds power over whom? This is where the fun begins. This is where failure can occur. This is where things have to change all the time, constantly adapting to changing circumstances.
An OS designer should only provide the mechanism of power (users having control over resources and power over each other; reality provides this mechanism in the form of a clenched fist) not the policy. Any OS designers who, like the creators of Unix, start out by saying "this will be a strongman dictatorship" are not deserving of the title. They may also be revealing something about themselves. I know that it says something about myself that my very first thought on the subject of OS design was displeasure at the fact that Unix is Fascist. It seems obvious with Unix's strong fear of user self-management and object-sharing, and an eagerness to use superuser privileges, ACLs, braindead security models and OS obfuscation to combat freedom.
EROS implements capabilities by providing system calls that modify a kernel-space proxy of users. I don't know how, or even whether, it has capabilities in its filesystem; I don't think capabilities and AccessControlLists are compatible.
Except that EROS has neither users nor filesystem. In addition, if you'll bother to read the EROS pages, you'll see that capabilities are compatible with AccessControlLists. A Unix simulator (with internal users and filesystem) can be programmed above a CapabilitySystem. It does not work the other way, though.
Windows NT may have security built into the lowest levels of the kernel (I'd like an explanation of how this relates to high-level design in the first place) but does it have security anywhere else? -- RichardKulisz
It seems to me that capabilities, ACLs, etc. can be strong-arm-combined, I think. I might be making no sense here, and doing the usual wild useless abstractions, but...
I think you can build your OS on the basis of capabilities. Capabilities, if our collective ComputerScience understanding is correct, are provably secure, minus - of course - bugs, errors, and around-the-side subversion of the access system (otherwise known as booting the system with a prepared floppy in the drive). ACLs and Capabilities, despite popular opinion, are not two different ways of solving a problem.
Now, a capabilities system needs to be "bootstrapped". The way that EROS handles things is to create a completely persistent OS/400-like system, so that the system returns to a known state when it is started up. There's probably a way to do this with kernel-escrowed tokens that doesn't require complete persistence. But you can take an ACL ruleset, determine denials and permissions from it, and distribute the corresponding capability tokens to the various users.
Why would you do something this brain-dead when capabilities are already in existence? ACL systems and User/Group/World systems are going to be around for long after our hypothetical perfect operating system is available. So you can plug in whatever access control model is relevant for a particular set of resources. You can bridge between capability systems with kernel-level trusted objects. If you are creating a microkernel system, you could use that as your most basic form of access control and leave the rest up to your personality server processes.
Of course, I am all about the Micro Kernel, even though Linus has shown that the big kernel works just fine, too. ;) -- KenWronkiewicz
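For what it's worth, the "take an ACL ruleset and distribute capability tokens" step mentioned above can be sketched in a few lines (hypothetical record types, purely illustrative):

  import java.util.ArrayList;
  import java.util.List;

  // Walk an ACL ruleset once at startup and hand each user the capability
  // tokens its entries entitle them to; afterwards the ACL is never consulted again.
  final class AclBridge {
      record AclEntry(String user, String resource, String right) {}
      record CapToken(String resource, String right) {}

      List<CapToken> capsFor(String user, List<AclEntry> acl) {
          List<CapToken> caps = new ArrayList<>();
          for (AclEntry e : acl) {
              if (e.user().equals(user)) {
                  caps.add(new CapToken(e.resource(), e.right()));
              }
          }
          return caps;
      }
  }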
Questions
There's quite a lot of information available as theoretical analysis and exposition, but where is the straightforward example of how a CSM would work in a common scenario, say an ATM? Sam Saver and Betty Bank Manager want to know!
User goes to ATM, plugs in keyfob which contains an unforgeable key to his account. ATM uses the key to verify that he has the $$ that he's trying to withdraw. ATM forwards the key to the bank to ensure that it actually is a valid account. Money is output if correct. If the user's keyfob is stolen, he can use some other unforgeable ID to have it invalidated and get a new one. He can also use his account to automatically delegate sub-capabilities - for instance, he can give his kids keyfobs which only let them withdraw $10/day and notify him whenever they do withdraw. Unfortunately, there's really only one thing you can do with ATMs, so the CSM really doesn't offer any advantages aside from delegation - ask about something where delegation of capabilities is necessary, and prone to error :).
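The keyfob delegation above is just attenuation again; a minimal sketch (hypothetical AccountCap names):

  interface AccountCap {
      void withdraw(long cents);
  }

  // The parent holds the full account capability and mints this attenuated one
  // for a child: same interface, but with a daily limit and a notification.
  final class DailyLimitedAccount implements AccountCap {
      private final AccountCap parent;
      private final long dailyLimitCents;
      private long spentTodayCents = 0;
      private java.time.LocalDate day = java.time.LocalDate.now();

      DailyLimitedAccount(AccountCap parent, long dailyLimitCents) {
          this.parent = parent;
          this.dailyLimitCents = dailyLimitCents;
      }

      @Override
      public synchronized void withdraw(long cents) {
          java.time.LocalDate today = java.time.LocalDate.now();
          if (!today.equals(day)) { day = today; spentTodayCents = 0; }
          if (spentTodayCents + cents > dailyLimitCents) {
              throw new SecurityException("daily limit exceeded");
          }
          parent.withdraw(cents);   // the underlying capability does the real work
          spentTodayCents += cents;
          System.out.println("notify parent: child withdrew " + cents + " cents");
      }
  }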