Are Types Tied To Trees?

Most static type systems seem inherently tied to trees in one way or another. This seems to stem from the need for one and only one dispatching answer to "what type am I?" or "can I perform this operation?" (see TopOnTypes). To answer such questions quickly, a compiler cannot get lost in open-ended graphs. Thus, a static typing system seems tied either to trees or to some restricted form of directed graph.

Even in dynamic languages such as Smalltalk, the "who answers me" search is generally a search up a class hierarchy.
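
A rough sketch of that search, in C++ for concreteness (Class, lookup, and the selector map are illustrative stand-ins, not any real Smalltalk VM's machinery):

  #include <functional>
  #include <map>
  #include <string>

  struct Class {
      Class* superclass = nullptr;                          // nullptr at the root
      std::map<std::string, std::function<void()>> methods;
  };

  // Walk up the (tree-shaped) superclass chain looking for a responder.
  std::function<void()>* lookup(Class* c, const std::string& selector) {
      for (; c != nullptr; c = c->superclass) {
          auto it = c->methods.find(selector);
          if (it != c->methods.end()) return &it->second;
      }
      return nullptr;                                       // doesNotUnderstand
  }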


Pretty ambiguous question. Trees of inheritance relationships? Convertibility relationships? Composition relationships (as in, a struct is composed of fields, each of which has a type)?

The most ubiquitous statically-typed language is C, and there is no obvious tree of types there.

In C++, the presence of MultipleInheritance to denote is-a relationships immediately negates the idea of trees among user-defined types. Rather, we have acyclic directed graphs, which are not trees. (Although you mention them in the opening paragraph.)
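
A minimal illustration (the class names are made up):

  // With MultipleInheritance the is-a relation forms a DAG, not a tree:
  // SeaPlane reaches Vehicle along two distinct paths.
  struct Vehicle  {};
  struct Boat     : virtual Vehicle {};
  struct Airplane : virtual Vehicle {};
  struct SeaPlane : Boat, Airplane  {};   // two parents, so not a tree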

But you said "most", so just looking at C and C++ is insufficient. A wider survey of static type systems in implemented languages would be needed to answer the question.

Consider an operator which can handle a specific set of argument types -- call them the declared types. In a valid call to this operator, you could: (1) be required to supply arguments whose types exactly match the declared types, or (2) be allowed to supply arguments whose types are merely related to the declared types, e.g. by inheritance or convertibility.

Since the first option is possible, I'd say that types are not inherently tied to trees. Built-in types can be defined without reference to other types, so certainly built-in types don't even need to have a tree implied via composition. But requiring argument types to exactly match an operator's declared types seems rare these days, so in practice I'd say either an inheritance or convertibility graph is usually present in a type system.
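
A toy C++ rendering of the two options (f and its argument types are illustrative):

  #include <iostream>

  void f(long x) { std::cout << x << '\n'; }   // declared argument type: long

  int main() {
      f(42L);   // option (1): exact match; no relation between types needed
      f(42);    // option (2): int is implicitly convertible to long, so the
                // call's validity now rests on a convertibility relation
  }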

Just thoughts that came to mind, with no pretense of being rigorous or complete.

-- DanMuller

Well, the C family rather leaves you on your own with regard to types. But languages which Costin may consider to follow TypeTheory well seem bound to some kind of tree or restricted graph. Perhaps another way of saying this: the more the compiler can check and prevent, the more the typing system relies on trees or other restricted graphs.

Any sound type system's type graph must be a DirectedAcyclicGraph (of which trees are a subset). A type graph with cycles is inherently unsound.

[This follows, more or less, from the "theory of types" (a matter of pure math, not at the time related to then-non-existent CS/computers) developed in the early twentieth century to improve the foundations of mathematics by disallowing the RussellParadox. Roughly speaking, it became illegal to form sets defined by whether they do or do not contain themselves as members. Instead a set at level A could only have as members sets at level A-1 or lower. Thus the overall inclusion mapping was restricted to a DirectedAcyclicGraph rather than being allowed to be an unrestricted graph. The previous anything-goes approach allowed paradox. Thus the comment "inherently unsound".]

[Modern CategoryTheory seems to follow a similar philosophy; morphisms go down-level, never up-level, I believe, not that I'm an expert.]

[It should be noted nonetheless that this "theory of types" approach to avoiding paradox is far from the only approach invented to do so, so I would be reluctant to say that it would be impossible to develop a sound type theory that allowed unrestricted graphs -- but the burden of proof would be on that system.]

[...aha, I should have realized there are related pages here: SetOfAllSets UniversalSet NewFoundations]


No. Types are not tied to trees. Types for both objects and values are 'patterns', which themselves are abstracted propositions (e.g. predicates are one form of pattern, so predicates can be used as types). E.g. a number can be an integer, odd, AND prime... 3 different types. A person can be a human, an employer, a yuppie, AND heterosexual... 4 different types.
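
One hypothetical way to render that in code, treating a 'type' as nothing more than a predicate (the Type alias and the three predicates below are illustrative, not a real type system):

  #include <functional>
  #include <iostream>

  // A "type" here is just a predicate -- a pattern a value may match.
  using Type = std::function<bool(int)>;

  Type isInteger = [](int)   { return true; };
  Type isOdd     = [](int n) { return n % 2 != 0; };
  Type isPrime   = [](int n) {
      if (n < 2) return false;
      for (int d = 2; d * d <= n; ++d)
          if (n % d == 0) return false;
      return true;
  };

  int main() {
      int n = 3;
      // The same value inhabits three "types" at once.
      std::cout << isInteger(n) << isOdd(n) << isPrime(n) << '\n';   // prints 111
  }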

By fundamental nature, types don't belong to objects. Types are patterns... they might be associated with objects by an astute observer or inference system... but the 'types' themselves exist only as patterns in the observer, not the object. Platonic forms are an inversion, a perversion... 'red' and 'triangle' have no reality in some objective universe; 'red' and 'triangle' exist only inside those of us who understand the universe by the use of patterns.

Trees and DAGs work well enough when you aren't dealing with a wide variety of objects. They work when the problem domain, itself, has a very clear 'inheritance' hierarchy. E.g. all triangles are polygons; all polygons are Area-Shapes (which might also include bloboids, ellipses, etc.), which have surface-areas. All Area-Shapes are Shapes (which can include volume-shapes, line-shapes, logical-viewpoint shapes, etc.). All Shapes have a visual representation. Etc. Etc. Etc.
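
Sketched as a C++ hierarchy (a simplified rendering of the example above):

  // The clean case: the domain itself is a tree.
  struct Shape     { virtual ~Shape() = default; };   // all Shapes can be drawn
  struct AreaShape : Shape {                          // all AreaShapes have an area
      virtual double surfaceArea() const = 0;
  };
  struct Polygon   : AreaShape {};
  struct Triangle  : Polygon {
      double base = 1, height = 1;
      double surfaceArea() const override { return base * height / 2; }
  };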

However, the use of these hierarchies begins to die a creeping, painful, hideous and 'cancerous' death of uncontrolled, exponential growth in the number of required 'types' when dealing with problem domains that do NOT present clear 'inheritance' hierarchies. For example, let's add color... now there are 'red' things and 'green' things and 'blue' things. A red triangle is a triangle, and it is also a red thing. Red things have the property of enticing bulls to attack.

Now, some very intelligent designers have come up with a number of partial solutions to this sort of problem... they can add a color to every shape, for example... and say that the shape 'has a' color rather than 'is an' instance of a thing of that color. The BridgePattern, AdapterPattern, FacadePattern, DecoratorPattern, etc., are all mechanisms developed largely to mitigate the problems associated with using tree-based hierarchies.
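
Roughly, in that composition style (names illustrative):

  // "Has a" color instead of "is a" red thing: color becomes a component,
  // so the shape hierarchy is not multiplied by the number of colors.
  struct Color { int r = 0, g = 0, b = 0; };

  struct Shape {
      virtual ~Shape() = default;
      Color color;              // composed, not inherited
  };

  struct Triangle : Shape {};   // no RedTriangle / GreenTriangle / ... subtree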

But, still, even these begin to buckle under more advanced typing concepts... e.g. when we start to add behavioral types to objects, composite objects with emergent properties, and more. Every orthogonal property multiplies the number of required types by a factor of 2 or more... and you can only do so much to keep up... especially when you really do need lists of 'red things' and need to vary your own behavior based upon what the 'thing' is.

For problem domains where exceptions to hierarchies are the norm, and for problem domains where a large number of different properties are used to classify objects for different uses, tree-based typing hierarchies are awful. A better approach is tagging objects with property facades (type by tag) or, better yet, using a compiler that can classify the values and objects it works with, both at runtime and (where possible) at compile time, and automatically categorize and determine behavior (e.g. via dispatch) based on -arbitrary- type/pattern criteria chosen (in code) at the call site.
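
A minimal sketch of the type-by-tag idea, assuming a runtime tag set per object (Thing and its tags are invented for illustration):

  #include <iostream>
  #include <set>
  #include <string>
  #include <vector>

  // Type-by-tag: objects carry an open set of property tags, and the call
  // site chooses the pattern to classify by.
  struct Thing {
      std::string name;
      std::set<std::string> tags;
      bool is(const std::string& tag) const { return tags.count(tag) > 0; }
  };

  int main() {
      std::vector<Thing> things = {
          {"cape", {"red", "cloth"}},
          {"sign", {"red", "triangle"}},
          {"leaf", {"green"}},
      };
      // "All red things" is decided here, at the call site, not in a hierarchy.
      for (const Thing& t : things)
          if (t.is("red")) std::cout << t.name << '\n';
  }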

Looking at the examples used for those design patterns brings up another issue... the sin of placing in objects those behaviors that belong to some external actor! Triangles don't draw themselves; they are drawn by an artist that uses its understanding of the triangle to draw it. Values don't serialize themselves; they are serialized by an actor in accordance with some sort of codec. Those sorts of failures cripple flexibility. E.g. a '.css' for 3D should be casually possible, but when triangles draw themselves there is no 'artist' involved capable of imposing its own sense of style. This inversion of behavior description that I often see in today's coding examples is, however, a rant I should save for some other day and some other page.
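
For contrast, a sketch with the behavior living in the external actor (Artist is an invented name):

  #include <iostream>

  struct Triangle { double base = 1, height = 1; };   // knows only its geometry

  // The drawing behavior lives in an external actor, so a different Artist
  // (one imposing a "3D .css" style, say) can draw the same triangle its own way.
  struct Artist {
      void draw(const Triangle& t) const {
          std::cout << "triangle " << t.base << " x " << t.height << '\n';
      }
  };

  int main() {
      Artist plain;
      plain.draw(Triangle{3, 4});
  }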


See also: SetsAndPolymorphism, AreTypesTiedToSyntax

