There Are No Types

Issues questioning the value or usefulness of "types" or "subtypes"


Note: this topic was not created by TopMind, despite its resemblance to similar topics I've been involved in. Nor do I claim ThereAreNoTypes. If anything, I'd claim they are "relative", being of the EverythingIsRelative tilt. They are a UsefulLie in specific situations. --top


...only objects with their properties. (or not... see BundleSubstanceMismatch)

This is partially why discussions about WhatAreTypes seem circular to AlistairCockburn.

In PrototypeTheory?, there is the concept of prototypes, i.e. objects near the centroid of their cluster... best example objects. Ask a person for an example of a number and chances are, they will give you some small integer or a well-known constant like pi. You will hardly ever get 1.29743e-13 as an answer.

This relates to conceptual categories. It could be very useful in a programming language - especially since many prototypes such as the primary colors seem to be the same in any culture. It is not an alternate view of traditional category theory, but a refinement of it. A category places a circle around all objects in it and a prototype puts a target in the center.


This reminds me of InevitableIllusions when it talks about why there appears to be agreement about what is a typical bird.

In verbal conversations that may be because one can interactively discuss the border cases before making a decision. However, software is often too discrete to pull this off reliably. See also LimitsOfHierarchiesInBiology.


I think this discussion stems from too narrow a definition of "type". It is true that most mainstream programming languages use the word "type" in a very specific way, but at its heart I think "type" means a set of provided functionality. For example, every object has a type, which is the set of operations you can perform on it. This is not a concept PrototypeTheory? would change.
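
For illustration, here is a minimal Python sketch of that "type is a set of provided operations" reading (the HasArea and Square names are invented for the example): anything offering the listed operations is, in this structural sense, of the type.

  from typing import Protocol, runtime_checkable

  @runtime_checkable
  class HasArea(Protocol):          # the "type" is just a set of operations
      def area(self) -> float: ...

  class Square:
      def __init__(self, side: float):
          self.side = side
      def area(self) -> float:
          return self.side * self.side

  print(isinstance(Square(3), HasArea))   # True: it provides the operations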

Does every object have a type? or did you mean "class"? Does every class have a type? Does every object have a class? Confused yet? I thought so. Where is the rigorous definition of object orientation which clearly distinguishes between objects, classes, and types? You won't find it anywhere, sadly. It's awfully confusing.


It seems unfair to me to bash the type theorists for being disconnected from reality and then saying that prototypes are the only way to go. They're an interesting way to go, especially for certain problems, but there is surely a practical truth that some objects in the world are related by something like type. For example, by definition, any (not-forged) $10 bill is worth as much as any other. You _could_ go around checking each bill in your wallet for its value attributes but, as pointed out elsewhere, LifesJustTooShort. -- SteveFreeman

Prototypes are just a convenient form of inheritance. Prototype languages can even be statically type-checked, as in CecilLanguage.

That's a different meaning of "prototype".


Whether types exist or not in reality, in programming they can be thought of as a sensible form of DesignByContract. The type can be considered a contract that certain methods and members exist. One can code without classes, assuming a message-oriented OOP environment where the messages a given object has are defined on a per-object basis... but then one must check for the presence of each message handler, and the implementation of each message can be thought of as a separate "type" - the IsA relationship becomes Is-a-runnable or is-an-addable instead of is-a-function or is-a-number. At some level an interface must exist, and that interface is the "type" of the object. -- MartinZarate
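
A rough Python analogue of that per-object, message-oriented style (all names invented for illustration): each object carries its own handlers, and its "type" collapses into the set of messages it answers.

  # Per-object message handlers; an object's "type" is the set of messages it answers.
  def send(obj, message, *args):
      handler = obj.get(message)
      if handler is None:
          raise TypeError(f"object does not handle {message!r}")
      return handler(*args)

  adder = {"add": lambda a, b: a + b}           # "is-an-addable"
  runner = {"run": lambda: print("running")}    # "is-a-runnable"

  print(send(adder, "add", 2, 3))   # 5
  send(runner, "run")               # running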

Is the interface the type of the object, or is it the class of the object? Isn't there some confusion in Object Orientation about what a class and what an object actually are? There is some overlap between "class" and "type". If a variable that is set to 567 is of the type integer, is an object not of some type too? Or is it of a class? Or is the class of some type? Or are type and class so similar that someone needs to stop inventing new terminology? Is a class just a special struct or record? Really, it is. Some think that object orientation is just structured programming with extensions added, but instead of calling it structured programming, they renamed it to object orientation out of ego and arrogance.


The way that I see it, PrototypeTheory? really points out some of the problems that we have in OO; I don't think it solves them, or at least I haven't seen that yet. If some instances are more prototypical than others... say 2 is a better example of a number than 1.823e-12... then what does that say about how people naturally think? Your notion of a game may be different from mine. Depending upon our particular problems, we could code up two completely different game classes.

Software depends upon expectations at interfaces. In OO, we create concepts and ascribe properties to them. We create taxonomies and ontologies and we often try to find the "right" ones. But, regardless of what either of us may think that a concept means, the truth is in the set of expectations that our objects have. I wonder sometimes whether we are not better off thinking about sets of properties or capabilities rather than objects in the beginning.

I have been wondering what set-oriented inheritance would look like in OO instead of tree-based inheritance. The problem is that it keeps looking like relational theory, but missing behavior. Maybe I am just a RelationalWeenie at heart, or I can't "solve" it because too much prior exposure to relational is biasing the outcome. Multiple inheritance is sort of set-based, but as far as I can tell it does not handle zero or multiple "matches" very well.

It certainly makes sense from a reusability viewpoint. Would you rather make a mailman object in your system, or would you just be happy to find a component someplace that Can_Deliver_Mail? It doesn't matter what else the component does as long as it Can_Deliver_Mail. Sounds very utilitarian, doesn't it?

-- MichaelFeathers

Unfortunately, the real world often does not allow one to make sweeping declarations like that. What if something can only deliver mail on weekends or only deliver packages lighter than 2 pounds? An "Enalogy": An e-mail system may not allow attachments for security reasons, for example. Thus "can_accept_email" is too large a granularity.


I do often get unhappy with type theorists for portraying that they are holding onto some Truth - but that wouldn't mean I throw in with prototyping theory.

The thing to remember is that Types are our attempt to project some arbitrary, would-be desirable, properties onto the world. Types don't have a pre-existence. We want something, we try to say what we want, we give a name to the search, then we haggle over what we really want, all along looking for the consequences of stating what we want as we have just stated it.

So there are no Types, but there is something we think we want - see WingOnTypes. And we have given the words Type, Subtype, AlloMorphism, to hold onto what we think we want. One difference between AlloMorphism and Subtype is that Allomorphism is exactly defined to mean "replaceable with", whatever the consequences are, while Subtype is still searching for a definition that matches what we think we want. -- AlistairCockburn


Liskov talked about substitutability over all programs. That makes reasoning easier, but in the real world we only care about substitutability in specific situations. For example, if I need a dollar bill to feed into a vending machine I'll want a clean, crisp, new one which the machine can read. Old, crumpled ones may get rejected. Usually, though, when paying human shop-keepers, that difference doesn't matter.

The problem with a static type system is that it tends to force us to distinguish crisp dollar bills from crumpled ones through-out the system, even if only one part of the system cares. This can be tolerable for small, stable systems but it doesn't scale well. Something like types exist, but they are local and subjective, not universal truths. -- DaveHarris
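
A small sketch of that locality point in Python (the bill classes and the two consumers are hypothetical): only one consumer cares about the crisp/crumpled distinction, yet a static subtype split would tend to follow the bills through the whole system.

  class DollarBill: ...
  class CrispDollarBill(DollarBill): ...    # a distinction only one consumer cares about

  def vending_machine_accepts(bill: DollarBill) -> bool:
      return isinstance(bill, CrispDollarBill)    # local, situational requirement

  def shopkeeper_accepts(bill: DollarBill) -> bool:
      return True                                 # here the distinction doesn't matter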


I would like to know if anybody has comments about this text snippet from comp.object:

  (quote)

Data types can be of an arbitrary complexity, but in usual biz systems you will need few complex types. It is one of the reasons of the SQL success in the biz camp.

Examples of complex types for biz systems could be:

Picture, FingerPrint?, VideoClip?, AudioClip?, etc.

We could create a table like this:

  create table employee(FP FingerPrint, Name Char, Primary Key(FP));

(end quote)

It is just as easy to create a FingerPrintID (Int) as it is to invent "base" types willy-nilly. I don't really trust most developers to create good "solid" base types anyhow.

The ID allows you to put the actual fingerprint just about anywhere (as long as it knows what its ID is). It could be in the database, in file systems, etc. Complex or custom "types" reduces the ability to share information IMO.

If you want validation rules, then add validation rules.

Besides, "picture" and "fingerPrint" are not necessarily mutually exclusive. This turns into the whole "taxonomies are relative" debate thing again, and probably leads to a circle/ellipse kind of debate.

HAS-A is more adaptable. Types are IS-A, and that is a bad thing IMO (pun).


Critiques of entity sub-typing:

http://www.geocities.com/tablizer/change.htm#people (People taxonomies: a thought experiment)

http://www.geocities.com/tablizer/subtypes.htm


Try telling a mathematician there are no integers, real numbers, or imaginary numbers, and he will have difficulty proceeding with his work. A type tells a programmer that an item has a set of properties that will be consistent. Just as imaginary numbers are useful in mathematics although they do not exist in the real world, types are useful in our world although we find that they may not match reality. -- VhIndukumar

Concur. This is why we have "definitions" for anything - we need to agree on the fixed, permanent, everlasting properties of an entity before we can work with it. Just saying that "everything is an object with certain properties" tells me diddly-squat. Actually, it tells me that I ain't gonna get nuthin' done here. -- MartySchrader

But integers are a subset of real numbers, and real numbers and imaginary numbers are both subsets of complex numbers. (Somewhat more technically, the integers can be embedded into the rational numbers by identifying the integer n with the rational number n/1; the rational numbers can be embedded into the real numbers by identifying the rational number r with the associated DedekindCut, etc. The notions integer, rational number, real number and complex number describe different concepts, where higher order concepts are constructed from lower order concepts by the means of different notions of closure.)

"Integer" can be viewed as a property of a number instance rather than a "type". We can view these all as set memberships, no? Set theory is cleaner than type theory as far as I know. Or, at least interchangeable. Plus, what works for base types may not necessarily extrapolate well to complex entity types. A "type of person" is useful in a narrow context only.

Perhaps "types" may be merely a construct for human communication, and not necessarily a reflection of some universal taxonomy. In that case, set-based approaches may be just as valid if some humans agree with or like that approach.

"Sets" are open-ended, are they not? Then how does one differentiate between a "set" and "type", if both concepts describe a collection meeting certain criteria? Why are we bothering to make these distinctions?

I suppose this gets into the sticky definition of "type". A collection? What about operations? I thought types usually also involve operations.

If you live in a typeless world, you have no operations. If you have no operations, you have no computation. Conclusion: you need at least one type.

Well, if we have one generic type called "thing", then what use is it as a definition or conceptual tool? It is about as useful as saying, "everything is an object". There is then nothing to compare and contrast. Operations could "probe" to see what features are available, I would note.

OK, so we need at least two types: Object (everything is an Object) and Bottom (the result of undefined operations). Is that better?

NullConsideredHarmful -- Maybe so. But that isn't what Bottom is about. Null is a value that can take any type (except Bottom). Bottom is a type with no values.

You also seem to be talking about fundamental building blocks. I suppose we can call these building blocks "types", but that is not a very useful definition IMO. "Types" usually come into play in conversation when there is some sort of taxonomy being presented for the sake of discussion. IOW, a context-sensitive taxonomy.

A lattice of two types is of course inadequate for practical programming, but more than enough to debunk the notion that there are no types.
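
For what it's worth, that two-type lattice has a direct rendering in Python's type hints (a sketch only, not a claim that Python's system is this lattice): object plays the top type, and NoReturn (called Never in newer versions) plays Bottom, the type with no values.

  from typing import NoReturn

  def describe(x: object) -> str:    # top type: accepts every value
      return repr(x)

  def fail(msg: str) -> NoReturn:    # Bottom: a type with no values; never returns one
      raise RuntimeError(msg)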

It is a kind of TuringTarpit issue. I don't dispute that many, perhaps all, things can be viewed or defined using "types" (if you work hard enough at it). But I question the usefulness of it for most things. Then again, there are several questions involved:

 * Are types useful?
 * Do types exist?
 * Can everything be defined as types?
 * Does defining something as types preclude other EverythingIsa views?

I care mostly about the useful bit because the rest are probably merely endless LaynesLaw issues. -- top

Some of this can be answered objectively if one is careful. The question "are types useful?" seems the most subjective of the above, and you tend to feel they usually aren't, and by now you are aware that a large percentage of the world feels that they are. Yet has anything in any domain been proven to be useful to extreme skeptics? No example comes to mind. Usually the "proof" is considered finished if it's been done beyond reasonable doubt to a reasonable person -- but obviously you and your debating opponents disagree about "reasonable doubt" and "reasonable person". So skip that one until someone has a new idea about reasonableness that you'd agree to.

Next, "do types exist?" - depends on what you mean by "exist", which is a nontrivial issue. No abstractions have physical existence, although of course we label the physical world with abstractions. The more common meaning of "exist" for abstractions is the mathematical meaning, which is essentially "can it be formally proven to be consistent to assume", and in that sense, types have been proven to exist in mathematical systems. Most other meanings of "exist" are too informal and idiosyncratic to say much about.

"Can everything be defined as types" - it can be done, trivially. All of the issues here are pretty much identical to those with "exist", with the addition of the implied question "does doing so allow accurate prediction of the system in question?" -- the answer to which depends on how things were defined as types.

Finally, "Does defining something as types preclude other EverythingIsa's" - that's the easiest one. No, it does not preclude that. -- Doug

Your added question, "[if types are useful], are simple/base types more useful than complex/user-defined types?" - I think this is straightforward. If we look at languages which are in some sense typeless (e.g. by Cardelli's definition, which equates singly-typed and dynamically typed with typeless), like assembler, BCPL, B, and Lisp, the first thing users start to do is create their own types (such as structures/records etc) beyond the single type defined by the system. So this seems clear: user-defined types seem more useful than base types. -- Doug

This seems to echo the EverythingIsRelative view found in SoftwareGivesUsGodLikePowers. "Types" can be used to model lots of things and also produce the right answers. But, whether it is the ideal model is the sticking point. (moved, because it was after the "exists" paragraph, which wasn't fitting placement.)

That's true of all abstractions, not just types. But remember that EverythingIsRelative applies to itself as well: there are limitations to the principle, and it's not an absolute. I'm not making the old stale joke; I mean, that's really true.

But before even beginning to consider whether types are an "ideal model", I'd want to see a proposed list of alternatives (if there are no alternatives, then yes, something is ideal!); was there one that I overlooked already on this page?


There are types, but they aren't discrete. They are ContinuousTypes?. My two year old daughter can quickly categorize new plants that fall somewhere along the spectrum between tree and bush. She started creating names for types almost as soon as she could speak ("What's that?"). I assume her brain started creating ContinuousTypes? some time before that.

That sounds like attributes like plantVolume or perhaps treeNess and bushNess. Whether treeNess is counter to bushNess would probably take a biologist to answer. There might be big bushes and small trees that people will argue over unless there is some discrete genetic test or something. Also, what may be 90% useful for everyday speech is often not precise enough to use for the computerization of something.

My point is that we don't argue about these things because we can easily place them along multiple continuous type axes. I agree this isn't precise enough for current software development tools, but since it works so well with brain software I have to conclude our current tools demand precision they don't really need.

It appears to me that we have potentially multiple ways to classify things. From certain perspectives or issues, people will tend to settle on a mutual classification. A smog check company will classify cars as "smoggy" or "non-smoggy" (based on gov requirements). However, that is hardly the only way to divide up cars. Your date might instead divide cars into slick cool cars and cars of losers. (I suppose a smoggy car is less likely to be called "slick", but not all pollutants are visible to the eye or nose, I would note.) If there is an agreed-upon one-size-fits-all taxonomy on something, then formal typing seems to work reasonably well. But if not, then formal typing backfires over the longer run IMO. It is then no longer a UsefulLie. The more complex the thing, the more likely formal typing is to go wrong.

[[Isn't smoggy just a property of a car, and not a class? Wouldn't you define the car interface to specify that it has a gas pedal, a brake pedal, an engine object inside the car object, etc? The problem with using analogies like applying smog to programming, is that these analogies are often poor and often fail. I don't think classifying a car as smoggy is anywhere close to type systems or classes in programming. Smoggy cars are more like error checking. If the car is smoggy, process the car this way. If the function has a return value of smoggy error, then do this and that.]]

Still, one needs to be able to typify things by group. So, a Class A driver's license isn't the same as a Class C, which isn't the same as a Class M. That way you don't have unqualified people driving cars and delivery trucks and motorcycles just because they have some sort of license. If you can't type things by the facet you need to identify, then you can't do anything useful with them.

Bear in mind that just because we identify a particular person as having a Class A license doesn't preclude him from having a Class C or a Class M simultaneously. (There are a bunch of relational data pages, somebody fill in some links, please?) It just means that we are separating this guy from that guy based on the one having a Class C and the other one not. Therefore, the typing of LicenseClassCategory or whatever is valid in this context, and shouldn't be dismissed as useless just because we have requirements for a VisualAcuityCategory of the same subjects in the very next operation.

I view those as "sets", not as "types". Somebody belongs to set M or not. Set M may not be mutually exclusive from another set. A truck driver can also be licensed for motorcycles. Whether types can be built with sets or not is controversial and depends heavily on one's definition of "type".

I am curious. If a person has multiple "classes of licenses", do they receive a driver's license card for each class, or one card that may say something like "Classes: A,M,Q"? Keep in mind that government agencies are famous for duplication and archaic practices. I am not saying that this is necessarily an example, just that modeling what the government does is not necessarily the same as modeling what the government should do. To me, it makes more sense to just have one card per person. However, the different permissions may expire at different times, which may require a new card printed for every change in the class list. Whether that is better (less printing) than multiple cards or not would probably require statistical research.


Types (and type systems) are MODELS for the real world, little else. Like all other models, types and type systems are imperfect approximations of reality; whether it is approximating the set of integers with the set of integers in the range [-(2^31), 2^31); representing a human being with an Employee class in some business logic class hierarchy, or representing colors with a set of enums; types miss a lot of details. And with good reason; computers lack sufficient memory to store infinitely large numbers, there are some things which we have insufficient knowledge of to model accurately, and there are some things (many complex continuous processes) that simply cannot be modeled with discrete values changing at discrete moments in time.

Should we abandon the concept of type because it is imperfect? Of course not; type systems clearly have their place in software engineering.

Amen to that! Should we abandon types and just implement more unit tests? Seems like a step backwards!

On the other side of the coin, should more "flexible" systems like Lisp (which leave typing as an exercise for the programmer; to make as flexible or as rigid as he/she likes) be condemned? Of course not, as well. Such systems also have their place in computer science.

In an ideal type system, perhaps ThereAreNoTypes would be true, as every (or almost every) object in the universe is unique when you get right down to it. No two blades of grass in my lawn are the same. And types are most useful when they allow two or more (physical, not OO) objects or concepts to be held equivalent in some fashion. But if there are no types, then there really are n types (or n-k for some small k) where n is the number of objects in the system.

Types, like models, allow us to grok the details of things or ideas. Like models, they can be tailored to fit the application (is it sufficient to define a set of colors as enums; or should we use RGB triples?). As we discover more detail about a system, we likewise can expand the model. The unfortunate part is, of course, that existing code might break if we make too many assumptions about a simple model before replacing it with a complicated one.
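
The color example renders easily in Python (a sketch; the names are invented): the enum is the coarse model, the RGB triple the refined one, and code that assumed the coarse model may break when the model is expanded.

  from enum import Enum
  from typing import NamedTuple

  class Color(Enum):        # the coarse model: a closed set of named colors
      RED = 1
      GREEN = 2

  class RGB(NamedTuple):    # the refined model: a triple per color
      r: int
      g: int
      b: int

  red = RGB(255, 0, 0)      # more detail, but code matching on Color.RED must change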

In short, in the real world, there are no types. In the virtual world where everything is a model, there certainly are.

[[In the real world there are no types? So the type of car you buy which has certain properties, specifications, and methods (how fast it goes around corners) is not a type system? Of course there are types in the real world. If you just think everything in the real world is a bunch of electrons mashing at each other, how does that get you anywhere? Aren't humans a TYPE of animal? Doesn't a cat type differ from a dog type?]]

There are other ways to model structures and organizations besides types.

Definitely. And in many cases, types (however you define them) are a great way to do it. Especially the things that can be described precisely, such as the set of boolean values or the set of integers. We all can observe that 5, -3, 1000413412, and 0 are integers; 3.14, -2.2, and the color "red" are NOT integers.

As others have pointed out (WhatAreTypes), there are many things in RealLife that aren't typable (at least not without some controversy, ambiguity, and/or arbitrary declarations). Is a plant a tree or a shrub? If we in the real world cannot decide such things, then how can we model them in the computer? The best we can do is to approximate them.

Certainly, type systems (or at least those imposed by insufficiently flexible languages) don't always provide a good framework for modeling real world problems. But in many cases they do. Given the benefits accrued by languages with such typing systems (especially for problem domains that are well-studied), I see no reason to abandon the concept of type. (Conversely, I see no reason to impose type systems on all languages...)

This sounds like it may be leaning toward the age-old DynamicVersusStaticTyping? debate (such as SmallTalk vs Java).


Base-2 versus Base-10 "Types"

And with good reason; computers lack sufficient memory to store infinitely large numbers

There are number handling systems which use string-like variables/objects for different sized numbers, usually with an upper limit set somewhere. They tend to use base 10 instead of base 2 for calculations to reflect the way managers and customers conceive math and rounding.

CobolLanguage, for example.

Cobol generally requires pre-determined sizes, at least it did.

Many languages, it seems, have a BigNum or equivalent, which uses variable-size encodings to allow arbitrarily-large numbers to be stored. However, there is still an upper bound on the quantity that can be represented; namely the computer's memory. Even if I got 4Gb of memory and can use all of it for a single BigNum, that only gives me 2^(8*2^32) different possibilities... assuming a particularly efficient implementation of BigNum. (Only??)

So... when will we see MS Visual Cobol?

{If you are calculating pi to the zillionth decimal position, then you will need special software anyhow. The above precision is probably good enough for the vast majority of apps.}

RDBMS that have a "decimal" type don't really need an "integer" type. An integer is a decimal number with zero decimals. It is there for human convenience.

Related: FloatingPointCurrency


Types are useful in modeling abstractions. Of course, the untyped lambda calculus is enough to express an arbitrary computation. Similarly, all mathematical objects can be built as complex, untyped nested hierarchies of sets built upon the empty set as in Zermelo-Fraenkel set theory. However, one usually likes to think of mathematical or computational objects such as the natural numbers as abstractions independent of the accidental details of their construction. Indeed, there are an infinite number of ways of constructing "the" natural numbers in set theory or computationally - in fact we cannot even speak of *the* natural numbers until we have decided to safely hide or ignore the details of the construction. Types, whether they are enforced by the language (as in Java or Haskell) or in the programmer's head (as in Smalltalk or Lisp) or formalized in the language of category theory, abstract away these nonessential constructional details.
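
Two of those infinitely many constructions can be sketched in Python (toy code, not a formal development): von Neumann ordinals as nested frozensets and Peano-style numerals as nested tuples both serve as "three" once the construction details are hidden behind the same zero/successor interface.

  # Von Neumann construction: n is the set of all smaller naturals.
  vn_zero = frozenset()
  vn_succ = lambda n: n | {n}

  # Peano-style construction: n is n applications of a successor tag to zero.
  p_zero = ()
  p_succ = lambda n: ("S", n)

  def build(zero, succ, k):    # the abstraction: the number k, construction hidden
      n = zero
      for _ in range(k):
          n = succ(n)
      return n

  print(len(build(vn_zero, vn_succ, 3)))   # 3: the ordinal 3 has three elements
  print(build(p_zero, p_succ, 3))          # ('S', ('S', ('S', ())))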

Perhaps they arguably work for base building blocks like numbers and strings, but that does not necessarily mean they scale up to things like employees, customers, etc. If so, an example would be nice. Hopefully not EmployeeTypes again.


More on Numbers and Types

If ThereAreNoTypes then there wouldn't be anything to dispatch on, so obviously ThereAreTypes??. If ThereAreNoTypes then what the hell are 'string', 'int', 'double', 'decimal', etc.? Obviously ThereAreTypes??!

"Int" is just a number that HAS-A constraint of no decimals/fractions. In fact, all of those can be seen as constraints on "string". "Types" are a mental viewpoint, not an absolute concept. Sure, you can view everything as types, but you can also view everything as constraints. You can also view them through SetTheory. Integers and decimals are a subset of all real numbers. Integer is a subset of decimal (decimal with zero decimal places). Other potentially useful subsets are positive numbers, positive integers, etc. True, in many computer systems integers are limited to a range, but this is mostly an implementation issue.

This is just wrong; it has wandered into a mature area of mathematics called ModelTheory where we can talk about proven results, unlike the vague opinions that dominate the rest of this page.

Proven from a practical standpoint, or just that things *can* be represented as types? These are not the same thing. The issue is the usefulness of modeling things as types, not the mere possibility. The second is not in question. TuringTarpit

You are talking about modeling an Int as a String or a Set, whether you realized it or not. It is inaccurate to say that a model of a thing IS-A thing, and your HAS-A is also technically flawed: Ints consist of far more than just a constraint that they are not fractions. What has turned out to work mathematically is to make positive statements about entities. A complex number is an element of a closed commutative division algebra, for instance, and you can make similar positive statements about other kinds of numbers.

Similarly, DedekindCuts? were invented to attempt to rigorously define exactly what real numbers are, but alas, they turn out to be just another model of real numbers.

So thanks to the amazingly counter-intuitive Lowenheim-Skolem theorem, you can't say, "ah, now we're done, I've rigorously defined something that IS-A complex number, that specifies what I intuitively mean and nothing else" - because there are an infinite number of things that model any such rigorous definition, and they have additional properties you certainly did not intend, that your intuitive notion did not include, and it has been rigorously proven that there is no escape from this.

Actually, it is perfectly possible to define categorical formalizations, i.e. formal systems that have only one model up to isomorphism, for essentially all interesting structures in mathematics. The Loewenheim-Skolem theorem and its generalisations only apply to formal systems with axioms or axiom schema that generate countably many first-order propositions.

OTOH, ZFC (as an extension of first-order logic) is such a system. Skolem certainly viewed this as a serious problem - see "Some remarks on axiomatized set theory" (1922), in ISBN 0674324498. It's quite odd, therefore, that it is this version of axiomatic set theory, to which Skolem's critique applies in full force, that is universally taught in mathematics degree courses. Because of this, infinitesimals, for instance, have reappeared in ComplexAnalysis, after being banished for a century; the Lowenheim-Skolem theorem means that any model of the real numbers or complex numbers that you try to create will also include a nice model of infinitesimals, which cannot be banished by improving the model (although it takes something like a Comprehension axiom to make them "visible" in a certain precise sense, rather than lurking unseen; this is what NonstandardAnalysis? is about).

In a rigorous sense, all you can do is create formal models for intuitions, and just live with the limitation that those models have isomorphisms, but not identities, with intuitions. IS-A does not work, rigorously.

Perhaps this doesn't matter to most people, who don't care about mathematical rigour...hey, two's complement arithmetic has a most-negative integer for which -A = A, but who cares? Still, if we need to make precise statements, the math has already developed to the point where we are able to.

Well, I admit that I am a bit overwhelmed by your vocabulary here. Anyhow, how does this relate to my statement that there are multiple ways to model "integer" in a practical sense? Are you saying that a constraint-based viewpoint is not mathematically valid, or not useful? I have worked with languages that had no "integer" type. If you wanted to "enforce" integers, you defined them as a decimal with zero decimal places. A decimal with 2 decimal places is no more "special" than a decimal with zero places to it. It is that simple. Maybe there is a "constraint theory" that says the same thing as your referenced theory. I suspect constraints are a superset, or perhaps alternative way to look at or implement/define "types". Also, theory that works well for rigid things like numbers and geometry may not apply to things with dynamic, fuzzy, relative, or subjective boundaries such as "people types", "customer types", "product types", etc. [Consolidate this with ExtrapolatingMathToHumanConcepts perhaps.] God does not change the laws of geometry very often the last time I checked. However, God does not define many of the domain nouns we use in applications. Domain nouns seem to be the biggest area of contention with regard to "subtypes". I would like to see theory successfully solve such issues. I doubt all relevant attributes fall nicely into a subtype tree. Yes, a tree is *possible*, but not sufficiently useful.


When looking at "types", sometimes one needs to distinquish between "root types" and "sub-types". Sub-types generally involve a hierarchy of some sort with a "generic" or "standard" or "template" one at top. Variations on that top-level theme become the sub-types. For example, "person" may be the root type. But, this can be sub-typed into "customers", "employees", etc. (This is just an illustrative example. In practice hierarchical taxonomies of people don't sit very well. See ContactAndAddressModels.) Complaints about modeling with root types and complaints about sub-types might be at different levels. Sometimes root-level types are called "entities".

I consider sub-types to be properties. The word "properties" to me is just an easier way to visualize a "sub-type". We know of properties like "that car is red" or "that widget is red". Its property is red. It's harder to say "its sub-type is red" because "property" is something we know of in the real world - but we don't speak of sub-types in the real world as much. I suppose this idea of properties is arguing for more real-world-like language design - I think it has its place. With database-intense applications, and things like file systems, it isn't as much about properties or objects, since databases are about data storage and not about widgets and objects. I think a "property of a type" or "property of an object" is easier terminology to remember for OO applications than "root-types" and "sub-types" and "sub-sub-types" are. In the case of widgets on a screen, they are usually similar to real-world things in some ways, i.e. boxes, buttons, so I think this is a good time to use properties. Maybe regarding more mathematical or scientific programs, properties and easier-to-speak object terminology might not need to be used as often, or have as much of an effect. I think the "type" of car is a 2-door and the "property" of the car is red. The same goes for a widget. Then again, it could have a sub-type, such as a 2-door with hatchback or a 2-door with trunk. Hmm...


A type is an algebra: a set of possible values and a set of applicable operations. In practice we deal with heterogeneous algebras: many sets of possible values, and operations applicable to some values from different sets at once. Even in a prototype-based system there are certainly operations which are only applicable to a subset of all objects. Therefore there damn well are types, at least in the head of the programmer, who will avoid invoking methods on objects where they don't make sense.
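
Read literally, that definition is easy to sketch in Python (a toy; the boolean carrier set is chosen arbitrarily): the type is the pair of a carrier set of values and the operations defined over it.

  # A "type" as an algebra: a carrier set plus operations closed over it.
  BOOL = {True, False}

  def bool_and(a, b):
      assert a in BOOL and b in BOOL    # the operations apply only to the carrier set
      return a and b

  print(bool_and(True, False))   # False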

Another debate is whether these types need an explicit expression in code. I think so, because I like it if the compiler reports a type mismatch. Any type mismatch (assuming a powerful type system, like Hindley-Milner) is an error anyway: as said above, a type mismatch is the attempt to invoke a nonsensical method.

But that is a rather weak definition of "types". It is basically restated use of set theory: which verbs are allowed on which nouns? Are types nothing but a many-to-many association table? Also, if this "list" is only in the programmer's mind, then it is hardly a concrete concept. It is basically a restatement of, "Gee, some operations are not appropriate in some cases", which is an obvious truism that you cannot do much with beyond recognize it as a fact of life. Anything you can view as a "type", I can probably rework it to view it as a constraint (see above). Sounds like just another TuringComplete-interchangeable-like definition battle brewing.

In languages like SmallTalk one can try to make an object dynamically handle non-answered messages by trying or suggesting other approaches to the user/caller. It essentially then can be viewed as a search algorithm also. One can compare it to searching for a loose brick or bar in a jail cell; a search for which jail cell object answers "loose". We could call this a "typing probe" perhaps. -- top
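
Python's __getattr__ hook gives a rough analogue of that SmallTalk DoesNotUnderstand game (a sketch; the suggestion logic is invented): unanswered messages can be intercepted and used to probe for, or suggest, what the object does answer.

  class Probing:
      def loose(self):
          return "found the loose brick"

      def __getattr__(self, message):    # called only for unanswered messages
          def suggest(*args, **kwargs):
              options = [m for m in dir(self) if not m.startswith("_")]
              return f"no {message!r} here; try one of {options}"
          return suggest

  cell = Probing()
  print(cell.loose())     # answered directly
  print(cell.jiggle())    # intercepted: suggests alternatives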

For lack of a more precise one I'll stick with the "weak" definition. It's at the core of every more advanced type system. You can add polymorphism, algebraic types, dependent types or what not; types still describe which operations apply to which data.

Even in SmallTalk, where you play games with DoesNotUnderstand you have objects, for which some methods are just nonsensical. Calling them would be a mistake. Do we agree that types in this sense do exist?

Well, that is kind of an open-ended truism. Some operators are not going to like some operands. That is not news to anybody. At a young age you find out you can't play Game-Boy cartridges in the VCR (unless polymorphism re-defines the operation as "jamming and crushing"). In static systems this is generally determined at compile time and in dynamic systems it is determined at run time. The dynamic version does not differ from validation that I can see. Is typing just another way to say "validation"?

So ThereAreTypes? after all.

If the definition is wide enough, such that it includes things such as validation.

Do you need to validate whether or not a certain type of car is different than another type of car? It depends on the situation. Validation is a separate activity from declaring the type. Declaring what type of car it is, and how fast it goes around corners, doesn't necessarily include validation. You can declare that a certain car has certain properties before actually implementing the car, and before actually comparing it to other cars. There are different types of fruits that have different properties. When you validate that an orange is different than an apple, the validation is not a type system; the validation is a process that you use to verify the types. It sounds like someone doesn't understand type systems, so they like to push types under the rug and call it something else instead: types are just validation, or types are just flags, or types are just colors, or types are just pylons, or types are just labels, or types are just stickers, or types are just road signs. It is childish.

What exactly is "childish"? They are interchangeable such that "types" can be modeled as validation and/or attributes, and vice verse. There are trade-offs to doing it each way. One must use their skills and domain knowledge to make the best choice. In my opinion, it's utilitarian exercise of weighing the trade-offs based on estimated ChangePatterns and also partly in fitting the customer/user's WetWare view of the domain. I see it merely as an economic calculation, not an endeavor to classify the universe "correctly" (and each domain may have a different perspective on the same given object).

For example, a design decision may be whether to create two separate entities and/or types "cars" and "trucks", or just have a "vehicle" object/entity with a car/truck attribute. The decision should be based on how likely the laws are to change, the impact of shuffling schemas or class designs around if the criteria and attributes change or hop entities, how the customer/user views them, etc. It's not a matter of which approach is "right" in an absolute sense, it's a matter of making the software flexible (requiring fewer resources to change), yet easy for the user to use and learn. My job is not to care what the universe thinks, but to help the customer be more profitable. That's the difference between professionalism and MentalMasturbation. Probability trees of the sort found in DecisionMathAndYagni are a very useful tool to start with. -t
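
The two candidate designs weighed above look like this in Python (both hypothetical sketches): separate subtypes versus one entity with a discriminating attribute; which is better depends on the expected ChangePatterns, not on which is "correct" in the absolute.

  # Design 1: separate subtypes.
  class Vehicle: ...
  class Car(Vehicle): ...
  class Truck(Vehicle): ...

  # Design 2: one entity with a discriminating attribute.
  class Vehicle2:
      def __init__(self, kind: str):
          self.kind = kind    # "car" or "truck"; cheap to change, weakly checked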


Returning to Street Definition

The intersection of operators and operands (above) is kind of too low-level a definition to be very useful. Perhaps we should revisit the street definition: a taxonomy, often situational rather than a global taxonomy. "This type of paper...", "Those type of people...", "I don't like mechanical mouses, I like the optical type...", etc.

Well then, what is the street definition?

A situational taxonomy. And, I believe that static typing leads to complexity scaling problems because it depends on essentially global taxonomies to work effectively, which is difficult in practice.

[None of the types you identify are "situational" taxonomies; a particular type of paper - say, glossy - won't become glossy to satisfy a situation that demands glossy paper, nor will it cease to be glossy paper when in other situations. Optical mice and mechanical mice are subtypes of mouse, globally, and a given mouse instance will always be optical or always be mechanical, rather than changing dependent on situation. "those type of people" is perhaps questionable, as often this phrasing means "people with behaviour or property X", and people's behaviour is typically not constant; however, typically it is trends in behaviour being identified rather than instantaneous actions, which can easily be established globally rather than situationally. -DavidMcLean?]

If you leave glossy paper on your sunny dashboard too long, it could become non-glossy, and is thus not a permanent trait. But even if it were, is "permanence" the key here? Mutable traits are "traits" and non-mutable traits are "types"? I'm not sure that fits the common notions of "type". Plus, it makes the common term "dynamic type" an oxymoron.

[Under the strictest definitions of "type" (e.g., a type is an invariant set of values with zero or more associated operators), mutability is simply not permitted, by virtue of the fact that strictly a "value" is an immutable instance of a type. However, in practice there is no fundamental problem with associating types with mutable aspects of an object, provided that the aspects in question are sufficiently invariant that the type won't "vanish" out from under you - PredicateClasses are an approach to typing objects based on their potentially mutable properties. Critically, however, whether or not an object is an instance of a type is determined solely by its properties and never by the situation: Glossy paper is glossy even if you don't need it to be, and it only stops being glossy if you change it. Therefore, the taxonomy itself is global. There is no oxymoron in "dynamic type", which typically indicates that variables and parameters demand no particular type of their contents and are free to contain variously-typed values over their lifetime, values' types are determined and enforced at runtime rather than during a compilation step, and that failure to provide an appropriately-typed value will result in an exception of some kind. (Although of course things can vary, obviously.) Although eternal immutability is not a prerequisite of typing, values' types remain immutable under usual definitions of DynamicTyping - a particular value is always an integer or always an array, even though both might end up being contained in the same variable. (Note that viewing dynamic type systems in terms of DuckTyping is often more useful; for example, one might suggest a method's parameter must be of type "the type of object that implements method foo(int): int". Again, this condition relies only on the object's properties (it either does have an appropriate method foo(), or it does not) rather than on the situations in which the object is used.) -DavidMcLean?]

Such "values" don't even have to exist. But this gets into the long and nasty ValueExistenceProof war. One CAN model typical program "actions" with a notion of immutable values, but it's one of many possible observation-matching models. -t

[Values "don't even have to exist"? Huh? Values clearly exist, immutable and mutable (even if the latter is something of an abuse of terminology - actually, it'd be better to describe an object as an immutable value that contains mutable variables); heck, even the most impure of languages have immutable value types such as integers. ValueObjects are commonplace, and then there are pure languages which solely provide immutable values. Still, the existence of values is inconsequential to the point, that being that a thing's type or types are determined by that thing's properties, rather than by situational factors. -DavidMcLean?]

Values may or may not exist, but an exact definition remains elusive unless one promotes their personal model to the center of the universe.

How would, say, a simple calculator application work without values?

That depends on how one defines "values" and how one models the calculator. And I generally don't claim that "values don't exist", only that the term varies widely in perception, usage, and application per individual and/or situation. The topic ValueExistenceProof is perhaps poorly named.

What do you mean by "I generally don't claim that 'values don't exist' ..." Above, you wrote "[s]uch 'values' don't even have to exist." Do you mean that under some particular definition of values, they don't have to exist? If so, what definition of values are you referring to? The description that DavidMcLean? gave, above, is reasonable, recognised, and generally accepted.

I don't see where David clearly defined "value". And we are getting off-topic. If you want start another topic on "values", be my guest. The issue of "value existence" originally came up when I realized that a typical dynamic programming language does not have to be defined nor described in terms of "values". One can describe the language without any mention of "value" without significant loss of comprehension for the reader. "Values" are not a necessary part of typical programming languages (at least dynamic ones). Specific models may choose to use or define something called "value", but that's not a mandatory implementation and/or modeling requirement. -t

DavidMcLean? wrote that "a type is an invariant set of values with zero or more associated operators", which is a popular definition of "type" that implies certain definitions of "values" and "operators". Whilst you can (awkwardly) "describe the language without any mention of 'value'", that doesn't mean values don't exist. Values most certainly are a necessary part of typical programming languages, including dynamic ones. If you've left out mention of "values", it merely means you've avoided speaking about them. An expression like "a + 3" evaluates to something, and the "+" operator adds (or concatenates, or does whatever the language defines "+" to be) two somethings. What are those somethings? You can avoid calling them "values" by calling them "results" or "operands" or whatever, but they're clearly those "values" mentioned in "a type is an invariant set of values with zero or more associated operators".

Anything in the universe can be described that way, which makes it a tautology. It's too open-ended by itself to be useful. And how does "clearly" follow? Where is your Clear-O-Meter? And dynamic languages can have weak associations between "values" and operators. And "associated" is also open-ended.

[Do you mean to claim that anything in the universe can be described as "an invariant set of values with zero or more associated operators"? What makes you think that could possibly be accurate? There are many things that are not an invariant set of values with zero or more associated operators; for one trivial example, the number 5 is not a set and therefore cannot be an invariant set of values. Whether an association is "weak" is irrelevant, provided it's present at all, and association has been specified all over this wiki: "X is associated with Y" means that "given X, we can answer questions about Y". In the case of types, it means that given a type, we can answer questions about its operators, such as "what operators will accept values of this type?". -DavidMcLean?]

5 can be viewed as a set of one. Nothing stops one from doing such except maybe modeling convenience, or lack of. And "operators" can be open-ended. For example, take a language that blurs the difference between a user-defined "operator" and built-in ones. The total set of operators can be open-ended. Does this mean the "types" change every time another function is added?

The only (or closest) definition of a language is its IoProfile: symbols in, symbols out. Any classification of these symbols, or models about what happens "in between", is arbitrary (within the ability to transform-to/predict the IoProfile). -t

["5 can be viewed as a set of one" in a sense - you can define a type as "the set of numbers that are five" and the operators associated with that type, for example, although since such a type has only one member it's equivalent to the "unit" type - but the number itself is not a set. As a type is defined as a set of values plus associated operators, adding a new operator associated with a type, which is possible in any language, does change that type. Is there a problem with that? -DavidMcLean?]

It's still too open ended even if we include quibbling about singles. You claim it changes the type, but most would consider that silly. Just because I create a new function in an app that takes an integer as a parameter does not mean I am "changing the type 'integer'" to most people.

[Quibbling about singles? Do you mean demonstrating that not everything in the universe can be correctly viewed as types, by counterexample? Note that "singles" are not the only counterexample; the array [4, 2, 3, 1], for example, is also not a set and therefore cannot be a type. As for changing a type being "silly", sure, maybe it is. Many things in computer science and mathematics are silly when considered purely from a commonsense point of view. Again, is there a problem with that? -DavidMcLean?]

Arrays, including ordering, can be represented as sets. See above regarding "silly".

[Sure, anything at all can be represented as sets, but that doesn't imply that everything is sets, for much the same reasons TuringEquivalency doesn't imply that function definitions and "for" loops are the same thing. Complaining that aspects of a definition are silly from a practical or commonsense perspective doesn't make those aspects any less accurate, you'll find; if being silly made something false, mathematics simply wouldn't have imaginary numbers, differently-sized infinities, the Banach-Tarski theorem, and so on. -DavidMcLean?]

What's the exact difference between "is" and "represented as"? And we are not necessarily limited to definitions from the field of mathematics.

["X is Y" implies equality (or at least equivalency) and is expected to be commutative. "X represented as Y" implies only that Y may be used to encode X. An array representation as a carefully-constructed set is no more an array than the string "123" is the number 123. Since an array, as you pointed out, must have certain properties (ordering, duplicates being permitted) that a set does not, claiming an equivalency between the two is misleading, despite it being absolutely possible to construct a set that represents an ordered collection. -DavidMcLean?]

Almost all symbol sets require a translation layer/step before they "are" something to our mind or machine. I don't know of a way to formulate your statement into a test for is-ness without making rather arbitrary assumptions about the translation layer/step. I invite you to produce an objective and "real world" meaningful test for "is an array".

[Why would you need to make any assumptions about the symbol-string translation layer? Look at most any language reference. Languages define a literal syntax for arrays, which usually looks something like [1, 2, 3] and denotes an array value with the same structure and content. Just as "5" is not the number five, "[1, 2, 3]" is not an array [1, 2, 3], but in both cases the string representation denotes the appropriate value unambiguously and in a well-defined fashion. If you want to test whether some value is an array, examining its properties as alluded to above is one approach: Is the value a collection of other values? Does it preserve order? Are duplicates permitted? If all three of those hold true, you definitively have an ordered collection rather than a set, even if that ordered collection is implemented using a set, since that would be of course an implementation detail. Another approach is, again, to look at the language reference: Under what circumstances does the documentation say arrays are created? If those circumstances apply, then you have an array. Either way is objective and meaningful. -DavidMcLean?]

Those are often called literals, not necessarily values. A set-based construct can have all the properties of an array (an ordering, duplicates permitted, etc.), so that alone is not a distinguishing factor. And the documentation calling something an "array" does not by itself preclude it from being something else also. Language designers could have called it a "Mipcrot". And arguably, "array" is an implementation detail.

[The sequence of characters "[1, 2, 3]" is a literal that denotes the array [1, 2, 3]. The array so denoted is not the same thing as the sequence of characters. As I just acknowledged, a set-based construct can have the properties of an array, because you can construct an ordered sequence in terms of sets, but again that doesn't imply that an array is a set; you can't insist an array is an appropriately-constructed set because that approach permits the construction of "malformed" arrays, with elements that don't have their order or their multiplicity appropriately encoded. It's irrelevant that language designers could have used a different word for the concept of an array - heck, language designers have done that, as in PythonLanguage which calls its fundamental ordered sequence type the "list" instead - since a "mipcrot" that functions as an ordered collection of values and permits duplicates is equivalent to an array. "array" would be an implementation detail if used in a context where it implies implementation, such as in C where arrays are contiguous in memory and must be distinguished from other ordered sequences such as linked lists, but the term as applied to typical imperative languages higher-level than C does not imply a memory layout or demand any particular implementation: It just means an ordered collection of values that may contain duplicates, regardless of how those properties are persisted by the system. -DavidMcLean?]

That seems like a classification system of "structures" (systems/objects/machines/tools/widgets which handle data). If the structure "system" allows X but does not allow Y, then we can classify it as a Z (map, array, list, bag, stack, queue, etc.). But being a Z does not preclude it from being an R or a Q. I still see no CLEAR-CUT rules for distinguishing between "is a" and "equivalent to". And the boundaries/scope one uses to analyze may also affect the classification. Which parts or behaviors do we count as one unit (structure/object/system, etc.), and which do we exclude? Strictly speaking, a "stack" doesn't allow one to peek at "bottom" data. But technically it's possible to hack into RAM and see the bottom data. Does its is-ness as a "stack" disappear as soon as we do a RAM dump? That's like quantum physics, where the observer changes the nature of what's being observed. Classifications are in the head, UsefulLies for communicating, but TheMapIsNotTheTerritory. Some of you seem to want to turn it into a canonized religion when it's really just nebulous or semi-arbitrary impressions based on historical examples and habits. -t

[What you refer to is a classification system of types, including collection types - it's understandable that they might appear to be "structures". You are correct that being a Z does not preclude a given value from being an R or a Q. A given value is only prevented from being an R (say, a set) if it fails to meet one of the properties necessary to be an R (say, if it preserves order). There is no conceptual requirement that all values have a single canonical type; this is merely a property of most popular imperative programming languages, for reasons of implementation convenience, performance efficiency, and ease of handling type-based dispatch (which could be significantly harder to understand and use effectively were arbitrarily many types associated with each value). In existing languages, values can be both Zs and Rs through subtyping. There is no need for a clear-cut rule to distinguish the phrases "is" and "is equivalent to", as I haven't predicated any argument on the distinction between those phrases. I believe you brought "is/equivalent to" up as as an issue because I described your "mipcrot" language feature as "equivalent to an array"; in this case, it would be equally accurate to say a mipcrot as described is an array, by definition (as an ordered sequence of values which permits duplicates). Now, a stack does typically allow you only to access the topmost element. However, you can indeed peek at all the contents of that stack through a RAM dump - you can essentially violate all abstractions through direct memory access - but TheMapIsNotTheTerritory: Values exist independently of their representations in memory, and sneaking a direct peek at the representation does not change the value in any way. Accessing a value through its exposed interface will, excepting particularly poor or unsafe designs, never permit one to circumvent the established invariants and restrictions of its type. (An example of an "unsafe design" for a stack type is the hardware-provided stack on the 8051 microprocessor: It has no way to detect stack underflow, which means continuing to pop values from it will begin to produce values you never pushed onto it, from elsewhere in memory. I hesitate to lump such designs under "poor", since the lack of bounds checking is a necessary optimisation for efficiency reasons, but it does mean this particular stack type does not provide the typical guarantees a stack should.) -DavidMcLean?]

Re: "Values exist independently of their representations in memory" - Please elaborate, and specify how one verifies this claim.

[Very well, consider the following thought experiment. Suppose you represented the number two in various ways: write the numeral 2 on a piece of paper, take a clipping from a dictionary or other book that mentions the word "two", scribble "S(S(Z))" on a whiteboard, show the binary sequence 10 on a couple of LEDs, whatever. Having done this, try to change that value. You might turn on another LED, making the binary sequence "110", or you might erase one of the "S()" applications, or whatever else. Observe that doing this has not changed the number two itself: Two has not suddenly become equal to six, nor to one. Arithmetic has not ceased to operate as intended. This indicates that the number two has some existence independent of your representations - changing a representation has simply given you a representation of a different value, not changed the original value. Next, destroy all the representations you've created, then observe that again arithmetic continues to work correctly. The number two was not destroyed along with its representations; this indicates that the value exists independent of your representations and indeed that it continues to exist even without your representations of it. Does that help to clarify? -DavidMcLean?]
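The same thought experiment can be run, loosely, in code. A small sketch in plain Python (nothing assumed beyond the standard language): several representations of two, a mutated representation that merely denotes a different value, and arithmetic carrying on regardless.

    reps = ["2", "10", "S(S(Z))"]   # decimal, binary, and successor notation
    assert int(reps[0]) == int(reps[1], 2) == reps[2].count("S") == 2
    mutated = "1" + reps[1]         # "110" - a representation of a different value
    assert int(mutated, 2) == 6     # two itself is untouched
    del reps, mutated               # destroy the representations...
    assert 1 + 1 == 2               # ...and two still behaves as two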

You are talking about a concept in the mind, which by definition makes it subjective. And how one thinks about "two" may vary.

[Re: "… a concept in the mind, which by definition makes it subjective" - Please elaborate, and specify how one verifies this claim. -DavidMcLean?]

I think it's pretty self-evident. I don't know how to state it more clearly in English. Numbers don't actually exist as physical things. You cannot put a "7" in a test tube, only symbols representing "7" (to readers). Numbers are an abstraction; a concept in the mind. Math does not exist in the physical world.

[That part's perfectly clear to me. Please clarify as to how being in a mind "by definition" makes a concept subjective, even when it's a well-established and rigorously-defined concept like arithmetic. -DavidMcLean?]

Whether "math objectively exists" is a great philosophical question. To me, math is a case where two or more parties agree on rules for manipulating symbols. But nobody is forcing anybody to agree (except maybe in the "1984" reeducation camps). If those two or more parties agree on the symbol rules, then they can get further before they disagree on something. Whether those symbols and the manipulation rules are "real" or not is another matter. Agreement does not by itself create reality, just less friction.

It's somewhat comparable to the global warming "debate". If a group of people agree that the data is reasonably accurate, then they can go further to discuss the meaning of the data (patterns and causes). But if you don't trust the data collectors or presenters because they were allegedly biased by money sources, like the right-wing often does, then the impasse is at the base of the project. Most agree that integers are useful, or at least a UsefulLie, and using that agreement we go further. (Boundary cases may arise, such as when an accountant questions whether a chipped brick should be counted as "one brick".)

I'm sure we can all agree that 2 is a value, and computers often have to deal with 2s. There are other numbers, too. They're all values. (Of course, values can be more than just numbers, but we'll leave that for now.)

Well, okay, I agree, but perhaps merely to communicate with other humans in order to get a paycheck. But otherwise, one does not have to accept that numbers objectively exist outside of human fantasies.

[Perhaps so, but be cautious not to equate "objectively exist outside of human fantasies" with "objectively exist at all". Arithmetic and mathematics might only truly exist in minds - at least until we can store and track infinitely large numbers in computers, and things like that - but mathematics is definitely objective. Colloquially, you might think of objectivity/subjectivity as determining whether, given a particular statement, the truth of that statement is dependent on who's reading it. For instance, "1 + 1 = 2" is true regardless of the reader's opinions on numbers and addition, just as "1 + 1 = 3" is false. By contrast, a statement like "Heathers deserved way more than two-out-of-five stars" may be true or false depending on the reader's opinion of the work, and neither option is strictly incorrect. (Interestingly enough, a statement like "I think Heathers deserved way more than two-out-of-five stars" is objectively true, since I do think that, whether or not the reader agrees with me.) Since it is clearly possible to be wrong when you make a mathematical statement (for instance, if you say "1 + 1 = 3"), arithmetic, and mathematics in general, are objective. -DavidMcLean?]

The math only "works" because most agree on the symbols and symbol-processing rules. If Heathers' star-rating symbol-processing system were agreed on by all parties, we'd get similar consistency.

[Yes, of course. Systems like arithmetic work because most know the objective rules that must be followed to operate the system correctly. Is there a problem with that? -DavidMcLean?]

What exactly is an "objective rule"? Cannot such "objective rules" be applied to Heathers' ratings also?

[The "symbol processing rules" to which you referred a moment ago are an example of objective rules. They describe valid transformations from one symbol string (or, perhaps more correctly, from one expression, denoted by a symbol string) to another; whether a rule can be applied and what results from its application are both determined only by the rule's definition and not by the opinions or feelings of the mathematician applying it. In addition, the truth of statements regarding these rules is also objective (for example, "'1 + 2' is an equivalent expression to '2 + 1'" is an objectively true statement). Another way to look at it is to note that mathematics admits to the possibility of a statement simply being wrong (such as by claiming "1 + 2 = 3" but that "2 + 1 = 8"), which indicates the rules it's based on are necessarily objective rather than subjective. (Remember, subjectivity only comes into it if truth is dependent on the reader's opinions, preferences, feelings, and so on. A statement whose truth has no dependency on what the reader thinks of it is objective; any statement that can actually be wrong must necessarily be objective, because anything subjective could be interpreted as true or false depending on who's looking at it.) As for your second question: Yes, you could absolutely formalise a set of objective rules for assessing movies, television programmes, musicals, etc., and assigning a particular star rating. Given such rules, you could determine a star rating purely objectively based on the system thus constructed, simply by comparing the film's properties to the ruleset. Such a system wouldn't have wide utility, however, because the choice of rules for the system would be a subjective judgement; "an extra star is earned if there's at least one car chase" is an objective rule and one that might be useful if you really like car chases in your movies, for example, but it's not necessarily going to help people with no interest in car chases. You'd have a completely objective system, but you'd have a completely objective system for determining "what star rating Top believes movies deserve" rather than "what star rating movies deserve". -DavidMcLean?]

You seem to be conflating two issues: the agreement of symbol transformation rules, and the accuracy and/or utility of our symbol model in terms of doing "work" or predicting real-world events. We hope they are related, but they don't have to be. Math has proven to have a lot of prediction value, but that doesn't necessarily make the rules "true" in the absolute sense. As an analogy, it's possible to make an epicycle model of the solar system that has relatively accurate prediction value, but that does not mean the epicycles are real "things". Likewise, prediction ability of number systems does not prove numbers objectively "exist". Perhaps we could say its prediction value is "objective" to some extent. Similarly, a movie analyzer symbol system may have some objective prediction ability in terms of whether a movie is a hit or not.

[You seem to be conflating objectivity with accuracy and/or utility. There is no requirement that a system or property have prediction value to be objective. Heck, there's not even a requirement that you can actually prove a given statement's truth or falsehood for that statement to be objective: "Program X will halt" is objective, for example, but it's been proven you can't determine whether it's true for an arbitrary program X. Mathematics is objective since its rules are not dependent on the feelings, opinions, etc., of the person applying the rules; this is the case regardless of whether those rules are applicable to real-world situations in any way. Of course, mathematics is mostly useful because it has prediction value in the real world, but that's orthogonal to whether it's objective. -DavidMcLean?]

How does one tell if the base rules are dependent on feelings or not?

[Trivially. Does addition stop being commutative if you feel it shouldn't be? Does it stop being associative if associativity of operators offends you personally? There is no mathematical rule that says "x + 2y is equal to 2x + y if you're in a bad mood and you don't want to waste more time on this stupid maths problem". Mathematical rules don't change based on your feelings. They're objective. -DavidMcLean?]

Yes, that's what I'm asking. We accept commutativity as a base rule because?

[We define addition as commutative because it corresponds to the real-world behaviour of "adding things" - two apples plus four apples is the same as four apples plus two apples. That's one of the things that makes mathematics a useful system; however, as noted, the correspondence between the mathematical model and real-world situations and the prediction ability arising thereof remains orthogonal to the objectivity of the system. -DavidMcLean?]
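For what it's worth, that correspondence can be exhibited mechanically. A Peano-style sketch in Python (an illustration, not a proof): addition is defined by recursion on successors, and commutativity can then be checked without consulting anyone's feelings.

    Z = "Z"                                     # zero
    def S(n): return ("S", n)                   # successor
    def add(a, b):
        return b if a == Z else S(add(a[1], b))
    def to_int(n):
        return 0 if n == Z else 1 + to_int(n[1])
    two, three = S(S(Z)), S(S(S(Z)))
    assert to_int(add(two, three)) == to_int(add(three, two)) == 5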

So the rules are created or amended based on observations. Same with a formula for successful movies (accuracy percentage aside).

[Sure, why not? Mathematics doesn't have a monopoly on being an objective system, so there's no problem with what you suggest there. -DavidMcLean?]

My point is that one does not have to accept that numbers exist. Perhaps one can objectively say that Model X (arithmetic, etc.) has prediction ability, but that does not by itself mean that the parts of X objectively exist, just as epicycles (probably) don't exist even though we can predict planet movement using them. TheMapIsNotTheTerritory. -t

[Your point doesn't follow from your argument. Numbers exist, along with an assortment of rules and properties to which they adhere, within mathematics. Are you conflating existence with physical existence? -DavidMcLean?]

Is there any other type of existence? We can consider arithmetic a virtual world and say stuff "exists in that virtual world", but that's relative and in the mind. Mind A is not forced to use the same virtual world as mind B, making it a subjective choice. Arithmetic may be accepted via a mutual agreement among two or more parties, but accepting the agreement is a subjective choice. The chipped brick example is a real-world situation where one may not accept application of integers to a certain physical thing, for example.

[Sure, except that being in minds doesn't make it subjective. It doesn't work that way. Again, objectivity is determined only by whether the truth of a statement depends on the feelings, opinions, etc., of the reader. Does "1 + 1 = 2" stop being true if you feel summing one and one should give you eight? Does "1 + 1 = 4" start being true if you, say, think four deserves better representation in maths? -DavidMcLean?]

But "being true" is in the head. Or perhaps we can say it's "true within symbol & rule system X". It can be "objectively wrong" with respect to a rule set, but that's not the same as being objectively wrong in general. It's possible "1 + 1 = 3" for retailers of collector items. The toy horse may be worth $10 and the toy cart may be worth $10, but if the collector vendor puts the horse and cart together, they may be worth $30 instead of $20 because buyers appreciate complete sets over parts.

{It is objectively wrong in general. If you change the rules, you are talking about something else (even if you use the same symbols). What happens with that "something else" doesn't make the original statement right.}

In general? I'm not sure what you mean. If somebody takes math M and makes the fork math M' out of it, then the rules depend on which math is the context. "Objective with respect to" still applies. Math M is a UsefulLie and math M' is also a UsefulLie. Imaginary numbers are such a fork, I believe: an alternative math system that borrows from the traditional math and has prediction ability in certain aspects of electricity. (And I don't give the original M any special ranking, other than perhaps being the most popular and most used. But that doesn't change its objectivity value.)

{I meant that there aren't any exceptions to 1 + 1 = 2. If someone creates a "math M'" out of "math M", "math M'" is something different from "math M". What happens in "math M'" doesn't affect what happens in "math M". In "math M", 1 + 1 = 2. Furthermore, imaginary numbers are not such a fork. It's not an alternative math system, it's part of the mainstream one.}

What do you mean by "exceptions"? In the real world or in a virtual "math world"?

{Mathematics isn't about worlds, virtual or otherwise.}

Oh really.

{Really.}


5 can be viewed as a set of one.

While not really relevant to the rest of your argument, this isn't really true. I've seen a number of attempts to create a system that makes {x} = x. Invariably the systems are inconsistent. (Note I'm not saying it can't be done, but it's a lot harder than it would appear.)


There are types. A type specifies the behaviour of the instances of some set of things. When we describe a person as a solicitor, we are specifying the (professional) behaviour of that person; a carpenter would have different behaviour. If we had no types, we would have no behaviour, just existence. Types are the natural tool our intelligence has evolved to assist with communicating within the complexity of this universe. If there were no types, all things would just be things - useless for communication.

Once a type is assigned to a thing, then the thing can be discussed in that context. In software, once a thing is defined, it has a type. Maybe that is just "attribute" or "entity", but it is a type.

ThereAreNoTypes may be referring to the practice of assigning types in software. In so-called TypelessLanguages?, all variables can be considered slabs of memory with an address: objects which have a length, an address, and enough behaviour to be generally useful, such as access methods. The behaviour provided by the language can be applied to any variable in the language. In typed languages, each variable must be given one of a small set of base types provided by the language, and the behaviour provided by the language can be applied only to those types the language considers appropriate.

What about user-defined types? Should a distinction be made?

Ideally, no. But very few languages allow the addition of a user-defined type which has all the features of a built-in type.

-- PeterLynch


The way I see it, types and data do not have to be distinct; however, it is handy to distinguish them. For instance, a language in which every type had a single nullary data constructor would have only types, not data. It could still be Turing complete, because any information could be encoded in the form of a datum's type. On the other hand, in a language like B, which has only one type of data with a large number of data constructors, you could encode the same information as data rather than types. The two notions are therefore equivalent. A type is simply a piece of information attached to a datum; it differs from other data in that it is fixed at compile time, so a given operation can be sure that the datum it receives satisfies certain qualities which make it possible for the operation to perform its job. Types and data, then, are equivalent constructs used in different situations, and both are important.
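A small Python sketch of that equivalence (the names are invented for illustration): the same bit of information can ride along in a value's type or in the value itself.

    class Red: pass       # information carried by the type...
    class Green: pass
    def by_type(c):
        return "stop" if isinstance(c, Red) else "go"
    def by_value(name):   # ...or carried by the data
        return "stop" if name == "red" else "go"
    assert by_type(Red()) == by_value("red") == "stop"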


I hope I'm not breaking a cardinal rule by posting a potentially stupid question. I've read a lot of the pages on types, TOP, and OOP; I've come to the point, however, where I need an answer to a question before I can keep reading. Here is my question: do types exist in data persistence? If so, then there are types as defined by how you persist the data. If, in fact, there are no types, do I save everything as a BLOB (which is a type, yes?)? Sorry, my education is lacking by far I'm sure, but I guess that's what I get for being a developer with a B.F.A. Lastly, I do see that my asking how to save my data is an implementation decision. [ScottNeumann]

Perhaps DynamicRelational and DoesRelationalRequireTypes will either answer your questions or have links to related topics.


Inside computer memory, ThereAreNoTypes at all: the entire system state is just one big freakin' integer. Types are a way of specifying human interpretations of parts of this integer. A certain chunk of bits, interpreted one way, is the (null-terminated) string "foo"; interpreted another way, it's the (big-endian) number 1718578944 ... which might be a measurement in millimeters, or the IP address 102.111.111.0, or the population of sim-animals in a simulation. Types provide a way of representing what interpretation we intend of that chunk of bits.
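Those particular reinterpretations can be checked directly; for instance, in Python with the standard struct and socket modules:

    import socket, struct
    chunk = b"foo\x00"                     # the null-terminated string "foo"
    print(struct.unpack(">I", chunk)[0])   # 1718578944, read as a big-endian integer
    print(socket.inet_ntoa(chunk))         # '102.111.111.0', read as an IP address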

You can interpret the system memory as just one big freakin' integer, but the computer certainly does not; I have never once seen a computer apply integer operations to its entire system memory. Types exist in the language, and the types traditional CPUs deal with are words (which themselves might further be described by the types of operations you perform upon them... e.g. floating point vs. integer vs. indirection) and instructions.


It would ideally be better if we had more specific constraints instead of bit/byte/integer types (a sketch in code follows below):

    this column may only contain numbers from 1 to 10
    this column may only contain strings with a length of 15 or less
    this column will not accept numbers and only accepts A to Z in upper or lower case
    this column will not accept decimals, only whole numbers

Instead of thinking in terms of 8 bits, 32 bits, 256 values, etc., it would also be nice to think in 5s, 10s, and 100s. One problem is that computers are binary: they have only two fingers instead of ten. It would be nice if our types were more human-oriented rather than binary. Some computers have been built that were not binary, but they never took off.
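Here is a minimal sketch of what such constraint-declared columns might look like, in Python (the Column class and its predicates are hypothetical, for illustration only):

    class Column:
        def __init__(self, *constraints):
            self.constraints = constraints
        def accept(self, value):
            if not all(ok(value) for ok in self.constraints):
                raise ValueError("constraint violated: %r" % (value,))
            return value
    rating = Column(lambda v: isinstance(v, int), lambda v: 1 <= v <= 10)
    name   = Column(lambda v: isinstance(v, str), lambda v: len(v) <= 15)
    rating.accept(7)     # accepted
    rating.accept(11)    # raises ValueError - no 8-bit or 32-bit thinking required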

Premature optimization is another issue with types (declaring a Byte and thinking in Bytes instead of some other, more flexible type). Types can cause BrainDamage in that they move us away from our ten fingers and into the two-fingered CPU brain.

If a "type system" offers constraints.. and if there are no types then there must just be some other form of constraint system which could be reinventing types.

Types are also about abstraction: moving away from the CPU bits - what will replace types?

NumberTypes and ReallyBigNumbers discusses some of these issues.


I believe the constraint system you are looking for is called the "interface" system, AKA the PerlSix Role system. Hypothetical code follows:

    role INumber {
       def infix:<+>($other does INumber), infix:<->($other does INumber),
           infix:<*>($other does INumber), infix:</>($other does INumber);
    }
    role IInteger extends INumber {
       require {$self % 1 == 0}
    }
    class Integer does IInteger {
       #code for Integer storage and operations here.
    }
    $foo does IInteger = new Integer();
It's TypeSafe. It's object-oriented. But the interfaces are king--king of polymorphism!

Heck, we could use prototypes, and this approach would still work!
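For comparison, roughly the same interface-as-constraint idea is expressible today in Python with typing.Protocol (structural interfaces). This is only a loose analogue of the role sketch above: the runtime check verifies that the methods exist, not their signatures, and it cannot express the value-level "require" clause.

    from typing import Protocol, runtime_checkable
    @runtime_checkable
    class INumber(Protocol):
        def __add__(self, other): ...
        def __sub__(self, other): ...
        def __mul__(self, other): ...
        def __truediv__(self, other): ...
    assert isinstance(3, INumber)     # int qualifies structurally, with no declaration
    assert isinstance(3.5, INumber)   # so does float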


See: ThereAreNoTypesDiscussion, LimitsOfHierarchies, EverythingIsRelative, WhatAreTypes, SetsVersusTypes, PolymorphismLimits, PredicateDispatching, ObjectOrientationIsaHoax (article), VariationsTendTowardCartesianProduct


CategoryPolymorphism, CategoryLanguageTyping, CategoryHierarchy, CategoryTypingDebate

