A summary of this extraordinary exchange is at TypesAreTypes.
As a compromise, I have agreed to drop the issue of a standard definition, and instead now sell the "side flag" idea as a "conceptual model of type implementation" to be moved to TypesAndSideFlags. (New topic titles are suggested below.) I would also note that almost every "type system" ultimately involves "side flags" paired or closely associated with "values" in its implementation. This pairing tends to be a "universal characteristic", even if not a defining one of most typing systems. (An interesting test would be to study typing systems without side-flags, if there is such a thing.) Enjoy. -- top
How about TypesArePylonsLikeOnTheStreet? or TypesAreMarkers? or TypesAreLabels? or TypesAreFlagsNotOnTheSideButOnTop?, or TypesAreKindOfLikeSectionsOfStuff? or TypesAreWhatTheDictionaryAlreadySaid?
If it helps with grokkability in the field, go for it. If a mental tool works for you, good. Just be careful not to force it on others if they can't make use of it.
Those pages were suggested as a joke. If types are side flags then they must be pylons too. But they are not pylons. Pylons are pylons. Let's be reasonable. Types are a classification system, a way to categorize a domain, a way to group items. If someone in a soccer game throws a side flag out and says that the soccer ball is out of bounds - this is not a type. It's an act of throwing out a side flag. The side flags themselves are not types, just as pylons or hand wavings are not types. See TypesAreNotTypesTheyAreTypes.
I don't see how this relates to the above. It talks about mental models to help one work with an idea, not a universal declaration about what something "is".
Gee, what is the synonym for "are"? One synonym is "is". Yes, the word "is" is really a synonym for "are". Now, let's logically deduce: what is the title of this page? TypesAreSideFlags. TypeIsSideFlag?. It's a cute, silly, funny metaphor - but even as a metaphor it stinks. It's wrongo bongo.
Please clarify.
http://en.wikipedia.org/wiki/Deductive_reasoning
Is equals Are. I use logical proofs when I write and argue.
Try adding a sprinkling of clarity into it also. Your is/are rant makes no sense.
I would not be against the idea of changing the title to SideFlagTypeModel? and drop the issue of using it as a definition. The main purpose is to provide a useful mental model of "types", and the definition issue seems to generate controversy that is mostly irrelevant to the modeling ability of the concept. In other words, "types can be viewed/modelled as flags" is what is important, not so much "types are flags".
- It would be a good start. After that, can you please start writing down the prediction-rules that allow one to... uh... actually do some modeling, perhaps? and describe how this model is to gain 'value' or 'use' by form of attachment to 'real' situations to be modeled? i.e. take that CF vs. PHP 'example' and show exactly which predictions you're making, and exactly how those predictions are reached by the rules in the model. At the moment, this is more of a 'SideFlagNotion?'. (...) Eh? it seems [Mr.E. Brackets], below, interjected the exact same comment as I was writing. Funny.
[Might I suggest SideFlagTypeNotion?, as it isn't really a model. Or, perhaps TypeTagNotion?, to be in keeping with the usual term for implementations of systems (e.g., dynamically-typed languages) that associate type identifiers with values.]
So that we don't have to hard-wire a prejudgement to the name, how about the original suggestion of "TypesAndSideFlags".
[I could live with that, but how about TypesAndTags?, which has a nice rhythm to it and uses a familiar term for a related (if not the same) idea.]
The downside is that "tag" may be mistaken for mark-up tags. I feel "type flag" is a bit closer to the notion.
["Tag" is the familiar term for the concept. In context, it is no more likely to be mistaken for mark-up tags than "dog" is likely to be mistaken for a canine mammal when talking about fireplace fixtures.]
Can anyone suggest a less-enlightening page on this wiki? I think I'd like to read something that makes this appear interesting and attractive. There must be something here worth reading. After all, it's seen a lot of activity ...
Curiously, almost the entire content of this page is debate over whether its essential notion is worth anything, let alone reading. My recommendation is not to bother...
- I have again returned these comments to the top of the page. They serve as a warning that this page is unenlightening, argumentative, full of insults and contains little, if anything, of interest to anyone except the participants. And I even have doubts about them.
- The name of the page is simply wrong. Types are not side-flags. You may feel that "side flags" - whatever they may be - is a useful way of helping people to think about types, but whatever a type may be, it is not a "side-flag".
[As one of the participants, I can confirm that this page contains nothing of interest to me, and I'm rather ashamed to have taken part in it, especially the insults. Unfortunately, I find few things more frustrating than a ChronicallyRight HostileStudent, and I reacted accordingly. I think this page should be deleted. If top thinks it is of value, he can write up "Flag Theory" as he sees fit and put it on his Web site.]
I didn't start the insults. I only responded in kind. First Blood is not on my hands. The (self-elected) teacher insulted the student first. YOU are the HostileStudent because you won't listen. "Type Theory" is your bible and you defend it like any other zealot. It's too convoluted to be useful for the vast majority of IT workers and you admittedly don't give a shit. There's a big gaping flaw and you try to plug it by stuffing the body of the messenger in the hole.
Oh, you certainly did start the insults, top. You did so by saying that "all the competitors" were no better than this pile of crap.
- I insulted ideas, not people (originally).
You just don't understand how insulting that is, seeing as you don't comprehend that your pet-theory is a pile of crap. There're many big, gaping flaws in your model that you've been stuffing with your ego alone. When we point them out, you attempt to distract us by waving your hands, calling them implementation details, or blustering about how accepted type theory couldn't possibly be better. We might attempt to explain to you, quite thoroughly, why you're wrong to do these things. THAT is when you become the HostileStudent.
And TypeTheory isn't convoluted, it's just deep and broad. Understanding its depth and breadth is Masters and PhD level work. IT workers can stick with the shallow-waters stuff like simple primitive types and generic types, and basic uses (type safety and dispatching). That's perfectly fine. But no one with even a shred of honesty would try to dumb it down (to 'TypesAreSideFlags') and leave IT workers with some misbegotten delusion that they possess an understanding of TypeTheory that they do not. However, I can see how a true charlatan might create some shitty theory that he can sell, that IT workers would accept because it makes people feel better; people feel empowered by a delusion of knowledge. Charlatans do such things because it feeds their ego, sometimes even feeds their belly. I expect that this page was written by a charlatan.
- I've rarely seen more self-grandizing elitism in my entire life. People have been using "types" long before "type theory" was published, and most don't read it anyhow. PhD, my ass! And, you have NOT shown any significant flaws in the flag model, only those that you manufacture.
- [If a carpenter criticised a non-carpenter's assertion that houses would be better built from loose sand than wood and brick, would that be "self-grandizing [sic] elitism" or a valid response? My reading of this page finds that numerous significant flaws are shown in the flag "model", but as a participant, I admit the possibility of bias. Therefore, I call for independent adjudication.]
- I'd say that's hardly elitism. I'd further note that any further explanation the master carpenter provided beyond naming the flaw is the exact opposite of elitism: it's a lowering of oneself, a casting aside of other priorities, to aid in the education of someone with lesser knowledge. Of course, it's pointless to try helping a HostileStudent. If you had the heart to let him die, you might as well let that sort of person build his house of loose sand rather than spend forever arguing. More of a problem is some idiot non-carpenter who plans to preach his stupid notion to the masses. Then, if you have any sense of morality, you step up as the master carpenter and say to the masses that the other guy's an idiot, provide a brief explanation as to why, demand that he prove his theory by building his own house and living in it, and tell them that they'll need a truckload of salt to go with all that sand. The other guy might still win them over by loud blustering or charisma, of course, but you'll have at least tried to do what you know is right. Is that elitist? No. It's moral imperative that just happens to be coming from the elite.
- (to top): Frankly, it isn't my job to prove you wrong; it's YOUR job to prove you're right. You're the one proposing this idea. I pointed out what I believe are flaws. It doesn't matter that you aren't convinced by me; like a user of a program pointing out the bugs, my job ends upon describing the flaw. If you wish to promote your theory/model/crap, it's up to you to convince me, [Mr.E. Brackets], and the rest of the educated world who sees these flaws that they aren't actually flaws. That's true at least if you're an honest-to-goodness person who wishes to help the world. And you know what, you've failed. Utterly. Instead you adopt a stance of stubborn dig-in-the-heels I'm-right-unless-the-world-proves-to-me-I'm-wrong oh-and-I'm-not-listening-LALALALALA! arrogance and idiocy. I've seen it from a handful of people in my life, and I hate all of them (including the current U.S. President). Beyond that, like a charlatan, you sneak past those people who actually know better and prey upon the ignorant. That at least seems to be your intent; you certainly aren't willing to at least gain some approval or endorsement from 'peers' that you've made a legit simplification before you "bring it to the masses" (or try... I don't think the masses will listen to you either).
- Re: "I'm-not-listening-LALALALALA!". I find that a good representation of YOUR behavior. You use the same arguments over and over even after I deflate them. -- top
- You haven't deflated any of my arguments. Go find me one place you think you've 'deflated' my argument, and I'll point out your fallacy. On this page, 20% or more of your fallacies will be that you assumed your conclusion: i.e. you attempted to use 'TypesAreSideFlags' as a defense of 'TypesAreSideFlags'. Another 20% will be of equivocation: you decided to make a conclusion about "side flags" and therefore about "types" as everyone else uses the word, despite the fact that making any REAL conclusion about "types" requires using the word as everyone else uses it... which is not as "side flags". Another 30% will be argument by ego: something is true because you said so (e.g. "I consider X" as an argument form). The other 30% will be scattered, I'm sure... though I think a good number of them will be that you simply waved your hands a bit and left the argument, and a good number of them will be places we could pick up again if you wish to do so (e.g. the whole functional computation and typing issue, and the whole where-do-types-come-from issue).
I'd be okay with deleting the page, but I have a feeling that top would just take advantage to further his own agendas by writing it again, this time without the criticism, and that if we provided criticism again, he'd get all blustery and hand-wavy and we'd end up with the same monstrosity we just deleted. That sounds costly to me.
[Agreed.]
Insult Pledge: Can we agree to at least mutually stop the personal insults? We both think the other side is highly biased and/or manipulative. We don't need to keep saying this over and over.
This is a viewpoint on WhatAreTypes in which "types" are defined as "side flags" or dual-cell variable nodes where one side of the node is the value or value-container and the other is a type indicator or list/set/bag of type indicators (depending on language or run-time engine). The flag affects the interpretation or usage of the value. The key points are:
- Flags and values must be "separate".
- Changes in a given flag have the potential to affect the outcome.
- There is some discernible relationship between the flag and the value.
This flag(s) is intended to indicate the usage, interpretation[1], or processing of the variable/value; and can be used by the interpreter or compiler (I/C) to select which operation to apply to the value. The I/C can also follow a substitution tree or DAG, per language, to find the proper type. See RunTimeEngineSchema for a simplified example of how substitution may occur. ("Variable" means like X in "X = 7", not "flexible sized". However, variable nodes may have more than just values and type flags. This definition only requires the minimum of these 2 items and permits other items, such as length, scope indicators, etc. as needed per language/engine.)
This definition does not dictate how the flags are actually used[2], only that they exist and make a difference in the results. That is, if the values of the flags are changed, the output can also potentially change. Also, there must be some establishable relationship between the flag(s) and its corresponding value(s). (It is undecided whether such flags used merely as documentation are still "types". The human is "processing" them even if the computer does not, so perhaps it also counts, depending on the scope of execution we are considering.)
- If comments aren't part of the type-system, then this particular issue should be considered well decided.
- First, we need a clear definition "type-system" to determine that. One thing at a time.
- I don't think we do, unless you believe that comments, which are never even processed by the agents that have the final say in how the language is interpreted, can possibly be part of the type-system, whatever said system might be. If you want to play a game of regression to avoid answering questions, how about: First, we need a clear definition of "definition" to determine that. One thing at a time.
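Taken purely as a mental model, the dual-cell node described above is easy to sketch. The following is a minimal illustration under stated assumptions, not any real engine's implementation; the names VarNode and apply_op are invented for the example:

```python
# Minimal sketch of a "dual-cell variable node": one cell holds the value,
# the other holds a type flag (or set of flags). The operation consults the
# flag, not the raw value, to decide what to do -- so changing only the flag
# can change the outcome, per the key points above.

class VarNode:
    def __init__(self, value, flags):
        self.value = value       # the value or value-container
        self.flags = set(flags)  # one or more type indicators

def apply_op(op, node):
    # The flag decides whether "+" means numeric addition or concatenation.
    if op == "+" and "number" in node.flags:
        return int(node.value) + int(node.value)
    if op == "+" and "string" in node.flags:
        return node.value + node.value
    raise TypeError("op %r undefined for flags %r" % (op, node.flags))

# Same raw value, different flag, different result:
print(apply_op("+", VarNode("7", {"number"})))  # 14
print(apply_op("+", VarNode("7", {"string"})))  # 77
```

Note how the two calls share an identical value cell; only the flag cell differs, which is exactly the "flags make a difference in the results" criterion.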
This views types in terms of data structures, unlike competitor definitions, which are tied to "syntax", allegedly making them over-focus on human-read symbols and characters which, according to TypesAreSideFlags, are not necessary to have types. Types can still exist if the source code is lost or burns up in a water heater explosion. I/C's usually turn syntax into data structures before processing, and these structures almost always have some form of "side flag" nodes if they have types. An exception has yet to be found.
[1] See PageAnchor "spreadsheet_scenario" in AreTypesTiedToSyntax.
[2] Explicit techniques should have explicit names, such as "DAG-oriented Type Substitution Theory".
- And abstracted techniques get abstract names, correct? Such as: "Type-predication consistency checks", or "constraint-based type consistency", or "Hoare-logic based type-contract validation" - all three of which run contradictory to your particular definition of types, mostly because arbitrary predicates aren't flags. I feel I should also note that theories aren't techniques, including DAG-oriented ones.
- Let's explore a specific example that allegedly is contradictory rather than go LaynesLaw over the difference between "theory" and "techniques".
- Sure. Which of these type-systems do you already understand well enough to provide an example? We'll start by exploring that one. But I'm NOT going to turn this into an exercise of teaching you a type-theory when you'll be spending all your time trying to find a way to associate it with 'side-flags' instead of -understanding- it.
- This topic *is* about type flags and not John's Type Substitution DAG Techniques, or whatnot.
The Scope of the Definition
The flag model is intended to provide a predictive "thought model" to understand how (and if) languages treat types in various situations. It is not necessarily an implementation model because compilers/interpreters do not have to use flags to provide a type system, even if it behaves equivalently to a flag-based model. If the behavior of a language/tool matches a flag-based model, the model can still be useful even if it is not the actual model used under the hood. In this sense, it may serve as a UsefulLie until the point at which a better model is discovered or learned. To my knowledge, flag-theory has no simple competitors, only complex ones.
Also, this definition does not attempt to explain in any detail the philosophy of types. This is largely because philosophy is often based on psychology, which is more art than science when it comes to computers; and because everybody's psychology varies such that a viewpoint of one mind may not be compatible with another. It is hard to test for "wrong philosophy", especially if there are multiple viewpoints which appear valid (lack obvious internal contradictions). A definition that involves heavy use of psychology and psychological assumptions is likely to be a problematic model. Thus, the flag model generally avoids it, focusing on predicting behavior instead.
Parsing versus Types
Sometimes values are "parsed" to see, or "guess", what type they are. Before such parsing takes place, the item has no "type", according to the above definition. However, a type can exist after the parsing is done. Example:
x = readItemFromFile(...);
r = guessType(x);
Here, "x" has no type after the first statement, but does after the second statement because "r" becomes the "side flag" for "x". (This is assuming that readItemFromFile does not include a side flag of its own.) It is true that some languages always have a type flag for all variables whether it is used by the program or not. But in the case above, it may be set to some default but useless value, such as "variant" or "object" or "auto" if not explicitly assigned by the application or API. (This example is of "roll-your-own" types and not an example of using the built-in language type system, assuming such exists.)
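To make the example concrete, here is one hedged way guessType might be written; the heuristics below are invented for illustration and are not part of any actual API:

```python
# After readItemFromFile, "x" is just a raw string with no side flag.
# guessType parses the value and manufactures a flag for it; only after
# that, by the definition above, does "x" have a type.

def guessType(x):
    try:
        int(x)
        return "integer"
    except ValueError:
        pass
    try:
        float(x)
        return "number"
    except ValueError:
        return "string"

x = "123"          # as if read from a file: bytes only, no flag yet
r = guessType(x)   # r now acts as the side flag for x
print(r)           # integer
```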
If "guessType" is functional, it's quite reasonable to say that 'x' had type 'r' before the guessing ever occurred. Your statement and 'above definition' to the contrary is not defensible.
But there could be a gazillion other versions of guessType, which may all return different answers. Thus, if we open the door to "ghost types", then things are no longer deterministic, or at least no longer useful in any way.
r1 = bobsGuessType(x); // returns "foo"
r2 = bettysGuessType(x); // returns "bar"
r3 = mikesGuessType(x); // returns "date"
...
r[infinity] = infinityGuessType(x);
You can say that "types" have to be an observable property, not a potential property, in this definition.
Nonetheless, you can encode implicitly (in zero bits) that 'guessType' is the correct one to use. If 'guessType' itself calls a bunch of other 'guessType's, then that's fine. Further, I've trouble seeing why you object: according even to your own statements, r1 and r2 and r3 and r[infinity] all now constitute "side-flags" and therefore types for "x", the moment you choose to utilize them.
A given value can indeed have multiple types. The definition does not exclude that.
- I agree. Something can be an Integer, a Natural Number, and a Prime Number all at once. Things can have multiple properties, and can therefore have multiple types. So, why were you saying these are "ghost types" or "no longer deterministic"? If every one of these 'guessType' operations is functional (in the pure, mathematical sense) then every single one of them is deterministic. But, by my approach, the type of 'x' wouldn't actually be "bar" or "foo" or whatnot, it'd be "this is the type of thing that returns 'bar' under bettysGuessType". And that type is true whether or not you actually perform the test and store "bar". If you do demand a single type, you can use the fact that the communication context always carries implicit content... such as implicit information about the encoding of type. People that want there to be such thing as a "most specific type" upon which they can dispatch function calls are the most likely to demand that there be a single "most specific type" for any given value, but while I understand their purpose, I'm not among that crowd of type-theorists. Since you stand, instead, with the group that says that a value can have multiple types, then why did you attempt to use that as some sort of counter-argument? It seems the big difference between us is that your definition would require infinity flags, while mine requires none at all.
- I'll revisit this when the "is programming math" issue is settled.
Unless you know (implicitly) that exactly one choice is the 'correct' one, would it be more correct to say that x has ALL these types or NONE of these types? And, regarding "observable property", yes, types are defined by their properties - those you can prove to be present or absent. Dynamic typing allows you to wait until you actually have the value-instance (for testing difficult properties like isprime) and static-typing focuses on what can be proven based entirely upon the structure of the program. Observation of a property is just one sort of proof.
(moved discussion below under "Programming versus Math".)
In general, if you have to do complex calculations to determine "type", then it is not a flag. Flags are "pre-packaged" and easily digestible. That's the main difference between "parsing" and "flags". Types don't exist until the knowledge of types is readily determinable without complex computation. (There may indeed be a gray area where the flag is somewhat obfuscated, and thus may be a half-ass flag, in which case we get half-ass types.)
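The distinction drawn above can be sketched with two invented functions (both are assumptions for illustration, not part of any real engine):

```python
# A flag is a pre-packaged answer sitting beside the value; parsing
# recomputes type knowledge from the raw value every time it is needed.

def type_by_flag(node):
    return node["flag"]  # one cheap lookup -- the flag case

def type_by_parsing(raw):
    # complex computation over the value itself -- the parsing case
    if raw.lstrip("-").isdigit():
        return "integer"
    if raw.count("/") == 2 and all(p.isdigit() for p in raw.split("/")):
        return "date"
    return "string"

node = {"value": "12/25/2007", "flag": "date"}
print(type_by_flag(node))              # date
print(type_by_parsing(node["value"]))  # date, but recomputed from scratch
```

Both arrive at "date", but only the first qualifies as a flag under the definition above: the answer was packaged in advance rather than derived by computation.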
Contrary Viewpoint: Types can often be determined by complex calculations. Types are not flags. You can, if you wish, cache the result of this complex calculation, and call this cached result a "flag". Philosophically, the types must exist before they are determined by complex calculations - properties and names of things that don't exist cannot, in any sense, be determined. Thus, types exist, can often be determined by complex calculations, and the result of said calculation can optionally be cached.
In theory it may be possible to know the results of every possible function or program to ever exist and thus dispense with the program. If we want to take an existential trip down somebody's EverythingIsa world, then indeed everything is everything and nothing is nothing. Do variables still exist if a given program is rewritten such that no variables are used, only functions feeding functions? One could perhaps argue that such is a black box such that one couldn't experiment with the innards to see if something has a type flag or not, and thus is unknowable. You can't test something you can't open. You are drifting away from the practical issues and into existential philosophies of what programs are.
You're wrong. It is not, even theoretically, possible to know the results of every program because it is not, theoretically, possible to determine ahead of runtime which values will be received from sensors or communications.
When Do Types Exist
This describes where, when, and why type flags exist in various "flavors" of typing systems in common programming languages.
In statically typed languages like C, the flags exist at the compiler stage. It could be argued that they also exist at the text stage when there is an explicit declaration such as "int x;". The compiler keeps a type flag(s) for each variable in its variable table to track the type of each variable and uses this for validation, such as making sure one is not accidentally multiplying characters together (or dispatches the appropriate operation when overloading is allowed). When the compiler is done, it discards type information (type flags); and the result, the executable file, no longer contains type information. Machine instructions in the executable that operate on bytes do so dutifully without ever "caring" about the "type" of the bytes. The bytes are not "marked" to indicate which operations are allowed or which operations are appropriate. Machine instructions are thus operating on "raw" values. Types don't exist when something can only look at the value and only the value.
- It'd be more accurate to say that types aren't part of the language-provided type-safety system if it forces you to -explicitly- look at the value. If the language made an implicit check, that'd be different. As to whether 'flags exist at the compiler stage': they needn't ever exist in any whole-program-representation (e.g. flags simultaneously attached to all variables or expression). You've already admitted that performing a complex calculation to determine the type each time is not a type-flag. The best you can say is that a 'flag' might briefly exist in the period between calculating type and taking action based upon it. But even that is optional... one needn't separate those steps, and they could easily be merged in a compiler programmed in ContinuationPassingStyle.
- There are too many general issues addressed in that paragraph. I suggest you divide it up and/or move those issues to their respective sections (unless already addressed, in which case please remove or reference them). Or, provide a compiler-specific example(s). I am trying to reduce ThreadMess by keeping like ideas together, if possible.
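The "raw bytes" claim in the static-typing paragraph can be demonstrated without C; in this assumed sketch, Python's struct module stands in for machine memory, where no flag rides alongside the bytes:

```python
import struct

# Four bytes written out as a little-endian integer. Nothing in the bytes
# records that choice: there is no flag traveling with them.
raw = struct.pack("<i", 1078530011)

# The same bytes under two different "machine instructions":
as_int = struct.unpack("<i", raw)[0]    # integer reading
as_float = struct.unpack("<f", raw)[0]  # IEEE-754 float reading

print(as_int)              # 1078530011
print(round(as_float, 5))  # 3.14159 -- the bytes cannot object to either reading
```

Each unpack call "dutifully" applies its own interpretation; the bytes carry no mark saying which reading is the appropriate one.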
In dynamically-typed languages, the variables keep their type flags throughout execution, and can usually be changed programmatically when needed. To truly know what a variable "contains", you have to inspect both the value and the type.
- Actually, it's usually the case that the values keep the flags, not the variables. The variables in dynamically typed languages can take many different types over their lifetimes. The flags, in this sense, are 'part of' the value-description, telling you with a few bits how the other bits are to be interpreted. Dynamic type-safety is provided via the language providing -implicit- checks upon the values (and only the values) in order to ensure that computation can be stopped prior to performing an undefined operation.
In type-free languages (other terms are sometimes used), there are no type flags. Usually the variables are represented as strings or a series of bytes. How these strings or bytes are interpreted is usually up to each operator/function as encountered. These operations may have to do explicit parsing to check to see if a value can satisfactorily be used in the given operation. There is no flag to check. As mentioned below, a language can be "type free" on some levels but not others. For example, it may not track types at the scalar level (integer, string, date, etc.), but do so at the scalar-versus-structure level.
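The dynamic and type-free flavors above can be contrasted in a few lines. The sketch below is illustrative only; dyn_add and free_add are invented names, not features of any particular language:

```python
# Dynamic flavor: the flag travels with the value, and the operation
# implicitly consults it before acting.
def dyn_add(a, b):
    if a["flag"] == "number" and b["flag"] == "number":
        return {"value": a["value"] + b["value"], "flag": "number"}
    raise TypeError("flags forbid addition")

# Type-free flavor: everything is a string; each operation must parse
# for itself, because there is no flag to check.
def free_add(a, b):
    return str(float(a) + float(b))

two = {"value": 2, "flag": "number"}
three = {"value": 3, "flag": "number"}
print(dyn_add(two, three)["value"])  # 5
print(free_add("2", "3"))            # 5.0
```

In the first case a bad operand is caught by inspecting flags before any work is done; in the second, failure can only surface when parsing the raw string fails mid-operation.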
Criticism:
This is a very informal model for a typing system, and an even worse mechanism for defining types. You might as well have titled this: "TypesAreTypeFlags". Of course, that wouldn't obscure one of the basic problems with the theory - ultimately the "types" are still out there, undefined.
- Just like its competitors. Note that this attempts to define "types", not "type systems". AreTypesTiedToSyntax describes why these things are different (although related).
- "Just like its competitors"? You're deluding yourself. I've amended the above statement to better reflect the criticism.
Consider, for a moment, that some agent must ultimately determine which "side-flags" a cell/value/object/variable receives. Under the definition proposed above, all variables are of the single type "no flags" until an agent assigns at least one flag, so under this proposed definition, the annotation of type information actually changes the types rather than indicating truths implicit or explicit in the code.
- I am not sure what you mean. A definition of "chair" does not have to include who/what *put* the chair there.
- Eh? But identifying that something is a chair, and thus should be tagged with "chair", requires an agent... most likely the programmer, but it could be automatic. And under your claimed definition of types, that chair wasn't a chair before it was tagged with "chair". It was simply an object that could receive tags. And by tagging it with "chair" it becomes a chair. Of course, an agent might choose to rely upon a definition of "chair" to decide whether to tag the object with the word "chair", but YOU are claiming that "the flags define the type" (not the 'definition of "chair"'), so enforcing that discipline on the agent would actually be outside your definition of types. There's nothing at all in your definition to keep your agent from tagging sofas, bean-bags, tables, televisions, cats, dogs, and clouds all with "chair". To the contrary, your definition of types is such that an object possessing all the properties of a cat is, indeed, a "chair" if so tagged.
- Completely contrary to this mistaken 'TypesAreSideFlags' notion, most reasonable type theories define types in terms of properties... be they intrinsic or extrinsic. They make the responsibility for type-nature part of the object and its environment: Integers, Prime Numbers, Composite Numbers, Happy Numbers, Big Numbers, Small Numbers, Characters, Strings, Strings-that-represent-unexpired-certificates, etc. etc. etc. As such, types are abstractions. Under these systems, something needs to have chair-properties to be a chair, i.e. it must meet the definition of "chair" or have at least a strong, fuzzy affiliation with "chair", and no tags or flags are going to be saying otherwise!
Further, most definitions of types are not tied to syntax, despite the ridiculous claim above.
- It was characterized that way under WhatAreTypes and JohnReynoldsFableOnTypes by pro-Reynolds people. If you don't like their characterization, I suggest you provide a working definition of your own for reference.
- You are completely misunderstanding the lessons taught on those pages. The JohnReynoldsFableOnTypes talks of structure for -representing- a value of a particular type, and makes it very clear that the type Complex is not tied to the syntax. You should at least research your counter examples. Go read the damn page again before wasting my time with your chaff.
- That page does not contradict flag theory in any clear way. It is more about a specific technique (such as DAG-based substitution) for using type flags, or the utility of types rather than what the essence of "types" are. It is more "why" and "how" than "what".
Describing that a particular program structure is intended to possess a particular type is tied to syntax because descriptions are always tied to syntax; thus syntax is a necessary component of any type system.
- I am skeptical of this, but first we need a rigorous definition of "syntax" so that I have something concrete to falsify.
- Listen to yourself... already trying to think of ways to counter a challenger rather than learn from him. Ever find you don't understand things even after butting heads over it repeatedly? Perhaps you should butt heads less frequently. Sigh.
- However, I'll provide: The study of syntax is the study of patterns and structure in the representation of language-expressions. A particular syntax is a particular set of patterns and structures to represent a subset of language-expressions of precisely one language. A formal syntax is a particular syntax that forbids 'error' except insofar as the syntax itself includes rules (patterns and structures) to resolve them... making them not 'true' errors. A complete syntax is a particular syntax that includes patterns and structure for a whole language. A syntax can be complete and formal. A syntax is explicitly not a description of syntax; a description of patterns and structure over the representation of language-expressions is a 'syntax-description'. By necessity, a syntax-description is a language expression (because all descriptions are language expressions) and thus subject to representation in a syntax. A syntax-description need not be provided in the same language so described; indeed, it is not at all unusual (in practice) to see many different syntax descriptions for the exact same syntax (BNF, RAG, YACC, etc.).
However,
the types themselves are values, standing well above the syntax that represents them.
- So it is possible to "stand above" syntax? How does one know if this is taking place?
- All values "stand above" the syntax used to represent them. It's fundamental. You should already know this. The character sequences "three", "3", and "s(s(s(0)))" all represent the same value with different syntaxes (in canonical, fully evaluated form). Because these do represent the same value, the syntax is at a lower level of abstraction than the value... thus the value must "stand above".
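- The point can be sketched in a few lines of toy code. The parser names below are invented for illustration; the point is only that three different syntaxes all denote the same abstract value:

```python
# A minimal sketch (hypothetical toy parsers, not any real library):
# three different syntaxes that all represent the same value, 3.

def parse_decimal(s):
    # Decimal-numeral syntax: "3"
    return int(s)

WORDS = {"zero": 0, "one": 1, "two": 2, "three": 3}

def parse_word(s):
    # English-word syntax: "three"
    return WORDS[s]

def parse_peano(s):
    # Peano-style syntax: "s(s(s(0)))" -- count successor applications.
    return s.count("s(")

# Same value, three syntaxes; the value "stands above" all of them.
assert parse_decimal("3") == parse_word("three") == parse_peano("s(s(s(0)))") == 3
```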
Even in such primitive cases as 'int' and 'float' in C programs, the type is not the physical, syntactic name "int" or "float"; rather, the type is the concept these descriptors represent.
- We cannot accurately measure "concepts" as they exist in a human head (barring reverse engineering neurons firing), and thus should try to avoid definitions based on them if possible.
- Disagree. Even if we can't measure "concepts" as they are represented within a particular computational system does not mean that we cannot know what concepts ARE on a more fundamental level.
- Also, and as a complete aside, it doesn't prevent us from accurately measuring them as they exist within a mechanical inference system.
- (I will agree that 100% avoidance is perhaps impossible, and I've even added "This flag(s) is intended to indicate the usage" above. Thus, the definition is based on intent.) In almost any objectively-analyzable implementation, one can find these side flags. Whether the flags are the string "int" or int's ID is not material to this. To borrow your phrase, it's an implementation detail.
- Flags are an optimization detail in many systems. In most dependent-typing systems, OTOH, the combinatorial effect of actually tracking 'flags' for arbitrary types would be explosive (indeed, the number of type-flags can be enormous). It's often much cheaper to avoid flags entirely and just perform checks or calculate types as required, though flags can still be used for types that meet certain criterion. Still, flags as an implementation detail is a whole different ballgame than flags as the definition of types.
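- The two strategies can be contrasted in a toy sketch (all names invented for illustration): strategy A stores a "side flag" with each value; strategy B stores nothing and re-derives the property whenever it is needed. Both answer the same question, which is why the flag is an implementation detail rather than the type itself:

```python
# Strategy A: tagged value -- the "side flag" is computed once and stored.
def tag(value):
    return {"value": value, "tag": type(value).__name__}

def is_int_tagged(tv):
    return tv["tag"] == "int"   # O(1) flag lookup

# Strategy B: no flag stored -- the property is tested on demand.
def is_int_checked(value):
    return isinstance(value, int)  # re-derived at every call site

v = tag(42)
assert is_int_tagged(v)      # flag says "int"
assert is_int_checked(42)    # on-demand check agrees, with no flag stored
```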
Types, themselves, are based most often in one of
SetTheory,
CategoryTheory, or
PredicateLogic.
- Yes, that is the operator-determination step. Note that types don't need this step. One can give data to clients/customers with type descriptions ("this is an integer, this is a date") WITHOUT any knowledge or certainty about execution. Thus, operator-determination should not be a base requirement of a definition that reflects the concept as used in practice, and I think most would agree that the artifact of such an exchange "has types" despite uncertain execution. (I've added an "intent" clause because the data is intended to be read at least by humans, and human readers can also make use of type info.)
- I can claim "2007 Aug 21" represents a "foobar", which happens to mean 'Prime Number', but that doesn't make it so... at least not in any sane type system. Any sane system would look at that and say: "Does not compute! Does not compute! Does not compute! ...".
- As a facetious aside: Who are you to be describing that which with "most agree"? I can't imagine that your track record in doing so is very good.
You just can't use them without a corresponding syntax. Period.
- Pending rigorous definition of "syntax".
As to the final claims above: (1)
Types can still exist if the source code is lost or burns up in a water heater explosion. Of course. Types are values, and thus exist in concept regardless of representation, much like you can't destroy the value '1' simply by shredding a piece of paper with '1' represented upon it.
- So you agree that types can exist in say an internal parse tree/table after the source code is lost.
- I'm not sure I'd agree that the source code is truly 'lost' while you still have the internal parse tree/table... it's just been through some changes. If you have the concrete parse tree, you can actually regenerate everything but the comments. That said, type annotations (which are not the same as types) should still exist as part of the AbstractSyntaxTree, and any types that existed implicitly prior to reading the syntax should still exist afterwards.
(2)
[Interpreters and Compilers] usually turn syntax into data structures before processing, and these structures almost always have some form of "side flag" nodes if they have types. An exception has yet to be found. Firstly, it's foolish to think that one could write down or otherwise store "data structures" anywhere without some sort of syntax, and "anywhere" includes writing it
in memory. An
AbstractSyntaxTree is a transformation of the source-code and is, itself, more source-code... simply of the sort that abstracts away any sort of syntactic sugar.
- If you shift back to "types are anything that has structure", then it is a non-usable tautology, as described in AreTypesTiedToSyntax. I am looking for a definition that allows one to identify what is "types" and what isn't. You seem to be saying that everything is types.
- Types can be described by properties. Structure is a property. One can make a type of anything that has structure, if one wishes. That type would be: "all things with structure XYZ". This does not make it a tautology. (I'm reminded of a scene from Princess Bride: "'Inconceivable.' You keep saying that word. I do not think it means what you think it means.") Nor does it make it non-usable.
- Further, I've never said anything that even implies that everything is types. You are inferring incorrectly. Tell me what is confusing you and I'll find a way to reword it.
Secondly, use of "flags" to represent calculated information as to types is an implementation detail - an
optional optimization; the only reason one mightn't find exceptions is that one is cherry-picking what one chooses to look at.
This idea that one can define types as 'side-flags', AKA top's "flag theory", is naive and not very deeply considered. The above author hasn't even begun to give serious consideration to some of the most fundamental questions behind the theory. This author suggests starting with: how do you determine the flags in the first place?
So is source code. In fact, the declarations are a kind of "side flag":
function myfunc() {
  int foo;
  double bar;
}
"int" and "double" can be viewed as "side flags". Thus, the definition holds up regardless of the form (source code, parse tree, etc.). The parse-tree example was only brought up to illustrate a flaw in the competition, not say it is the ultimate format. One should be able to take all possible formats/representations of a program or thing with types, and mark with a yellow pen what parts of it are "types" and which are not.
- "int" and "double" are manifest type annotations. One could just as easily have written "auto" and "auto", but that would not make foo and bar of type "auto". Manifest type annotations don't need to exist in all languages, nor do they need to exist as keywords in a language as representing something like a 'side flag'. Due to implicit typing, I think a far more interesting (and useful) exercise would be to annotate a program by marking all expressions and subexpressions with their relevant types... possibly color-coding it for the programmer. Type annotations and ascriptions can still be highlighted in yellow, if you'd prefer.
- "auto" just means that the type flag is undefined or a generic type. There is still a type flag in the interpreter or compiler (for typed languages), although it may be omitted to save space for "auto", kind of like Nulls are omitted in stored table rows in some DBs but plugged in when the value is requested. That is a case where a lack of a flag is implied to mean "auto". It is just a space-saving trick. A side flag is still implied.
- There might be a flag in the interpreter or compiler if that is how the author of said tool chooses to implement support for typing. However, you're once again confusing implementation details with definitions. Answer me this: what determines what type is "plugged in when the value is requested"? And if you can give me an answer, how can you reasonably argue that the flag -defines- the type? Was it not defined by whatever mechanism you just used to determine what was plugged in?
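- The question above can be made concrete with a toy sketch (all names invented for illustration): the "flag" an interpreter attaches is the *output* of some determination mechanism, and the stored entry merely records that result. Here a hypothetical `infer_type` plays the role of that mechanism, and "auto" means "plug in whatever it says":

```python
def infer_type(literal):
    # A toy determination step over source literals.
    try:
        int(literal)
        return "int"
    except ValueError:
        pass
    try:
        float(literal)
        return "double"
    except ValueError:
        return "string"

def declare(env, name, literal, annotation="auto"):
    # "auto": take the type from the inference mechanism, not the annotation.
    t = infer_type(literal) if annotation == "auto" else annotation
    env[name] = {"value": literal, "flag": t}

env = {}
declare(env, "foo", "7")              # flag inferred: "int"
declare(env, "bar", "2.5", "double")  # flag taken from the annotation
assert env["foo"]["flag"] == "int"
assert env["bar"]["flag"] == "double"
```

Whether one reads this as "the flag is the type" or "the flag records a type determined elsewhere" is precisely the disagreement on this page.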
Specific trials are welcome. The competition is welcome to show up also.
- The competition needn't show up here. New ideas need to gain ground on their own steam if they are to challenge established patterns of thought. You need to invest in this theory if you want to see how good or bad it actually is. As a person who studies these things deeply, though, I'm quite convinced that this 'TypesAreSideFlags' notion literally indulges itself in fallacy. I welcome you to prove me wrong.
- It is not a new idea, just not a formally published one. But if it is as rotten as you imply, it should be easy to debunk. If it's hard to debunk, then it deserves more attention.
- I disagree with your logic. It's also hard to debunk the spaghetti-monster theory of Creationism, but that doesn't mean it deserves attention. Anyhow, I don't see how anyone could "debunk" this in your eyes... you have a great deal of faith in your pet theory that you've not yet even attempted to support or debunk with mathematics or practical applications, nor are you willing to even try to understand the logic behind any disproof... rather, your entire perception is twisted towards searching for anything you can misinterpret in favor of your theory. The only person who can debunk it in your eyes is YOU, and you'll only do so if you actively attempt to find the formal foundations for it and prove that it provides a sound mechanism for TypeSafety of one sort or another.
A good definition of "types" ideally should at least:
- Be applicable to any form of a program or thing that "has types" (source code, parse tree, run-time structures, etc.)
- Avoid being tied to personal psychology. Otherwise, everybody will get a different answer.
- Provide falsifiability, such as distinguishing between which elements are types, which are parts of types, and which are not types.
I'll agree with these... except for the psychology one. I'll admit to that one only for typing of language expressions because languages are a shared medium between agents and, thus, cannot be subject to the psychology of any particular agent (psychology cannot be -correctly- imposed upon the representation or interpretation even if it is attempted by emotional humans in practice). Of course, the context for this discussion IS programming-languages despite the fact that you brought up identifying an object as a chair (which is object classification, thus subject to personal psychology).
"Side flags", more commonly known as tags, are often used to decorate values with type information in dynamically-typed languages. It is an implementation strategy - one that can be used to implement one of the elements of a type system. "Side flags" are not, however, either a theoretical system or the types themselves, because types are an abstract concept and are only meaningful in the context of a type system. "Side flags" are an implementation detail.
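This implementation strategy is easy to sketch. The following toy is a hedged illustration (invented names, not any particular interpreter) of a dynamically-typed value cell carrying a tag, with an operator that dispatches on the tags:

```python
# A toy tagged-value representation, as a dynamic-language interpreter
# might use internally: each value cell is (tag, payload).

def cell(tag, payload):
    return (tag, payload)

def add(a, b):
    # Dispatch on the tags to pick a behavior and a result tag.
    if a[0] == "int" and b[0] == "int":
        return cell("int", a[1] + b[1])
    if a[0] in ("int", "float") and b[0] in ("int", "float"):
        return cell("float", float(a[1]) + float(b[1]))
    raise TypeError("cannot add %s and %s" % (a[0], b[0]))

assert add(cell("int", 1), cell("int", 2)) == ("int", 3)
assert add(cell("int", 1), cell("float", 0.5)) == ("float", 1.5)
```

Note that the tags make the implementation work, but the *rules* in `add` (which combinations are legal, what tag the result gets) are what a type theorist would call the type system; the tags merely feed it.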
By way of analogy, note that relations and tuples are theoretical elements of the relational model. A common implementation strategy is to construct tuples as arrays of values, and construct relations using file-based B-tree indexes and file-based arrays of tuples. However, it would not be accurate to say that relations are, at an abstract and theoretical level, file-based B-tree indexed file-based arrays of tuple-arrays, because that's only one of many possible implementation strategies. It's also an exceedingly awkward way to perform reasoning about relations.
That is a bad analogy for three reasons. First, we have a fairly simple, clean relational model to work off of. It can be described on a page or two in fairly simple language. Second, tables don't have to have indexes, and small tables often don't. Third, in many ways "arrays of tuples" is close to "map of maps", which is almost what a table is in an abstract sense (if we make assumptions about what missing explicit columns mean). Thus, it wouldn't actually be far off. Thus, observing implementation *does work* to a large extent. Also, observing implementation would probably result in the observation that a primary key of some type is needed (DBs that don't require one are not fully relational). An alien dissecting a variety of RDBMSs would come close to identifying the correct commonalities.
- Actually, it's an excellent analogy for all those three reasons. First, we have three fairly simple and clean definitions of types to work with, which can be explained in a page or two of category theory, set theory, or predicate logic theory. We've proven these all equivalent, and we also have a commonality between these definitions, this being that TypesAreAnAbstractionOfReality, with an 'AbstractionOfReality' meaning that an object of a type possesses certain properties, but abstracts away the 'reality' of how said properties are fulfilled. Second, implementations of type systems don't need to have flags... it's an implementation detail just like indexing; one is free to test whether the properties are fulfilled every single time they are needed, even if this check is expensive. But who needs optimization? Third - let's ignore your third; you're not even talking relational anymore. As with studying implementations of relational, studying implementations of type systems *does help* to a "large extent"... but, as with studying implementations of relational, there are certain things you just won't understand... things that are just beyond the reach of that "large extent". An alien, or even a 'top', dissecting a variety of type-systems would "come close" to identifying some commonalities... but would still be wrong.
- You are starting to come across as very arrogant. Re: "implementations of type systems don't need to have flags" - I've yet to see a good example. At best, you've shown obfuscated flags, or cases where a system can guess "types" based on the nature of the value itself (parsing). But it could be argued that those are not really "types", but merely situational validation. And when the parser does "discover" a type, it generally stores that result as a flag of some sort. (But it does not NEED to do so.) A type-guesser system does not invalidate flag theory. Guessing is merely a guessed flag. "I don't have a real flag, so I'll just guess at the flag". If you cannot sample one's chromosomes, you can still guess at their gender by seeing where the bulges and lumps are. And "wrong" is on a continuum. AllAbstractionsLie, but there can be a UsefulLie. If flag theory is useful, so what if it's not 100%? I doubt anything is 100% (unless you force it using definition games). At least it is not obtuse.
- Frankly, top, your experience in this field isn't broad enough for you to have seen a good example in your normal work, you refuse to seek one out (you arrogantly expect others to kowtow to you and do all the legwork), and you haven't even bothered to consider building one JUST TO PROVE YOURSELF WRONG. Anyone with sufficient skill could, and your damn theory is debunked. Gone. Kaput. And yet you still stick to your theory, playing the fool. You cherry-pick your evidence, you raise crackpot theories as though they are of equal merit to those of serious type-theorists. You are supremely arrogant; hell, even your damn name is arrogant: TopMind. Right. I see you as a person who is standing at the board above a concrete pool, looking at all the blue and shouting: "hey! I know water when I see it!", and still refusing to actually dive in and find out. And then, with even more arrogance, you declare "ALL OTHER THEORIES ARE AT LEAST AS WRONG AS MINE! SO THERE!" It isn't behavior I expect from an adult, much less a mathematician or scientist; do your damn math and PROVE those other theories "aren't 100%" before making unbacked claims. As far as abstractions: AllAbstractionsAbstract... the very notion that "AllAbstractionsLie" is, itself, a lie. "Composite Number" is an abstraction over integers with certain properties, but that does not make it false or a 'lie'. "3 is a Composite Number", however, is a lie, and it is NOT wrong on a continuum - it is, indeed, 100% wrong. You're foolish to think otherwise, or to think that platitudes like "AllAbstractionsLie" actually mean anything simply because they have a dedicated page. Damn, I'll probably regret virtually SLAPPING you like this, later, but right now I think you deserve it.
- My turn at slapping. You are just a shitty articulator and are dumping on me because of your frustration at being a shitty articulator. If you were the mega-expert you suggest you are, you would be able to slam flag theory left and right with actual examples. Instead you hide behind words and insults. And yes, "topmind" is an arrogant handle, and I've even admitted that at TopMind (admission's been there for about a year). I used to use Tablizer on C2, but somebody locked it out in an EditWar, so I changed it.
- I'll admit that I'm horrible at specific examples. I think in terms of abstracts. That does make it difficult for me to explain things to you in a way you'll understand them... but it's also a consequence of becoming an expert in type theory (or any field of mathematics, really). Many mathematicians have the same problem. The fortunate ones manage to stay grounded and able to provide nice, concrete examples... the rest are like me, and have long since forgotten how much we didn't understand when we were still students, and just expect everyone else to have a similar level of understanding. I'm sorry I can't help you more. However, I'm not 'dumping on you' just because I'm unable to explain things to you. From my perspective, it looks like you are unwilling or unable to listen or learn, or even understand that your theory already HAS taken a very severe beating even though the reasons are beyond your current comprehension. That makes your (admittedly still arrogant) posturing and stances seem even more arrogant. I'll leave you to your fallacy, I suppose; it doesn't need to affect me. If you feel some urge to explore your own theory, I suggest whatever 'example' you construct simultaneously utilize both implicit typing and polymorphism or operator overloading.
- If it's a bad theory, then it almost must lead to bad results that are *simpler* than the theory. A bad bridge collapses, and bad rocket theory results in rockets that stray off course, use too much fuel, and/or explode. It does not take rocket-science expertise to relate to these bad results. Dead astronauts are dead astronauts. The problem with pure theory is that it does not have to be grounded in the real world. You can make up any little world you want as long as the rules don't contradict each other. However, that alone is often not enough to make something useful. The existing literature on Type Theory smells like it purchased a one-way ticket to the South Pole, and gave possession of this ticket to Mr. Parsimony and his wife, Mrs. Parsimony.
- The existing literature on Type Theory has already proven itself in practice. Computer Science is more math than science, but, if you wish, you can test your hypothesis/'theory'. Build a type system. Start by randomly tagging value-expressions with "side-flags", then use them for something to make sense of the program.
- I don't have to. Most implementations of common languages already use side-flags if they have agreed-upon "types". As far as "proven itself": where explicitly has it "proven itself"? Besides, a concept that is a technique for *using* types can indeed improve stuff. That I don't necessarily disagree with. But again, that may merely be "techniques for using/applying types", just like techniques for using/making rockets is a different issue, perhaps a sub-issue, of the definition of rockets themselves.
- After all, you must defend that your theory applies regardless of quality, which is one of your very significant statements - most definitions have types defined in terms of meeting particular properties that beget particular classification, and you'll need to avoid correlation with this if you're to determine whether your type-definition is leading to good or bad results. I expect you'll find some results simpler than your theory... like programs that crash or flail wildly, performing unexpected or unpredictable actions. Oh. Wait. That's right: your theory is independent of quality, right? So you'll argue that your theory is that there are techniques of higher quality that you could use to determine the type. Now, anyone else would look at this and say: then the technique, or whatever defines the technique, seems to be defining the type. You'll look at it and say: "but look, at some point, even if it is never stored, the technique returns some sort of indicator as to 'type', and that's a side-flag!". I disagree, but even if you can interpret the indicator as being a 'side-flag' (by playing definition games with "side-flag" so it no longer refers to something stored in a cell as stated above), 'TypesAreSideFlags' is still wrong. The only correct statement would be: the type-indicator (or side-flag) reflects the actual type.
- I am not sure what you mean by "so you'll argue that your theory is that there are techniques of higher quality that you could use to [quote]determine the type[unquote]". If parsing is used to determine the type instead of side flags, then it is arguably NOT "types", but parsing. (And I will agree there are grey areas in-between; it may not be a Boolean concept.) If parsing is used to determine type, such as R = guessType(X), then R becomes the "side flag". But before this activity is performed, X does NOT have a "type".
Similarly, one wouldn't say that types are side flags, because that's only one way to implement them, it says nothing about the types themselves, and it provides no way to perform reasoning about types. It merely provides a way of identifying the type(s) to which a value belongs in a particular implementation. At best, you could say that "side flags" may be used to identify types, but not that "side flags"
are types.
I think you may be confusing type theory with particular implementations of type systems - which is a bit like confusing the field of automotive engineering with a particular model of car.
By focusing on objective reality instead of what happens in the writer's head, it may indeed look like "implementation". However, if we want at least a somewhat objective definition of "types", we *must* focus on implementation, otherwise the definition is not testable against instances to see if it matches since we cannot rip neurons out of people's heads to put them under microscopes (in most countries). I am looking for what is common in implementations of types, and I see flags. If there is some observable reality above implementation, please tell us what this is.
It's called an "abstraction", which is the observable - via symbols and/or diagrams and/or imagination - "reality above implementation" that is revealed via logic and intuition, and that allows us to perform provable reasoning on mathematical or empirical (e.g., implemented) structures.
Assuming "flag theory" is theory and not mere implementation detail, it would be helpful if you could indicate how "flag theory" works. In particular, what predictions or deductions does it allow us to make? As it stands, it would appear that "flag theory" is no more and no less than the following statement: Given a set of side flags T and a set of values V, for every v that is an element of V there exists an ordered pair (v, t) such that t is an element of T. Such a statement does not a theory make! At best, it could be an axiomatic starting point for exploring a type theory, but in itself it provides no predictive or deductive power. It tells us nothing about the types (oops, I mean, "side flags", because they're the same, aren't they?) themselves; how they may be constructed, distinguished, substituted, etc. So far, all "flag theory" says is that values have types. Well, we knew that anyway.
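The emptiness of that axiom can be demonstrated mechanically. The toy below (invented names, purely illustrative) encodes "for every v in V there exists a pair (v, t) with t in T" and shows that a nonsensical flag assignment satisfies it just as well as a sensible one, i.e. the statement permits no deductions:

```python
V = [3, "hello", 2.5]
T = ["int", "string", "double"]

def satisfies_flag_theory(pairing):
    # The only constraint the statement imposes: every value in V
    # is paired with *some* flag drawn from T.
    tagged = dict(pairing)
    return all(v in tagged and tagged[v] in T for v in V)

sensible = list(zip(V, ["int", "string", "double"]))
nonsense = list(zip(V, ["string", "double", "int"]))  # flags scrambled

assert satisfies_flag_theory(sensible)
assert satisfies_flag_theory(nonsense)  # equally "valid" under the axiom
```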
By the way, Top, please don't take this the wrong way, but would I be correct in assuming you've had little or no formal university-level education (or self-teaching thereof) in mathematics or computer science? It strikes me that many of these debates result from a fundamental gulf between you and your opponents' understanding of and appreciation for the role of mathematics and theory. I'm sure you're a kick-ass programmer - the issues you debate wouldn't be important to you if you weren't - but I get the impression that you turn your nose up at formal theory and mathematics as a defensive reaction to not having learned it. Reminds me a bit of some "old skool" auto mechanics I knew - sharp as tacks otherwise, and talented as hell in all the pragmatic spaces - who frequently hurled scathing abuse at mechanical engineering but would sometimes make fundamental blunders because of their lack of theoretical knowledge.
I have a degree in Computer Science and graduated at the top of my class. That being said, it was a regular State-U and they intentionally focused on practical issues instead of lots of theory. Also, I forgot most of the shit I learned there; and my minor was graphics/CADD and not compiler design (or "systems programming" as they called it). I pride myself in being a practical-minded person, and demand that academic approaches prove their worth on the street or at least in semi-realistic simulations. Playing with symbols is not necessarily the same as making the world function better, and some people hide behind them because they are not practical minded. DrCodd's work is an example of an idea that can be expressed mostly in simple, straightforward terms, but he was unfortunately not gifted in such presentation. It took others to do that part. -- top
Your degree may have been titled Computer Science, but it sounds like what has, in the past few years, been more commonly (and correctly) re-badged (at least in the UK) as Computing or Software Engineering. That explains much. Now then: I would appreciate an answer to the paragraph where I wrote, "Assuming "flag theory" is theory and not mere implementation detail, it would be helpful if you could indicate how "flag theory" works. ..."?
You seem to have a personal desire to paint me or view me in the worst possible light. If that is your goal, it is probably not possible for anything I do or say to change it. As far as "how it works", it is simple: it uses side flags. Types are side flags.
How is that painting or viewing you in the worst possible light??? You seem to do that all on your own. I'm simply, though indirectly, pointing out that you obviously don't have the background in TypeTheory to comment on it, let alone offer up a replacement. I'm not deprecating your Computing or Software Engineering background - these are legitimate academic subjects. Indeed, my own professional background is more in these than in formal computer science. Furthermore, I am not a type theorist. However, I have a great deal of respect for the work of type theorists and other theoreticians in computer science, and I respect the process that produces and validates such work because some of my work depends on it. You, in one sweep, try to offer up a supposed alternative with obviously not one iota of understanding of types, type systems, type theory, computer science, or the way theories are validated in general. The scary (and sad) thing is this: It's obvious, I am sure, to everyone reading this that you don't have a clue what you're on about. It's obvious to everyone, except, of course, you. The vast majority of computing specialists or software engineers, when encountering a field they don't yet understand, would make an attempt to understand it - at least enough to critique it at some appropriate level. It's abundantly obvious that you've made no such effort at all. To use an analogy: If we were all medical doctors instead of software engineers, programmers, and computer scientists, you'd be the guy who tries to convince us that muscles move not because of the formation of crossbridges between myosin and actin fibers etc., but because fleshy things pull on our bones. That would be the, uh, "Fleshy Things Theory," based on the observation that "Muscles are Fleshy Things". That's how ridiculous "types = side flags" looks to anyone with even a vague understanding of computer science.
Almost every other useful computer-science idea can be reduced to a simpler explanation. Perhaps the simpler explanation is not quite as thorough or lawyer-tight as the formal version, but at least it is reducible. That has yet to happen for types, and it is not my fault. You admitted that "type theorists don't give a damn about regular IT joes" (paraphrased). I don't have to accuse you of ignoring a large audience of actual type users; you flat-out admitted it. Your "sin" is there for all to see. You are chewing me out for trying to remedy that without offering a remedy of your own. It makes you come across as a do-nothing arrogant prick (although I've encountered much much ruder people here, so I have to give you credit for only being mildly rude). You complain that I am trying to help a dying patient without having a medical degree, but you yourself would let the patient die. That isn't logical.
- (response from the one who doesn't give a damn about IT joe): Types have been reduced for you, top, and more than once. "Types are abstractions - groups of properties intrinsic and extrinsic." "Types (in a type system) are that which an agent must relevantly distinguish to correctly perform its role." If you want even simpler and less accurate (due to basis in representation), there are: "Types are sets", "Types are categories", and (my favorite) "Types are predicates". I do enjoy languages where types of variables are arbitrary predicates over the values they hold. Anyhow, these are all very small, and very simple. They all explain much in few words without contradicting modern type theory. But they still don't seem to fit into your narrow little world-view. That fault lies with you and your stubborn arrogance. You refuse to even attempt to truly comprehend any reduction of types except your own. And you've targeted the wrong person with the 'don't give a damn about regular IT joes'. That'd be me you're paraphrasing, but I didn't write the above paragraphs to which you are responding. That said, I agree with them. I think the person who wrote the above would suggest you read the damn medical texts so you can use real cures (rather than snake-oil) to help the patient. I, myself, would suggest that there is no analog to a dying patient in the field of type theory.
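- The "types are predicates" reduction mentioned above is itself small enough to sketch. The following toy (invented names, not any particular language) treats a variable's type as an arbitrary predicate over the values it may hold, checked on assignment:

```python
def typed_var(predicate, name="var"):
    # A variable whose "type" IS the predicate: assignment succeeds
    # only for values satisfying it.
    box = {"value": None}
    def assign(v):
        if not predicate(v):
            raise TypeError("%r violates the type of %s" % (v, name))
        box["value"] = v
        return v
    return assign

even_int = typed_var(lambda v: isinstance(v, int) and v % 2 == 0, "even_int")
assert even_int(4) == 4      # satisfies the predicate
try:
    even_int(3)              # rejected: not an even integer
    assert False, "should have been rejected"
except TypeError:
    pass
```

Note there is no stored "flag" here at all; the type lives entirely in the predicate, which is the point of contrast with the flag model.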
- Re: "Types are sets", "Types are categories" - I find that rather useless because ANYTHING can be viewed as sets or categories. True, equivalency is available in anything Turing Complete if you try hard enough. But, like I keep saying, the flag model seems to explain more about how languages, compilers/interpreters in particular, "deal with" types. If I were going to explain to somebody how languages such as CF, PHP, Java, C, etc. deal with types, the diagrams I would draw would have "side flags" (or lack of when comparing). This is the common thread of the mental model I use. (Most of these language would also use a flag look-up DAG/tree of sorts, but DAG-lookup is an implementation technique.) Drawing them as sets would be messy and confusing.
- What I think you've been doing is successfully using the flag model to explain how type-related mechanisms in certain languages are implemented. You've done nothing to explain what types are. As I've mentioned before, it is quite common to use a type flag (which is usually called a "tag") to indicate the predetermined type of a value. Had the title of this page been TypesCanBeIndicatedWithSideFlags, no one would object, because it would be accurate. However, that doesn't mean a type is a side flag, it only means a type is indicated by a side flag. Similarly, humans are usually indicated by their names, but that doesn't mean humans are their names. What a type actually is, is a more abstract issue; one that is by no means adequately resolved by merely re-terming "type" as "side flag".
- For once, I almost agree with your characterization of the flag model. It is indeed a kind of "mechanical model" that does not really get into the higher-level philosophy of what types "are". This is because the concepts often depend on person-specific psychology. It is not a good idea to dictate how to think at that general a scale, because there may not be one right way to think at that level. Further, languages don't care about high-level psychology. Thus, I focused on how to understand the *details* of how languages deal with flags (minus DAG-hunt substitutions, etc., because that is even more specific). It is a middle ground between fluffy high-level philosophy and language-specific techniques. Note that typed languages don't have to use side-flags, for there may be other ways to achieve the same thing. However, they usually have a flag-based equivalent interpretation. It is a tool to understand or model how common languages work with types. It puts a face on the unseen workings of compilers/interpreters. If you want to rename this topic, how about ModelingTypesWithSideFlags?
- I'd be happier with the renaming. Don't say that types are flags or call it "Flag theory", and it'd sound far more practical and far less presumptuous. You could even call it TrackingTypesWithSideFlags?, which would be closer to what it seems you're really trying to do. I disagree with most else you say. Whether you like it or not, it's provable that thinking must ultimately follow certain 'correct' search paths to reach certain 'correct' conclusions within certain (well-defined) problems/puzzles... which are often studied in AI / Theorem Proving.
- Yes, but even flawed type systems/engines are *still* type systems/engines. It would be redundant for me to complain about trying to hard-wire quality into the definition; you already know where I stand on that. I agree that "TypeTheory" is a good quality-assurance technique, but I still disagree that it is about the *essence* of types. Perhaps none of us has captured the real essence. You seem focused on accurate theorem proving, and I seem focused on describing how interpreters/compilers (I/C's) process language statements. The real essence of "types" is not about accuracy and consistency (because psychology is not necessarily accurate or consistent), and not about building I/C's, which would limit the concept to I/C's.
- You first have to qualify as a type-system/engine to qualify as a flawed one, of course, so your first statement is a tautology. A TypeSystem or implementation might be shoddy or incapable of providing the exact support the programmer needs, but that doesn't change the nature of the types themselves. Your focus on implementations is a lot like saying: "Relational sucks because SQL is awful". Going on to talk about common interpreters/compilers is like saying: "Relational is bad because PHP's most popular implementation of SQL support lacks parameterized queries". It's foolish, but that foolishness simply wouldn't be recognized by someone who honestly thought they knew what they were talking about. You're looking in completely the wrong direction to even begin to understand types.
- You are the one concerned about "flawed type systems", not me. That is what I was complaining about: your bloated definition marries itself to too much. And, I am NOT focusing on implementation. Flag Theory is a model, not necessarily an implementation.
- And, as to your final statements: you don't know jack about the "true essence of types". The real essence of types is about distinguishing things, which is necessarily done based on properties. One can distinguish integers from real numbers, natural numbers from integers, prime numbers from natural numbers, and integers between 1 and 10 from the set of all integers. One can distinguish apples from bananas, fruits from vegetables, plants from animals, rockets from chairs. ALL of these things are 'types', and different programming languages provide for different subsets of this set of all possible types. Anyhow, the only reason to distinguish things is so you can act differently based on what you've distinguished, whether that be to throw an error, dispatch to different functions, choose different branch, alter plans, etc.. (Thus: types are that which an agent must relevantly distinguish to correctly fulfill its role). Perhaps you can't be convinced of this because you hate to be wrong and you hate even more for others to be right, so you keep repeating crap that essentially says: "well, if I'm wrong, then you must be too." But this is truth. If you don't want to accept it, I can't make you. Nonetheless, there it is.
- (moved reply below under "classification versus types")
- There is, indeed, only one right way of thinking for certain puzzles, yet often there exist abundant ways to arrive at a wrong solution, and (since brute-force is too slow) the problem in fields dealing with these is efficiently sorting the former from the latter. Formal languages are a fine example. Languages might not care about psychology, but they do force certain ways of thinking (search paths in lexer and parser, mechanisms for evaluation and execution) if you're to understand or obey them (or interpret them, or compile them). You might not like it, but, know what? the languages really don't care. Type theory, especially as applied over formal languages, is under similar constraints. Your ideals and aspirations to the contrary are blasted away by reality.
- At the language level, but such assumptions or philosophies may not be global to all types. As far as a title, how about TypesAndSideFlags?. "Tracking" is too specific IMO.
- TypesAndSideFlags is fine. And don't confuse "one right way of thinking" with "one truth". Even if a global truth can be reached by many 'right' ways of thinking, it's still the same, global truth. You can get to some truths over all types by starting in any of the accepted bases for types (sets, categories, predicates). You can get to other truths by focusing on communications and use of types and what means 'action', which also has many common bases (of which I favor the Actor model). These rabbit-holes are deep, but they only lead to one place. (...and you might lose your head...).
- I think the person who wrote the above would suggest you read the damn medical texts so you can use real cures (rather than snake-oil) to help the patient. As the person who wrote the above, I can confirm that you are correct. That is exactly what I would suggest. Furthermore, I would strongly discourage the further inappropriate application of leeches, er, I mean, "flag theory" as a cure-all for type-related illnesses. :)
The "real cures" require a large staff and expensive equipment, and are thus not practical in the village. The so-called "snake oil" is better than zilch to the village.
- Oh, please... Now you're just quibbling. As noted above, there are various simple, usable definitions of "type" that have actual explanatory value, and have more substance than merely introducing a synonym for "type". So, there are simple cures that are better than your "snake oil." "Flag theory" may be better than nothing (though I'm not sure how), but we don't have to settle for a mere "better than zilch," do we?
Yes, of course every useful computer science idea can be reduced to a simpler explanation. The accepted "simpler explanation" for "type" is that a type is a specification of a set of values and the operations that can be performed on those values. Your "flag theory" simpler explanation is that a "type" is a "side flag", with no further explanation. Thus, "type" and "side flag" are mere synonyms. It's not a simpler explanation, it's a non-explanation. By the way, there are at least two unsigned authors at work here - I did not admit that "type theorists don't give a damn about regular IT joes." That was someone else. And I'm not being rude in pointing out that as a software engineer and programmer who is not a type theorist - i.e., your stated audience - your "theory" is useless. Instead of reacting defensively, perhaps it would be worth reconsidering your theory, or at least expanding on it, for the sake of people like me in your intended audience.
As for your "how it works," it suggests that "flag theory" is as useless as my "diesel theory": Q: How does a Diesel engine work? A: It uses diesel fuel. Such an answer is utterly trivial, incomplete, unsatisfying, non-explanatory, and useless for any practical or theoretical purpose. So is "flag theory."
This is a poor analogy because it does NOT say "types are things that use types".
You're right, it is a poor analogy. I shall make a better one: Your explanation for "how it works" is as follows - "It is simple: it uses side flags. Types are side flags." Thus, a better analogy is to say "Diesel engine theory is simple: it uses a motor. Diesel engines are motors."
Inferred Types
Some suggest that flag theory does not take into account inferred types. For example, in a dynamic language we may do something like this:
var a, b;
...
a = 32.8;
b = 'my fleas have dogs';
Here the program "knows" that 32.8 is a number by using a parse tree. However, actually the quotes are the "side flag" here, and the lack of quotes (lack of an explicit flag) is also a kind of "flag", sort of like a Null. Lack of quotes *indicates* something. Similarly, there are usually other indicators for octal, hex, etc. Internally, a number will still have a flag. The lack of an indicator in the language text for decimal numbers is merely a shorthand for a flag, to save the humans time.
Programming languages are generally a hybrid between natural (spoken) languages and formal or computer languages. Thus, they inherit some of the obfuscation of natural languages. But for the most part, type indicators are there, just not in a "clean" way. If it starts with a digit 1-9, it is assumed to be an integer, unless it has a decimal point, in which case it is real. If it starts with zero, it is assumed octal; if it starts with an 'x', it is assumed to be a hex number; if it starts with a quote, assume string; if it starts with A-to-Z, assume it is a variable name; etc. The first non-blank character and potential decimal point serve as the "side flags". A clean/formal version of the same thing would have explicit flags, and the run-time engine or compiler usually explicitly has a flag so that it doesn't have to keep parsing over and over (which type-free languages often have to do).
- You're once again confusing syntactic indicators with types. The way you're saying it, numbers expressed in octal and hex are different types of numbers than those expressed in decimal... after all, they have different "side-flags" and "TypesAreSideFlags". You're going to have to get past that hurdle in your conception at some point, or at least in your expression of it. You haven't even approached where the type-flag 'Number' comes from. It seems you're arguing that 'Number' is given due to the (apparently arbitrary) rules in the parser. In your theory, is there any reason the parser -should- assign 'Number' instead of 'String' or 'Prime Integer' to 32.8?
- You are correct that hex and octal "indicators" are usually not retained as flags; the "numbers" are converted directly to binary values internally. A sloppy error on my part. (Although I think I know an exception...) But it does not change the basic argument. As far as what the compiler assigns to what, it is based on the rules of the textual side of the language. Compilers/interpreters generally convert a human-friendly form of the program (source) into a more machine-friendly form. The most important part of this is "atomizing" the items in the text; that is, finding the boundaries of the parts and packaging them in an easy-to-process form (from the machine's perspective).
- Slight alteration: although quotes around strings can be considered "flags", the decimal place should not because it is part of the value, and using it for type determination is "parsing", per above.
- Are you forgetting that decimal representation of a real or floating point number is just a representation, and not the value so represented? One could just as easily represent the same number as a fraction (164/5). The decimal place should not be considered part of the value.
There are languages that don't have flags and use situational context. But basically the operations are "on their own" in those languages. They parse to validate "types" for local usage (if we can even use the word "type"). An operator is more or less saying, "I've been requested to treat this as a number; let's parse it to make sure I can." They are called type-free languages because they don't have flags. Sometimes there are flags to indicate the difference between scalars and structures, but not between different kinds of scalars. Such languages are type-free at the scalar level, but not at the variable level. IINM, Perl is like this. So is ColdFusion. The difference between ColdFusion (CF) and PHP can be felt because in PHP sometimes the "type flag" ends up producing unexpected results, and you have to keep it in mind. In CF, if you do a "print(x)" (or its equivalent) and the result is "123", you always know that any other scalar that also produces "123" will be equivalent to x at that point. However, this is not the case with PHP, because its internal type flag may say it's a string or a number or whatnot. To test for equivalency, you also have to check its type in PHP. There is no test for scalar types in CF, except parsability. For example, isNumber() will tell you if something passes parsing as a number. It tells you how it can be interpreted, not what it "is".
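The CF-versus-PHP contrast described above can be mimicked in a small model. The tuple representation and function names here are invented for the sketch; it assumes (rather than demonstrates) the semantics described, since it is not actual PHP or CF code:

```python
# Flagged model (PHP-like): every value carries a type flag alongside it.
tagged_a = ("number", 123)
tagged_b = ("string", "123")

def display(tagged):
    return str(tagged[1])          # printing shows only the value

def strict_equal(x, y):
    return x == y                  # compares flag AND value

print(display(tagged_a), display(tagged_b))   # both print: 123
print(strict_equal(tagged_a, tagged_b))       # False - flags differ

# Flag-free model (CF-like scalars): every scalar IS its printed form,
# so identical printed output implies equivalence.
flagless_a, flagless_b = "123", "123"
print(flagless_a == flagless_b)               # True
```

This is the observable difference the paragraph above claims: in the flagged model, two values can print identically yet compare unequal; in the flag-free model, they cannot.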
Your example is hardly an interesting one. You didn't even touch such things as functions with implicit input and return types as you'd typically find in Haskell or ML. You focused 100% on the use of syntactic clues in identifying types of tokens, and didn't even start expanding into how the token affects the apparent type of 'a'. You should experience some implicitly and -strongly- typed languages (which are certainly NOT "called type-free languages"). You probably need more experience with a wide variety of -statically- typed languages, period, before you start coming up with your own pet theories. The most advanced research in typing is all in statically typed languages.
If it works well for languages that people actually use for pay instead of MentalMasturbation languages, then I have achieved a worthy goal: finding a coherent definition of "types" for the IT masses, and have no gaddam reason to be ashamed of that! Thank You, Thank You. -- top
- Lots of people used Visual Basic 6 for pay, and it has now died. Just because something is used by the masses (VB6) doesn't mean it has a future. Your potential Table Oriented Programming languages that you've come up with are just mental masturbation too; no one is using those specific languages for pay. In fact, people are using object-oriented languages for pay more so than any top-created language.
- The nature of my argument here does not depend on the popularity of my favorite languages/tools.
- But you like to call other people mental masturbators. It's NameCalling?. In fact you are a mental masturbator every time you think about a new top language that accesses tables easier than current languages... No one uses your top language, so it is only masturbation on your part, but some ideas from that language (and TutorialDee) might be useful in the future for implementing a real language. The point is: stop namecalling and telling others that they are mental masturbators, because you are one too if there is such a thing.
- Is this somehow connected to THIS topic? Use ObjectiveEvidenceAgainstTopDiscussion if you want to rant about my debate style in general.
[Aye, but therein lies the problem... As I wrote above, I can't see how your "side flag theory" works at all. A theory is a predictive or deductive tool. What can we deduce from your theory? What does it predict? Your definition of types aka "side flag theory" appears to say nothing more than "values have types." If types are side flags, and every value has a side flag, then every value has a type. This is a trivial result, and it does nothing to explain what a type is or how types work in a given type system. This is your side flag theory: given a set of side flags T and a set of values V, for every v that is an element of V there exists an ordered pair (v, t) such that t is an element of T. What can we use it for, so that we can say it "works well"?]
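Taken literally, the (v, t) formalization above does license one deduction: behavior may branch on the flag. A minimal sketch, with all names hypothetical:

```python
# Literal rendering of the formalization: every value is stored as an
# ordered pair (value, flag), and operations dispatch on the flag.
T = {"int", "string"}                 # the set of side flags

def make(value, flag):
    assert flag in T
    return (value, flag)              # the ordered pair (v, t)

def add(x, y):
    # Behavior branches on the flags, not on the values alone.
    if x[1] == "int" and y[1] == "int":
        return make(x[0] + y[0], "int")
    return make(str(x[0]) + str(y[0]), "string")   # else: concatenate

print(add(make(2, "int"), make(3, "int")))         # (5, 'int')
print(add(make(2, "int"), make("3", "string")))    # ('23', 'string')
```

Whether this counts as "explaining types" or merely as "describing tagged dispatch" is exactly the dispute on this page; the sketch itself is neutral on that.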
I've given examples of languages that don't have type flags and examples where type-flags are discarded by the compiler. Things with type flags behave differently from things without. By their fruits ye shall know them. The difference between PHP and CF's scalars is an example. As far as "works well", that is not the goal of this or most other comp.sci definitions, as already stated. I am attempting to define the existence of types, not necessarily quality. Perhaps the prior attempts bit off too much, and that is why they created a cryptic mess that most practitioners ignore and only a mathematician's mother could love. It makes sense to break stuff into pieces when they become too complex, and I am attempting to do this by separating existence and quality. If you want to continue on the quality path, then please justify it in a separate section first.
Nobody is arguing that type-flags don't allow for easy runtime type dispatching, and that this is a "different behavior"... but if that is your justification that there are different types, then I'd say we're closer to an agreement. After all, your words here can be restated as defining types as being that which elicits different behavior. You're thinking too narrowly with type-flags in particular. It'd be just as correct in CF to describe types in terms of: "can this parse as an integer or not?" or "can this parse as a list or not?", etc. As far as "works well", your theory is ultimately valueless if it doesn't, much like any other crackpot theory. You keep demanding others "justify their paths" without first making a convincing justification of your own. Written that essay yet?
There seems to be some confusion here. CF does not have "types", at least on the scalar level, because it has no scalar-level flag. Scalar variables act as if they are stored as strings and are parsed as integers or dates if operators require that. It does act as if it has a type flag to distinguish between scalars, arrays, etc. however. Thus, variables do have a type flag in CF, but it cannot (without hacking) be used for scalar identification (number, date, string, etc.)
The type of scalars in CF is string. The type of value the scalar represents depends on what sorts of things you can parse from it. If CF expects a string carrying a date, but gets a string carrying a bunch of garbage, this would be a typing error. No confusion here.
I would label that a "parsing error". Not necessarily a "typing error". If all scalars are (or act like) "strings", then the type flag for scalars is useless beyond knowing that something is a scalar. It no longer affects the outcome at the scalar level and thus might as well not exist, and may not. The interpreter does not need to know that info and makes no use of it. Observers of the language could at best conclude that "scalar variables act like a list of bytes on which the built-in operators for dates and numbers parse as strings of digits and dividers (punctuation)". The built-in operators do not appear to act on anything resembling a side type flag, and thus types don't exist in the language on the scalar level. No Flags == No Types. One is looking at the value and only the value.
If the intent of the programmer is that a certain function or procedure receive a string that represents a date, and said function or procedure receives a string that cannot be parsed as a date, that is a TypeSafety error, especially so if the function or procedure becomes undefined at that point. It is also a 'Type' error in the sense that you sent the wrong 'type' of scalar to the function or procedure. Whether the scalar parses as a date certainly does affect the outcome. It might do you well to recall that I reject your theory and definition that TypesAreSideFlags; that this makes the "flag for scalar useless beyond knowing whether something is a scalar" is something I believe of no relevance at all; you cannot use your definition to defend your definition, so stop trying. A sufficiently advanced TypeChecker? could guarantee the scalar is a string representing a date before passing it to the function... be it statically or dynamically... and it could probably do so without modifying the language.
Just because YOU call it a "type safety" error does not make it so. I call it a validation or parsing error. So there! (Although any error can perhaps be called a type-safety error if you convert or view EverythingIsa "type", which is certainly possible, but one can do that with many of the EverythingIsa items. Perhaps your definition is too wide to be useful.) Tell me, what is the precise difference between "validation" and "parsing" and type safety? Flag theory has an easier answer: if it involves checking a flag, then it is a type-related error; if it involves only parsing/analyzing the value itself, then it is not. Utility in simplicity.
And, I don't see any circular defining on my part.
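The flag-check/parse-check split proposed above can be sketched as two distinct failure modes. Both function names are invented for illustration:

```python
# Flag model's proposed split: an error from inspecting the flag is
# "type-related"; an error from analyzing only the value itself is not.
def flag_check(tagged, expected_flag):
    value, flag = tagged
    if flag != expected_flag:
        raise TypeError(f"expected {expected_flag}, got {flag}")  # flag mismatch
    return value

def parse_check(scalar):
    try:
        return float(scalar)   # looks only at the value; no flag involved
    except ValueError:
        raise ValueError("not parsable as a number")   # parse/validation failure

print(flag_check(("32.8", "string"), "string"))   # 32.8
print(parse_check("32.8"))                        # 32.8
```

The opposing position on this page would say both failures are TypeSafety errors, since both leave the next operation undefined; the sketch only shows that the two checks are mechanically distinguishable.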
It's a TypeSafety error by the definition of TypeSafety: any operation that results in an 'undefined' operation is a TypeSafety error. If you have a "parse error" and you don't know what to do next... i.e., the 'next' thing you do is undefined... then you've just experienced a human TypeSafety error. Period. It is possible to prevent all TypeSafety errors by making sure that every operation is well defined, but this doesn't make sense in all domains (e.g. assignment of a public-encryption-key to a variable intended to hold a date/time). Dynamically typed runtimes provide TypeSafety by aborting gracefully (e.g. with runtime exceptions) before performing undefined operations. Statically typed languages provide TypeSafety at compile time by mechanism of proof. TypeSafety is one form of validation, but can only be applied to invariant properties. TypeSafety CANNOT be applied to inputs (since you can't guarantee that input will be correctly typed... that's completely outside the control of the local program).
The definition of "type safety" should depend on the definition of "type", which we are not done with yet. Otherwise, you are attempting to use circular reasoning. Any "Undefined error" is too wide to be useful anyhow. Any look-up process in any structure that cannot find any match could be called an "undefined error" and thus a type safety error by that reckoning. "Employee number 1234 not found in database" is a type safety error? Not! As far as "what to do next", handling such situation should be up to the language or app. It may crash, display an error message, log the problem to a file and move on to the next input record, etc.
- The definition of TypeSafety has never been associated directly with the definition of 'type'. It is, rather, associated with the practice of types. The idea is that use of type-systems to validate programs will guarantee this property called "TypeSafety", and the name 'TypeSafety' was received in that context. There are other valuable properties that some type systems hope to guarantee, such as "Progress". That the name 'TypeSafety' happens to include the word 'Type' in it does not mean there is a dependency on the definition of 'Type'. Consider it a defined phrase, if you must.
- And it is not "undefined error", it is "undefined operation". An operation is "undefined" in a particular language when there is no evaluation or execution rule for it - no definition on how to perform the 'next step'. If you've defined what should be done, such as throwing an exception, then you have, indeed, defined what to do next... you have TypeSafety again. However, the TypeSafety provided explicitly by the programmer does not constitute TypeSafety provided implicitly within the language, which is what the TypeSafety of languages is measured by.
- As far as structure lookups: it is very common that lookups in a 'struct' or 'record' are handled as TypeSafety errors by modern languages. Those are structure lookups. Give that some thought. For the more generic data-structure 'map', the more common solutions are to return a special 'not-found' indicator, throw a runtime exception, or to add it to the map with a default value. Give some thought to what it would mean for none of these options or any other to be defined. Should the program crash? or log an error message? or send hatemail to the U.S. President? If it is undefined, it could be any or all of these.
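The three "not found" policies listed above for a generic map can each be shown in a couple of lines; Python's dict is used here as a stand-in for the generic 'map', and the choice among the three is exactly what would be "undefined" if the language specified none of them:

```python
from collections import defaultdict

m = {"a": 1}

# Policy 1: return a special 'not-found' indicator.
print(m.get("b"))            # None

# Policy 2: throw a runtime exception.
try:
    m["b"]
except KeyError:
    print("KeyError raised")

# Policy 3: add the key to the map with a default value.
d = defaultdict(int, m)
print(d["b"], dict(d))       # 0 {'a': 1, 'b': 0}
```

Each policy makes the lookup a defined operation; a language that specified none of them would leave the program's next step undefined, which is the TypeSafety failure described above.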
And "Flag Theory" may provide easy answers, but it is also snake-oil... it doesn't do anything for you... it just says: "flags appear! Hallelujah! It's a miracle! flags came from God and they just happen to represent something, but definitely not this 'Properties' concept that the heretics say, and you can check these flags, and supposedly they hold meaning (but that meaning definitely isn't necessarily properties, nosiree!)." It's utterly useless. Utility in simplicity, indeed.
It's been a useful mental model for me and has predictive value. If you don't find it useful, then so be it. You appear to be trying to stuff too much into the definition of "types", and are surprised or disappointed when others don't do the same. Your stuffage has made a bloated mess and greatly narrowed the audience. I bring K.I.S.S. And as far as "they appear!", the definition of "rocket" does NOT depend on the origin of the rocket, and it would be dumb to do otherwise. Flag theory does not need to put the source of the flags in the definition. It does not describe how to build a type engine any more than the definition of "rocket" tells you how to build a useful rocket. Those are specific techniques, and should be below the level of definition.
I don't put much at all into the definition of "types": just the idea of 'properties', both intrinsic and extrinsic. That isn't very much for me at least... I don't see how YOU find it so damn overwhelming. Etymology and origin of 'rocket' are, I agree, an independent issue from the concept represented in the definition of 'rocket'. However, your Flag Theory doesn't even make this 'definition' matter. One can flag ANYTHING with 'rocket' and it's okay in your Flag Theory BECAUSE you made it so damn 'simple' that 'rocket' isn't even required to refer to a definition. Flag theory is naive, simplistic, and useless, not 'simple'. As Simple As Possible, But No Simpler.... you've violated that last clause.
You keep saying it doesn't matter, but that does not make it so. Perhaps you just don't "get it". I don't know yet where you are getting mentally stuck. You seem to require that the definition define how types are actually mechanically checked, which I still claim is outside the scope of most definitions.
My statements make no claim about how types are mechanically checked. Types and TypeChecking are two different things. It'd be more accurate to say that types must be capable of being checked. How or whether an implementation bothers checking them is up to the implementation, and which types the implementation can recognize and check is dependent upon the language and type-system. Those are implementation details.
Programming Versus Math (from "Parsing versus Types")
Your functional viewpoint brings about some sticky philosophical observer problems, in which one may know by mathematical proof some calculation results ahead of actually "running" the program. However, in practice even functional programs must be "run", and at the start, "r" has not been calculated yet. For now, I will limit the definition scope to programming and *exclude* math. "r" may always have a value in a mathematical sense, but not in a programming sense.
You can't exclude math from programming. Half of programming is describing calculation, and the other half is describing communication. That's what computation is all about. And, by my own thoughts at least, 'x' has types long before the program is ever run (the type of item read from the file), and 'x' must have some fixed properties at least if 'guessType' is to be able to distinguish one 'x' from another. 'r' also has an invariant type defined by its properties (the type of 'r' being describable as 'a type-descriptor for 'x' based on its intrinsic properties, represented as a string'.)
Perhaps, but that does not necessarily mean they are the same thing.
They aren't the same thing. But they aren't divisible.
If they are not the same thing, then you cannot automatically assume that every property of math also applies to programming, even if they are related in some ways.
I do not make such an assumption. My statements on the deterministic nature of types does not rely upon programming having "every property of math". It relies only upon programming having certain properties of math. In the case of functional 'guessType', said "functional" property is the only property that is required... and it need only apply in exactly the case of the 'guessType' function. No other property is needed.
If and when "r" has a specific value, then a type flag can be said to exist. Make from that what you want. (Although, it is not part of the compiler/interpreter type system, but is rather a roll-your-own type system.)
It doesn't matter whether a 'type flag can be said to exist'. The value 'r' will possess is determined before 'r' is ever calculated. That's what I make of it. And since it doesn't matter, one can reject the notion that TypesAreSideFlags.
Huh? How is it determined before it is "calculated"? What is doing the determination, and how is it avoiding calculation? Prayer?
The value 'r' will possess is determined the moment 'x' was set. It just isn't known. You're confusing the words.
Determinable or determined? Those are two different issues. Every fricken program may be pre-determinable if something solves hard enough. But that is not useful information. If it's determinable to God but not mortals, then it matters not to mortals, and I am looking for a def useful to mortals. To me, you almost seem to be playing games, moving back and forth from concrete to philosophical as needed to dodge the issue. From a practical programmer's standpoint, until the MACHINE calculates "r", it is not considered "known".
Determined, adjective-form. Similar to 'determinable', except 'determined' is 100% and not dependent upon any contingency (like future action or sensory input) - we tend to call something 'determinable' even if it involves going out with a tape-measure. And you're wrong: not every program is determinable, even theoretically. I mentioned this above. There is a theoretical constraint: if you describe a big black box around a program, and draw lines going in for each of the inputs and lines going out for each of the outputs, it is not theoretically possible to determine the outputs before we first determine all the inputs. Now, God might or might not be able to determine the inputs. More relevantly, type-checking necessarily involves a computation that is limited over some finite 'box', cannot see into other 'black-boxes', and cannot (even theoretically) determine inputs or their timings.
And I do agree: until the machine calculates 'r', it is not "known". It's just determined. It would not necessarily be determined if 'guessType' was not functional, though. There is that caveat.
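The "determined but not known" point, with its purity caveat, can be sketched like so. `guess_type` here is a hypothetical stand-in for the 'guessType' function under discussion, assumed to be pure:

```python
# If guess_type is a pure function (same input -> same output, no side
# effects), then the value r will hold is fixed - "determined" - the
# moment x is set, even though the program does not yet "know" it.
def guess_type(x):
    return "number" if isinstance(x, (int, float)) else "string"

x = 32.8                    # the moment x is set, r's future value is fixed...
r = guess_type(x)           # ...but only now is it "known" to the program
assert r == guess_type(x)   # purity: recomputation cannot disagree
print(r)                    # number
```

If `guess_type` consulted a clock or a file, the caveat kicks in: r would no longer be determined by x alone.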
The existence of possible alternative special-case shortcuts/shorthand shouldn't really matter if they're only special cases (incidental or coincidental).
I'm missing your context, here. Special cases for what? There are a lot of possible referents in the above two paragraphs.
Classification Versus Types (continued from above discussion)
And, as to your final statements: you don't know jack about the "true essence of types". The real essence of types is about distinguishing things, which is necessarily done based on properties. One can distinguish integers from real numbers, natural numbers from integers, prime numbers from natural numbers, and integers between 1 and 10 from the set of all integers. One can distinguish apples from bananas, fruits from vegetables, plants from animals, rockets from chairs. ALL of these things are 'types', and different programming languages provide for different subsets of this set of all possible types. Anyhow, the only reason to distinguish things is so you can act differently based on what you've distinguished, whether that be to throw an error, dispatch to different functions, choose a different branch, alter plans, etc. (Thus: types are that which an agent must relevantly distinguish to correctly fulfill its role). Perhaps you can't be convinced of this because you hate to be wrong and you hate even more for others to be right, so you keep repeating crap that essentially says: "well, if I'm wrong, then you must be too." But this is truth. If you don't want to accept it, I can't make you. Nonetheless, there it is.
Please tell me what the precise difference is between "types" and "classifications and categories"? What people call "types" and what they call categories tend to be different. If they are the same, then let's roll them all up together. My observation is that what is called "types" tends to be "closer" to the variables or values they are used with, while classification systems focus more on labeling real-world nouns with limited regard to how they affect computation or programs. Classification is more of a global endeavor, while types are more of a programming aid, a programming tool, and/or a temporary situational aspect. This is one of the reasons I focus on I/C's with the def. (True, sometimes they are inter-related, and OO has sparked interest in dispatching based on global classifications {often a bad idea in my opinion}, but in general they are viewed differently.)
'classifications' and 'kinds' tend to refer to types of objects and types of types respectively, but they are still simply distinguishing 'things' based on arbitrary properties, and thus have all the properties of themselves being 'types'. When I speak of 'Category' in particular, I'm usually restricting the word to mean exactly 'Category' as it applies in 'CategoryTheory'. One can represent types in terms of categories much like one can represent types with sets or predicates, but that's a representation layer... the thing represented, the 'type', is not the representation.
On the apparent association between types and program-'variables': TypeTheory is not tied to programming except in the sense that programming is its #1 best application. The word 'type' doesn't merely refer to a programming aid. Rather it is the inverse: programming languages oft tie themselves to TypeTheory by implementing certain type systems in order to provide certain guarantees of 'safety' or 'progress' or 'correctness', or even just a more advanced syntax (e.g. with polymorphic functions and type-dispatching and implicit conversions). One mechanism programs use to associate components with types is to ascribe a type to a cell or variable. The manifest ascription becomes a contract between the programmer and the language-compiler/interpreter saying that any value or object placed into that cell or variable must and will possess the ascribed type (possibly forcing an implicit conversion) and thus all the properties associated with that type. Knowing these properties, having this guarantee, makes reasoning about program correctness much easier. (It's useful to recognize for your modeling that it is the value or object referenced by the variable that actually meets or 'possesses' the type, not the variable itself. The variable itself already has a type: 'variable' or 'cell'.) Types of variables aren't quite so useful in languages that attempt to KillMutableState, or at least marginalize it, like Haskell; there is less need to make the guarantees at that level when nothing is going to ever change after the initial assignment. One could, for example, use assertions to represent types in such a language as an alternative to even once attaching a type to a variable.
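The variable-as-contract idea above can be sketched in a few lines. This is a hedged illustration in Python (chosen only for brevity); 'TypedCell' and its methods are invented names, not any real language's mechanism. Note that it is the value placed in the cell that possesses or fails to possess the type, while the cell merely enforces the contract.

```python
# Toy sketch of a variable with an ascribed type: the cell enforces the
# contract, but the *value* is what possesses (or lacks) the type.
class TypedCell:
    def __init__(self, ascribed_type):
        self.ascribed_type = ascribed_type   # the contract
        self.value = None

    def assign(self, value):
        if not isinstance(value, self.ascribed_type):
            raise TypeError("%r does not possess type %s"
                            % (value, self.ascribed_type.__name__))
        self.value = value

x = TypedCell(int)
x.assign(42)              # 42 possesses 'int'; the contract is satisfied
try:
    x.assign("oops")      # a string does not possess 'int'; contract violated
except TypeError as e:
    print(e)
```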
Programs can also utilize TypeTheory to reason about program correctness, often representing types as Hoare-Logic style contracts that guarantee that a particular block of code has side-effects with particular properties. This sort of technique exists, for example, in Erlang, though it hasn't been explored deeply. Many of the type-theory books I'm reading focus on such things as structural types, which deal with resource management: guaranteeing that a file-object meets the extrinsic property of never being used after a 'close' operation, seeking properties over regions of memory to learn whether memory can be freed immediately rather than waiting on the garbage-collector, etc. Suffice to say: the use of Types in programming already extends far beyond the narrow application of 'variable', and TypeTheory attempts to push it further yet.
But TypeTheory itself is more than just 'Type', which is the relatively simple idea over which 'TypeTheory' chooses to reason. And the concept represented by 'type' IS simple. Types are described in terms of their necessary properties. TypesAreAbstractionsOfReality?. Types in any system are, in the end, those things capable of being distinguished which one chooses to relevantly distinguish.
This does not really address the observations of "in practice" usage raised above. You could argue that the "in practice" usage is "wrong", but definitions are shaped by actual usage at least as much as an authority, for good or bad.
Concepts transcend definitions. A definition attempts to capture a concept, but doesn't always succeed. If you wish to understand types, you must understand them as a concept, by their form and function and nature... not merely as a sequence of English words glued together. You seem to think that operational definitions for a concept ARE the concept. For better or worse, this isn't true.
And "in practice" isn't a foundation for theory. "In practice", there are always engineering constraints. For example, "in practice", Relational in its defined form is barely alive: I've rarely seen a so-called RDBMS that enforces finite sets, much less supports transfinite and uncountable sets. Sort of hurts the whole 'a relation is a set' thing. Nonetheless, it would not be correct to say that "Relational" is not capable of or does not encompass these things simply because you don't see them "in practice".
Similarly, the fruits of the frontiers in TypeTheory mentioned above can currently be found in various toy/research languages. Admittedly they haven't entered mainstream languages or common practice, and probably will not for very many years - due to codebase inertia, it's very difficult for new languages to compete with the behemoths, much less gain the needed man-hours on optimizing compilers. The fact that most of the advanced type-systems are computationally undecidable in some cases doesn't help either. Nonetheless, it would not be correct to say that types do not envelop these things, that they are strictly limited to their apparent use "in practice".
They are decent *implementation* techniques. No argument there. But implementation is not necessarily definition.
Anyhow, while not everything I listed directly pertained to practice, much of it still applies even in common languages (e.g. distinguishing type ascribed to a reference (e.g. variable name or identifier) from the type of the referent (the object or value so referenced)). Don't dismiss it all simply because some things are more airy than you'd prefer.
It does not fit common usage of "types". It is a legitimate complaint. Volume of writing and volume of implementation details by itself does not qualify something as a definition. It's not a good definition. "Types" usually refers to something narrow or context-specific, and is thus NOT interchangeable with "classification system", which often can and should handle multiple overlapping views.
One can talk of types in Haskell, or types in C++, or dynamic types in Ruby. However, if one is to well define 'type' in the context of programming languages, one needs to speak of the word's usage both across the programming languages and within the field of type theory, not just the languages favored by programmers who each know only a handful, generally the most popular ones. Your assertion about the limited extent of types comes largely from your lack of personal, anecdotal, and vicarious experience with the less popular programming languages possessing type-systems that provide greater extent.
The type-flag model works on them also. The difference is implementation details. Further, definitions *are* shaped by common experience. A weird niche or lab language may do interesting things with types, but that does not mean it gets to influence a definition.
- (Hefts up cubicle wall. SLAM! ... SLAM! SLAM! SLAM! Huff... huff... huff... SLAM! SLAM!) I am hitting the person who asked for it, right?
- Informal definitions are shaped by informal experience and informal study. They do -shape- the formal definitions, of course, but (like it or not) it's those niche and lab languages that fill the corner-cases and provide for crisp, formal definitions and models. That's science for you.
- Formal definitions can be bad also. Just because it's formal does not necessarily make it good or right. To quote MisuseOfMath: "the quantity of things that can be surmised or derived based on root axioms alone does not necessarily reflect the quality of the base axioms themselves".
- [How do you know whether the formal definitions in type theory are good or bad, when you don't understand them?]
- If you cannot explain it with clear English and some examples, then it's likely farked up. People were using types long before so-called "type theory", and they didn't have to do mental quantum theory to create and use them.
- People knew that different types of fruit (apples and oranges) were different types of fruit, before knowing anything about "side flags". They didn't need "side flags" to understand types, and they won't need side flags ever, to understand types. TypesAreNotSideFlags.
- Side flags/tags were meant for the domain/scope of programming languages, not everything. And just because people have a notion of something doesn't make it a clear or objective concept. The existence of a concept is not the same as the existence of rigor in a concept.
- But what came before programming is a common understanding of what "types" were. People knew what types were when they looked at a fruit bowl and saw different types of fruit. When we became programmers we took this concept of types and applied it to programming. Types in programming didn't just pop out of nowhere. It's not like someone invented "type" only after programming existed; types existed way before programming came to be. Do you even understand what a type is in the real world? Would you mark off apples and oranges with different flags? Aren't the flags just the words we use to describe apples and oranges? This is an apple with an apple flag, this is an orange with an orange flag. The flag is the word describing the fruit.
[Ah yes, the "type-flag model," that comprehensive and eloquent description of type systems that explains everything in a single sentence: "Values have types." Brilliant insight, that.]
Incorrect. Values do not necessarily have "types". Types are separate from the value(s). And, the alternative definitions stink even more, partly because they resemble implementation techniques. In other words, they focus on "how" instead of "what". DontComplainWithoutAlternatives.
Technically, you can't bridge the concepts of signal and value (fundamentals of communication and calculation) without the concepts of types and representation. And values do have types... e.g. three is odd, prime, integral, natural, and between one and ten. However, one might reasonably argue that these types aren't relevant or 'extant' until they can be found as represented within a type-system.
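The claim above that a single value possesses many types at once ('three is odd, prime, integral, natural, and between one and ten') can be illustrated with a tiny types-as-predicates sketch. Python is used here purely for illustration; the predicate names come from the page's own examples.

```python
# Types as predicates: each 'type' is a property a value either
# possesses or does not, independent of any flag attached to it.
TYPES = {
    "odd":              lambda n: n % 2 == 1,
    "prime":            lambda n: n > 1 and all(n % d for d in range(2, n)),
    "natural":          lambda n: n >= 0,
    "between 1 and 10": lambda n: 1 <= n <= 10,
}

def types_of(n):
    """Every type (from this little universe) that the value n possesses."""
    return {name for name, pred in TYPES.items() if pred(n)}

print(types_of(3))  # the value three possesses all four types at once
```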
As far as alternatives: stop saying shit that ain't true, top. You don't understand the alternatives, you don't WANT to understand the alternatives, and your personal 'alternative' is the one associated far more closely with 'implementation' details than any formal type model I've ever worked with. You've blinded yourself into thinking it actually constitutes a type theory - that you're doing something right, therefore everyone else must be wrong. I've pointed you towards real theory, but you don't care to look. I've offered you explanation, but you don't want to hear. OF COURSE your mind won't be changed. I can't help you anymore. Your theory or definition of types is completely unconvincing to an expert in this field, and I doubt that any incremental modifications you make will change that.
The flag model works in most cases that one actually encounters via more or less random sampling. Academic elitists don't always get to control definitions. I'm just the messenger.
[How can the flag model be claimed to work "in most cases" when the "flag model" has no theorems or methods by which we can test it on even a single case? It consists of a single definition, that types are side flags. Then it ends. We apply this so-called model... How???]
- What is the pass/fail (or score) algorithm for testing the other definitions? I keep asking for it, but I only get vague words. DontComplainWithoutAlternatives.
- A definition is a description of a concept represented by one word via use of other words. A definition is valid if it allows for both correct identification and modeling of concept-instances (or, more loosely, analysis and synthesis). The requirements on identification necessitate that a definition covers neither more nor less than the concept... i.e. "a dog is an animal" is wrong because some animals aren't dogs. Similarly, "a dog is a Golden Retriever, a Bloodhound, or a Chihuahua" is wrong because people classify as "dog" many things that they don't classify as retrievers, bloodhounds, or chihuahuas. And, finally, "types are side flags" is wrong because not all side-flags are types and because the vast majority of things people in our field call 'types' have no association at all with 'side-flags'. Here's a little extra for you: A definition has the property of being vague insofar as there exist borderline cases in identification. A definition has the property of being ambiguous insofar as it might describe more than one referent (concept) based on varying interpretation of its constituent words. A formal definition will neither be vague nor ambiguous when applied within its intended domain. Any theoretical definition ought to be a formal one, for the properties of vagueness and ambiguity make it difficult to derive information - such information will often be even more vague and even more ambiguous. Understand it this time, and study what you must elsewhere on this wiki and beyond it... it is not our job to spoon-feed you knowledge.
- Where is the Perfect Definition then? Sure, if you call X a foo and Y a bar, then you may be able to do nice foo-bar math with them, but it's the assignment (interpretation) that requires subjective interpretation.
- Objective interpretation is quite capable of doing the job. As far as "perfect" definition: for me, this would mean any definition that is both formal (non-ambiguous, non-vague) and as simple as possible but no simpler (covering every case, including the corner cases, without covering more than the concept demands). There are many such definitions... and those least likely to raise objection from someone as contrary as you are those that can be found in fields you've actually studied: e.g. the formal definition of 'Graph', the definitions for 'sphere' and 'cube' and 'prism', etc.
- "Objective" and "interpretation" tend to be at odds. In practice, no existing cube is a perfect cube. It can only be identified by being within established parameters. And with graphs, there is often room for disagreement about what is part of the node and what is external to a node. People tend to have a favorite interpretation/translation pattern, all of which may be essentially equivalent.
- I, personally, have yet to even figure out how to write a subjective interpreter. I haven't even tried. I fail to see how these are at odds. And in practice, there are plenty of perfect cubes... you just need to stop limiting yourself to the practice of crafts.
- Any TypeSystem used for purpose of TypeSafety, dispatching, or ANY purpose at all will need to have formal definitions for every type ever used. These will need to fulfill every property of formal definitions: not vague, not ambiguous, provides correct identification, and provides correct modeling.
- Yes yes, but that is an IMPLEMENTATION DETAIL. You hard-wired implementation specifics and techniques into your def, making it a fat bloated mess. A definition of "rocket" should NOT dictate how to stabilize rockets.
- You're right that a definition of "rocket" should NOT dictate how to stabilize rockets. I never implied it did; any definition of "rocket" should also cover the unstable ones. However, you've also (correctly) implied that "rocket" DOES need a definition. This definition, be it operational or imperfect or not, will allow people (or machines) to both identify rockets and have some idea of what rockets do. Stop overlooking that simple truth. If you're using a Type system for programmatic type safety, that definition will need to be formal. Outside of the domain of programming computers, I'll grant that the definition can be less formal.
- The model may have to be formal but the definition does not need to be. We could make a formal model of "country music" that is defined via an algorithm, but that does not mean the definition of country music itself is formal. The thresholds in our detection algorithm may be somewhat arbitrary. (One could call the algorithm criteria/rules a "working definition".) As far as "some idea of what X's do", you had a valid point and I adjusted the definition to correct it. I clarified that types (flags) affect the interpretation or usage of values. There must be a relationship between the flag and value. However, the details of this relationship are lang/domain-specific. Types don't even have to be related to other types (such as type substitution). Substitutability is an optional feature, NOT the essence of types.
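- The "working definition" point above can be made concrete: the detector below is a formal algorithm even though its thresholds are arbitrary. Everything here (feature names, cutoffs) is invented purely for illustration.

```python
# A toy 'working definition' of country music: formal in the sense of
# being an unambiguous algorithm, yet resting on arbitrary thresholds.
THRESHOLDS = {"twang": 0.6, "steel_guitar": 0.3}  # invented cutoffs

def is_country(song_features):
    """song_features: dict of feature-name -> score in [0, 1]."""
    return all(song_features.get(feature, 0.0) >= cutoff
               for feature, cutoff in THRESHOLDS.items())

print(is_country({"twang": 0.9, "steel_guitar": 0.5}))  # passes both cutoffs
print(is_country({"twang": 0.9}))                       # fails: no steel guitar
```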
- Actually, if you make a formal model of "country music", then you necessarily have a formal operational definition. And type-substitution is always correct when one is going from a more-specific type to a strictly more-general type (e.g. 'abstract-syntax-tree' to 'tree'). It is not an 'optional feature'... it's more a 'natural property' of types based on the fact that some definitions wholly encompass others.
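- The specific-to-general substitution described above can be sketched as follows; the class names echo the 'abstract-syntax-tree' to 'tree' example, and the method bodies are placeholders, not a real implementation.

```python
# Substituting a more-specific type where a strictly more-general one is
# expected is always safe: everything a Tree promises, an
# AbstractSyntaxTree also provides.
class Tree:
    def count_nodes(self):
        return 1

class AbstractSyntaxTree(Tree):
    def evaluate(self):
        return 0  # placeholder: a real AST would evaluate its expression

def size(t):
    # expects any Tree; an AbstractSyntaxTree qualifies because its
    # definition wholly encompasses Tree's
    return t.count_nodes()

print(size(Tree()), size(AbstractSyntaxTree()))  # both calls succeed
```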
- I already agreed with "working definition". We are looking for a common definition because any bloke can make a working definition for their own purposes. As described in QuestionsForTypeDefinitions, substitution is not necessary to have types.
- The 'modeling' half is the part described when people informally say: 'types are associated with the set of operations one can perform'. The 'identification' half is the part described when people focus upon 'type-structure as syntactic discipline'. Both of these aspects will always be united in a correct definition. And, (whereas I focus on the fact that you've already gone wrong on a fundamental, conceptual level), this is where the other speaker on the page sees problem with your type-definition and model: it does not provide any value at all with regards to modeling (what the side-flags do). On the identification-side 'TypesAreSideFlags' is easy enough: it has a side-flag, and that, therefore, MUST be the type. Only problem is that this identification violates the concept of type as accepted by the programming community at large: according to you, you can attach side-flag "rocket" to something that is clearly a "chair" by anyone's reckoning, and "rocket", therefore, MUST be the type, and everyone else is wrong.
- In fact, that seems to be your attitude at large. You thumb your nose at those who actually study the material they discuss, call them "Academic Elitists" or "lemmings", call yourself "the messenger", and pretend you're somehow a bigshot genius who can bring "Simplicity" to the world by renaming it. To criticize bluntly: TypesAreSideFlags is a simplistic, unusable, useless model whose current existence is wholly substantiated upon your ego. Anytime you see a hard question, you wave your hands and call it "implementation" details or invoke "TuringEquivalency" - the dance of the sophist. I would like for you to take a few hours, open your mind, let that ego vanish, emotionally detach from this model, and give it serious consideration as to what it is modeling and what, exactly, one is supposed to learn or derive from it in real use-cases - much like Newton's model for moving bodies (now known as Newton's "Laws"), weather models with wind and temperature input, mathematical models for trigonometry or calculus word-problems, etc. If you can't find anything, dismiss the notion that 'TypesAreSideFlags' is a model at all, then focus upon what the definition implies and means by trying to apply it in weird, lateral manners and seeing if it breaks. Do this for any attempt at definition... it's a good idea.
- I already described why it's a useful model and I stand by that. And, some even seemed to agree that it can be used to explain certain features of certain languages. Quote: "What I think you've been doing is successfully using the flag model to explain how type-related mechanisms in certain languages are implemented. You've done nothing to explain what types are." (Emph. added. I disagreed with the implementation statement; it's a mental model that may or may not reflect actual implementation.) I don't claim it a perfect model, I only claim it useful. In a (failed) attempt at mathematical perfection, others have created a Frankendefinition known as "type theory", which probably does not even qualify as a definition, but is rather a technique to implement and check type systems.
- I've already described why you're in error on your claim. And you've done nothing at all to convince me otherwise. You're also wrong about your immediate claims. Flag model doesn't explain "certain features of certain languages", and the quote you've grabbed doesn't support you on this. It's completely irrational to dismiss the "how [...] are implemented" just because you disagree with it. Side-flags are useful as one possible mechanism (among many) for implementing certain features of certain languages. But you'd still need to explain those features. Finally, 'flag model' isn't even usable as a 'mental model' for types themselves; at very best, it's slightly usable as a 'mental model' for the processing of types, though it is hardly correct on the theoretical level because it DOES focus on the mechanics common to some systems rather than the provably necessary computation.
- Please elaborate on "provably necessary computation".
- Any application of type-theory requires computation. Some of that computation is necessary, in the sense that it can be proven that not performing the computation is a sufficient condition for having not fulfilled the application. E.g. in application of TypeSafety, it can be proven that the prover must be capable of computing a contradiction between specified type and actual type wherever these contradictions exist. The mechanics of this computation are relevant to its implementation, but irrelevant to both the theory and the desired TypeSafety property. For example, the prover might or might not choose to cache calculated types in a cell alongside variables; it could alternatively, for example, perform what you're calling "validation but not types" each and every time, and still both get the job done.
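- The "necessary computation" claim above can be sketched: however the prover stores or recomputes types (cached in a cell alongside variables, or re-derived every time), it must be able to compute a contradiction between the specified and actual types wherever one exists. A hedged Python sketch, using subclassing merely as a stand-in for the subtype relation:

```python
# The essential computation of a type-safety check: detect any
# contradiction between a specified type and the actual type.
def check(specified, actual):
    """Return None if 'actual' satisfies 'specified', else the contradiction."""
    if issubclass(actual, specified):
        return None
    return "expected %s, got %s" % (specified.__name__, actual.__name__)

print(check(int, int))    # None: no contradiction exists
print(check(int, str))    # the prover has computed a contradiction
```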
- But "the job" may not be about types. And TypeSafety should not be prerequisite of the definition of "types" anymore than stability should be part of the definition of rockets.
- TypeSafety isn't a prerequisite for the definition of types; it's a property of a program that performs no undefined operations. Perhaps you're still confused as to how they are related. Types, themselves, by their own nature, have the quality of being correctly or incorrectly applied. For example, every rocket is an object, but there are certain criteria an object must meet to be called a 'rocket' (even though being stable and being able to reach space are not among those criteria). Frogs and chairs and evolution and bicycle-races and marathons of Criminal Intent aren't rockets because they don't meet the criteria (or even get anywhere 'near' meeting them in any fuzzy sense) that determine 'rocket-hood'. Flags and names and indicators have nothing to do with it; even a Rocket Frog is not a rocket (though a FROG-7 is). This is true for every type. For example, by the nature of types alone, a child is simply wrong to hold Trevor the toad and call him a rocket. However, that doesn't mean the child can't express an intent to, say, launch Trevor. TypeSafety gets involved in preventing the attempt. I.e. whereas Types admit to the possibility of you being wrong, they don't at all protect you from being wrong. That is the job of type safety.
- Now, the mechanics of determining that 'launch(Trevor)' isn't defined is an implementation detail, and deciding what to do once it is determined that 'launch(Trevor)' doesn't make sense is a language-design issue. One resolution, for example, is to halt a compile and report an error to the programmer (a rather sane decision that does the job of preventing the attempt). Alternatively, one could attempt to strap Trevor to a rocket, and launch that... perhaps transforming the intent to 'launch( strap-to-rocket( Trevor, obtain-rocket( new plan( suggest look-nearby, suggest look-in-home, suggest buy-from-market))))'. And, again alternatively, one could focus on the other word 'launch' and try to change that... perhaps change it to 'throw-like-baseball(Trevor)'. For purpose of TypeSafety, it isn't relevant how these issues are resolved so long as they are; the 'TypeSafety' property is achieved the moment there are no undefined operations... even if the solution completely irritates the programmers.
- You are talking about general classification systems and business rules, not so much "types". Types tend to be local and situational. When people say, "I don't like that type of person", they are usually focusing on a specific trait, such as being boisterous, or whatnot. A classification system may be used in, say, a grocery store management system to determine things such as perishables that must be sold by their expiration date by law. Classifications and business rules like, "cancel the sale if this item is a member of Perishable and a member of Regulated and past its expiration date" are generally *not* considered "types". Or, "issue warning if kid tries to launch an animal (a thing that is a member of Animal)". That is a combination of classification databases, business rules, and validation. You are painting with a brush too wide to indicate anything useful.
- You keep insisting that classifications and types must be fundamentally different, but they aren't. If one wanted to write a type-safety system that systematically recognized such things as expiration-dates and regulations, and guaranteed that they were handled in certain ways, then one may do so. The business rules, however, are not types; the types are seen at the decision points: boisterous, perishable, expired, unexpired, regulated, unregulated, animal, kid, etc., but types are not the resulting decisions (boisterous type of person => I don't like)... at least not until the moment you make a decision based on a previous decision (type of person I don't like => I glare). There is nothing at all preventing one object or one value from fulfilling a great many different types (boisterous, tall, old). And, despite the fact that you think the 'brush is too broad', that my definition of 'types' simply covers 'too much' to be useful, I assure you that it does not. There's plenty of fine detail to see once you start looking inside the domain of types and start thinking about types-of-types: classifying types themselves based on usefully distinguished properties.
- So, while it is the case that types are tied to quality by nature (rightness and wrongness), there are many areas of quality that are wholly independent of types. One example is the quality of the mechanism used to determine whether types are right or wrong - a system with false positives or false negatives or that sometimes can't make a decision is of worse theoretical quality than one that always makes the correct decision, and one that is fast, small, stable, and simple has better mechanical quality than one that is slow, bloated, and prone to crashing. Another example of quality independent of types is the quality of the language design and resolution mechanism used when a type-safety violation is found. This independence is why I don't much respond to your notes that some systems truncate reals to ints and such... those are not the qualities that determine or define 'types', or even type-systems.
[The "side flag" theory or model or whatever is completely unconvincing to a non-expert in the field, too. If it helps the author of "side flags" to
personally understand some aspect of type systems, that's all well and good, but don't confuse that with a theory or a model. "Side flags" are to type theory what bushels of apples (two apples plus two apples are one, two, three... FOUR apples!) are to number theory. (Please do
not follow this up with a N
umbersAreApples page.)]
- You did a survey? Can I inspect the papers? And you keep confusing "definition" and "theory". Further, types were readily used long before so-called "type theory". You give that toilet roll too damned much credit.
- Is a toilet roll the only paper you inspected when studying type theory? No wonder your understanding of the subject is so... shitty.
Those in glass houses should not throw rocks. "Guaranteeing proper type substitution" is an *implementation* concern. Types don't have to be bulletproof any more than a "rocket" has to reach space to be called a "rocket", as described above.
[What glass house? I don't have to be a type theory expert to know when a so-called "theory" is utterly bogus, and I certainly know enough about the subject to get my work done. Now then: How, pray tell, does the phrase "values have side flags" - which is the sum and total of your theory - guarantee proper type substitution? What do you do with the "side flags"? Pass them through a Side Flag Processor that works like this: Input -> *magic happens here* -> Output?]
Again, you are flat wrong. Values are NOT required to have side-flags. As far as "guarantee proper type substitution", as already stated with the rocket analogy, quality control is NOT the purpose of most definitions. A bad type system is still a type system. You've been brainwashed by MisuseOfMath into mixing QC in with it. If a poorly-designed language truncates a Real to make an Integer when parameter passing, it may be bad practice, but it is still a type system. I've asked for justification for mixing QC into the def, and I get shit from you guys. Your argument-from-authority is getting tiring. I find it strange and frustrating that one would militantly hold such an unconventional and unjustified position.
You're blind. Over and over and over you keep implying that quality is intrinsic to the nature of types, but you don't even notice yourself doing it. Open your eyes. Tell me: what properties must a thing possess to correctly be called a "rocket", given that reaching space is not among them? Are 'Real's always things that a language can 'truncate' to make an 'Integer'? What properties must something have to correctly be called an "Integer"? If a type-system applies the side-flag "Real" (which is intended to mean the math Real type) to a Relation-variable, is the type-system "wrong" or "bad"? These are all qualitative questions.
Re: "Are 'Real's always things that a language can 'truncate' to make an 'Integer'?" - Again, that's a language or domain-specific question. Those issues are below the level of a definition of types. This should be clear to any rational person. (I know common tools that do truncate, by the way.) Rocket Stabilization Theory is NOT a definition of rockets.
Sure, the issue of 'Real' in particular is a language-specific question. However, the meta-issue is NOT below the level of a definition of types. Any type in any type-system will itself be characterized by a definition. This definition identifies instances of that type by their properties and allows modeling of those instances. A definition of 'types' needs to allow one to both identify and model instances of types, e.g. things like 'Real' and 'Integer' are instances of 'types'. Since every type in a type-system will always come with a definition that allows one to both identify and know what one can do with an instance of that type, this common property should be implied by any definition of 'types'. (Oh, and stop putting up straw-men. I've said nothing that implies rocket stabilization theory is part of the definition of rockets. To the contrary, I'd expect you to have a definition of 'rocket' before you can even tell me what you're trying to stabilize.)
Exactly, and "type theory" fails to provide that, focusing on stabilization first. You keep accusing the flag model of looking like implementation, but type theory is doing just that. Types are NOT about formal and provably correct classification/categorization systems. It may be a good model for *making* clean type systems, but it ignores the whole space of possible dirty but usable type systems that most IT professionals would agree are type systems even if they have rough spots. It focuses on classification techniques and backs into types instead of starting with the essence of types themselves.
Saying that 'type theory' is about stabilization is like saying that 'relational' is about storing business information. Type safety is one application of type theory - not the whole, conceptual "focus" of it, but a rather profitable use for it. Type theory focuses on learning what can be learned about types; each individual type-theory only starts by providing an operational definition for 'type' (the most common being representation as sets, categories, or predicates). From there, it moves forward into studying how types can be applied, properties of types with certain properties, etc. A type-theory is a theory in the mathematics sense. There are type-theories that are more based in natural language and linguistics and all the fuzzy stuff that comes with that... but, beyond use in AIs, you just won't see much of these in the domain of modern computer programming because these 'fuzzy' type-theories have poor properties for computation - it's hard enough to perform fuzzy calculations; when you still can't be sure of anything even after performing them, it's even harder to justify having done them.
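The "types as predicates" operational definition mentioned above can be sketched concretely. Everything below is illustrative - the predicate names are invented for this example, and a finite sample stands in for what a real theory would prove:

```python
# A minimal sketch of the "types as predicates" view: a type is a
# function from values to booleans, and membership is just application.
# All names here are illustrative, not from any formal system.

def is_integer(v):
    return isinstance(v, int)

def is_positive(v):
    return isinstance(v, int) and v > 0

# Subtyping falls out as implication: every value satisfying the
# narrower predicate also satisfies the wider one.
def is_subtype(narrow, wide, sample_values):
    # Checked only over a finite sample; a real theory proves this.
    return all(wide(v) for v in sample_values if narrow(v))

print(is_integer(3))       # True
print(is_positive(-2))     # False
print(is_subtype(is_positive, is_integer, range(-5, 6)))  # True
```

Note that nothing in this sketch mentions flags or storage at all - the "type" is the predicate itself, which is the point of contention on this page.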
And while types aren't required to be formal, they ARE about 'correct' classification. No matter what you try, top, you cannot name a single type that is NOT about a classification of objects or behaviors or values or phenomena.
Since "classification" is just shy of TuringComplete, it can be just about anything you want it to be, like a Star Trek monster that reads minds and shape-shifts to fit your thoughts. Plus, it begs the messy question, "what is classification"?
Stop pretending you have knowledge that you so very obviously lack. General problems in types and classification are not "just shy of TuringComplete". These problems are fully TuringComplete (and undecidable). Some programming language designers often go to great effort to ensure that the type-inference for their language is among the set of decidable ones (since it's nice to know your type checker won't start looping forever), but there are undecidable, TuringComplete classification systems in common practice even today (e.g. the C++ templates system).
Which are implementation details and quality issues, which should not be part of the definition. (Why do I need to repeat this? You should have anticipated this complaint.) You have even strengthened my point: if TT is TC, then it can be everything and anything, and is thus a useless tautology of a definition.
You feel the need to repeat falsehoods because it boosts your ego to make a stand on hot air alone. Being 'TuringComplete' is not part of the definition of classification problems any more than being 'NP-Complete' is part of the definition for the Traveling Salesman problem. These are derived truths. And you still have an effin' shitty understanding of computation and expression if you think simply being TuringComplete "means it can be anything and everything". That must derive from your foolish hand-waving notion that TuringEquivalency is all there is. There are many classes of problems that are fully TuringComplete, in the sense that the computation to solve them has a correspondence/translation one-to-one with Turing Machines, but that doesn't mean the problem-classes "are tautologies"; any given problem certainly is not. Go drown your naive, simplistic views in a TuringTarpit. If you wish for me to 'anticipate your complaints', why don't you start by doing it yourself? Start explaining those statements you KNOW I'll reject BEFORE they are challenged, you hypocritical bastard.
Re: [I'm the one who wrote "What I think you've been doing is successfully using the flag model to explain how type-related mechanisms in certain languages are implemented. You've done nothing to explain what types are." What I meant is that you've successfully duplicated a common mechanism used to implement systems for clearly identifying types, but let me be perfectly clear: I do not mean that you've explained types, or even that it is a useful model. It isn't. I merely recognise that it may be useful to you, personally, as an aid to comprehension - much like thinking of quantities of apples may be useful to a child learning addition - but there is absolutely and categorically no aspect of the "model", and I use the term loosely, that serves any useful purpose whatsoever among actual software developers. It is rubbish, and the sooner you recognise the rubbish for what it is, the better. No one benefits from your arrogant, ignorant, unqualified, unconsidered, ego-centric, nonsensical spew, that is at best embarrassing, and at worst a waste of bytes on Ward's hard drive. Anyone with even an ounce of understanding of computer science can see that. Unfortunately, I don't think you have even an ounce of understanding of computer science; hence you cannot recognise the sheer folly of your "contribution" here.]
So you agree it may be useful to me, but not anybody else? Am I the only developer out of millions who thinks in such a way that the model may be useful to me but ONLY me? You are making huge assumptions about IT psychology. (I've already described why it is not an implementation model.) It is not a perfect model, but the alternatives suck far worse in terms of usability for real-world practitioners.
He said it may be useful to you as a comprehension aid, NOT as a model. Even that was very clearly written, above. How could you not see it? Are you really so damn arrogant that you think you can just read a small fraction of a statement and wholly understand it? You're making some huge assumptions about your own psychology. However, there is (as stated above) "absolutely, and categorically, no aspect of the 'model' (and he uses the term loosely) that serves any useful purpose whatsoever among actual software developers", including YOU.
Let's return to the practical scenario mentioned before: the description of the differences in treatment of scalars between ColdFusion and PHP. I can explain the difference using the flag model by drawing boxes on a blackboard showing when and where the boxes get filled in and how they affect the various error messages etc. that may arise. I don't need goofy foreign notation nor a thesis on classification systems, just boxes and arrows. Can you do the same without sounding like a befuddled quantum physicist?...
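For what it's worth, the boxes-and-arrows picture can be transcribed almost directly into code. This is a hypothetical sketch of the two pictures one might draw on the board - it makes no claim about PHP's or ColdFusion's actual internals (which are disputed elsewhere on this page):

```python
# A blackboard-style sketch of the "boxes" argument: one hypothetical
# language stores a value box plus a side-flag box; another stores only
# a string box. Neither is claimed to match PHP's or ColdFusion's real
# internals - this only illustrates the two pictures one might draw.

class FlaggedScalar:              # value box + side-flag box
    def __init__(self, flag, value):
        self.flag = flag          # e.g. "int" or "string"
        self.value = value

class StringOnlyScalar:           # single box; type decided at use
    def __init__(self, text):
        self.text = text

def add_one(scalar):
    if isinstance(scalar, FlaggedScalar):
        if scalar.flag == "int":
            return scalar.value + 1
        raise TypeError("flag says this box is not numeric")
    # string-only: attempt a parse at the point of use
    return int(scalar.text) + 1

print(add_one(FlaggedScalar("int", 7)))   # 8
print(add_one(StringOnlyScalar("7")))     # 8
```

The difference in error behavior falls out of where the decision is made: at the flag box, or at the point of use.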
- [The rules by which you combine boxes and arrows, if articulated, could constitute a theory. To do so formally would suggest the use of a formal notation, or its equivalent. Done with appropriate rigour, you would almost undoubtedly wind up with a type theory that has all those nasty "math" characteristics you so casually decry!]
Sure. But I'd make it quite clear that I'm describing a particular implementation of the type-system as used for the purpose of type-based dispatch. And, unlike you, I'd actually know what I was talking about.
And would remain the only one after your presentation.
Or at least I wouldn't be inappropriately throwing about words like "model" and "theory" like a befuddled layman.
There can be a lot of overlap between "model" and "theory". String Theory is also a model.
[String theory has structure - the definitions, processes and theorems by which it may be used to make predictions about the real world, and by which it can (eventually, it is hoped) be objectively tested against the real world. Your "theory" contains no structure, hence any predictive value derives from your personal, subjective interpretation of its single element, a terminology equivalence of "type" = "side flag". There is, in particular, no content sufficient to automate deductions, or to perform objective reasoning about the "model."]
Wrong again, as usual. One can do experiments to see the difference between a type flag affecting something and not affecting something. For example, both print(a) and print(b) may produce the same output, but print(a + 1) and print(b + 1) may not. Flag theory can explain why this is. (It may not be the only way to explain it, but it is one of the simpler ones.)
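The experiment described here can be run as-is in any language with tagged values; here is a sketch using Python's own built-in tags (the variable names are just the example's):

```python
# A sketch of the experiment described above: two values print
# identically, but an operation that consults the value's type tag
# behaves differently for each.

a = 1      # tagged (internally) as int
b = "1"    # tagged as str

print(a, b)          # 1 1  - indistinguishable in output
print(a + 1)         # 2  - the int tag selects numeric addition

try:
    print(b + 1)     # the str tag selects concatenation,
except TypeError as e:        # which rejects an int operand
    print("TypeError:", e)
```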
[Really? I don't recall reading anything you wrote to the effect that f(v1) = f(v2) but f(g(v1)) != f(g(v2)), given (v1, t1), (v2, t2) where t1 != t2, etc., or some such. If you believe "flag theory" explains this, then the theorem must exist in your mind, because it didn't make it to this page until you wrote the informal description above. Do you expect users to apply your theory via imagination, or clairvoyance? What, in your "theory", guides the construction of experiments? Or is it exception-based, in which you only reveal informal tid-bits in response to comments that you consider incorrect? So far, that seems to be how it works.]
- Theorems based on a model are not necessarily part of the model itself. If we want to keep the model as small as possible, then we don't need to include items that can be derived from the model itself (other than for illustration purposes). Further, my goal is to explain the behavior of languages to "regular IT joes"; I have no intention of targeting academics or producing formal math.
- [It is true that you needn't include items that can be derived from the model itself, but your model has no content beyond (at best) a single definition. It's like building a model airplane that consists of a single balsawood stick, and claiming that we can use it to study aerodynamics. Anything derived from your "model" is actually being produced from a set of axioms, theorems and definitions that are in your own mind rather than in the stated model. Another mind may derive other results from your single definition that may not be consistent with your mental model. I could, for example, derive from your definition that all types are the same type. I have no quibble with the notion of producing a simplified type theory for average IT joes, but a mere change in terminology ain't it. Your model reminds me a bit of a misinterpretation of using XML I once saw - an individual created an XML file on his computer to represent his CD catalogue, and expected that to magically turn into a Web site, without any Web server or other machinery present to interpret the XML file. Your model is the same - it's hollow and content-free, and simply can't work without extensive pre-existing mental machinery to interpret the two terms - "types" and "side flags" - in a useful manner.]
- But what you ask for is *implementation*. If you want to test the properties of rockets, you have to build one, and that requires selecting a *specific* implementation, which almost surely makes it more narrow than the definition. If one analyzes all things called "rockets", one finds two things in common: a uni-directional chamber (opening on only one side) and some kind of expanding substance that exerts pressure. When people make dictionaries, that is what they do: find the commonalities in usage instances. That is exactly what I did when formulating flag theory. I don't claim it's perfect, but it is the most parsimonious I can devise, i.e. the "best fit". "Type Theory" didn't do that. Instead, they made a TuringComplete Rube Goldberg contraption based on the notions of classification and substitution. If one defines/interprets a given language/device *in terms of* TT, with enough work it can be made to fit. But it is a TautologyMachine, similar to what StringTheory has been accused of: so complex and wide that it can be bent to fit just about any observation made. (That is not necessarily a problem from a predictability standpoint alone, but it may be unnecessarily complex.)
- [There are two ways to test the properties of rockets. The first way is to build one and see how it behaves. The second way is to apply an abstract, tested model that mathematically describes the properties of a rocket. See, for example, http://stinet.dtic.mil/oai/oai?&verb=getRecord&metadataPrefix=html&identifier=AD0286069 You plug in rocket parameters, and out pop values that represent rocket properties. As regards your comments on type theory, you seem to be claiming that you found it too complex to appreciate, so it was appropriate to develop a substitute that is so simple it's useless. Do you not think there might be some worthy middle ground to be explored? A simple model with some actual, uh, model in it could be a useful thing.]
- A useless simple model is better than a useless complex one :-) Anyhow, I addressed this issue below. The rocket link is a model of *a* rocket, not ALL rockets. You are stretching the scope. I am beginning to think you don't grasp the concept of "definition".
[Your cognitive processes in applying the definition might, if articulated and expanded appropriately, constitute a theory or a model or both. However, as it stands on the page, it constitutes nothing but an empty substitution of terms. By way of analogy, it is equivalent to a "model" consisting entirely of a single definition: "spaghetti" = "pasta". It tells us nothing about spaghetti itself, and says (for example) nothing about the significant properties of said noodles when subject to moisture, heat, and tasty sauces. These things need to reside in your mind beforehand in order to derive any use from the "model." Similarly, in order to be used, "type" = "side flag" requires a mental model of the behaviour and application of side flags because they are not described by a substitution of terms. In other words, the "model" is left entirely to the reader's imagination. If there is to be a proper model there, at the very least it needs to be explicit rather than implicit.]
So the definition of "pasta" should include noodle texture? You're rambling.
[No, but the definition of "pasta" must include *something* in order to be a model or theory. Otherwise, it's just a substitution of terms, something that can be done with a thesaurus. At present, "spaghetti theory" includes nothing at all, just like "type" = "side flag" includes nothing at all. At the very least, it must describe what to do with these "side flags." You can't rely on the reader's psychic abilities in order to divine what you have in mind.]
I disagree with that assessment. It can be used to explain things such as the print difference scenario above and the difference in behavior between CF and PHP scalars, as already described. You couldn't do that without a goddam BookStop because "type theory" is a Rube Goldberg model. It would take you at least 10 times the quantity of pages to explain those behaviors to a regular IT joe. If you disagree, then let's test it. With "type theory", Parsimony is murdered, put into a blender, and dumped into the deepest section of the ocean during a hurricane. At best it is a UselessTruth. At worst it is not even a definition.
[What is being used to explain the "print difference scenario" is your MentalModel, not the "model" written here. Your "model" as it is written has no content that can successfully predict the results of the "print difference scenario." A sound model should be automatable, such that I can encode the "print difference scenario" at the input end and out pops the result from the output. If this process requires human intervention, and especially additional interpretation and prior high-level knowledge, then the model is incomplete.]
To get an executable model, one would have to build an interpreter/compiler model (like drafted in RunTimeEngineSchema).
[Not necessarily - by "automatable" I mean that it should be possible to automate without requiring additional special knowledge or invention. That doesn't necessarily mean there's a reason, or that it's reasonable, to automate it. So far, your model is based entirely on special knowledge and/or invention, as it relies wholly on an implicit MentalModel of what is meant by the term "side flag." To be a good model, you need to turn the MentalModel into something external and explicit.]
See above about definitions not usually being specific enough to implement.
[If your definitions aren't specific enough to implement in your model, then you need to refine your model.]
RunTimeEngineSchema has identifiable values and identifiable side flags (columns holding values and types). As far as an algorithm to detect such, has anybody ever made an algorithm to detect *non-disputed* features such as "variables" or "subroutines"? If it cannot be done for those two, then asking me to do it for "types" is unrealistic. Type Theory cannot do it either without lots of hand-holding that requires human interpretation, which is essentially a translation of an algorithm from one TC language to another (TT being a TautologyMachine).
[I'm afraid the above is unclear. I do not follow. What, for example, is a "TC language", and what does "(TT being a TautologyMachine)" mean? Is that a fancy way of saying "I don't understand type theory"?]
TT = Type Theory
TC = Turing Complete
POS = piece of [bleep]
[Actually, never mind. This is an increasingly pointless debate. I don't know whether you're desperately hanging onto your odd notions because you're trying to avoid an ego-beating, or because you genuinely don't recognise the degree to which you're spouting shite. I don't suppose it matters, because either one has reached, for me, the point of being a little bit... Scary. I've got better things to do. I wish you did, too. In particular, I'd encourage you to spend some time learning about computer science, mathematics, and type theory. You'd discover a whole new world of interesting topics, and you could genuinely contribute to the field - perhaps even create a simplified type theory, should that be your goal - instead of merely engaging in endless quibbling over nonsense.]
I suggest you try to teach a newbie about types in common languages and try to find a concrete way to describe what is going on and why they behave as they do. Then you'd realize what a POS TT is.
[I'm a university lecturer. I teach programming, in common languages, to newbies on a daily basis. Many beginners develop an informal notion of types immediately and intuitively. Others benefit from analogy and examples from programming languages and real life, and soon understand. Certain weak students never grasp types or programming of any kind. They frequently find success in concrete, non-abstract subjects, like history, geology, or geography. Some people have unusual difficulty with abstractions of any kind. A few of these become excellent programmers, but at an entirely concrete level.[1] I suspect you may be one of them.]
--
[1] I sometimes suspect they are the sole reason COBOL vendors remain in business. :)
I explained why I avoided excess psychological abstractions above under "The Scope of the Definition". The abstract/philosophical issues of "types" may be important, but I stated I feel they do not belong in the definition, regardless of importance. Substitutability of types may be a useful tool, but that is a "purpose" or "techniques" issue, not a description of what types are. And, finally, you can take your insults and shove them where the sun don't shine, you arrogant b8stards. You didn't explain how you would explain those type issues to them; instead you insulted me. Perhaps you are not capable of such and are only hot air. Perhaps your students who don't seem to "get types" are having problems because your lecture sucks because your model sucks. Perhaps try the flag model on those that are slow. The flag model can be used to lay out the steps of what happens when these type-related oddities are occurring. One can number the steps and show stuff being put into and taken out of hypothetical boxes (variable slots and flags). It answers questions and one can then say, "Oh, so that's why the language does such and such".
[I'm not being insulting, I'm describing a characteristic of certain programmers. (A characteristic by which, OMG!!!, they can start to be classified into types!) You'll note I used the phrase "excellent programmers". If you find that insulting, perhaps you might cogitate on why you find it insulting. As for the claim that "the flag model can be used to lay out the steps of what happens when ... type-related oddities are occurring," I must ask what steps? That's what I've been asking you for all along. It would go a long way toward turning your term-substitution into the model you claim it is.]
Re: "The abstract/philosophical issues of "types" may be important, but I stated I feel they do not belong in the definition, regardless of importance." - And you're wrong, regardless of your obstinacy.
Re: "You didn't explain how you would explain those type issues to them; instead you insulted me" You speak of type-system implementation-issues as though they're generic 'type issues', you loudly proclaim that the real type issues are 'implementation-issues', and you expect us to bow to your pretended understanding of the field. Of course you deserve some insult. If I were to explain some mere implementation issue, such as how types are processed in ColdFusion vs. PHP and why they react differently when utilized within expressions, I'd do so directly. But, unlike you, I wouldn't pretend that working with types is easy... the idea of types (which IS founded in the 'abstract/philosophical' issues of classification) is as simple as the idea of water, but the broader field of type-theory is as deep as the ocean. Most useful mathematical and logical theories are that way: simple but deep, and taking more than a little effort to fathom.
You want to know what I'd do if teaching the idea of types to newbies? I'd start with the shallow waters: simple types like integers and strings. Then I'd describe application: use with type-safety in variables, and type-ascription in ML. Then I'd carefully move towards the deeper waters and teach them to swim - simple generic types of one variable, generic types of two type-variables, draw back and test the waters of dependent types and constraint-types, explore a bit deeper on generics and study meta-types, or shift gears and start looking at how types are utilized in the lexing and parsing process, or focus on implementation and actually performing type-safety checks. I don't doubt for a second that one could get six full University classes out of type-theory alone, with more than half of them being masters-level. Of course, one would need to sell the classes, which means exploring them while also exploring something like AI, compilers, automated theorem proving, etc.
But for a smug prick like you, who thinks he knows all he needs to know? who thinks he knows enough to create his own silly little 'model' and teach others? Well, I'd probably try to shove someone like that directly into the deep water and laugh as he flails and drowns (screaming all the while: "this isn't types! this isn't types! I saw types from the shore!"), then, when I had my fun and felt he was suitably chastised, I'd teach him a bit and point him towards the real knowledge. You know, like an initiation or something.
Unfortunately for that tactic, I seriously underestimated your ability to support yourself by inflating your ego with your own hot air. You haven't worked with types... you've hardly even gotten your toes wet. You, quite frankly, will never learn anything significant about types - not unless life forces it on you (or that ego of yours deflates and you decide to be a student of life). You won't learn anything as a HostileStudent. Until you change (if ever), you'll let your own agendas interfere with your comprehension... e.g. your desire to bring K.I.S.S. to the people who, like you, wish to pretend they understand.
You are just as afraid that your hard-gained knowledge has been rendered a bloated, wasted dinosaur by a practical K.I.S.S. competitor. The vacuum tube has been replaced by the transistor, and vacuum-tube makers are trying to justify themselves via patronization. KISS to the Masses! Make the yapping ivory-heads earn their keep. -- top
No, actually I'm "just afraid" you'll actually succeed at teaching your crap to other people. You're a charlatan and a salesman of snake-oil who displays no sense of professional ethics whatsoever, and your attempt to 'teach' this is more to boost your own ego and thumb your nose at the establishment than it is to provide anything of value. If you actually desired your spew to be useful, you'd actually do what [Mr.E. Brackets] asks and complete the model to the point one can (a) systematically plug in type-systems and expressions and (b) derive real conclusions. Further, you'd reject Keep It Stupid for the Simpletons stuff like 'TypesAreSideFlags', because, quite frankly, there is NOTHING in your model that provides for What types ARE; the closest you come is leveraging another 'real' parser and 'real' type-system (with their own operational definitions of 'type') as *implementation details* to give you the 'flags' - about which your model currently makes NO conclusions whatsoever. Clean up your act, top. Get yourself a real education in the subject before you try to simplify and teach it. And deflate that ego of yours. It's not doing you or anyone else any favors. Your excitement at the irritation of those with expert and [moderate to advanced] knowledge of the subject is just one more pound of proof that your motives are entirely impure.
I feel the same about you, and "type theory" is meandering garbage that nobody cares about or ever will except a few self-elected academics who like to play with pasta on paper. As bad as you claim flag theory is, TT is worse.
I challenge you to try to explain the print example to a newbie using a chalkboard. The only way I can see it being done will involve a type flag *somewhere*. You can't escape it. Any concrete or mechanical illustration of what is going on will have a fricken flag in it. You can say that types are substitutions or classifications, but you cannot draw those; they are too hazy a mental notion. "Type theory" fails the chalkboard test (barring the use of 2,000 chalkboards).
In explaining the implementation details of PHP's function-dispatching and operation, I'd be sure to describe that there are bits in the value representation (not the variable) dedicated to indicating type and describing how the remaining bits in that value representation are to be interpreted. Then I'd describe how CF operates with its one-representation-fits-all approach and even looser typing (by language design) and its lack of type-based dispatching. What I wouldn't do is start going on about these indicator-bits as though they're proof of some great, useful truth about 'types' as a whole. These are, after all, implementation details, and thus should be well below the level of any definition of 'type'.
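A toy sketch of the "indicator bits in the value representation" idea: an assumed two-tag layout and an assumed coercion policy, not PHP's actual zval structure:

```python
# Each value carries a small tag that tells the interpreter how to
# read the remaining payload and which operation to dispatch to.
# This is a toy: two made-up tags and one made-up coercion policy.

TAG_INT, TAG_STR = 0, 1

def make_value(tag, payload):
    return (tag, payload)          # (indicator bits, raw payload)

def add(x, y):
    # Dispatch on the tags, the way a type-based dispatcher might.
    if x[0] == TAG_INT and y[0] == TAG_INT:
        return make_value(TAG_INT, x[1] + y[1])
    # Coerce string payloads that look numeric - one possible policy.
    return make_value(TAG_INT, int(x[1]) + int(y[1]))

print(add(make_value(TAG_INT, 2), make_value(TAG_INT, 3)))   # (0, 5)
print(add(make_value(TAG_STR, "2"), make_value(TAG_INT, 3))) # (0, 5)
```

The important distinction, per the paragraph above, is that the tags live on the *value* representation, not the variable.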
- Again, one does not have to use type flags to implement such. I have no idea what CF is actually doing inside. I am only saying the flag model can be used to explain what is going on. Perhaps it can be argued that a model is not necessarily a definition, but first we'd have to agree that it is a model.
- Technically, TypesAreSideFlags is a 'model' with just a few axioms that (to be made formal) requires definition of "cell" and "variable" and "value" and "side-flag". This theory states axiomatically that a variable is constructed of two cells, one of which stores "side flags" and one of which stores a "value". It then states that "something" (<- magic happens here) is done with the side-flags - supposedly some sort of dispatching or decision. There are currently no constraints at all between side-flags and value, meaning that it's impossible to apply TypesAreSideFlags to the purpose of TypeSafety. (Supposedly this is a 'feature'.) The types are attached to the variable rather than the value, which means that expressions of any sort are not typable. (This was ignored with a bit of hand-waving in the discussion on inference of types.) For a model to be useful, it needs rules that can take inputs and make predictions. For TypesAreSideFlags to be useful, it needs the ability to make meaningful predictions given a system of variables, values, side-flags, and expressions. At the moment, the "flag model" cannot do any explaining; it's a "useless" model... this is the primary problem that [Mr.E. Brackets] has with it. (My own 'primary problem' is the one of equivocation - the complete lack of any connection between "side flags" and how everyone else in the world uses the word "type".) Anyhow, despite the fact that it isn't my biggest qualm, unless you can start giving examples of the proof (the list of rule applications) by which 'flag model' both predicts 'result-XYZ' and rejects 'not-result-XYZ' (contradictory predictions being a big no-no for a model), I'm not particularly inclined to believe that the flag model is actually capable of it.
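For reference, the stated axioms can be transcribed nearly verbatim. The sketch below is that transcription (the class and attribute names are assumed), and it exhibits the gap being complained about: nothing constrains the two cells to agree.

```python
# The axioms as stated: a variable is two cells, one holding a
# "side flag" and one holding a "value". With no further rules,
# the model can only store and retrieve flags - it makes no
# predictions and enforces no agreement between the cells.

class Variable:
    def __init__(self):
        self.flag_cell = None    # the "side flag" cell
        self.value_cell = None   # the "value" cell

    def assign(self, flag, value):
        self.flag_cell = flag
        self.value_cell = value

v = Variable()
v.assign("Integer", 42)
print(v.flag_cell, v.value_cell)   # Integer 42
# Nothing constrains flag_cell and value_cell to agree:
v.assign("Real", "not even a number")   # accepted - no prediction made
```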
And as far as your informal 'chalkboard test': I don't believe I'd have too much difficulty describing the basic idea of types on just one board, at least before I started providing multitudes of examples. Most people won't have difficulty with the notion that 'types' are whatever properties they're looking at when they classify different things into different buckets, and that the same thing is true for computers (though computers are a whole lot more objective about it).
- "Types are Buckets". Hmmm, another potential model. Except most people don't envision them moving based on their kind. A tag that says which bucket(s) you belong to is closer to how most people think about them.
- Heh. Most people don't envision what moving based on their 'kind', pray tell? People can envision values moving around, not types... does the idea that something's an 'Integer' come and go? or is it an integer whether you know it or not?
They'd have a bit more trouble applying typing to behaviors and phenomena than to objects, but that can wait until a later date. As far as substitution and classification: these are hazy to you largely because you're a HostileStudent with your own agenda and refuse to comprehend (or at least acknowledge comprehension of) these ideas.
- Above you agreed that substitution is NOT necessary for types. Types can be USED for substitution, but that does not make them ABOUT substitution. Why do I have to keep debunking this claim? How many times do I have to shoot a dead horse? As far as classification, I'll have more to say about that next week.
- I agreed that they aren't necessary for types in every type-system. Of course, if something is of two types because one is more specific than the other, that's not really type-substitution, is it? All dead horses are dead animals, for example. Anyhow, read what you're replying to a bit more carefully: I'm not making a claim that substitution is necessary for types. I just disagree that these concepts are hazy... the fact that they can, in many domains, be systematically performed by a computer, is plenty proof that the underlying concepts of substitution and classification are not hazy, not vague in any way, even though some problems of classification (that is: instances of choosing between one thing and another) do happen to be vague (e.g. chair vs. couch, bush vs. tree), and some problems of substitution happen to be ambiguous (e.g. if you ride horses and bury dead things, what should you do with a dead horse? ride it? bury it? shoot it? - this would be a substitution problem in type-based dispatch of operations)
- Re: "the fact that they can, in many domains, be systematically performed by a computer, is plenty proof that the underlying concepts of substitution and classification are not hazy" - Not true. See the "country music identifier" example above. (see, I do have to repeat myself.)
- Fw: "some problems of classification (that is: instances of choosing between one thing and another) do happen to be vague (e.g. chair vs. couch, bush vs. tree)" - ADD: "country music" vs. "bluegrass". (See: you really don't concern yourself with comprehension before parroting yourself.)
- Why did you use the phrase "plenty proof" if there is a huge hitch (and the hitch not ruled out)?
- There is no hitch. There is "plenty proof" for exactly what I was proving: that the underlying concept of classification is not hazy. Even a computer can do it. Computers have a very easy time classifying a great number of things (sometimes even correctly), and it is completely and unambiguously clear that the computer is doing so; therefore, it cannot be the idea of classifying that is the problem. This is very straightforward logic. When attempting to handle such things as "chair vs. couch", it's the ideas of "chair" and "couch" that cause problems; THOSE are vague. "Country music" is vague. But classifying things into vague 'classes' (where you cannot be completely sure your decision is the correct one) is NOT the same as 'vaguely classifying' things (which implies that you can't be completely sure you've actually made or stated a decision).
- You don't seem to be understanding my complaint. A model of an idea (such as a country music identifier) can indeed have precise criteria. But this is about definitions, not models. A precise model of a fuzzy definition does NOT (by itself) turn the definition clear, period! A good test of this is to have different people create a qualifier machine. Most likely, the answers will not always be the same when dealing with the borders of fuzzy definitions. Some of John Mellencamp's songs could be classified as either country or rock (or both), and there will be people who disagree with the results. (A continuous scale may be more appropriate for such things, but would complicate definition issues even more.) If you are talking about the ACT of classification being precisely and unambiguously detectable, then I challenge you to build a classification action detector algorithm.
- Your complaint isn't relevant. In discussing whether the concept of 'classification' is clear or unclear, it simply doesn't matter that certain other concepts, like 'country music', do happen to be unclear. The idea of 'classification' does not depend upon the idea of 'country music'. Saying otherwise is like saying the fundamental idea of 'integer' is unclear BECAUSE the 'count of unique sounds' is rather vague. It simply doesn't apply.
- Regarding your request: you show me how to build a 'detector algorithm' for something specific (e.g. detecting a human's recognition of 'p' from 'q'), so I have some idea of what the hell a 'detector algorithm' is supposed to be and exactly how omniscient the 'sensors' on this detector are (i.e. can you rely upon open and honest communication? or do you need to read minds?). Computation systems are black boxes that receive and emit signals. I have a feeling you're demanding omniscience of the detectors just so I have to say "that's impossible". However, if you're willing to bypass the issues regarding the limits of perception, you can look at executable specification: any 'cond' or 'if' or 'switch' statement is associated with the classification of an evaluated expression - an item that is very clearly and unambiguously classified when the specification is finally executed. It is this that makes it completely and unambiguously clear that computers are performing classification.
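The point that any 'cond' or 'if' or 'switch' on an evaluated expression is executable classification can be shown in a few lines (a sketch of my own, not anyone's official definition):

```javascript
// A switch statement unambiguously sorts each input into exactly one
// bucket: the act of classification is completely clear even when the
// categories themselves might have been debatable.
function classify(n) {
  switch (true) {
    case n < 0:   return "negative";
    case n === 0: return "zero";
    default:      return "positive";
  }
}

console.log(classify(-4)); // "negative"
console.log(classify(0));  // "zero"
console.log(classify(7));  // "positive"
```

Whatever one thinks of the chosen categories, there is no vagueness about whether a classification decision took place when this runs.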
- It sounds like another case of TautologyMachine. But, may I request a restatement of your claim about "plenty proof"? There are side-issues that are gumming up the discussion and it may help to start on a clean slate with that one.
- Let's try a different approach. Under the normal dictionary and philosophical accepted meaning of the word 'vague' (to which 'hazy' most closely corresponds), concepts are 'vague' insofar as there exist borderline cases (where you aren't certain whether the case constitutes an instance of the concept). E.g. 'chair' becomes rather vague when dealing with classifying them against 'stools' and 'couches' and 'benches' because there are some benches that seem awfully darn close to chairs and you're not sure how to decide. However, suppose you say of one of these borderline cases "it's a chair". At this point, you've quite clearly (not vaguely, not hazily) performed the act of classifying the bench-chair-thing as a 'chair'. You might be wrong by everyone else's reckoning (the act of classification doesn't imply correct classification), but the act itself was completely clear and non-ambiguous. Suppose you say of one of these borderline cases: "it's either a chair or a bench". Now, if 'chair vs. bench' is the decision you're trying to make, you're very clearly 'undecided'. OTOH, if all you care is whether it's 'something designed to sit upon' vs. 'frog' vs. 'rocket', then you've fully classified it at an even greater resolution than you actually required. Now way, way above, you made a claim that 'classification' is a hazy concept. To provide evidence of that classification is vague, you must find cases where it is very difficult to tell from within the computation/classification system whether a classification decision has taken place - i.e. something that is disputably a decision, or something that is disputably a 'classification' decision. You've failed to provide this support for your claim; instead, you keep pointing at the irrelevant: that it's difficult to decide whether something is 'country music' vs. 'pop' vs. 'rock'. 
If you weren't being habitually fast and loose with your logic, you'd realize that this case only means that 'country music' and 'pop' and 'rock' must be somewhat vague, and says nothing of classification itself.
- If you recall, my claim was that "classification" is a kind of TautologyMachine (SUPPORT YOUR CLAIM). There may not be any borderline cases for a TautologyMachine, it bends to fit everything (and you admitted that "type theory" was TuringComplete). Perhaps "vague" is not quite the word I was seeking, but I don't have a better one right now. It's almost the Creationism of type definitions.
- To disprove the idea that "classification fits everything", I only need one counter-example. So, here it is: objects are not classification. Classification is something one can do TO objects. I can even name computations that aren't classification: addition of integers isn't classification. Further, I can name something that is classification: pattern-matching is classification. If I have one thing that is, and one thing that isn't classification, that seems quite logically sufficient to prove that classification isn't a 'tautology' or a 'TautologyMachine'.
- Objects are indeed classifications. Objects are collections of methods and attributes. Any given object attribute belongs to the classifications "member-of-an-object" and "attribute-of-object-X" (among others). The object itself is a member of "oop-constructs". I'll deal with the other example after this is cooked.
- Nonsense. What I see is a rather illogical leap from "<X> belongs to the classification <Y>" to "<X> is a classification". I also see equivocation; classification (the noun) is not classification (the verb).
- We are getting into some sticky word-play here. OOP can be defined as a classification technique: things are classified as classes, objects, attributes, etc (fallacy: non-sequitur; details below). Any structure or classification system can be redefined as OOP constructs (NEEDS PROOF). Thus when doing OOP, one is doing classification: they are putting concepts into OOP "members". It may be members of objects or attributes or classes, etc. Objects are a bit low on the classification totem pole, so they are probably not the best example.
- It is true that one performs the act of classification as part of performing the act of OOP, but it doesn't follow that OOP IS classification. On the non-sequitur: You state that OOP is a 'classification technique' or can be defined as one. I'm not inclined to believe you based on the support you offered. OOP does provide a small dictionary into which you may classify things ('attribute', 'method', 'class', 'object', etc.); however, OOP doesn't provide a technique for taking these definitions and applying them. Rather, it assumes we already have one. Further, if OOP was just a classification technique (i.e. if it were 'defined as one'), one wouldn't be getting any programming done: doing OOP (the classification-technique) would only be classifying things that are already there. Creating new stuff isn't classification. Anyhow, you're digressing a bit. Now you have two problems: proving OOP is classification (the verb) AND proving objects are classification (the verb). I only need one thing that is classification (the verb) and one thing that is not classification (the verb) to prove - logically and absolutely - that classification (the verb) is not a 'tautology'.
- Before I get deep into this (NobodyAgreesOnWhatOoIs). Can you prove it's not? If neither of us proves it either way, then it reinforces the suggestion that "classification" is a vague concept. If it's not a vague concept, then you should be able to formulaically show that "object" is not a classification by either showing it lacks required traits, or has traits that disqualify it.
- You would be correct to argue that "object" is a classification(-the-noun); it is (by definition) the classification(-the-noun) to which any object-instance belongs. I'm not going to argue otherwise. However, this whole line of reasoning is digressive... it doesn't have anything to do with objects (object-instances) being classification(-the-verb). Heck, it doesn't even prove that "object" is "classification(-the-noun)" without that extremely significant qualifier "a". You've misquoted me; my original statement is that objects are not classification (by which I intended from context: object-instances are not classification-the-verb - "classification is something you do TO objects"). I picked this as a simple counter-example because it should be so extremely obvious for a wide variety of reasons (first among them: the difference between 'actions' and 'things'). Are you truly as confused as you're acting? or was this another attempt to use fallacious techniques (straw-man argument and equivocation, this time) to trap the unwary? (I'm beginning to get a whole new appreciation for the phrase: "spinning like a top"; it seems to have meaning no matter how I choose to interpret it.)
- Nouns and verbs are often interchangeable (NEEDS PROOF). Anyhow, you didn't answer the question. Shall I assume you agree that "classification" is a vague concept? It is not my job to provide clear criteria for "classification".
- Ah! that's even more brilliant. Ask a question, then be completely misleading with a bunch of straw-man arguments that mostly indicate you haven't a damn clue what you're talking about (e.g. the factually incorrect digression: "If it's not a vague concept, then you should be able to formulaicly show that 'object' is not a classification"), then point fingers when I'm properly distracted by your trap and fail to answer the question. You spin well, yes you do! Despite your last sentence there, it is, in fact, your job to describe those things about which you make claims unless you're assuming that you're talking about exactly the same thing I talk about when you say "classification". So, at this point, I'm free to assume you're talking about "classification" as understood by an expert in TypeTheory (me). I don't need to describe it to you... that would assume that you're the student and I'm the teacher, despite the fact that you're the one who is making the claim that "classification" is 'hazy' or a 'TautologyMachine'. From my current position, I need only make obvious the gaping holes in your fallacious logic... not make any claims of my own. I've grown tired of trying to educate you on anything at all. However, if you ask and promise (explicitly) to be a non-HostileStudent who is genuinely open to learning new things, I'll teach you what means 'classification' to an expert in TypeTheory in both less formal philosophy and in formal computational models.
- I simply reviewed the burden-of-evidence issue after getting deeper into your word-play and realized that it's not my burden, and thus that path of inquiry is not necessary. If your view of "classification" is a precise objective concept, then there should be an algorithm to determine what is "classification" and what is not. (<- NEEDS PROOF) Where's the goddam algorithm? If it takes 2,000 pages, it's most likely RubeGoldberg garbage.
- Funny. I reviewed the burden-of-evidence issue and came to the conclusion that you've not been explaining well more than half your claims, and you expect us to take you at your word when you're spouting what I consider utter nonsense. Consider, for example, your very recent unsubstantiated 'claim' that "if <X> is a precise objective concept, then there should be an algorithm to determine what is <X> and what is not". I could prove this is a falsehood (even one, measly counter-example, such as the decision in the Halting problem - which is very objectively defined (and not even vague) but lacks any known algorithm common to all computation systems - is entirely sufficient), but I don't think you'd listen. I could even leverage my own experience in the subject and suggest that an algorithm isn't relevant - that all that matters for precision and objectivity is the definition. I could even prove it. But, again, I don't think you'd listen. You'd call it "meandering." Doing all that extra work when you aren't listening seems a real waste of MY time. Thus, instead, I'll simply lay the burden on you to prove those claims you make that just happen to be falsehoods and, consequently, unprovable. This isn't even 'unfair'; it's perfectly reasonable of me to demand proof for what I consider to be falsehoods.
- "Halting problem" is an obscure exception. What is more likely, that "types" are also an obscure exception, or that you are dicking around? Out of politeness, I won't say which one I think it is right now. I think we are done here.
- Halting problem is simply one exception among many; I chose it mostly as one of the obvious exceptions (I debated between it and the whole, also obvious, 'something is in P' question - quite objective and precise. Once you have a proof, it's very clear that something is in P, but it's currently very difficult to prove something is not in P (is NP in P?) and there is no known algorithm to do so. Despite this, the meaning of being 'not in P' is extremely objective and precise.) There are many such exceptions, and many less obscure... but it only takes one exception to kill your sweeping claims. More relevantly, it doesn't even take that much when your claims have no support other than your word alone. I can kill those with MY word alone - a simple 'Needs Proof' is sufficient. I didn't NEED to mention the halting problem at all. Your arguments rely upon fallacy and unsubstantiated claims that have no accepted basis in reality, and when demanded, you support them with even more fallacy and even more unsubstantiated claims. I've already told you what I think of your argument form and you for using it; out of politeness, I won't repeat it here. Quite glad to see you go.
- Now, as to your own burden: You have yet to prove that your notion of 'TautologyMachine' has ANY merit WHATSOEVER. There is certainly plenty of challenge to the idea on the so-named page. Further, you should start providing substantive support - real arguments to support your "claims". Otherwise they're just so much hot air.
- Types are about classification, not the substitution. This is true to the point that classification and 'type' are, fundamentally, the same sort of thing (though 'classification' is more often used as a verb). I'll accept your challenge on this when you're ready to present it, though I'd prefer you move it to a clean page.
Most people understand quite intuitively that 'dogs' are 'animals' and that one can offer a dog whenever asked for an animal, and most people understand quite easily that some things are dogs, and that some things are not dogs, and that you shouldn't offer a 'not dog' when asked for a 'dog' (and vice versa). That summarily describes substitution, classification, and type-safety all in one small, chalkboard-sized, easily digested example. Even dealing with multiply-classified objects wouldn't be difficult; the majority of people spend their childhood reading about 'red dogs, brown dogs, thin dogs, round dogs'... they've plenty of experience to draw upon.
Of course, your informal 'chalkboard test' is largely pointless, anyway; it doesn't meaningfully distinguish between the useful and the useless theories. How many of these could you teach on fewer than even 100 chalkboards (counting erasing the chalkboard and rewriting): Computational complexity of algorithms, impedance in electronic systems, behaviorism, Kennedy conspiracies, etc? These are all quite specific areas of 'Math', 'Physics', 'Psychology', and 'History'... more equivalent to adding 'dependent types' to the list than 'TypeTheory' as a whole. If I were to judge based on sample sets of all theories I've heard of and managed to understand and find use for, I'd conclude: "any theory you can both describe AND explain to a newbie with just one chalkboard is probably useless." I agree that parsimony is a wonderful trait for a model to possess, but it's rather pointless to possess when you've traded out all the axiomatic rules for making conclusions in order to obtain it.
If I ever have an urge to challenge a definition again, slap me unconscious with a cubicle wall. It ain't worth it, regardless of whether I'm right or not. I'll stick to arguments of how to make better things instead of what to call them. One can't fix the world by renaming it.
If "types" are "side flags", what "type" is the function \x.x*(x-1)? Using Python notation, but not asking about Python specifically, what type is lambda(x:x*x)? I can see what the original might mean in the case of obvious built-in things like ints and strings, but what about functions?
- In a dynamic language, often one cannot tell a result "type" (if the language supports types) by examining the expression alone. There may be clues, depending on the language. For example, if the operator "*" can only be used on numbers (a language-specific feature), then we can assume that "x*x" would crash on anything but numbers. In a type-free (flag-free) language, the multiplication operator would parse the variable x at run-time and see if it can be interpreted as a number. This is considered "parsing" and not type-checking under the flag model. A flag based language would check the type flags to know if multiplication is valid. We cannot tell in a static language either without seeing how x is defined.
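The distinction drawn here between flag-checking and run-time "parsing" might be sketched as follows (a hypothetical illustration; the helper names and the {value, flag} representation are my own):

```javascript
// Flag-based multiplication: trust the side flag and reject operands
// whose flag is not "number" -- this is the type-checking path.
function multiplyFlagged(a, b) {
  if (a.flag !== "number" || b.flag !== "number") {
    throw new TypeError("multiplication requires number flags");
  }
  return { value: a.value * b.value, flag: "number" };
}

// Flag-free multiplication: inspect the value itself at run-time and
// see whether it can be read as a number -- the "parsing" path.
function multiplyParsed(a, b) {
  const x = parseFloat(a), y = parseFloat(b);
  if (isNaN(x) || isNaN(y)) {
    throw new TypeError("operand does not parse as a number");
  }
  return x * y;
}

console.log(multiplyFlagged({value: 6, flag: "number"},
                            {value: 7, flag: "number"}).value); // 42
console.log(multiplyParsed("6", "7")); // 42
```

Both paths reach the same answer on well-behaved input; they differ in what they consult to decide whether the operation is valid.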
That doesn't answer the question of what one puts as the side-flags for that expression. Specifically, if we set f=lambda(x:x*x), we have the variable f. So what side-flags do you put against f? It sounds to me like you claim f doesn't have a type.
Again, it largely depends on the language. I would reckon that most dynamic languages defer the answer to run-time. The variable "f" would have an empty (or default to "object", "variant", etc.) type-flag associated with it (let's ignore type-free languages here), and the result of the evaluation would populate the type flag. In dynamically typed languages:
f = <expression>;
Is really:
f<value><flag> = <expression>
The language syntax just does not directly show the flag. Every expression and function returns a type flag along with a value. The "circuit board" for the interpreter would have a dual-channel bus that carries both value and flag. If the "wrong" type flag is received by an operation, such as multiplication, then most such langs seem to try to parse the operand as the needed type. This double-try approach is one reason why I prefer type-free languages: the flag doesn't help much and just adds another dimension to the mix.
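The "dual-channel bus" and the "double-try" behavior described above might be sketched like this (my own toy model, not a description of any real interpreter's internals):

```javascript
// Every evaluation yields a {value, flag} pair -- the two channels of
// the bus. An operation receiving the "wrong" flag makes a second try:
// it attempts to parse the operand as the needed type.
function evalLiteral(v) {
  return { value: v, flag: typeof v === "number" ? "number" : "string" };
}

function multiply(a, b) {
  const coerce = (op) => {
    if (op.flag === "number") return op.value; // first try: trust the flag
    const n = parseFloat(op.value);            // second try: parse the value
    if (isNaN(n)) throw new TypeError("cannot use operand as a number");
    return n;
  };
  return { value: coerce(a) * coerce(b), flag: "number" };
}

console.log(multiply(evalLiteral(6), evalLiteral("7")).value); // 42
```

The extra coercion branch is the "another dimension" complained about: every operation must handle both the flag channel and the value channel.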
- Name a single language that is type free. What the hell are you using, machine code? Languages need type systems, some of them just muddy the waters and hide the typing from you, like in PHP. EducationIsFailing?. I think one big problem is that the latest hip and cool programming languages (and their manuals) don't actually educate people what types are, they sort of brush them under the rug and pretend they don't exist. PHP for example is jokingly called a stringly typed language, not strongly. Hiding types in the language and pretending types don't really exist, doesn't help the cause. That's why it is good for people new to programming to use strongly typed static languages first, to learn about rich type systems. Then, if they must, they can go on to other languages that pretend to not have types, even though they do.
- Your comment seems out of place. I suggest you rework it or find a better place to move it. TypelessVsDynamic and ColdFusionLanguageTypeSystem may address some of your concerns.
- You said you prefer type-free languages. I asked you to name some type-free languages. The comment is not out of place. The page "cold fusion language type system", says that cold fusion has a type system, right in the page title itself, otherwise the page wouldn't be called "type system". That is basic logic. It should be renamed something else, if coldfusion is type-free. You are playing word games going around in circles. This is a SemanticWar? and you are redefining widely known, widely accepted terms. Do number types (widely known by children) not exist? Are number types in math representable by flags?
- Yes, numbers can be represented as flags. If you had enough education, you'd know this. As far as the title, I was merely attempting to increase the probability that others know what it's about. It's similar to calling something "country music" even though there's no clear-cut algorithm/rules for distinguishing between country- and non-country music. If it's merely about "widely accepted", then it's ArgumentByVotes?. You have suggested it's an objective truth, not a truth determined by voting. Which is it? -t
Well, firstly, I don't understand what you mean when you say that f = <expression>; is really f<value><flag> = <expression>. f is a variable that contains a value. I'm asking you what type that value has. The value is, of course, a function. What is its type?
Secondly, maybe I'm just being dim, but I still can't see that you've answered the question. Obviously if you know what type is fed to f as a parameter you can work out what type the result is. That's not the issue. I'm not interested in the type of the answer for a specific parameter. You are completely ignoring the question of the type of f itself. Which of these are you claiming:
- f does not have a type;
- the type of f changes according to the type of the argument fed to it;
- f has a type, but you don't know what it is.
- Something else.
I'm not asking what languages, dynamic or otherwise, do. I'm asking what the type is of f.
So you are talking about at run-time when the interpreter encounters such a statement? Without testing them, most languages would make "f" a non-scalar type of "function". It is kind of like:
f = array();
Here, the type-flag of "f" is typically "array".
I just did a test in JS, which confirms this.
<script language="JavaScript">
var square = function(x) {return x*x;}
alert(square(4)); // result is 16
alert(typeof(square)); // result is "function"
</script>
No, I'm not talking about what interpreters do. That's a question of implementation, and is not a question of what types really are. If you claim that types are something, then you should be able to say what the type is of f. If you simply say "function", then I guess I misunderstood what you were trying to accomplish. It's a bit like saying 32 is a Number, without saying whether it's an Integer, Float, Fraction, Complex, Quaternion, or FixedPoint.
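The objection can be checked directly in JavaScript: the run-time flag reported by typeof is identical for functions with very different signatures, so it cannot distinguish the lambda(x:x*x) case from the lambda(x:x[0]) case:

```javascript
// Two functions of quite different "types" in the A->A vs. AxB->A sense,
// yet JavaScript's run-time flag is the same string for both.
var square = function (x) { return x * x; };
var first  = function (x) { return x[0]; };

console.log(typeof square); // "function"
console.log(typeof first);  // "function"
console.log(typeof square === typeof first); // true
```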
I think it's becoming clear that you are talking about one specific implementation method for types in dynamic languages. Equally, I think it's a bit bold to claim that "types are side-flags". Possibly the renamings now being discussed are overdue.
It is not clear to me whether you are asking a philosophical question or a technical one. A philosophical answer will probably be of very little use in settling definition issues because everyone's psychology is different. I doubt you would consider my answer the baseline for all humans and vice versa. This is a problem ANY definition of "types" will probably face.
As far as what I am thinking, at first it is just syntax. After inspection, I would deduce that "f" is likely to be of type "function" after execution. This is because my brain can forecast some of what the interpreter will do. My brain acts as a semi-interpreter.
I may be being a bit dim, but this second paragraph carries no meaning for me. I can't understand what you're trying to say at all, so I'll ignore it, and perhaps you'll remove it and replace it with something that does mean something to me.
However, addressing your first paragraph, I thought it was clear that I'm asking a technical question. You've claimed "types are side-flags". I'm asking what the type is of a particular, quite specific value. You seem to be saying "function". It sounds like your side-flags give you no way to distinguish between the types of, say, lambda(x:x*x) and lambda(x:x[0]). As I say, it seems that you are describing an implementation technique for dynamic languages.
If that's what you're doing then this page should probably be deleted and you should salvage what you want and start again. A more accurate title might be "TypesInDynamicLanguagesOftenHaveTagsOrSideFlags", but then what you're saying becomes pretty self-evident. I'd be interested to find out what new and useful things you think you're saying.
I give up. I don't know what you are asking. I tried my best to interpret your question and answer it. If it is a technical question, then the nature of the language and interpreter/compiler makes a huge difference. How about we stick with JavaScript for right now since any reader with a web browser likely has access to an interpreter. The type flag is "filled" or "set" when the expression after the assignment operator (=) is evaluated. As far as I know, all variables in JavaScript have a type-flag. One can test the value of the type-flag by using the "typeof" operator. There are also more indirect methods.
I was looking for an answer along the lines of "f has type A->A, where A supports the operation *" (or something similar). Or, in the second case, "lambda(x:x[0]) has type AxB->A". I don't see how these sorts of things can be represented as "side-flags", and I was hoping you could explain your ideas.
Sounds more like a specification or intermediate parse assist form. At this point, I don't see any need for JavaScript to do such (although other languages might). It is not an objective or necessary thing that I can tell. I suspect JS simply generates an internally-named function and returns a reference to that function. "f" is merely an ID number or name. If we look at it internally it might be a variable with an ID number as a value and "function" as the type flag. Whether the referenced function itself is valid or not depends on the usual function rules. As far as how YOU mentally read the lambda, I have no control over that.
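For what it's worth, nothing in principle prevents a flag from being a structured descriptor carrying a signature like "A->A" rather than a single string. A speculative sketch (entirely my own; no claim that any real interpreter stores flags this way):

```javascript
// A "flag" could itself be structured, carrying a signature string.
// Hypothetical helper; not how JavaScript actually tags functions.
function tagged(fn, signature) {
  return { value: fn, flag: { kind: "function", signature: signature } };
}

const square = tagged(function (x) { return x * x; }, "A->A");
console.log(square.flag.kind);      // "function"
console.log(square.flag.signature); // "A->A"
console.log(square.value(4));       // 16
```

Whether such a descriptor still counts as a "side flag", or has quietly turned into a full type annotation, is exactly the question under dispute here.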
So it's true. Your "side-flags" are nothing more than a specific implementation technique for some dynamic languages, apparently specifically JavaScript. You originally seemed to have been claiming more, and I was trying to see what, and how. Now it is clear.
I doubt ANY definition of "types" can be clear without referencing some concrete *representation* (ARGUMENT FROM EGO). TypeTheory uses a concrete notation also, but assigning actual items to those symbols is subjective. If I am not mistaken, it is also DAG-centric (YOU'RE MISTAKEN), and types don't have to use DAGs.
- {Your doubts do not an argument make. Most accepted definitions of "types" do not reference any concrete representation for the types. Experts in type-theory understand quite well that no 'official' concrete representation is necessary. For computation purposes, one can use any concrete representation one desires so long as it allows one to encode any type utilized in the type-system.}
- There's no way to test that statement without a concrete representation. And experts in YOUR type theory.
This page really, really needs renaming. I'm no longer convinced it says anything beyond:
- In some dynamic languages, when a collection of bytes is accessed the interpreter needs to know how to treat them. This information can be stored in "Type Tags".
Is there any content beyond that summary?
TypesAndSideFlagsDiscussion (note "and") has a list of candidate names.
Moved further discussion to TypesAndSideFlagsDiscussion (note the "and") because this topic is TooBigToEdit.
See Also: QuestionsForTypeDefinitions, TypesAreNotTypesTheyAreTypes, TagFreeTypingRoadMap
AugustZeroSeven
CategoryLanguageTyping, CategorySimplification