From http://www.andrew.cmu.edu/user/cebrown/notes/reynolds83.html
"Reynolds presents a fable in which two professors define the complex numbers in different ways (<a,b> for a+bi vs. <r,t> for r(cos(t)+isin(t))) and define the primitive operations (and prove the basic properties of these operations). Then, the two professors classes get switched, but there is no confusion because the two professors always work at the proper level of abstraction using the basic properties of the complex numbers and the primitive operations without referencing the definition. The moral is that: "Type structure is a syntactic discipline for enforcing levels of abstraction." Instead of talking about "the" complex numbers, we should talk about a type "Complex" which can be realized in different but related ways."
(Note: The word "fable" in the title is used in the sense of "a short story devised to convey some useful lesson", not "a fictitious narrative or statement" or "a fiction invented to deceive" (definitions from OED). The point of the story is to show that TypesAreAnAbstractionOfReality, not to imply that TypeTheory is somehow false or useless.)
This justifies the level of abstraction. Imagine the two professors were two programmers working in a statically typed language, working not on theorems but on algorithms. By declaring that the variables on which they operate are of a type "Complex", they declare an abstract contract between the provider of the concrete complex-number representation and the writer of the algorithm. That contract may include the primitive operations on the type. Examples of such statically typed syntactic contracts are Java interfaces, ML parameterized modules, and Haskell TypeClasses.
Thus we have a type as an abstraction. Complex does not represent a concrete set of bits, but any opaque set of bits that comes with the associated operations; it therefore sits at a more abstract level than the level at which the algorithm (or the theorem prover) would manipulate a concrete representation of Complex (say, one where the first 8 bytes hold the double representation of X and the next 8 bytes hold the double representation of Y from X+Yi). It is a syntactic tool because, in a statically typed universe, once the first professor promised to write the theorem generically against the Complex type, the syntactic analyzer (type checker) would prevent him from reverting, by sleight of hand, to the X + iY notation even if he wanted to. Therefore it enforces a level of abstraction: you cannot go back once you promise to write your algorithm against a specified abstraction.
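The fable's setup can be sketched in Java (a sketch only: the interface name, the choice of operations, and the class names are illustrative, not taken from Reynolds' paper). An algorithm written against the Complex interface compiles against either representation, and the type checker rejects any attempt to reach the concrete fields:

```java
// Sketch: one abstract type "Complex", two concrete realizations.
interface Complex {
    double re();                 // real part, the a in a + bi
    double im();                 // imaginary part, the b in a + bi
    Complex plus(Complex other); // one primitive operation of the contract
}

// First professor: stores <a, b> for a + bi.
class Cartesian implements Complex {
    private final double a, b;   // private: clients cannot peek at the representation
    Cartesian(double a, double b) { this.a = a; this.b = b; }
    public double re() { return a; }
    public double im() { return b; }
    public Complex plus(Complex other) {
        return new Cartesian(a + other.re(), b + other.im());
    }
}

// Second professor: stores <r, t> for r(cos(t) + i sin(t)).
class Polar implements Complex {
    private final double r, t;
    Polar(double r, double t) { this.r = r; this.t = t; }
    public double re() { return r * Math.cos(t); }
    public double im() { return r * Math.sin(t); }
    public Complex plus(Complex other) {
        double x = re() + other.re(), y = im() + other.im();
        return new Polar(Math.hypot(x, y), Math.atan2(y, x));
    }
}

public class Fable {
    public static void main(String[] args) {
        // Code written against Complex cannot tell the two apart; writing
        // something like ((Cartesian) c).a would not even compile (a is private).
        Complex sum = new Cartesian(1, 0).plus(new Polar(1, Math.PI / 2));
        System.out.println(sum.re() + " " + sum.im());
    }
}
```

Mixing the two implementations in one expression, as above, works precisely because the algorithm only ever speaks the contract's language.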
That's the justification for the simple definition in WhatAreTypes.
That is a description of a typical usage of types, not a definition. It does not conflict with AlternativeTypeDefinitions, or at least "side flag theory". --top
Why the use of "syntax"? What if one is using a run-time engine directly? No syntax. The run-time engine for strong-typed language still has it's typing system. Nor is it a very readable definition. "Abstraction" is too vague a word to use in a precise-intended definition. -- top
Why the use of "syntax"? Because by the time syntax expression are transformed into run-time values the enforcement can take place (like in Smalltalk and Lisp) but is no longer considered a tool for abstraction it's a tool strictly for enforcement of invariants in the run-time engine, they remain a tool for abstraction even in Lisp but it's strictly in the head of the programmer and maybe in the documentation. They are no longer tool in the language as the language does not see them, they do not exist as an element in the language. As opposed to DynamicTyping in StaticTyping types and type definitions are an important element in the language. DynamicTyping bet is that programmer's head is a good enough tool to operate on types without any support from the language. Dynamic typing programmers (that would include me, on occasions) still think in terms of types as abstraction, long before the runtime checking happens, but it's like Las Vegas commercials: "what happens here, stays here". The runtime checking (it's not really a tun-time type system, it's a runtime checking system that ensures that the VM does not go kaboom as result of an error, and nothing else) only serves as a safety net in case something goes wrong, but does not help enforce that the type abstractions are used correctly, so when the abstractions are used incorrectly it helps limit the damage.
- I was not necessarily referring to languages with implied typing. That is another issue. Suppose we were looking at a strong-typed language that compiled source code into byte-code. If we only had the byte-code, the type system would still be active and do all its enforcing, yet there is no "syntax". It is a data structure and all the type enforcement is done by look-ups in the various structures. It is possible to create new bytecode without writing text code first. The syntax is just a shorthand.
- No, I wouldn't call that a typing system. That's just a bunch of runtime tests.
- Are you saying that typing must be compiled? If one programs with syntax then they are using "types", but not if they program directly to an internal structure? (Compilers also build an internal structure before they convert it to machine code.) And that it is not the case that DataAndCodeAreTheSameThing? Boy, I smell a loooong topic ahead.
- If you smell a loooong topic, please post it to the proper page. It has nothing to do with WhatAreTypes.
- Well, I say syntax has nothing to do with "types". If John's definitions are only about syntax, then I believe his definitions are limited to a syntax-centric view of everything. It does not matter how much math he made out of his view; it may be wasted effort, or of limited use. SyntaxFollowsSemantics. One can program using typical type systems without ever using syntax.
- Your opinionated rants have no bearing on the matter. Your affirmations are backed by nothing but handwaving, whereas TypeTheory is backed both by three decades of research and peer-reviewed activity by some of the top programmers and mathematicians, and by concrete, high-quality results. If you took the time to develop your pet theories into something remotely intelligible rather than mere assertions out of thin air, maybe I could help you understand where your flaws are.
- The "syntax" issue is related. Like I said many times, just because one can make a lot of math out of their pet definition does NOT make it the "right" definition. Definitions are decided by vote, for good or bad, NOT MATH. That is the way it has always been. Definitions are created by actual use, not by elitists in an ivory tower. Sometimes they coincide, sometimes they don't. You are the one with the problem if you don't understand that, not me. You confuse math, science, and definitions such that you don't know the difference and where one starts and the other ends. I can understand knocking a few points off of a definition because it might lack a "math" surrounding it, but not all 10. MisuseOfMath. And, you have not given concrete evidence that common languages are significantly shaped by his theory (see TopOnTypes). You only say it and expect us to believe you.
- You know TOP, finally you have something to say where you are absolutely right, but not quite the way you expect. Definitions are decided by vote. Except that in science there's no national election, so the vote is somewhat indirect: it depends on who decides to publish what in reputable scientific publications, as well as in the form of books under the auspices of scientific publishers. It is also an indirect vote by professors at prestigious universities deciding what definitions to follow and what books should be used as textbooks. There's also the peer review and citation mechanism: which researchers decide to cite another researcher and develop a particular direction. Some ideas just wither away because nobody wants to follow them. It is also a vote in the marketplace by PhD graduates who followed those professors and then work for Microsoft, Sun, and others less prominent (including research institutes like INRIA in France or research universities in America) when they decide the shape of the next programming language. That is the vote. And if I begin to count, I think I might discover the score JohnReynolds 1000000 to TopMind 1. Actually, the fact that on a respectable engineering forum we have an endless debate of TopMind versus JohnReynolds is a lot in your favor, and even detracts from the stature of John Reynolds. This is solely due to the "anything goes" nature of wiki. In a peer-reviewed publication environment you wouldn't even qualify to start the match.
- I did not invent most of the alternative definitions, I simply collected and compacted them. Also note that definitions are not necessarily based on the (perceived) quality of the source, simply the quantity. For example, "quantum leap" has entered into mainstream usage meaning a giant change. However, its origin in quantum physics generally referred to very minute discrete changes. Further you do not seem very interested in non-syntax-related issues with regard to types. If Reynolds' definition does not "work" outside the world of syntax, then a yellow flag should be raised. However, you seem more concerned about defending prestige and pedigree than pondering interesting questions.
- You incorrectly think of them as definitions of types; you may have collected them from hearsay and endless wars on comp.object, but you redacted them sloppily. You bet that I am not interested in wasting time, and I have not seen an interesting question on your part to ponder. No yellow flag needs raising, because John Reynolds' definition is tied to syntax; this is well accepted in TypeTheory, and every formal language has a syntax. So there's no problem until you can actually prove that there is a problem.
- So you think that Microsoft's CLR has no typing? As soon as somebody forever loses the source code, a CLR image or something like it (see RunTimeEngineSchema) has no typing in it? I think a definition tied to syntax is limiting. Syntax is just a skin over a data structure (or structures) of some sort. When a compiler or interpreter checks the type of something, it almost always does a structure search, not a syntax search (except maybe for constants). If that does not raise any concern within you, then so be it. If this issue is of no concern to other WikiZens, then they are welcome to move the bulk of it to TopOnTypes. But I find such a syntax link a disturbing curiosity.
- Microsoft's CLR is an interesting case: it is an intermediate language, and therefore has a syntax of its own. The types that get written into the assemblies come from a type checker still operating in a syntax-driven universe. In order to facilitate separate compilation and runtime checking, the types themselves get written into the intermediate code. The typed universe imposed by the CLR, with its syntactic constraints and strictly static OO view of the world, makes it somewhat difficult (an impedance mismatch) to compile dynamic languages, and languages with a different (even more powerful) type algebra, into the CLR. Microsoft Research is working hard on that; they have published a lot of papers and have promised a more flexible CLR in future versions.
- Let's explore specifics. Suppose we created a program using the RunTimeEngineSchema. There is no "source" in the traditional sense. Type enforcement is based purely on look-ups. Is this outside of Reynolds' definition?
- Yes, you got your enforcement, all right. That is not the problem. It is kind of clumsy, but it may work; however, you haven't gotten your discipline. A Scheme interpreter that did something similar would be a freshman exercise at MIT. You've got yourself something vaguely equivalent to a primitive runtime, but that ain't a typing system. What is your paramType supposed to reference? What would be a typing judgement? What is the body of a routine, and how do you judge that the body is well typed? How do you help developers write polymorphic code (the same code that works on different kinds of objects/data in the same way)? There are lots of holes in your sketch of a system, and you risk reinventing some square wheels along the way if you try to fill them. An introductory reading on TypeTheory would be a much better exercise.
- I don't know why you keep pointing out that it resembles a "freshman exercise". It is only meant as an example. I know that and so does anybody else who bothers to read it. I already described what "paramType" is multiple times. If it is not clear, then please address it there. And, I don't know what the body of a routine has to do with the price of beans. If it takes an entire book to explain your position, then it is probably vague or flawed. Be a man and stand on your own words rather than mouse around behind a scribbly book.
"Abstraction" has a well defined meaning in English language and only you have problems with it.
- It is fine as a "street" definition, but not precise enough for computer science. Abstraction is relative. For example, we might not care how "integer" is implemented, but if the implementation limits us to 9 digits, then we have to care.
- Now, TOP. When have I ever seen you use a formal definition? The definition of abstraction is precise enough even for computer science. All mathematical books operate with a bunch of intuitive concepts that are either implied (by sending the reader to other books) or considered self-sufficient without a formal description. In this case, abstraction means that details are left out and only the essential properties of some expression are used to operate on it. That's abstraction. The guy who expresses the algorithm has no clue what the sorted elements contain; he is using the "abstraction" that there is a compare function that compares two elements. This abstraction then fits any number of concrete realizations, like strings, byte arrays, dates, integers, etc. The same kind of "abstraction" is used in mathematics when, for example, one uses Galois theory, which proves things about fields using just abstract axioms about the elements of the fields; those theorems can then be applied to the field of complex numbers, which is a particular instance of the "field" abstraction. It's very clear and simple, case closed. You can bitch and moan all you want about it, but that's all there is.
- And to your comments about the 9 digits: yes, if the implementation does not match the abstraction, you have a broken program. If the contract specified in the abstraction requires you to provide values that implement unlimited-precision arithmetic and you provide only limited precision, then yes, you broke the abstraction. Pay attention to what the abstraction specifies, fix your bugs, and voila, it works.
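A minimal Java sketch of the 9-digit objection (the class name is illustrative): an algorithm that assumes the "integer" abstraction means unlimited precision is silently broken by a 32-bit int, while java.math.BigInteger honors the contract.

```java
import java.math.BigInteger;

// Sketch: the same sum under two realizations of "integer".
public class BrokenAbstraction {
    public static void main(String[] args) {
        // 32-bit int arithmetic silently wraps around to a negative value.
        int broken = 2_000_000_000 + 2_000_000_000;
        // BigInteger keeps the unlimited-precision contract.
        BigInteger kept = BigInteger.valueOf(2_000_000_000L)
                                    .add(BigInteger.valueOf(2_000_000_000L));
        System.out.println(broken);  // a negative number: the abstraction is broken
        System.out.println(kept);    // 4000000000
    }
}
```

Whether this is a bug depends entirely on which contract the abstraction promised, which is the point being made above.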
- That is validation, which is an alternative definition of "types".
A trivial yet universally familiar example: when a sorting algorithm in Java sorts elements characterized as "Comparable", "Comparable" stands in as an abstraction for (String, Integer, Date, etc.). The sorting algorithm is therefore no longer specified in terms of the concrete element, but in terms of an "abstraction" that hides the irrelevant details (like the layout of strings in memory) and presents to the programmer only the traits essential to the sort function, i.e. the compare operator. That matches the definition in the English dictionary: to abstract is to take away irrelevant details, and to talk about a set not by referring to concrete elements but to relevant properties thereof.
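The Java example just described, concretely: Arrays.sort sees only the Comparable contract, never the concrete layout of String or Integer.

```java
import java.util.Arrays;

// Sketch: one sort routine, two unrelated element types, one abstraction.
public class SortDemo {
    public static void main(String[] args) {
        String[] words = {"pear", "apple", "fig"};
        Integer[] nums = {3, 1, 2};
        Arrays.sort(words);  // dispatches to String.compareTo
        Arrays.sort(nums);   // dispatches to Integer.compareTo
        System.out.println(Arrays.toString(words)); // [apple, fig, pear]
        System.out.println(Arrays.toString(nums));  // [1, 2, 3]
    }
}
```

The same compiled sort code handles both arrays; only the compareTo contract is visible to it.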
This is just saying that the elements must answer certain messages as a minimum requirement to be usable within a given framework. A declarative way to achieve something similar is to require that things provide a string representation (ASCII or Unicode) by which they are compared. I sometimes do this in database-centric systems that don't offer or connect with a "comparable" behavior interface. Part of the PowerOfPlainText.