Branch of WhatAreTypes
According to TopMind, there is a wide variety of "types", or definitions for types, across programming language definitions. Of course, TypeTheory and programming language theory have a very consistent view on the subject, but nevertheless, here they are:
- Note that I did not create the original material that this topic is based on (other than the "side flag" one I added). It existed on another topic before I ever contributed, and I don't know who started the original collection of definitions. Personally, I don't think it matters, but some WikiZens believe the source of a list or idea is important, so I am describing the sources as best I know. -- top
Candidate definitions allegedly collected by Top:
- JohnReynolds' definition
- Types are behavior allocated/permitted based on a hierarchical taxonomy
- Types are a many-to-many relationship between all potential values and all potential operators that can go together.
- Types are another word for "validation" (see bottom of ThereAreNoTypes).
- Substitutability? (DuckTyping)
- A hidden or "side" flag(s) on variables used to alter or select behavior (TypesAndSideFlags).
- See PageAnchor tag_dispute_02 for a complaint against this.
- A way to reduce or eliminate ambiguity in polymorphic dispatching
- TypesAreContracts
(Please don't embed disputes in the list itself. It serves as a table of contents.)
Of course, the claim that Top has collected these is of no use if he cannot provide some proper context (at least a proper citation of the source, the importance and usefulness of the source, etc.). Otherwise, we can go on a spree collecting everybody's opinion of how we should define "atom", but that doesn't mean that all the opinions collected warrant representation on a wiki page. As WardCunningham puts it, wiki does not offer a guarantee to everyone that everybody's 2c will be preserved for eternity like in a mausoleum, regardless of merits and value for wiki.
What does it mean to respond to "WhatAreTypes?" with "Substitutability"? It means nothing. It's purely and simply crap. It's top's in-your-face lazy way of trying to articulate that in some type systems substitutability is an important aspect related to types, and that there are different approaches to how type systems should solve substitutability (ManifestTyping with names and explicit declarations, versus DuckTyping, etc.). But he cannot in all honesty claim that "substitutability" is a definition for types or an explanation of what types are.
I may be providing a UsefulLie whereas JohnReynolds may be providing a UselessTruth (at best). This approach may be more usable for newbies. Further, Reynolds' work is not really a definition, but rather a model under which to analyze types. The implication is that something "is a" type if it can be reworked to fit Reynolds' model, but not if it can't. For one, this is not how "normal" definitions work; it mistakes models for definitions. Second, it could be that one's lack of creativity is preventing them from fitting (re-mapping) something "into" Reynolds' model, not an inherent lack of type-ness. We want to determine "type-ness", not human ingenuity. It's a disturbing problem that my detractors do not appear to wish to resolve. Even if you are crazy enough to be satisfied with "it's a type if it can be made to fit Rennie's model", that still makes it a bigass 100+-page definition, which would make normal researchers strive for a shorter variant. Excessively long definitions are a smell. -- top
And, of course, Top does not realize he isn't arguing for another type definition, but for a design methodology that, within a given language/programming environment, is able to help the developer identify the types. Therefore he protests the OO methodology of trying to identify a hierarchical taxonomy in the real world that translates into OO "types".
I agree that this is a valid criticism. But the criticism cannot be applied against TypeTheory, because the theory offers you just the tools, and the tools are valid and adequate when put to good use. The criticism is against the OO method for using the tools. So he barks up the wrong tree.
The people who set out to make, say, SmalltalkLanguage were not interested in implementing any math-based type theory.
Yes, Smalltalk chose not to use the tools given by TypeTheory. It's a trade-off that Smalltalkers consider worth making. As a matter of fact, David N. Smith, a noted authority on Smalltalk, is on record in his Smalltalk FAQ that Smalltalk has classes, but no types in the proper meaning of the word. Does any Smalltalker around have a copy of David Smith's Smalltalk FAQ?
But some Smalltalk fans do claim that Smalltalk has "types", just not in the sense used in other languages. I invite them to explain in more detail.
It might be more accurate to state:
- The earliest versions of Smalltalk were developed in the 70's, which is about the same time that TypeTheory came into its own as a sub-discipline of ComputerScience.
- Many of the useful results of TypeTheory (how to perform TypeInference and other static program analyses) are of little applicability to Smalltalk, due to its highly dynamic nature. Attempts to add static typing to Smalltalk (e.g., StrongTalk) have met with limited success.
- Many things that Smalltalk does (true FirstClassTypes, for instance) make TypeInference (in the general case) undecidable.
However:
- A consistent typing system can be developed for Smalltalk - actually two. One (a very weak one) is based on classes and inheritance; the other (a very powerful one) is based on DuckTyping. Of course, Smalltalk implementations don't perform type judgements per se, but DynamicDispatch in Smalltalk performs mini type judgements as needed.
Re: (at least proper citation for the source, the importance and usefulness of the source, etc)
Reliance on ArgumentFromAuthority is problematic.
- It's not reliance on ArgumentFromAuthority, it's asking you to provide a context so that we can double-check your claims that you really collected significant differing views on WhatAreTypes. Otherwise, we can go into MacArthur Park in Los Angeles and ask every other person what he thinks of types for 2 hours. Collect all the opinions and put them here.
- That might be a useful exercise, for it helps in the goal of seeing if formal definitions match informal definitions, and if not, why. However, I would rather see a summary rather than every response.
- Citing research isn't ArgumentFromAuthority, top. Now, if some professor were to offer up unsupported conjecture on a subject, and we were to quote such, that would be ArgumentFromAuthority. But referring to scientific results (in other words, the science and not the man) is not AFA.
Also, I have not seen any clearly superior definitions that fit within say 2000 words. If a definition takes an entire book, it probably has very limited practical use.
Last I checked, a reasonable and thorough explanation of the RelationalModel requires more than 2000 words (ChrisDate's book is over 1000 pages). Can we assume the RelationalModel has limited practical use then? The notion that ScienceShouldBeEasy is a well-known fallacy.
Date's book is mostly about explaining techniques and issues, not defining. Relational can be reduced to a small set of primitive operations, similarly to (though not quite as compactly as) how BooleanAlgebra can be defined. -- top
- Really? Can one explain all of the following topics in 2000 words or less, and do so well?
- Definition of relations, tuples, attributes, and domains
- The RelationalAlgebra (select/restrict, project, join, union, intersection, etc.), and the closure properties (based on a small set of fundamental operations/transformations). Formally equivalent to RelationalCalculus (the two are inter-convertible)
- Not sure this is required for a root definition. (It is. -- Doug)
- Please clarify "closure properties"
- This is the minor thing (I don't mean that sarcastically - it's important, but only a small change) that distinguishes relational operators from ordinary set operators. ChrisDate AnIntroductionToDataBaseSystems? 7th ed. sec. 3.2 pg 60: "the result of each of the three operations is another table (in other words, the operators are indeed "operators that derive tables from tables," as previously stated). This is the closure property of relational systems, and it is very important. Basically, because the output of any operation is the same kind of object as the input - they are all tables - so the output from one operation can become input to another. Thus it is possible, e.g., to take a projection of a join, or a join of two restrictions, or a restriction of a projection, etc., etc. In other words, it is possible to write nested expressions - i.e., expressions in which the operands themselves are represented by general expressions, instead of just by simple table names. This fact in turn has numerous important consequences, as we will see later (both in this chapter and in many subsequent ones)." [italics in original.] -- Doug
- He further clarifies in chapter 6 in discussing the RelationalAlgebra operators. P157, under Union: "although that result [of a union of two different kinds of tuples] is a set, it is not a relation; relations cannot contain a mixture of different kinds of tuples, they must be "tuple-homogeneous". And of course, we do want the result to be a relation, because we want to preserve the closure property. Therefore, the union in the relational algebra is not the usual mathematical union; rather, it is a special kind of union, in which we require the two input relations to be of the same type -- meaning, for example, that they both contain supplier tuples, or both part tuples, but not a mixture of the two. If the two relations are of the same (relation) type, then we can take their union, and the result will also be a relation of the same (relation) type; in other words, the closure property will be preserved. [Footnote]: Historically, most of the database literature (earlier editions of this book included) used the term union-compatibility to refer to the notion that the two relations must be of the same type. This term is not very apt, however, for a variety of reasons, the most obvious of which is that the notion does not apply just to union." [emphasis in original] -- Doug
- Perhaps this discussion belongs in another topic, such as DoesRelationalRequireTypes or TupleDefinition. I find it odd that a union-generated table should be treated differently than any other table. A created table, virtual or actual, should stand on its own. Pedigree should not be an issue.
- You are correct, nor is ChrisDate saying otherwise. You simply misunderstood the quoted text, somehow. The point is just that relational operators accept relations and produce relations (that's the closure property), whereas set operators accept sets and produce sets, and that the two are not precisely the same even though they're very similar. When you do a join, typically you want the result to be "flattened" - each row a simple tuple of simple values, whereas the set version would produce rows of tuples containing nested tuples. (ChrisDate does not rule out nested tuples altogether; that's a different topic.) A toy illustration of the closure property appears just after this list.
- As for which page to move this definition of relational closure to, yeah, maybe it's not best here; it's part of the definition of relational databases, so putting it on a very general page might be in order. It's a digression from this page's discussion, certainly. -- Doug
- The RelationalCalculus -- based on logical predicates (forall X such that PREDICATE, there-exists some X such that PREDICATE, AND, OR, NOT). Formally equivalent to RelationalAlgebra (the two are inter-convertible)
- CandidateKeys, ForeignKeys, relational constraints, ReferentialIntegrity, and FunctionalDependency
- These are "side issues". The only required key is the primary key.
- BS, top. Perhaps only a PrimaryKey is required for a standalone table (and all that MicrosoftAccess supports), but for a multi-table database which exhibits ReferentialIntegrity, you need to understand the rest of them.
- You are confusing "definition" with "usage in practice". One can write sloppy run-on BooleanAlgebra for example, but it is still BooleanAlgebra.
- Normalization (FirstNormalForm, SecondNormalForm, ThirdNormalForm, BoyceCoddNormalForm?, FourthNormalForm?, FifthNormalForm?, etc.)
- This is also an implementation issue. It is applying OnceAndOnlyOnce to table design issues. A bad schema is still a schema.
- No, it's a fundamental issue in order to understand databases. Perhaps the higher-order normal forms aren't quite so important, but anybody should know what ThirdNormalForm is if they wish to understand RelationalDatabases.
- You are confusing "definition" with "usage in practice". Bad schemas do NOT necessarily violate the definition. OnceAndOnlyOnce (normalization) applies to almost every paradigm and technique. It is not unique to relational in any way. The normal forms are merely tips or steps to identify and correct duplication problems.
- Transactions, Transaction semantics (ACID), rollback, etc.
- I am not sure these are part of the base relational theory, but rather implementation details. AcId can also be applied outside of relational. For example, a drawing program in which multiple users can work on the same image. If a series of transformations cannot take place without violating a given set of rules, then it can be rolled back.
- OK, considering I said "model" and not "database" above, I'll buy this.
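(Referring back to the closure discussion above: a toy Python sketch of why closure matters. Relations are modelled as sets of rows, rows as frozensets of attribute/value pairs; the helper names are mine, not ChrisDate's. Because every operator returns the same kind of object it accepts, operators nest freely:)

  # Relation = set of rows; row = frozenset of (attribute, value) pairs.
  def row(**attrs):
      return frozenset(attrs.items())

  def restrict(rel, pred):
      # select/restrict: the result is itself a relation
      return {r for r in rel if pred(dict(r))}

  def project(rel, *attrs):
      # project: the result is itself a relation
      return {frozenset((a, v) for a, v in r if a in attrs) for r in rel}

  suppliers = {row(sno="S1", city="London"), row(sno="S2", city="Paris")}

  # Closure in action: a projection of a restriction, nested freely.
  print(project(restrict(suppliers, lambda r: r["city"] == "Paris"), "sno"))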
And that's just the basics. How about:
- Mapping of business rules and such into relational schemata (E/R mapping, etc.)
- Database administration
- StructuredQueryLanguage, other languages for specifying schemata and issuing queries
- OLTP vs OLAP
There's a lot of material there. While it might be possible to cover the above (or at least a reasonable subset) in 2000 words or less, it would be a very cursory treatment. Methinks it would be easier to provide a gentle introduction to TypeTheory (again, a cursory treatment) in 2000 words or less - especially if I'm allowed to elide the complicated parts on the grounds that they are an ImplementationIssue?. Both TypeTheory and the RelationalModel have SetTheory as their basis.
I like to compare relational with BooleanAlgebra: you have well-defined operands and well-defined operators. Relational operators are perhaps not as "final" as Boolean because new ones have not been ruled out, but the concept is similar. Related: CriteriaForGoodMathOrCompactNotation -- top
(Moved from BadIdeasShouldBeKept?)
Re: "Just because one can make a lot of math around an opinion does not change the fact that it is just an opinion."
See also: LaynesLaw.
A curious form of the EveryoneHasHisOpinion fallacy. While ArgumentFromAuthority is also frequently a fallacy, in a scientific context ArgumentFromAuthority is far less fallacious when quoting a scientist in his/her area of expertise, whose writings are the result of solid, peer-reviewed research. To suggest that scientists cannot base work on the discoveries of other scientists would completely undermine science. It's anti-intellectual PoMo nonsense.
- On the other hand, just because a renowned scientist says something doesn't make it true. Even StephenHawking recently recanted many of his views on black holes.
- Correct, and the ScientificMethod anticipates this. Of course, any scientist who offers opinions on any matters would do well to separate conjecture from conclusion. The latter should be backed by axiomatic proof or empirical evidence, depending on the discipline in question. The former need not be. Confusing the two is one of the ScientificSins.
- {ProgrammingIsInTheMind. It is not an objective external thing to measure, such as the speed of light or the weight of the moon.}
- The underlying mathematics are not "in the mind", even if programming might be. See below.
- Underlying to what? Many useful and popular languages were created without any thought to the books you mention.
- Certainly, many programming languages predate TypeTheory. Others have very simple type systems - Smalltalk is one example, with only one "true" type - Object. (This is why some folks refer to dynamically-typed languages as "untyped" - a notion I disagree with). One can retroactively view a collection of message signatures - called a "protocol" or an "interface" in other languages - as a type, and if you did so you'd come up with a consistent type system. Most modern languages, however, are designed by folks with at least a working knowledge of TypeTheory - even if they don't have fancy TypeInference engines or otherwise use techniques on the bleeding edge of the theory.
- Below I asked to see specific examples of how it has allegedly improved, or at least changed, the designs of such languages.
On the specific questions of WhatAreTypes: TopMind could do far better than resorting to the immature, demagoguish technique of dismissing out of hand the authorities on the matter (and using diminutive nicknames like "benjy" for BenjaminPierce) - by employing this technique, Top is wearing the robes of a pseudo-scientific snake oil salesman.
- Not nearly as "immature, demagoguish technique" as outright deletion, which is a far bigger WikiSin? in my book.
- I'll ignore the question of whose WikiSin?s are mortal and whose are venial; it's irrelevant here.
- Why are some people's WikiSins? highlighted but others to be ignored?
Top could point out, for instance, that there is no universal set of primitives - there are an infinite number of ways of constructing the natural numbers (ChurchNumerals?, PeanoArithmetic, etc.) - any practical type system must either ignore this (and assume the existence of numbers) or state how they are constructed. Drilling down a layer: as many such systems are based on SetTheory, one might ask which set theory - ZF, or something else?
Of course, all of this is abstract minutiae, and stuff Top probably has no clue about.
- The author in question indeed did invent or collect a huge amount of terminology about "types" as he sees them, which I don't claim to have mastered. But this does not change the universal fact that definitions are just opinions. And further, definitions that take an entire book to explain are probably not very human-friendly definitions anyhow, making them less practical.
And, natural numbers don't even have to be part of a programming language. They can be farmed off to add-on libraries.
Let's leave that as an exercise for the FunctionalWeenies :)
In some ways, TypeTheory is a meta-theory, as one can construct numerous different type systems. Speaking of a "type" without an associated type system is meaningless. In the case of programming languages - they all define their type system, so speaking of a type in the context of a given programming language is quite useful and meaningful.
Are you suggesting that a clear language-neutral definition is not possible?
I think a fundamental part of the problem of the current debate is that TopMind and CostinCozianu are talking past each other here. Two issues are being confused:
- The soundness of TypeTheory
- Its applicability to programming
In the case of the first one, TypeTheory is a mathematical discipline subject to axiomatic proof, and as such is beyond reproach from the likes of TopMind. The work that JohnReynolds, RobinMilner, LucaCardelli, BenjaminPierce, and others do in this field is applied math; if you wish to argue against it, you should have some proofs of your own ready. Were someone to successfully disprove TypeTheory, it would be a significant result indeed.
- Another option is to define a proof-giving system using a different approach, such as "validation theory". Having multiple notations for more or less the same thing does not invalidate alternatives. -- top
- One would hope that such an approach would be theoretically sound. I have no idea what you mean by "validation theory". Tools which aren't theoretically sound are of questionable use.
- I have to disagree with that. Something may map better to the human mind(s) than to a mathematically derived system. Newtonian physics is not as sound as Relativity because it is not as accurate, but more useful to the vast majority of humans because we can relate to it better. Even a toddler banging away at toys is learning Newtonian principles. See also WorseIsBetter.
- But only because Newton remains highly accurate at low velocity. If Newton's laws of motion were shown to be grossly inaccurate, they would be useless. Worse than useless, in fact. There are many scientific theories out there - phlogiston, for example - which have been consigned to the rubbish heap because they are flat-out wrong. At any rate, if you think TypeTheory isn't useful, that's your problem. :)
- {So you are agreeing that Newton's Laws are a UsefulLie. Is it not possible that other definitions of types are also useful lies?}
- All models are UsefulLies, and that includes mapping real-world entities onto an axiomatic type system. However, some lies are more useful than others - to point out that a given model is deficient in some regard when it comes to modelling reality does not suffice to discredit the model; or to give other models, which might be complete BS, equal weight.
- Then we need a better BS-detector. Making a lot of math around an idea does not grant that idea a monopoly. Better-explored, perhaps, monopoly, no.
- Again, you're obscuring the issues. Is TypeTheory an internally-sound branch of mathematics? Definitely. How applicable is it to programming (both language implementation and application development)? That depends.
- Since we know that some things that may not be provably internally sound are still useful, why exclude such things as they relate to types? We would have to delete 95% of wiki if that is the criteria.
- I suspect its use may be related to the problem domain. It may be more useful for modelling unwavering laws of nature than human-personality-centric domains, such as business. And, I am saying that there may be multiple ways to talk about types, regardless of their usefulness.
- Or you can talk about different concepts altogether. If you think about another concept with your taxonomies and everything, give it a different name. In the context of programming languages, types already have a well-defined and widely recognized meaning.
- And I am already on record supporting your idea that in some domains programming language types are not adequate tools that map one-to-one onto classifiers in the problem domain. Which may be a mistake that some OO programmers make sometimes (but not always) when trying to map every classification from the real world into a corresponding OO class (and consequently into a type). But this criticism of yours has nothing to do with criticizing types as they are understood in TypeTheory. What's so hard for you to get?
In the case of the second issue, there is much debate. The correctness of TypeTheory says nothing about whether or not one is more productive with StaticTyping or DynamicTyping - questions of productivity are economic and human-centric in nature - part of the reason why ComputerScienceIsaSoftScience. The debate uses TypeTheory (a hard discipline) as a foundation, but involves many other factors.
In short: TypeTheory is a tool. Whether or not it's a useful tool has little to do with its correctness (assuming it is correct, of course - if TypeTheory itself were a load of codswallop, it probably wouldn't be useful to anybody).
My opinion on the matter: TypeTheory is indeed a useful tool in many contexts. In some, it probably should be a required tool, where the increased ability to reason about a program's behavior that type theory affords might make the difference between life and death. If you're doing a CrudScreen, on the other hand, I'm for leaving it up to the programmer(s) and customer(s) to figure out what practices are most likely to produce the desired result in the most cost-effective fashion.
PageAnchor: typesAsFlags
In a practical sense, "types" in strongly-typed languages are basically just a little hidden or not-readily-seen flag (or set of flags) on each variable (at compile time or run time) that indicates what operations can be done with that variable. Variables that may be derived from compound or multiple types must use various algorithms to determine the final type and/or which operations are valid. Some languages require a final single "type", while others use hierarchies and other means that allow a "type path search" to be done on a derived variable to know which operations are valid.
In some languages, the operations and their types must be "registered" together ahead of time somehow, while others allow each user of a type to determine how it is going to handle a given type on its own.
Sometimes, more than one flag may be needed. For example, if we define type "foo" in terms of type "int" (integer), then anything of type "foo" is both of type "foo" and of type "int". Type systems are basically the management of the trees and/or graphs that determine what type something is or what operations are allowed on it.
More examples are given below.
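For concreteness, a minimal sketch in Python of the "type path search" just described; the hierarchy and operation tables are illustrative, not taken from any particular language:

  # Each type names its parent; an operation is valid for a value if it
  # is registered anywhere on the path from the value's type flag up to
  # the root of the type tree.
  PARENT = {"int": "number", "float": "number", "number": "any", "string": "any"}
  OPS = {"number": {"add", "subtract"}, "string": {"concat"}, "any": {"print"}}

  def allows(type_flag, op):
      t = type_flag
      while t is not None:
          if op in OPS.get(t, set()):
              return True
          t = PARENT.get(t)   # walk up the type tree
      return False

  print(allows("int", "add"))     # True - found up the path, on "number"
  print(allows("int", "concat"))  # False - not on int's type path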
For the information of Top, quite a large part of the body of knowledge called TypeTheory has found its way into large-scale industrial languages that dominate the current programming landscape. I'm referring here to Java 5 and CeeSharp (the version available in VisualStudio 2005). So let's rephrase some of the above claims to adjust them a little bit to reality. Shall we?
What is a practical example of how it has improved them? Also note that I did not claim that no popular language was influenced by the work.
Some more specific examples of how TypeTheory has informed/improved programming language design:
- TypeInference simply wouldn't be possible without it. (A toy illustration appears after this list.)
- These concepts were used long before the referenced "type theory".
- Uh, no. TypeInference was not possible, and did not exist, until TypeTheory had gained some traction. The first language with TypeInference (MlLanguage) was developed by (among others) RobinMilner, one of the founders of the discipline.
- I'm sorry, but this is nonsense. Algol68 had type inference in 1968.
- And you can find the beginnings of type theory in the correspondence between Russell and Frege as far back as 1902. The SimplyTypedLambdaCalculus? was developed in 1940.
- Generics (useful implementations thereof)
- TypeTheory does put a significant part of OO on a formal footing (another reference, I suppose, to the OoLacksMathArgument).
- It does not seem to help with practical design issues in dynamic OOP languages such as SmallTalk.
- It guides language designers in designing type systems that are a) decidable, and b) efficiently decidable.
- What is an example of with and without "proper" use, other than dumb design mistakes?
- TheThirdManifesto has quite a few roots in type theory (in particular, the "other orthogonal" aspects).
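The toy illustration promised above: a minimal sketch, in Python, of inference over a tiny expression AST. It is nowhere near a real engine such as HindleyMilner - just enough to show a type being computed for every expression with no declarations in sight; all node shapes and names here are made up for the example:

  def infer(expr, env):
      kind = expr[0]
      if kind == "lit":                 # ("lit", 3) or ("lit", "hi")
          return "int" if isinstance(expr[1], int) else "str"
      if kind == "var":                 # ("var", "x") - look up the binding
          return env[expr[1]]
      if kind == "add":                 # ("add", e1, e2): both sides must agree
          t1, t2 = infer(expr[1], env), infer(expr[2], env)
          if t1 != t2:
              raise TypeError("operands disagree: %s vs %s" % (t1, t2))
          return t1
      raise ValueError("unknown node kind: %r" % kind)

  print(infer(("add", ("lit", 1), ("var", "x")), {"x": "int"}))  # int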
Moved from MostHolyWarsTiedToPsychology.
Nonsense. It's mostly TopMind using "psychology" as a trick to get some kind of status from where to pretend a reasonable disagreement with mainstream ComputingScience, for example TopOnTypes, RunTimeEngineSchema, etc. Most people know when it is the case that different approaches amount to a difference in style and culture, when a criticism is valid, or when somebody wants an argument ad perpetuum.
If it is a cut-and-dried situation, let's see the evidence. I don't deny that there is a lot of math surrounding "type theory", but if it is tied to only syntax, it is possibly flawed. Besides, types aside, there are still many other topics that are dependent on psychology so far. PsychologyVsCutAndDryPoll.
Again, Costin and TopMind are arguing past each other somewhat. Costin refers to TypeTheory, a well-grounded mathematical theory of some import in ComputerScience. Top seems to think that it has questionable relevance to programming - a claim which can be made without discrediting TypeTheory itself. However, Top's arguments against applying TypeTheory to programming and programming language design are that "some people don't like it". While it's undoubtedly true that some people don't like it, other people find it extremely useful in modelling the user domain on a computer system. Proponents of TypeTheory as a useful programming tool do not carry the burden of convincing 100% of practicing programmers, DBAs, etc. of its merits.
Perhaps we should move this somewhere else, but my complaint is not so much about usefulness in this case (that is another story), but about it being tied to syntax. One can have identical "typing" features using non-syntax-based "languages". Just because one does it in a structure instead of text should not change the nature of "typeness". That would be silly. ("Typeness" as defined in the strong typing language sense.) -- top
- I have an argument to discredit TypeTheory, so as to add to the argument that it has little to do with programming: it is anchored in a Platonic world that is not relevant to actual Reality where one programs on some form of TuringMachine. These two spaces are completely different, and have evolved two different poles that the whole ComputerScience field hasn't sufficiently separated so that their differences can be appreciated: logic gates vs. logical "calculus" and symbols. There is very little crossover as I can see. Perhaps the problem is the common use of the Greek root logikos. In the former,"logic" pertains to binary arithmetic, where in the latter, it retains its original Greek meaning relating to *speech* and symbols, i.e. logos. This is why there is no relevance! Further, one can notice that in the former, the progression has been towards more sophisticated DataStructures (hence the evolution towards ObjectOrientation), where in the latter, I'll argue, the progression has been towards function sophistication (where recursion seems to be paramount). A final observation to make is that the former makes common use of FloatingPointArithmetic?, whereas you hardly ever see mention of floating point in TypeTheory and LambdaCalculus, if at all (again, because it's not rooted to reality!) I've been presenting this same argument over at the comp.science.types. Further, this historical mathematical subfield has misinformed the evolution of ObjectOrientedProgramming, so I've made ObjectOrientedRefactored. -- MarkJanssen
- FacePalm
TypeTheory applies even to (well-designed) languages that don't have StaticTyping (manifest or otherwise). As mentioned on another page, SmalltalkLanguage has consistent type semantics, even though the syntax of the language makes little mention of "types". The syntax of Smalltalk does provide several explicit operations which have impact on types. When an object is created, one must send the "new" message to its class; usually the class is explicitly named (though "aClass new" is perfectly legal in Smalltalk - the equivalent is not legal C++ or Java). Inheritance is another (inheritance is always made explicit), and defining a method within a class is another (if a class understands a given message, then it's a subtype of the implicit type of all classes which understand that message). It sounds like the argument reduces to how much type annotation should be provided by the programmer, which is a far cry from suggesting that TypeTheory is an interesting mathematical diversion with no practical use.
The above is not about the practicality of types nor dynamic types. The RunTimeEngineSchema is based on "static typing" because it checks the tables before running the stored instructions.
Hahahahaha. Top, I hate to be rude...
I doubt that, based on the frequency.
but that remark betrays a vast amount of ignorance on the subject. StaticTyping means types are checked - and the program proven typesafe - before the program is run at all.
This is what I proposed.
Checking during execution, but before a given function runs (and then rechecking the next time the function is called, etc.) is dynamic typing.
This is not what I proposed.
[If I misunderstood what you meant, I apologize. Your use of "runtime" in this context threw me; a language runtime normally doesn't get involved until a program written in the language is running.]
Granted, this sort of dynamic typing (checking that a function's actual parameters match a type specification) is different from DuckTyping (where typechecks are deferred until the last possible instant - which is what Smalltalk does), but it still is DynamicTyping. StaticTyping, by definition, involves no such thing as a "runtime engine schema", because it involves no runtime at all. Static typing means "an expression is proven to have a particular type (or a subtype) in all possible execution paths of the program and for all possible inputs."
It is confined to an expression? Where does this come from? Most compilers convert expressions into a data structure anyhow (such as a tree) before performing such checks. They do not perform it on raw text.
[An "expression" refers to lots of things - any term in the source code, attributes of objects, formal parameters of functions, etc.].
This is why static typing is fast - it has zero runtime overhead. This is also why it is frequently not decidable; it's far too easy to construct programs where the type of an expression depends immediately on a program's inputs. The main objection to StaticTyping comes from the fact that making statically-typed languages fully decidable requires limiting what you can do with objects. You won't find "aClass new" in a statically-typed language, for instance.
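For concreteness, here is a minimal sketch in Python (the expects decorator is an invented name, not a standard library feature) of the "checked at each call" style of DynamicTyping described above; the check runs on every call, which is exactly the runtime overhead StaticTyping avoids:

  # Actual parameters are checked against a spec each time the function
  # is entered - dynamic typing, with a per-call cost.
  def expects(*types):
      def wrap(fn):
          def checked(*args):
              for a, t in zip(args, types):
                  if not isinstance(a, t):
                      raise TypeError("%r is not %s" % (a, t.__name__))
              return fn(*args)
          return checked
      return wrap

  @expects(int, int)
  def add(a, b):
      return a + b

  print(add(1, 2))   # checked at this call, and passes
  try:
      add(1, "two")  # fails at this call, not before the program runs
  except TypeError as e:
      print(e)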
I am not debating the merits of dynamic versus static here, but about the merits of definitions.
Further, I don't give a flying pluck what "mainstream computer science" says. The mainstream said COBOL and MS-Windows were grand.
Bullshit. Mainstream computer science never claimed that. Just go and read EwDijkstra manuscripts. READ, for crying out loud; that would be a welcome change for you, rather than pretending you can reinvent everything, Top style. -- Costin
Actually, most academic computer scientists of the day thought Cobol was terrible. In my view, it was a usable, serviceable language for its time; though certainly not one that was state of the art. I wouldn't dream of starting a project in Cobol today. But your point is? Just because scientists thought one thing, then modified their thinking in the face of newer evidence, isn't a weakness of science. It's a strength. -- ScottJohnson
Academic != mainstream. CobolCausesBrainDamage describes how the "academics" tended to get caught up in their theoretical underwear such that business-savvy practitioners were the only ones able to produce something usable before the given deadline. In my opinion, the best tools come from the merging of brainy ideas and results-oriented practitioners who make the ideas workable and marketable. TwoHeadsAreBetterThanOne?.
Typing is a way to ensure that polymorphic dispatching finds one and only one method before methods are actually called, or at least a way to reduce polymorphic ambiguity.
More generally (and more correctly), typing is a way to a) ensure that program operations are well-defined and well-formed, and b) resolve ambiguity. {A little too general.}
SmallTalk is not "typed" because a message may find zero or one method. PredicateDispatching is not typed because it may return zero, one, or many methods (or their equivalent).
- That would be a heretofore unheard of definition of "typed". TypeTheory does not necessarily require that a program fail (either to compile, or to terminate if running) if a type error is encountered. The reason for typing is to avoid UndefinedBehavior. Handling a discovered typing error via DoesNotUnderstand is a perfectly acceptable practice in many problem domains (there are some problem domains, such as mission-critical software, where DoesNotUnderstand would be considered unacceptable; however the issue of how to handle typing errors is outside the scope of TypeTheory.)
- Yes, but we need clarity with regard to what is "undefined behavior". And a "does not understand" is a language-specific thing that may not be in place even if supported. If a language simply did nothing if a method was not found, one can consider its doing nothing "defined". Nothing is fully defined, otherwise we would not need programs to handle input combinations that we don't know yet. In other words, "defined" is continuous, and subject to LaynesLaw problems.
- No, "defined" is a binary term; it refers to the specification of the language or other system in question. A language spec will specify the behavior of a program written in the language; anything not specified by the standard is generally UndefinedBehavior. Language standards frequently identify specific behaviors as undefined. There should be no question if a given operation in a given language is defined or not; otherwise the standard is defective. (See UndefinedBehavior for much more discussion of this). In most cases, a program which exhibits undefined behavior is in error. Examples: some languages (like C) have no way of trapping an invalid type operation (such as a bogus pointer cast) at runtime - the best the standard can do is warn that invalid pointer casts have "undefined behavior". DoesNotUnderstand (in SmalltalkLanguage) is not undefined behavior; what happens when you send a Smalltalk object a message it doesn't like is very well defined - and SmalltalkLanguage doesn't need typing because it has very few UndefinedBehaviors. And you are correct - if a language decreed that unsupported message sends resulted in nothing happening, or an exception being thrown, etc... that also is defined behavior. It may not be useful behavior, but it is defined.
- If it is left to the language, your definition is a UselessTruth. How something handles an "unexpected" extra or missing dispatching result is an implementation detail (and can potentially be user-configurable). Grand definitions should not be tied to such trivial details in a big way.
- It is unclear where you think the 'trivial details' are being tied in, but understanding what "UndefinedBehavior" means in programming is quite useful for dividing programs into three classes: those provably with undefined behavior, those provably without, and those not yet decided either way. And "undefined" is different from "implementation defined". In old versions of GCC, use of unrecognized #pragma directives - which are "implementation defined" - might fire up a game of NetHack. Undefined behavior is a problem because many expressions for a program won't be decidable either way... esp. those that depend on runtime inputs. I.e., it may be 'undefined' what happens when you dereference an invalid pointer or index outside the range of an array. Exactly what does happen - anything from hard crashes to exceptions to a game of NetHack to a subtle security hole that a hacker will notice years down the line - is free to vary from case to case even within an implementation, as opposed to being defined once per implementation.
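(A Python analogy for the DoesNotUnderstand point above - my sketch, not from the thread: what happens on an unsupported "message" is well defined, and __getattr__ provides a DoesNotUnderstand-style hook:)

  class Forgiving:
      def __getattr__(self, name):
          # invoked only when normal lookup fails - roughly the analogue
          # of Smalltalk's doesNotUnderstand: hook
          return lambda *args: print("no method %r; doing nothing" % name)

  Forgiving().frobnicate(42)   # defined (if not very useful) behavior

  try:
      object().frobnicate(42)  # also defined: raises AttributeError
  except AttributeError as e:
      print(e)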
The "at least a way to reduce polymorphic ambiguity" was added to the definition because, under strong dynamic typing, types are still used to help decide how to dispatch. For example:
a = b + c
In some languages "+" may also mean string concatenation, depending on the "types" of the two operands. It is usually implemented as a "type flag" that comes with each variable (see above). (Generally I prefer that a language have a different operator for concatenation, such as "&" or ".". But, this topic is about definitions, not ideal language design.)
The biggest problem with this definition is that it depends on the definition of "polymorphism", which itself may be subject to interpretation. Also, sometimes polymorphism may not be involved. For example, the above may have an implementation or internal equivalent that resembles:
a = b.plus(c)
Or
a = plus(b, c)
If a language does not support ParametricPolymorphism, it may have to use IF or case statements, something like this:
method plus(operand2) {
  // dispatch on the hidden type flags rather than on declared types
  if (this.operand1.typeFlag == 'number' &&
      operand2.typeFlag == 'number') {
    return mathAdd(this.operand1, operand2);
  } else {
    return stringConcat(this.operand1, operand2);
  }
}
Or
function plus(operand1, operand2) {
  // standalone form: both operands passed in, same flag-based dispatch
  if (operand1.typeFlag == 'number' &&
      operand2.typeFlag == 'number') {
    return mathAdd(operand1, operand2);
  } else {
    return stringConcat(operand1, operand2);
  }
}
This is not polymorphism in the strict sense. One could also emulate the same thing by creating an array in which one element is called a type-flag of sorts and the other is the value. Some languages just have a built-in "array" of this kind for each variable, containing the "hidden" type flag and the variable's value, to simplify the syntax and provide a consistent convention. Such languages essentially perform the following shortcut:
foo = 7; // shortcut in dynamic strong typing
// the long way
foo['value'] = 7;
foo['typeFlag'] = typeCodeOfConstant(7);
I see nothing that really distinguishes the two other than being built-in versus hand-rolled. Thus, the definition either needs work, or "types" is inherently a vague concept.
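For concreteness, the built-in versus hand-rolled comparison in runnable Python (the dict layout mirrors the pseudocode above; "typeFlag" is just an illustrative key name):

  # Built-in: the language maintains the "flag" itself.
  foo = 7
  print(type(foo).__name__)        # 'int' - the not-readily-seen type flag

  # Hand-rolled: the same information carried explicitly, as in the
  # "long way" example above.
  foo2 = {"value": 7, "typeFlag": "int"}
  print(foo2["typeFlag"])          # the same flag, just visible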
- You forgot to plug TableOrientedProgramming, apparently... no mention of querying a database for this stuff. :) At any rate, how runtime type machinery is constructed is an implementation detail, one that TypeTheory abstracts out. An important one for language designers to consider - the big block of 'if's is generally considered a bad idea, due to it having average-case O(n) performance - but an implementation detail.
- PredicateDispatching was mentioned above. Also, we need clarity to differentiate between "dispatching" and "look-up", if such is even possible. All dispatching could be seen as PredicateDispatching of some sort or the other. Sometimes it just takes place at compile time and an error is raised if anything but one answer is returned. This is generally the case for static typing.
- Again, implementation details. PredicateDispatch is the most general form of dispatch; all other forms can be ReFactored? into predicate dispatch. Predicate dispatch suffers from several problems:
- It's slow - O(n) for single dispatch; up to O(n^k) for k-ary dispatch.
- You said that already. This is not about performance.
- It doesn't conveniently allow the dispatch criteria to be specified in separate locations; one of the strengths of OO dispatch is that one can override a base class method without touching the base class definition. With predicate dispatch, one would have to add clauses to the big switch statement.
- "Locations" is a useless term in the virtual world of bits and databases. See GroupRelatedInformation. Now we are LaynesLawing over "location", eh? Polymorphism is simply a language-supported tree search algorithm. Making it built into the language does not change that underlying fact.
- It makes StaticTyping undecidable.
- Constraints can perhaps be added to ensure one and only one answer, if that is what you want.
Related:
SetsAndPolymorphism
I think the underlying reason for the differences in opinion is that I view code as simply a view on an internal (or semi-internal) data structure, and things like polymorphism and inheritance as simply code lookup (dispatching) algorithms or rules, usually based on tree-search algorithms. Strongly typed languages simply do tree searches to validate "types" and "subtypes". Some do it at compile time, and some do it during run time, but it is all just a tree search behind the scenes. This view is not necessarily "wrong", but rather a perhaps minority viewpoint. But it can model anything syntax can for most languages. Most of the formal or "accepted" type theory seems to view syntax as syntax, not as a data structure(s), and thus a data-ist finds the definition foreign. Whether the formal theory can be translated to a data-ist viewpoint, I don't know. Does TypeTheory suffer from LeftHandedSyndrome? Is it my fault for not converting my head to the proverbial right-handedness?
As an example of its use, I did not "get" recursion until I started viewing code as a data structure. It clicked when I was finally able to mentally picture the variable stack being filled up and view how the code was separate from the stack. The code is a static structure during the recursion process and each "instance" of the subroutine call was an empty template of the variables copied onto the stack to be filled in. It works. It may be a UsefulLie, but a very, very useful one to me. If I tried to explain recursion to somebody else, drawing the stack and copying the instance template onto the stack for every new call is the only direct way I know how to convey the concept. I have read other explanations, but they seem very indirect and obscure, like a politician beating around the bush to avoid being specific. // Oddly, I would never have grokked recursion with that visualization. I always thought of it (in imperative languages) as just "do the same thing". E.g. to walk to the store, "take one step toward the store. Now walk the rest of the way to the store". Or for trees: "Put this together by putting the pieces together. Put the pieces together by putting their pieces together". We think recursively all the time, without the benefit of diagrams of boxes and stacks.
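For what it's worth, that stack picture can be made literal with a couple of print statements. A small Python illustration (the depth parameter exists only for display):

  def countdown(n, depth=0):
      # each call gets a fresh copy of the "variable template" (n, depth)
      print("  " * depth + "enter: n=%d" % n)
      if n > 0:
          countdown(n - 1, depth + 1)         # push another frame on the stack
      print("  " * depth + "leave: n=%d" % n)  # this frame is popped

  countdown(3)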
I find it tough to explain things in terms of syntax, but data structures are just a matter of look-ups and filling in the right cell at the right time. And, you can draw them with grids, boxes, arrows, and numbered steps.
In this view, all the fuss about types seems to boil down to guaranteeing that one and only one behavioral "cell" is found per dispatch. The one-and-only-one rule seems to be the common thread separating "typeness" from things not normally considered types; and the primary way of doing this is the "hidden flag" (above). The hidden flag is in both static and dynamic systems; static or compiled systems just toss it at an earlier stage. Maybe Reynolds' approach is just another way to say essentially the same thing, but without reference to data structures. Even multiple inheritance (MI), with its complex lookup process, still produces one and only one answer in the end. If it did not, I doubt it would be called "types". (MI's "flags" are sets or lists instead of a single value.) Tying Reynolds' definition to the data-structure viewpoint may make it more useful and informative because it provides another perspective. The best teaching methods use or try multiple perspectives.
-- top
The fuss about types is about specification of the domains of data - where it belongs, where it doesn't. It's about program correctness. Whether it boils down to early ("one behavioral cell", in top-speak) or late binding is not material to type discipline (one need only look at C++ to see that static typing != early binding).
Are you suggesting that the fewer problems that can be checked/prevented at compile time, the less "typish" it is? C++ sacrifices certain kinds of checks or error complaints for speed and flexibility. You keep seeming to suggest that types are ALL about compile-time prevention of problems. Does this mean that ANY problems detected at compile time are about "types"? If typing is just one kind of checking, what distinguishes it from other kinds of compile-time checking? Take the SQL example under WhatAreTypes. What if the compiler (or whatever it's called in the DB) detects a bad table alias? It prevented a problem before running. So does that alone make it part of "typing"? If not, what is the distinguishing factor? Your definition is not providing clear answers to these kinds of questions. -- top
I'm not even sure what the question is. Is it "define 'type'"? Try a dictionary. Programming languages don't agree on a lot of things, let alone something as abstract as the theory of types. Is it "demonstrate the utility of type disciplines X, Y, and/or Z"? Okay, plenty of opportunity for a PissingMatch there; most of those have their own FooVsBar? pages (e.g. StaticVsDynamicTyping) for that purpose.
CostinCozianu is suggesting that there is a single "proper" definition of types, and created this page to separate the proper one from the "unofficial" ones.
Here is my classification of most typing systems based on the "flag" definition (see PageAnchor typesAsFlags above). It is not meant to replace other classification systems, but rather to present only the "flag-centric viewpoint".
- Compiled typing - Types are checked at compile time to make sure matching type definitions and signatures can be found that match the typing flags. Validating types, such as operand-operator matching, is generally done via a tree or directed graph search.
- This is what most of us call static typing; with the further distinction between systems that can infer types at compile time (TypeInference) and those which require the programmer to provide explicit type declarations (aka ManifestTyping)
- Run-time strong typing - Types are checked during run-time based on the variable flags. Checking may also be via a pedigree tree/graph search (above), but done at run-time instead.
- Context typing - There are no hidden or side "flags" to indicate type. See TypelessVsDynamic.
- This can be one of two things. If the compiler proves that all uses of a given object or variable are correct, then we have StaticTyping with TypeErasure - this is what many old procedural languages like PascalLanguage (ignoring the variant-records hole) and modern FunctionalProgrammingLanguages like HaskellLanguage and MlLanguage provide. Pascal uses ManifestTyping; the latter use TypeInference. Both are typesafe, and neither saves any type "flags" in the object file. Or, we could have languages like CeeLanguage, which provides minimal type safety (it does typechecking, but allows many type-unsafe operations, often with no warning), or its predecessor BeeLanguage, which provides none at all - the programmer must write type-correct code, with little or no verification from the compiler, and none from the runtime system. Type errors in these languages often produce UndefinedBehavior. Outside of certain problem domains (such as systems programming) where such languages (mainly C) are entrenched, this latter situation is widely ConsideredHarmful. Of course, in defense of such systems, a really smart ThreeStarProgrammer can often "prove" to himself that a program is type-correct in situations that modern TypeInference algorithms cannot. In defense of TypeTheory, such programmers are wrong a good portion of the time.
- This sounds like part of the classic dynamic versus static typing HolyWar. The HolyWar topic contains relevant topics for those interested in value judgements on these options.
- In such systems, one perhaps could say that "types are in the mind". What a snippet of memory or list of bytes means is purely up to the developer. Contrast this with flag-based systems which assign or dispatch behavior (in part at least) based on the type flag, either at compile time or run time or both.
- Types are in the mind. Drop the thing on the car - possibly not typesafe. Drop the washcloth on the car - typesafe. Drop the boulder on the car - not typesafe. Types are abstract semantic categorization. TypeTheory is an attempt at the formal definition of types as we currently see them in computer systems. So there are two correct answers to the question WhatAreTypes - one for those who pursue mathematics and apply it to computing, and one for those who build (usually commercial) applications. Neither serves the other very well. -- PeterLynch
- Mixed - For various reasons, such as having arrays that can contain multiple types, or for hardware speed, the approaches are sometimes mixed.
Of course, these leave out all kinds of language-specific details. Note that the "flag" may also be a list, especially in cases of multiple inheritance. Of course, this makes a pedigree check more resource-intensive. Non-trees also risk circular definitions, which the compiler/interpreter must somehow intercept, or risk an infinite loop.
The "flag model" offers proper type substitutability using tree-search/traversal (or graph-search if MI) algorithms commonly described in the body of published data-structure knowledge. For example, if type A is defined in terms of type B, and type B is defined in terms of type C, then the tree search will eventually find the definitions in C as it looks up through the type hierarchy. For MI there may be circular definitions; the algorithms for detecting cycles in graphs are also well-established. Signature-based type matching/validation, such as Java interfaces or parameter overloading, is similarly merely a data-structure traversal to find a matching signature (or report a "no-find"). Thus, the flag model does not lack published theory, but merely borrows from it (data-structure theory). Or, perhaps one could say it piggy-backs on it. Yes, it is kind of a mechanical definition, but that also makes it less vague than some of its competitors.
Nor is the flag model tied to "syntax", unlike competitors. Syntax can be viewed as merely a human-friendly wrapper around a data structure(s). Most compilers and interpreters convert syntax into internal data structures of one kind or another anyhow in order to better process the information. It is possible to program by putting info directly into data structures (such as an AbstractSyntaxTree), bypassing syntax altogether. And the usual type checking can still be performed despite the lack of a syntax coat.
-- top
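(For concreteness, a minimal Python sketch of the flag-model pedigree check described above, as a graph search with the cycle guard needed for MultipleInheritance; the type names and parent table are illustrative:)

  # With MI the "flag" is a list of parents, so we search a graph rather
  # than a tree; a visited set intercepts circular definitions instead of
  # looping forever.
  PARENTS = {"A": ["B"], "B": ["C", "D"], "C": [], "D": ["B"]}  # D->B is a cycle

  def is_subtype(sub, sup):
      seen, stack = set(), [sub]
      while stack:
          t = stack.pop()
          if t == sup:
              return True
          if t in seen:
              continue          # cycle detected; don't revisit
          seen.add(t)
          stack.extend(PARENTS.get(t, []))
      return False

  print(is_subtype("A", "C"))   # True, found via B
  print(is_subtype("C", "A"))   # False, and terminates despite the B<->D cycle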
- You don't know what you're talking about. The AbstractSyntaxTree is syntax, and it's the part of the syntax John Reynolds is talking about, and type theory is talking about. All type theory papers and books present type systems by describing typing rules over patterns in the AbstractSyntaxTree. The simplest example: if a node matches the pattern (\x:A.e:B) - meaning the node is a lambda expression taking a variable x of type A to an expression e of type B - then the type (A -> B) is assigned to that AST node; please note that x and e are stand-ins for the purpose of pattern matching and not literals. If you could suffer a little bit of reading, you'd be sparing us this nonsense. CriticsShouldBeInformed
- You have not made it clear what makes a tree or structure full of operators and operands "syntax", but other kinds of trees not. (ControlTables somewhat resemble them.) You suggested a "common sense" approach to determining what syntax is, but that is not precise enough. We need a better definition. Further, Reynolds' definition being useful does not prevent other definitions from also being useful. You are setting up a FalseDichotomy. If you want to criticize the flag model, then criticize it on its own demerits, not because it is not by Mr. Reynolds. Programs can be viewed as syntax, but they can also be viewed as data structures. They are generally interchangeable views of the same thing. I am simply trying to focus on the second, and the flag model so far seems the best bet.
- It's not that "we" need [a better definition], it's that you need to learn the prerequisites for being able to participate in a valid conversation, TopMind. Like I said above: you do not know what you are talking about. You're wasting my time, and other people's time: it's like trying to explain electro-magnetic fields to a kid who doesn't know math and thinks his intuition is better than decades of science. Trying to convince you of anything gives undeserved legitimacy to your trolls. It creates the appearance of a "controversy" where there's none. So I'm done discussing technical issues with you, at least until better times. Don't worry, WikiReaders with minimal education in ComputerScience are able to discern through the obtuseness of your "arguments". And if you pick up some fans who are gullible enough to even suffer reading your nonsense, power be to you. On the other hand, I have to acknowledge that for all your trolls, you have been a great inspiration for WikiChangeProposal. -- CostinCozianu
- This topic is about "alternatives". My "side flag" definition is simple yet powerful. You are welcome to try to poke holes in it. It would be a welcome respite from your usual personal attacks. Go ahead and keep using String Theory to describe how levers work; I found Type Newton instead. Your fav definition may be wonderful for other reasons, but it does not appear one can use it to easily explain how and why types work (or don't work), how substitutability happens, how compilers and interpreters know what type something is, etc. If a newbie asked such questions, Flag Theory can deliver using regular box-and-arrow diagrams and some basic knowledge of data structures. Types used to confuse me until I developed "Flag Theory". It is a mental model that works for us front-line grunt software soldiers. Whether it stands up to academic rigor may be irrelevant to all but the most pedantic or narrow specialists. Your fav theory seems to result in an immediate BookStop every single time somebody asks a question. Flag Theory gets one 90% of the way with only 10% of the mental work of yours. -- top
- Top, please find somebody else to sing your praises, because it's embarrassing. At the very least, you should consider a deal with RK: you sing his praises, he sings yours and neither looks like a fool.
- A typical response from those who rely solely on ArgumentFromAuthority: it's all about "reputation". You could have tried to expose some flaws in flag theory; instead, you present insults. If you don't care about flag theory, then simply ignore it. Surely your vast understanding and appreciation of Reynolds' theory would give you plenty of weapons to objectively slaughter flag theory. Why hide your candle under a bushel? -- top
- I have better things to do than waste my time. There's no theory that you've presented. Maybe when you make your case - though you've hardly even tried - I may bother to help you with it (but don't take that as a promise).
- I would describe "flag theory" as a model and definition, not necessarily a theory. What it doesn't do is provide reasons and philosophy about why to use types. I think that is orthogonal to definitions anyhow, although it could perhaps be explained in terms of a given definition if one wanted to. For example, flag theory may explain the mechanics of type substitutability, but not why one would want substitutability. It is a "mechanical" model of types. Either way, why not present situations that Reynolds' theory can handle but the flag model cannot? Do you really need a BookStop to present to the reader situations where the flag model fails? That would come across as weak. Use logic to expose me as the fraud you claim I am, not personal insults. You have not yet earned the right to insult me.
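- A self-contained sketch (again an editor's assumed illustration, not top's own code) of how the flag model can describe the mechanics of substitutability: any value whose flag an operation knows how to handle may stand in wherever that operation is required.
 -- Two kinds of value sharing one flag family.
 data Shape = Circle Double | Square Double
 
 -- Dispatch is selected by the flag.
 area :: Shape -> Double
 area (Circle r) = pi * r * r
 area (Square s) = s * s
 
 -- Circles and squares substitute for one another wherever only area matters.
 totalArea :: [Shape] -> Double
 totalArea = sum . map area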
- As for "insults", it's just a matter of objective reality: you and Richard are the two major wiki figures constantly singing your own praises. It would be a welcome change if you could find somebody else to share with us how great he thinks "Top's type [flag] theory" is. If you find nothing wrong with the situation, you can continue to sing your own praises. But don't delude yourself: self-aggrandizement is not the same as being anti-authoritarian.
- Whatever. Ironically, most don't seem to give a flip about Reynolds' definition either.
There's also ProofCarryingCode, which bundles the compiled code together with its types. These are particularly useful for typesafe dynamic loading, typesafe runtime upgrading, and similar useful stuff. -- ShaeErisson
PageAnchor tag_dispute_02
(Re: "side tag" or "side flag")
A type could be seen as a hidden or visible sticker that tells you what type something is (e.g., on an apple, a sticker that labels which kind of apple it is). Note that this is a StrawMan definition of types, since it isn't even a definition! Just like side flags, it doesn't define what types are; it only tells you that a sticker can label something. The sticker tells you the type of apple, but the definition of "type" itself remains undefined. It's a tautological definition that goes around in circles to DodgeTheIssue, at best describing a way to label what type something is at a particular point in time (put a sticker on the apple, put a flag/tag on a value or variable).
It's not perfect, but neither are the other candidates. Those that say what types are, are not clear on what they are not. A definition that doesn't offer a way to falsify its application (what is not an X) is faulty.
{Top, the definition I provided, "a type is a set of values and 0 or more associated operations", clearly offers ways to falsify its application. You just don't like it because it isn't "type tags".}
"Operators" and "values" are not clear-cut. I've shown that "types" can be used as values and vice verse. And whether "types" need closely-associated operators, or even how to measure such association tightness was never firmly established. "I know it when I see it and that's good enough" is not good enough for times when rigor is needed. Do we really need to reinvent those debates here? See TopsTypeDeterminatorChallenge and DedicatedOperatorMeaningless.
{You've never shown that types can be used as values. Your example only showed that a string value associated with a type -- a type name -- can be compared to other strings. Furthermore, using types as values (which can be done) doesn't damage or alter the definition at all. It still holds true. There's nothing that precludes the "set of values" in "a type is a set of values and 0 or more associated operations" from being a set of type values. Furthermore, there is no need to measure closely-associated vs loosely-associated. Associated is associated!}
It's fairly trivial to conjure up a version that doesn't depend on strings; strings were just one implementation- or language-specific approach among many possibilities. And are you sure you want to allow ANY association? Still, no clear-cut rules have been established for assigning, with 100% reliability, the "parts of speech" of languages (values, types, operators, etc.). We all have notions and pattern recognition in our heads with a rough commonality, but that's not good enough for tight debates, and it leaves a lot of fuzzy areas where two different people will label things differently. One can define a language with "stem objects" that can act like common types, values, operators, etc. and switch between them willy-nilly. Pattern recognition may be "good enough" to label C/Algol-like languages, but that may not extrapolate to other languages. Thus, pattern recognition (it's an X in Z because it "looks like" the X in Y) is not sufficient.
- {The definition -- like almost all definitions -- is an abstraction rather than a laundry-list of every conceivable instance. I'd be interested to see your example of a language with "stem objects" that can "act like common types, values, operators, etc. and switch between them willy nilly."}
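As an aside, the "set of values and 0 or more associated operations" definition quoted above can be made concrete. A minimal sketch (an editor's illustration; Val, TypeDef, and intType are assumed names, not from the discussion):
 -- A universe of runtime values.
 data Val = I Int | S String deriving Show
 
 -- A "type" per the definition: a set of values plus associated operations.
 data TypeDef = TypeDef
   { member :: Val -> Bool                -- the set of values, as a predicate
   , ops    :: [(String, Val -> Val)]     -- zero or more associated operations
   }
 
 intType :: TypeDef
 intType = TypeDef
   { member = \v -> case v of { I _ -> True; _ -> False }
   , ops    = [("negate", \v -> case v of { I n -> I (negate n); _ -> v })]
   }
Note that nothing here prevents Val from ranging over type values themselves, which is the point made above about types-as-values not damaging the definition.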
Simple question: do different types of fruit exist? If so, you know what types are and should be able to grasp types as a LayMan. Types in computer programming are a bit more specific, because the digital world is different from the physical world, but anyone should be able to grasp types by simply looking at different types of fruit.
We could create pages like TableDeterminatorChallenge? to debate whether or not tables really exist and how to prove whether ThereAreNoTables. Why waste time on such silly debates?
I wouldn't. EXCEPT that some here INSIST their definition is the OneTrue? definition, are rude to others for not agreeing with them, and spray their certainty all over the wiki like self-elected Truth Nazis. Contrast these two:
- 1. "I prefer Dr. Foo's definition of types. It better fits how I personally envision types."
- 2. "Dr. Foo's definition is clearly and objectively better and if you don't agree you are a stupid troll and a stubborn student who should be spanked and banned from this wiki."
Here is something I am writing about types, FWIW: http://sites.google.com/site/argnosis/types
-- PeterLynch
Early definitions of the word "type" (from its etymology):
- n., late 15c., "symbol, emblem," from Latin typus "figure, image, form, kind," from Greek typos "dent, impression, mark, figure, original form," from the root of typtein "to strike, beat," from PIE root *(s)teu- "to strike, cut, hew" (see steep (adj.)).
See also: LaynesLaw, AreTypesTiedToTrees
CategoryLanguageTyping