Can the absence of instances of "types" be objectively detected?
Oh look! Another ThreadMess!
Please, please can someone take the heat out of this and find some content, or delete the content-free bits? Does the page TypesAreTypes help at all?
[Ask clarifying questions and/or help consolidate them rather than complain about it. ThreadMess is a symptom of a need for improved communication.]
Page is now defunct. Top has properly admitted that random white noise cannot internally constitute meaningful structure or types, so now he has a sufficient example of 'foo is not a type' in the context of programming or computer science. This is after, of course, he proved remarkably dense at comprehending that cats and dogs and 'red' are also not types in the context of computer science, but you can't expect much from someone who proudly professes belief in contradictions like 'EverythingIsRelative'.
- So why was this page created when it could have been thrown in some other page? Who is this wiki puppy that created this page?
- It is NOT finished because there is no objective test for the existence or absence of "types" yet. The existence of such a test would be independent of my allegedly "not understanding" types (a cop-out accusation from the vague-minded). Objective rules don't think, they just exist. I asked a simple question, but get an evasive answer. --top
Congratulations on your hollow victory in which a random blob is the only thing without "types". A nice UselessTruth has been created. (And your "red" comment is misleading). If one took a listing of ANY programming language and asked you to circle the "types", everything would be circled. In the meanwhile, a practical applicable definition of "types" is STILL as elusive as a unicorn. See ya at the Bigfoot Convention, and don't forget to try the pinecone-banana smoothie, a BF favorite. It has the texture of a random blob. By the way, "random" may not be fully testable either. From a practical standpoint, random just means a pattern has YET to be found. It is thus tied to the observer. --top
- If you want to circle which is a type and which is not a type in your text editor, you have to know the distinction between a variable, a value, and a type. There are also operators. Circle the equal sign ("==" in C). Is the equality sign a type definition or a type of variable? Is the number 7 a type? Circle it. The equal sign and the number 7 are not types. The only way a HostileStudent could not understand this is if he lacks education on what variables, values, and types are. You could make a programming language where the equal sign is a typedef, or where the equal sign is actually the addition operator and not the equality operator. That's because programming is an artificial and virtual world. Maybe the problem is that Top has difficulty dealing with the artificial or virtual world that is a computer, because anything can be done with a computer since it is not the real world. Therefore, in a computer, to Top, nothing can ever be falsified. Another example is a video game: when you press the up arrow key, you expect the character in your game to move up. You could reconfigure the keys so that the up arrow moves the character down. Therefore we cannot falsify the up arrow, according to top, because the up arrow can be reconfigured to do things it shouldn't, since the computer has this power. Therefore people like Top would argue that the "up arrow" is not actually an up arrow. The up arrow is whatever we want it to be, and therefore no one can use the Up Arrow in conversation any more since there is no up arrow (ThereAreNoUpArrows?, ThereAreNoTypes). Since Top's video game character moved downwards with his up arrow mapped that way, he would therefore say you can't falsify the up arrow, since the up arrow can be any key we remap it to. It is a childish way of muddying the waters, and we aren't getting anywhere. What we really need is for people to have a clear distinction between types, operators, variables, and values. Just like an Orange is distinct from an Apple, a variable is distinct from a type, a value is distinct from a variable, etc.
- You are rambling; I don't know what you are talking about with regard to video game. Use quality text, not quantity. The programming language element examples you mentioned are trivial ones that are not the controversial ones. Nobody is disagreeing with those; you built a straw-man. But for the tougher cases we need a definition that is objective and clear-cut. "I know types when I see them" is not good enough. -t
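To make the uncontroversial cases concrete, here is a minimal sketch of the variable/value/type/operator distinction, in Haskell; the names are illustrative only, not drawn from any post above:

 data Color = Red | Green | Blue   -- 'Color' is a type; 'Red' is a value of that type

 x :: Int                          -- 'x' is a variable whose declared type is 'Int'
 x = 7                             -- '7' is a value, not a type

 same :: Int -> Int -> Bool
 same a b = a == b                 -- '==' is an operator, not a type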
Now your idiocy shines through once again, top. Excuse me while I bask in your ignorance and your seeming confusion that 'admitted thing' = 'only thing', and your apparent inability to comprehend that broadly applicable definitions aren't generally impractical.
Some things that aren't types are (a) anything that is not a specification or type-descriptor, such as cats and dogs and the color red (and I'm talking about the real things, not the words 'cat', 'dog', and 'red'). This is NOT misleading; it's a simple, and rather trivial, fact. You can't blame me for fools that make a habit of balking at simple truths. (b) any specification that is self-contradictory (e.g. the set of all integers between 0 and 1 exclusive) is not a specification for anything at all and is therefore not a type. (c) (for communication types) any structured communication where the greater structure is accidental (i.e. where you ask for a string and I give you well-formed XML), the greater structure fails to qualify for 'type' in the communications context because you can't use it as you could typed communications (e.g. no assumptions, no security, etc.); 'random white noise' is a clear and very obvious case of this, since it makes it obvious (even to top) that any greater structure is accidental. All these things follow from a rather 'broad' definition of 'type' in the context of communications (which is a supercategory including all instances of language, including all instances of programming language).
Maybe, someday, you'll learn that all simple definitions are very broadly applicable, often difficult to comprehend in one go, AND no less practical for it: mass, energy, information, number, percept, proposition, predicate, pattern, monad, relation, type. While you can gain some advantage by speaking of specific forms of these things (e.g. qualified patterns, binary relations, or 'variable' types), you'd be a fool to mistake it for the only possibility. Of course, being a fool would hardly be a new precedent for HostileStudents who, like top, very consistently have an agenda other than comprehension.
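For point (b) above, a hedged sketch of a specification nothing can satisfy (Haskell; the names are invented for illustration):

 data Empty                       -- no constructors: no value can ever inhabit this type

 betweenZeroAndOne :: Int -> Bool       -- "integers strictly between 0 and 1", as a predicate
 betweenZeroAndOne n = n > 0 && n < 1   -- False for every Int, so it specifies nothing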
I didn't ask to be your gaddam student. I asked for a clear, falsifiable definition. It was a specific request. If I wanted a fscking teacher, I'd find somebody far more pleasant.
FIRST, you asked to be a student the moment you asked someone else to provide you an answer. It just so happens you asked me. So, YES, you did ask to be my gaddam student. If you don't want to be a student, then find your own answers, you prick. And you'd be a fscking HostileStudent no matter who provided an answer.
Also, I suggest we stick with programming languages and apps instead of the "real world" to simplify the debate.
Irrelevant. Specific conditions can only make things more complex, not less so. If you want to 'simplify the debate', you take conditions and requirements away; you certainly don't add them. Attempting to limit the debate to 'programming languages and apps' first requires defining 'programming languages and apps' which I believe to be another long and largely pointless venture in the quest to reach a falsifiable definition of types. Like all simple concepts (e.g. relation, number, monad), it is better (simpler) to define types generally then explore how such a definition applies in the typical context of communications or programming.
(Moved from OpenProblemsInComputerScience)
Without a clear and falsifiable definition of "types" ("foo is not a type"), this discussion appears to be on the express train to LaynesLaw. --top
The number one (1) is not a type; it is a value that corresponds to a type. There is your falsifiable definition. If that type specification happens to allow the number one in, then good. If not, then a run-time error, warning, or compile-time issue results. The color "red" is not a "car type". Another falsifiable definition. If red is not just a color but also a style of a car (luxury defined as red), then this can be a type of car. Clear meaningful types would not include such silliness, and red would only be a type of color.
One can model car colors as types/subtypes if they really want to. Any combo of attributes can be turned into a subtype tree (which is not the only form of types). Or for that matter, integer instances. That's the point: "can" versus "should". Your "definition" is a TautologyMachine. And, how about we move this debate to another spot. --top
I said to use clear meaningful types. carColor := red is not a "car" but a "carColor", TopMind. TCarColor is much more specific than TCar. TCar does NOT equal TCarColor. TCarColor may be how you (confusingly) define what a car is... but it is just the Color Of The Car, hence TCarColor, not TCar. As for the definition of a type: do some reading on Hugh Darwen and Date: http://www.dbdebunk.com/page/page/1812707.htm Types are very specific, and involve precision. Date/Darwen go as far as wanting to constrain types as precisely as possible (not allowing Quantity to be added to Weight). In fact, their articles remind me very much of what I described in ConstraintType. And that is exactly what it is all about... a type that constrains so that we have integrity (as opposed to it being about saving developer time with sloppiness).
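As a hedged sketch of such a ConstraintType in the Date/Darwen spirit (Haskell; 'Percent', 'Weight', and 'Quantity' are made-up names for illustration, not from the cited article):

 newtype Percent = Percent Int deriving Show

 mkPercent :: Int -> Maybe Percent
 mkPercent n | n >= 0 && n <= 100 = Just (Percent n)   -- 1 is a value this type admits
             | otherwise          = Nothing            -- 999 is rejected: not of this type

 newtype Weight   = Weight Double
 newtype Quantity = Quantity Double
 -- 'Weight 2.0 + Quantity 3.0' is a compile-time type error: the wrappers keep
 -- dimensionally distinct numbers from being added, in the spirit Date/Darwen advocate.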
"Clear and meaningful" is subjective. carType := redCar "compiles". If this is changing from a debate on what is and isn't types into "good types" versus "bad types", then you are both changing and complicating an already complicated topic. --top
- carType:= redCar is repetitive and redundant... you violate NeedlessRepetition because you declare the word Car twice. That is not very precise and is just like doing namespace_function_namespace(namespace, blah).
- What do you mean "compiles"? Don't tell us Top uses a "compiler"?
- But according to a HostileStudent, the word compiler is not falsifiable. What is a compiler? Everything is a compiler. Circle things on your screen. MS Word is a text document compiler. An orange and an apple are compilers, because oranges are compiled fruits; therefore fruits are compilers. Not falsifiable, sorry.
Top, are you really indicating that you honestly believe 'red' (the color) and 'redCar' (a classification of vehicles based on their paint job) are exactly the same thing? Or are you just twisting words and playing the pretend idiot to avoid confronting actual argument?
I showed that your statement, 'The color "red" is not a "car type".' can be wrong. If I misinterpreted your statement, it was not intentional. No need to accuse me of spiking the punch. I would suggest rewriting it if it was ambiguous. But, we are drifting from the "falsify" issue: what is objectively NOT a type. You still have not produced something that cannot be interpreted as a "type". You can make up your own little car model and within that model state what is a type and what is not a type based on your own rules and vocab, but that is simply making your own world with its own rules and own vocab, saying what is a type in your model and what is not a type in your model. In short, cheating via a WalledGarden HumptyDumpty. It is possible to create a subtype tree of cars based on color, as shown below. But you proved yourself wrong anyhow by saying that "red" is a type of "color". That is not falsifying the existence of types in something. It is existence that matters here, not alleged misapplication. --top
No, you have shown no such thing, Top. What you have shown is that, no matter what Lars produces, you can twist it in your mind to the point you can misinterpret it as a type, but unless you establish that whether something is a 'type' is subject to your interpretation, your argument holds little weight. You can't take 'red' and drive it around... at least not without a much more fantastic imagination than I possess. I won't pretend Lars has a much better grasp on types than you do; IMO, you're both uneducated ignorami when it comes to types.
- Re: "what Lars produces, you can twist it in your mind to the point you can misinterpret it as a type." [emph. added] How do you know it is MIS-interpreted rather than one of many possible interpretations? Where is the clear-cut test? If being a "type" is something objective, then there should be an objective test. If "types are in the mind", then seeing something as a type one day but not the next is valid. --top
Irrelevant. The car thing has been misinterpreted. The open issue is that nobody can so far prove that something is or is not a type outside of a custom world with custom vocabulary ("types are whatever the hell I say they are in my model"). Until that's done, my accusation that "types" is not falsifiable, and thus not scientific but subjective, stands. If you are as smart as your name-calling of others suggests, then barf forth clarity and science and proofs.
- Did you respond yet to the number 1? The value "number 1" is not a type, but if it is within a RANGE that specifies 1 as a valid value, then it is a valid value that conforms to a type. Therefore, I have falsified - I have declared what a type is not. A type is not a value such as the number 1. Human clothing is also not a type of number. Is it really that hard for you to understand? In a video game, clothing could be artificially, magically represented as numbers... but that is a virtual fake world. Define some limitations and you will see that a pair of pants is not the number one... therefore a pair of pants is not of the type "integer" in the real world. A pair of pants is a type of clothing in the real world. Really, types are a form of common sense. In the world of computers, we can abuse types like you did with the carType := redCar, just as in real life one could abusively declare pants and t-shirts as tShirtIntegers... when in fact they are clothes. Just use some common sense and avoid NeedlessRepetition.
- Along the same lines, you cannot tell me what a number is! You cannot falsify a number! How can you tell me that a number exists at all, if we do not know what the end infinite number is? Therefore you cannot claim numbers exist! We can go on and on with arguing about such nonsense... let's just stick with the dictionary definition of a type and the dictionary definition of a number. In fact, you cannot claim that numbers exist... you are not God. You cannot prove that one exists. You can only claim. Math is fake! It doesn't exist! Stop all math courses!
- A number is a UsefulLie created by humans to approximately model the world.
- A string is a number! You cannot falsify it! A string is a number! A t-shirt is a number! A telephone is a number! It is not part of a domain or type system; a string is a number!
- Nonononono, that isn't a number, it's actually a button. Everything is a button.
- Nononononoononono... it is actually a staple gun. Everything is a staple gun. You cannot claim otherwise, you have no proof.
- No, it is all bits. And hot dogs. Not types... just bits and hot dogs.
- [There are no hot dogs! I prefer "hot dogless" programming. Nobody can so far prove that something is or is not a hot dog outside of a custom world with custom vocabulary. Hot dogs are a pickle with a sausage flag.]
- Is there a point to this phony simulated discussion?
- If I look up the word "apple" in the dictionary, I can declare that the apple fruit itself is not an orange... but who can prove this? Who has proof? No one. Apple fruit is a type of car. In a sensible program, it is not. Just as in a sensible program, one plus one does not equal seven. But it can if you really really want it to. In a computer program, all math can be proven wrong if you really really want to do that... since you are dealing with a virtual made-up world. Did I mention common sense?
The car thing was dreadfully misinterpreted by you, Top. Your claim has essentially been that (('red car' is a 'car type') implies ('red' is a 'car type')), which is plainly false. Further, you've proven too dense to recognize the fallacy in your argument; a more honest, more introspective person might start asking: "If I got that wrong, what else might I have gotten wrong?" But not you. You're neither introspective nor particularly interested in learning about your own imperfections. However, your whole accusation - the whole premise you created in your mind for this page - is in error.
There seems to be some confusion about types existing in a model versus types existing in external reality. But without a solid consensus definition, I cannot clean up this example (unless I make a model up out of the blue). I cannot turn a sow's ear into a silk purse. I cannot fight fuzz with fuzz (unless the end product is fuzz). I cannot be objectively wrong about something that is not even concrete. Slam me with REAL logic, not your pet notion-math. --top
- You've already proven that you answer REAL logic with REAL fallacy. ONLY if you honestly and fully believed that Lars meant 'red car' every place he said 'red' (DESPITE him having said the opposite a multitude of times, so I don't believe you honestly do) THEN you could claim it was an issue of 'fuzz'. But I don't believe this to be the case. There was no 'sow's ear'. Lars made it abundantly clear that he meant 'red' and not 'red car'. And you were objectively wrong. Period.
- And solid consensus as to definition isn't necessary, as explained earlier (below). BUT, here is something you should (as a person who has argued that ObjectivityIsAnIllusion) already understand: There are no 'red cars' in reality, and no 'cars' in reality, and no 'red' in reality - those things only exist in our heads; even if reality is 100% objective, then in reality there are only configurations of electrical forces and whatnot... not 'cars'. Types only exist in models. But it should also be trivial to see that types apply over reality - that which one calls a 'car' must have certain properties, have configurations of electrostatic forces to certain specifications or that produce specified behaviors. Any type, in the broadest sense of the word, IS such a specification. That types are specifications or descriptions of things to which the 'type' can be said to apply is an essential property of types - not the only one in the context of programming, but one that will be true of ALL type systems, whether they be formal or not.
I'll tell you this much: 'Type' only needs to be defined, and thus only need to be 'falsifiable', within a given 'type system' or context. Further, 'types' are mathematical and logical entities, not 'scientific'. But I won't waste (more of) my time trying to convince you of anything, no more than you should try to explain the RelationalModel to a seven-year-old who only cares about red cars. It would be a waste of my time and yours - a fact proven across a variety of other type-related subjects.
If types only exist in a model and the model maker gets to label what are types and what are not types within that model, then using the term "types" outside of a *labelled* model will almost surely be an express train to LaynesLaw, as the opening statement suggests.
What you describe constitutes an insufficient condition for driving a discussion to LaynesLaw. Using the term "types" within an implicit context, such as 'Relational' or 'Programming', is only an 'express train to LaynesLaw' if one of the persons involved in the argument is being contrary just for the sake of being contrary (e.g. playing devil's advocate and trying to pick apart terminology rather than confront the actual argument being presented). So long as the discussion itself is confined to examples that members consider to be relevant types for context, there is no problem - even when 'type' isn't clearly defined by the type system, the context itself often makes it clear what can and cannot logically (mathematically) and pragmatically (in implementation) be utilized as 'types'.
Now, it may be that you consider this an 'almost surely' proposition because YOU will happily take the role of devil's advocate and bastard who fusses about terminology 'rather than', instead of 'in addition to', confronting the actual arguments.
PageAnchor: "routine types"
- I dispute that. I've been in arguments where people say things like "variables are one type and subroutines are another type, therefore what you consider 'type-free' actually does have types". The scope and context often *does* become an issue. --top
- The other people are correct. Variables vs. Subroutines constitute two different 'types' in the context of programming. Properly, they make a distinction over parts of the programming language, making the program itself more than just an unstructured 'string of characters'. And what exactly is it you are disputing: I said quite clearly that the 'implicit context' should make things clear. Are you telling me that you're incapable of figuring out this context based on the statements being provided? (Because, to me, this 'dispute' is tantamount to you claiming that you aren't a devil's advocate by virtue of lacking the necessary mental agility to comprehend the context in the first place.) Obviously, in the context being provided, 'type-free' would mean 'no distinction at all between code and data'. And there is a common example for these people to point at: raw machine code. As such, you really can't rationally argue their position or context is unjustified. In fact, I'd say theirs is the ONLY justifiable position: the ONLY code you can truly justify as claiming to be 'type free' is 'raw bits fed to the CPU'. Can you provide reasonable justification to deny this?
- And do note that distinguishing 'variables' from 'routines' is a NECESSARY first step before creating 'variable types' and 'routine types', and allows one to consider the issues of, say, assigning routines to variables. There is nothing 'trivial' about them, and the distinction is of significant practical value. You say below that this 'routine type' scenario has made you 'gun-shy', but you obviously still believe yourself in the right and others in the wrong on this case. Perhaps you should forget everything you think you know about types and study them with an open mind, starting with the true 'type-free' BLOB of memory.
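A small sketch of why the variable/routine distinction has practical value (Haskell; the names are invented):

 double :: Int -> Int     -- 'double' is a routine; 'Int -> Int' is its routine type
 double n = n * 2

 f :: Int -> Int          -- 'f' is a variable holding a routine: assigning routines
 f = double               -- to variables presupposes the distinction between the two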
The original claim that triggered all this was that "types objectively make programs better" (paraphrased).
How can you make such a universal statement if the truth of "types" only exists within a given language/model? Either you need a universal model/def of types, or you need to test every language/model to back up that claim. And, "types objectively make programs better" *is* a scientific claim.
First, "types" in the context of "programming" has a fairly clear context, I'd think - it can refer to any type definition for any system ever applied to programming, which necessarily means types applied over a programming language, since every instance of programming is done in a programming language. Second, the claim is obviously not that every possible type-system makes programming better. To be 100% logically correct, the claim that "types objectively makes programs better" can be demonstrated with one example: ONE (arbitrary) property provided by ONE (arbitrary) type system over ONE arbitrary programming language that has clearly makes programs 'better' by ONE objective measure that all parties of an argument agree upon as constituting 'better', and makes it little or no worse by other objective measures. That is sufficient to show that 'types' can make programming 'better'. Your notion that: "Either you need a universal model/def of types, or you need to test every language/model to back up that claim" is, very simply, a rather significant misunderstanding on your part.
- That's making just ONE metric better. Usually there are downsides to what languages typically call "types". One needs to show that the upsides outweigh the downsides. I doubt you can objectively prove that. --top
- making ONE metric better without making the OTHER measures 'worse'. Read what I wrote (which, in this case, is "and makes it little or no worse by other objective measures").
- Claiming does not make truth. Anyhow, I think I see where the confusion is. You seem to be claiming it makes the *language* better, not necessarily an application program. While I agree that having the *option* to use stronger types is probably a net plus (if it doesn't interfere with non-typing usage), this is NOT the same as proving it makes an application itself better. --top
- Of course claiming does not make truth (unless one is describing truths about the claiming). And you should respond to my claim beneath the claim itself (which is restated below), not beneath the reasoning about what constitutes a proper answer to a claim. Anyhow, keep in mind, top, that there is no such thing as a useful stands-completely-alone application (that takes no input and produces no output). As a consequence, all applications must communicate. All communications can either be massive BLOBs (e.g. undifferentiated streams of bits) OR be typed (i.e. have distinction over the bits, structure, semantics, and meaning). Those are the ONLY two logical options. So I'll repeat my claim in this context: typing for applications (their communications infrastructure in particular) makes applications themselves better because it allows one to make use of the outputs and allows one to provide meaningful inputs to the application. Further, it allows one to perform optimizations (better performance for communications, since you can make assumptions about how you will need to handle communications), gain security (since you can enforce properties over communications), and define correctness (since one can make useful distinctions about what constitutes 'incorrect' inputs or outputs). Communications types make the application itself better. This also derives from a fact: Operating Systems and Programming Languages are the same fundamental thing (LanguageIsAnOs).
- Now, you seem to believe that the existence of an 'option' might be better, but that strict enforcement (StrongTyping) is a clear folly. Can you tell me how the *option* to use weaker types (e.g. raw bitstreams) provides ANY tangible benefit whatsoever? Because I don't see how, especially given the option to explicitly represent bitstreams or character strings in a strong typing system. Allowing one to violate types (as an 'option') undoes almost every emergent benefit they provide. So, unless you can demonstrate that the benefits of the 'option' outweigh the costs of the 'option', top, rather than just claiming it, I shall not consider it true. Claiming, after all, does not make truth.
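As a hedged sketch of the typed-communications claim (Haskell; the 'Command' vocabulary is invented, not a real protocol):

 {-# LANGUAGE OverloadedStrings #-}
 import Data.ByteString (ByteString)    -- a raw, undifferentiated stream of bits

 data Command = Start | Stop | Status   -- a typed communication vocabulary

 parse :: ByteString -> Maybe Command   -- ill-formed input is now detectably incorrect,
 parse "start"  = Just Start            -- which is what enables the correctness and
 parse "stop"   = Just Stop             -- security properties claimed above
 parse "status" = Just Status
 parse _        = Nothing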
And it is true that '"types objectively make programs better" *is* a scientific claim', but it doesn't require that 'types' be defined scientifically. Your logic would be akin to saying that "strings" need to be defined 'scientifically' (e.g. based upon empirical observation) to make any claims about use of strings in programs. It would be dreadfully silly.
Only if both parties agree what "strings" are up front. One does not have to find a universal definition if there is such an agreement. I used to think that types were the same way, but the likes of the "routine type" scenario above has made me gun-shy.
Both parties only need to agree upon the essential properties of 'strings' (i.e. 'ordered sequence of distinct digital values'), not the issues of zero-termination, maximum lengths, representation, or the exact character set, et al. Note, thus, that they don't need to decide upon the exact model of 'strings' any more than, to discuss types, they need to agree upon the exact model of types. Further, note that the 'shared properties' do not constitute a 'definition' of string, merely a set of limits over the possible definition. FURTHER, note that these limits are not 'scientific'. Sheesh. An educated ComputerScientist or mathematician would know this stuff. Are you JustaProgrammer?
- How does this contradict anything I've said?
- Agreement upon essential properties does not constitute a definition or model or even full agreement, which is what you imply. E.g. two individuals could disagree vociferously upon other necessary properties of "strings" so long as they agree upon those properties that are 'essential' in the context of the argument. In the context of 'types' as used in 'routine types' and in the 'OpenIssuesInComputerScience?' page, mere agreement upon the concept that 'types in any system will give some structure and meaning to a sequence of bits' is the only essential property to make the conclusions given. You're free to disagree vociferously about everything else about 'type'; so long as you agree with that one, essential property, however, you cannot dismiss the argument on basis of definition issues.
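A sketch of agreeing upon essential properties without agreeing upon the model (Haskell; both toy models are 'ordered sequences of distinct digital values'):

 type StringA = [Char]                       -- model 1: a linked list of characters
 data StringB = StringB Int (Int -> Char)    -- model 2: a length plus an indexing function
 -- The two models disagree about representation, termination, and maximum length,
 -- yet both satisfy the single essential property the argument requires.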
Now, my own claim (in OpenProblemsInComputerScience) was slightly stronger, which is that types ALWAYS are better for programming. I didn't say which or 'all' types; the only universal I applied was over 'all' instances of programming. But I stated, in this context, anything short of treating bits in memory and on HDD as one massive, unorganized, undifferentiated BLOB, as having types. And I made that context very clear - I stated it very directly in OpenProblemsInComputerScience - which even a devil's advocate like yourself can't rationally deny. Having 'files' as distinct from 'the blob' would represent a type. Having 'file formats' as distinct from 'binary files' would also represent types. This context treats 'types' rather liberally, but, if you wish, I can support my view by pointing at some very common programming languages (i.e. raw machine code) that truly do treat memory and files in exactly that way, thus demonstrating that my view is 'practical' and applies to 'real life'. I can even show this definition of 'types' is falsifiable: clearly 'dogs' and 'the color red' aren't 'types' in this context (introducing something of a fundamental concept mismatch - 'dogs' and 'the color red' can't even be applied in the context of differentiating BLOBs - and, top, before you think such an argument is "silly", keep in mind that any definition only needs one thing it applies to (this is a type) and one thing it doesn't apply to (this is not a type) before proving its use as a definition and that it is not a 'TautologyMachine', no matter how trivial). Further, "let's treat everything as one big 'blob'" isn't a 'type' in this context, since that's exactly what types were defined to not be. If you wish to oppose this position, you MUST argue that treating everything on memory and on HDD as one massive and undifferentiated BLOB is no worse
for performance, correctness, security and various other objective measures than application of 'types'. This, you haven't attempted. You'd much rather fuss about terminology than confront such an argument.
And my point was, and is, that once you agree that having at least two types (e.g. executable machine code vs. other data) is 'good', after that it really is just a matter of degree, and any argument that there is a clear 'point' at which more distinctions aren't better had better be provided with a good bit of logic lest it be a slippery slope argument. I happen to think that making 'the right' distinctions is important, and admit that wrong distinctions can make a system inflexible. But I also happen to think your justifications that 'strings' and 'integers' and 'dates' and whatever selection of types it is you've decided upon happens to be sufficient for all relational programming is woefully inadequate and entirely irrational.
If your definition of "types" is broad enough to include any "atomic unit" one encounters (routines, statements, columns, tables, etc.), then of course I am going to agree that "SOME types are good". But if your definition is indeed this broad, then it cannot be falsified in any practical sense since every language/IT-tool ever invented has at least some kind of atomic unit. You seem to be backing yourself into a corner. It will be entertaining to see how you wiggle out. -- top
If you agree that SOME types are always good then, if you wish to demonstrate that the 'option' is even better, you need to explicitly prove that the 'option' to violate these types either (a) ought to always be included, or (b) that there are some specific circumstances under which types ought to become optional. It seems to me that puts the ball in your court, and leaves ME with a great deal of freedom - hardly 'backed into a corner'. I'm curious as to exactly how you twisted your perception to the opposite. And the application of 'type' can be falsified in the practical sense by looking at exactly those situations where 'types' aren't applied - BLOBs and raw bitstreams. One can also look at those locations where type-correctness isn't enforced.
- I don't even agree on the existence of "types" as anything objective, or at least defined consistently enough to talk about beyond vague notions. The burden is on you to show that they are falsifiable. --top
- I've already stated, and provided example, that 'types' are mathematical, so the question of objectivity vs. subjectivity is entirely moot. Further, the notion of 'types' has been demonstrated, many times, to be 'falsifiable' in the sense of possessing cases where you can say: this is not a type. Should I use words with at most one syll- sound in them when I tell you these things?
- You claimed they are mathematical. That doesn't make them mathematical. And it has NOT been objectively demonstrated.
(Plus, I would disagree with such a broad definition, mainly because it waters down the concept beyond anything meaningful in the practice.) -- top
I'm quite certain that nothing is 'watered down' by use of this broad definition. Can you provide an example of where the definition is meaningfully 'watered down', or is this just another 'claim'? (remember: "Claiming does not make truth -- top").
- It appears the only thing that does not have "types" according to your def is random white noise without any determinable pattern. If that's the case, then the existence of types is always a given in practice. It would almost be like defining "Foo" as "either matter or energy or the lack of matter or energy." --top
- Not quite, top. It's more like defining 'foo' strictly as energy and matter. Obviously, as you've said yourself, random white noise doesn't qualify, so if you were looking for the ONE example of 'lack of types', you've done me a favor and already provided it. The wide application of 'types' to all useful communications is a good thing, and doesn't 'water down' its definition whatsoever.
Claiming that words in the dictionary exist does not make the dictionary truthful either. Yet people use the dictionary as a reference. Consider a type system as a point of reference for
sensible people. The brave person that declared 1 + 1 = 2 was also just making points of reference... and it is not correct math, because in my world 2 + 2 = 1 (two bananas plus two bananas always equals 1 group of bananas).
Example 1:
- Cars
- Green Cars
- Blue Cars
- Red Cars
- Etc...
No one claimed that there couldn't be red cars... what was claimed was RED on its lonesome, was NOT a car itself.
[But a HostileStudent would then go on to claim that someone started up a new car company called "Red", and that the HostileStudent recently saw a car outside his window and it was made by the company Red, therefore Red is a car itself. The problem once again is the twisting of words, twisting of logic, and dodging the real issue. The techniques used by the HostileStudent are very clever, but lead to much confusion. What the HostileStudent needs to understand, once again, is the clear distinction between variable, value, type, and operator. Is the equal sign a variable? Or is the equal sign an operator? The HostileStudent could go on to create a programming language where the equal sign is in fact a variable, since computers are just a virtual world where anything is possible. The HostileStudent can actually get caught in the trap of thinking that nothing is falsifiable, because with computers, everything can be remapped to do something else. The equal sign can be overloaded to be something else, therefore equality no longer means equality and equality is not falsifiable, so just like ThereAreNoTypes we create a page called ThereIsNoEquality? (which is silly). This is a clever trick used by the HostileStudent to dodge the real issue and GoAroundInCircles?.]
You are putting words in my mouth. I never said the things you accuse me of. And this is not about "knowing" types when they are seen, this is about creating a "mechanical" objective scientific test for types. It's about rules, not what's in human heads. You can't seem to get that.
I declare that TableOrientedProgramming does not exist. It is a fraud. No one can prove that tables or cells actually exist. Despite what the dictionary says, there are no such things as tables, cells, or top. In fact, humans do not exist either. You cannot prove to me that humans exist. Fraud. This is fraud. This wiki does not exist... you can't prove to me it does. It is just bits... and even bits don't exist. Falsifiable? Not useful. --DevilsAdvocate
Being a somewhat vague notion is NOT the same as being useless. "Country music" does not have a clear-cut definition, but that does not stop people from using it to describe a notion among themselves. Sometimes they will even argue about the borderline. But it is important for people not to mistake personal notions for universal truths. Further, I don't claim that TableOrientedProgramming is objectively better. It just fits my head better. I'm in the EverythingIsRelative camp. That's why "mind" is in TopMind. I am not claiming universal truths. Well, except maybe BS is a universal truth. --top
A type is actually not a truth, just as Math itself and numbers are not truths... that is exactly the whole point of a Type. We are humans and we declare things... we declare words in the dictionary, we declare structures, we declare that a phone is a phone (even though no one can prove that a phone is a phone, nor can anyone prove that phone is spelled correctly... unless they first prove the dictionary is right).
What I'm doing, top, is playing DevilsAdvocate here, trying to trigger your mind... to see the whole type thing.
Even a boolean is not truly FALSE or TRUE if, for example, a computer half catches on fire and the bit gets stuck in between... kind of like a resistor. Pretend that your bit switch became an accidental resistor and caught on fire. Is it true or false? Nothing can be defined as TrulyTrue, but as humans we have to act sensible and responsible. This is why Math was invented and why science was invented. Even though math and science are not true.
How does this relate to the topic?
Try and falsify math. Try and falsify the number 1. Your obsession with falsifying things doesn't serve any purpose. A type system is just a classification system. Look it up in the dictionary, and see TypesAreNotTypesTheyAreTypes. Try and falsify the dictionary. Is anything TrulyTrue?
Are all classification systems type systems?
Yes, but not all of them would qualify for communications or programming type systems.
Well, let's try narrowing the scope for the sake of argument, then. Is there a clear set of rules to fail "programming type system"? --top
Yes. First you must know the difference between a variable, a type, an operator, and a value. Is an equal sign a type? No, an equal sign is an operator. We just falsified. HostileStudent would then argue that actually an equal sign is a type, and it also is a value. Is the less-than or greater-than sign an integer? No, the less-than and greater-than signs are NOT integers. The HostileStudent must understand the basics of programming in order to know what types, variables, operators, and values are. This should actually be taught in math class. Unfortunately in math they just start putting x=1 on the chalkboard instead of defining what X is, or declaring its type or constraint. If math teachers said "X is an integer type" and they had to declare it on the chalkboard, the HostileStudent would then understand types.
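The chalkboard point, as a hedged two-line sketch (Haskell):

 x :: Integer   -- "X is an integer type", declared up front
 x = 1          -- now falsifiable in the mechanical sense: 'x = "pants"' will not compile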
"I know a type when I see it" is not good enough. Show rules for types, not your guess. I keep stating that and you completely it ignore it, building up strawmen instead.
It is widely known in math that the equal sign is not a number type. Number types are widely known in math:
http://www.purplemath.com/modules/numtypes.htm
Since the equal sign in math is not a number type, we have falsified. Are you going to argue that the equal sign is in fact a number type, and when you write math on paper or on the chalk board, you cannot distinguish between numbers and the equal sign, because they are all the same?
But that's pattern matching of prior encounters, not necessarily a universal truth carved into the universe. For that reason it doesn't scale to the general case, such as domain-specific "types", or making dynamic those language features you are used to being static. "It's an X because it resembles another thing that was called X" is not rigor.
Resemble? Actually programming includes math inside the programming language itself. Addition, subtraction, multiplication, and number types are included in programming languages. ThereAreTypes?, they do exist.
What if they didn't and you had to implement them via importing libraries? How does one know whether it's a "type" or not?
Material moved from NotEvenWrong on June 2012:
(Quoting had to be re-created, so there may be inadvertent errors.)
Unfalsifiable, like "types", eh?
Sure, unless you mean "types" in a specific TypeSystem. In which case it is often clear whether something is a type.
- Which means the designer can make up any rules for what's a "type" that they can imagine.
- Only if their imaginations are so weak that they can model everything they can imagine.
- Imagining and rigor-tizing are two different things. I can imagine the existence of a practical FlyingCar, but that doesn't make it exist in the real world.
- Indeed. Further, your imagination doesn't even enable the allegedly practical FlyingCar to exist on paper or demonstrate in simulation.
- Exaggeration. YOU guys claimed that "types" was clear and rigorous. I didn't dismiss it as a general notion. Nor did you produce anything even close to a simulation.
- Are you suggesting that we can simulate anything we can imagine? And what does imagining flying cars or arbitrary "rules for what's a type" have to do with types being clear and rigorous? Perhaps you have misread something.
- Where did that implication come from in my statement? -t
- You said I was exaggerating, just after I said imagination doesn't enable simulation. What else could you mean?
- Huh?
- That was my response to the "Exaggeration" claim, before I wrote something more articulate and intelligent. Perhaps you have misread something.
- [Right now I'm imagining a coin with one side and the sound of one hand clapping.]
- I tried to imagine you guys being civilized and rational, but it caused my head to reboot.
- That says more about your understanding of 'civilized' and 'rational' than it does about us.
- You already think I'm a flaming idiot of the lowest degree. Why should this change anything?
- There is no lowest degree for idiocy. ("Two things are infinite: the universe and human stupidity; and I'm not sure about the universe." -- Albert Einstein)
- Einstein also once said, "Shove it up your black hole where light cannot escape."
- Interesting. Can you provide a reference?
{Why limit it to that? Even for the other definitions of "type", there are clearly things that aren't types. But all of this is just pandering to Top's
RedHerring. Definitions can't be wrong in that way.}
- Definitions can be vague or ambiguous. If non-type things are clear, then give us the damned formula/algorithm for such a label. Why should anyone just take your word for it?
- {Why do you think that well-defined implies decidable? Especially since there are proofs that it doesn't.}
- Top seems to have confused a claim that "there are clearly things that aren't types" with a claim that all "non-type things are clear". This makes me distrust all his claims about what other people have said.
- If I misrepresented someone's opinion, I apologize. Anyhow, can you provide a test that falsifies SOME things that you suggest clearly fail? Baby steps for the notion-to-algorithm-impaired.
- Definitions distinguish, not falsify. But here is one test: the ocean temperature on Thursday evening should not register as a "type" by whatever algorithm you use. Neither should the tool used to measure it.
- ItDepends. For example, suppose a weather agency takes 3 samples a week, but samples different aspects of weather on different days because the equipment calibration time needed doesn't allow for multiple instruments at a time. They may call Sunday's samples "type A", Tuesday's samples "type B", and Thursday's "type C". They may not want to label it by day-of-week because it may shift in the future.
- I think these are very different scenarios. One specific sample vs. collections of samples. One specific piece of equipment vs. generic configurations. Why do you think it serves as a potential counterpoint?
- So types require plurality? I smell a teeny weeny hint of a........rule!
- I have not suggested that types require plurality; I noted that plurality is a big difference in the scenarios. Is there a reason you associated types with plurality in your proposed `ItDepends` counterpoint?
- Why then did you say "One specific sample vs. collections of samples"? It smells like a rule, but you won't commit.
- I was describing differences between your scenario and mine. Why do you confuse "describing differences in scenarios" with "suggesting rules"?
- I'm just asking, trying to elicit specifics by trying to parse out your intent.
Top would deny the "clearly" outside of a local, formal definition. Top is the sort of person who would squint at an eye exam, then argue that the enormous E at the top line of the chart was unclear and fuzzy, just to thumb his nose at the ivory tower that is ophthalmology. And, if pressed further, he'd note whether it's E or 3 is relative to whether he's standing upright. Never underestimate the blindness of someone who believes his interests are served by it.
Fortunately, OCR developers have created detectors which can be tested and cracked open in order to see the rules/algorithms/formulas for what passes or fails as an "E". Reality slams your lame dig.
Slam dunk, indeed. Let me know when you find OCR algorithms and rules that you don't think are fuzzy.
I'm not sure what you mean. Note that not everyone may agree with the validity of the algorithm or rules, but at least it is something that is subject to objective analysis and scientific testing, unlike notions in one's head.
{I'm unaware of any OCR that is infallible. In fact, they can't be since I know people who write "1" (one) in exactly the same way they write "l" (lower-case l) and no OCR is ever going to be able to always get the right one.}
It can guess based on context and historical probability. "Types" can also, but I don't think most will accept historical pattern matching as sufficient to define "types". Anyhow, the original point is that your analogy is flawed. A definition/model can be "wrong" because it "doesn't compile" (logically inconsistent), and/or because people don't agree it's a sufficient definition/model to convey the concept. If people cared about having an academic/rigorous definition of "E", then the second may become an issue to THEM. So far they don't. But if somebody came along and CLAIMED a rigorous def/model existed, it may be realistic to expect a "runnable" model be produced, or that existing models of such claims, such as OCR software, be subject to both kinds of scrutiny. This all started when you guys claimed such already existed for "types". But when the scrutiny started, you started making excuses not to produce a runnable model.
{Sorry, but definitions that are inconsistent still "compile", they just don't have any referents. It really, really, really isn't a problem. Failing to capture the intended concept certainly is, but nobody that I know of has ever said otherwise. Models (in the logical sense) simply cannot be inconsistent, and since they don't depend on intent, they can't be wrong in the second way either.}
- Where did I bring up "intent" above?
{Once again, you still haven't grasped the fact that well-defined does not imply decidable. There are numerous examples of definitions that are well-defined but not decidable (halting on an input, for TMs, is a rather famous one). Asking for an algorithm (which is the only interpretation of "runnable model" I can come up with) is indeed unreasonable in many cases.}
- Again, you claim your head is deciding "types". Thus, if it takes a finite time to run in your head, then the same should hold for computers, barring some unknown force of physics or deities. -t
- Every decision we make takes finite time. But you must also account for every time we fail to make a decision.
- If it succeeds often enough for "typical" things we test it on, that may be good enough as a start. Produce the algorithm for those cases.
{I've made no such claim. That's you, once again, thinking that well-defined implies decidable.}
No I didn't.
{Sure you did, my claim is that "type" is well-defined. You say I'm claiming that my head can determine whether or not something is a type in finite time. I.e. it's decidable. In order for my claim to be equivalent to what you said I claimed, then well-defined must be true if and only if decidable is true. Since it's trivial to show that decidable implies well-defined, this can happen only if well-defined implies decidable.}
You implied it was easy, natural, and common to "identify types", meaning many brains are currently doing "it". Actually, it's difficult to test a definition if it's not decidable (most of the time). How would one otherwise know it matches samples and/or one's "notions"?
It is not difficult. Compare the instances where a decision was made by both parties.
As a "notiony" guess.
Of course. If we're trying to match a definition to one's "notions", it is natural that one of the parties would be making "notiony guesses".
Ideally we want a clear model that multiple parties agree matches their notions of the concept.
{I presume that's the royal "we", because you seem to be the only party whose notions, by and large, seem not to match the rest of ComputerScience and SoftwareEngineering. If you can demonstrate that your views represent some group, maybe it would be otherwise.}
I don't trust that you've done a decent survey. Anyhow, list the notion rules and let's see if we can find a consensus. Example: "Pattern #54: Most types involve constraints". Document it for all to see instead of speculate quietly in your head. -t
{As you are the one making claims against accepted practice, I submit that you are the one who needs to do a decent survey to determine whether or not accepted definitions are acceptable, or your new definitions are needed. Otherwise, with all due respect, why should we pay attention to you?}
What "accepted practice"? There's no standard reference source for such that anybody's pointed out.
I'm sure you've been pointed to standard resources. You probably complained about BookStop. But, in the unlikely case you're more interested in learning than you are in complaining, TypesAndProgrammingLanguages is among the most respected books on this subject in the PL community.
Those are mostly examples, not identification rules.
[Have you actually read that book, or are you just making stuff up and hoping it sticks?]
Based on Top's response, I can only imagine he read the title and thought it a summary of the book. Not unexpected from him.
It's your job to apply whatever grand knowledge you have gained from such books and produce the type-detection algorithm or definition HERE. I'm not doing your goddam homework for you. (An exception would be if the book has a ready-to-go algorithm that fits the context.) If you have an articulation problem, that's not my fault. Read an articulation book if you have to.
{Why do we need to do that for you? We aren't the ones experiencing difficulties with standard ComputerScience and SoftwareEngineering concepts, terms, references, texts and definitions.}
You are experiencing difficulties, you just don't realize it. You can't translate type-ness into a clear, objective algorithm/rules/formulas here. Just admit your failure so you can move on to the next stage.
I'm curious: what do you believe to be the "next stage"?
Re: "This all started when you guys claimed such already existed for "types"."
This all started when who claimed what already existed for "types"?
The Wiki Warlocks of White-Space Handles.
Huh? I don't mean a name, I mean cite a page and a paragraph within it, or provide a quote.
Garrrsh, I don't flippen remember where I saw it. They didn't dispute it, they just claimed I "don't get it".
Are you sure you aren't reading too much into what someone wrote? On WhatMakesSoftwareFlexible, for example, you accused your correspondent of things that were not the case.
When you don't use handles and your behavior matches that of other non-handled posters who have aggressive personalities, those kinds of mix-ups happen. And I said "it appears as if...", it was not an outright accusation. You have to admit the circumstances look suspicious.
Anyhow, if you don't believe a clear definition of "types" already exists, then just say so. I thought it was a given claim by you no-handlers.
You implied it again above when you talked about "standard ComputerScience terms" etc.
See Also: TopsTypeDeterminatorChallenge
CategoryTypingDebate
AprilZeroEight