Falsifying Existence Of Types

Can the absence of instances of "types" be objectively detected?


Oh look! Another ThreadMess!

Please, please can someone take the heat out of this and find some content, or delete the content-free bits? Does the page TypesAreTypes help at all?

[Ask clarifying questions and/or help consolidate them rather than complain about it. ThreadMess is a symptom of a need for improved communication.]


Page is now defunct. Top has properly admitted that random white noise cannot internally constitute meaningful structure or types, so now he has a sufficient example of 'foo is not a type' in the context of programming or computer science. This is after, of course, he proved remarkably dense at comprehending that cats and dogs and 'red' are also not types in the context of computer science, but you can't expect much from someone who proudly professes belief in contradictions like 'EverythingIsRelative'.

Congratulations on your hollow victory in which a random blob is the only thing without "types". A nice UselessTruth has been created. (And your "red" comment is misleading). If one took a listing of ANY programming language and asked you to circle the "types", everything would be circled. In the meanwhile, a practical, applicable definition of "types" is STILL as elusive as a unicorn. See ya at the Bigfoot Convention, and don't forget to try the pinecone-banana smoothie, a BF favorite. It has the texture of a random blob. By the way, "random" may not be fully testable either. From a practical standpoint, random just means a pattern has YET to be found. It is thus tied to the observer. --top

Now your idiocy shines through once again, top. Excuse me while I bask in your ignorance and your seeming confusion that 'admitted thing' = 'only thing', and your apparent inability to comprehend that broadly applicable definitions aren't generally impractical.

Some things that aren't types are (a) anything that is not a specification or type-descriptor, such as cats and dogs and the color red (and I'm talking about the real things, not the words 'cat', 'dog', and 'red'). This is NOT misleading; it's simply a true, and rather trivial, fact. You can't blame me for fools that make a habit of balking at simple truths. (b) any specification that is self-contradictory (e.g. the set of all integers between 0 and 1 exclusive) is not a specification for anything at all and is therefore not a type. (c) (for communication types) any structured communication where the greater structure is accidental (i.e. where you ask for a string and I give you well-formed XML), the greater structure fails to qualify for 'type' in the communications context because you can't use it as you could typed communications (e.g. no assumptions, no security, etc.); 'random white noise' is a clear and very obvious case of this, since it makes it obvious (even to top) that any greater structure is accidental. All these things follow from a rather 'broad' definition of 'type' in the context of communications (which is a supercategory including all instances of language, including all instances of programming language).
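
(A minimal Haskell sketch of point (b), with invented names: a satisfiable specification admits at least some values, while a self-contradictory one can never be met, so it describes no values at all and therefore isn't a type.)

 newtype Checked = Checked Integer deriving Show

 -- Satisfiable specification: integers from 0 to 10 inclusive; some values pass.
 mkSmall :: Integer -> Maybe Checked
 mkSmall n = if n >= 0 && n <= 10 then Just (Checked n) else Nothing

 -- Self-contradictory specification: integers strictly between 0 and 1.
 -- No integer can ever satisfy it, so the "type" it describes has no instances.
 mkImpossible :: Integer -> Maybe Checked
 mkImpossible n = if n > 0 && n < 1 then Just (Checked n) else Nothing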

Maybe, someday, you'll learn that all simple definitions are very broadly applicable, often difficult to comprehend in one go, AND no less practical for it: mass, energy, information, number, percept, proposition, predicate, pattern, monad, relation, type. While you can gain some advantage of speaking of specific forms of these things (e.g. qualified patterns, binary relations, or 'variable' types), you'd be a fool to mistake it for the only possibility. Of course, being a fool would hardly be a new precedent for HostileStudents who, like top, very consistently have an agenda other than comprehension.

I didn't ask to be your gaddam student. I asked for a clear, falsifiable definition. It was a specific request. If I wanted a fscking teacher, I'd find somebody far more pleasant.

FIRST, you asked to be a student the moment you asked someone else to provide you an answer. It just so happens you asked me. So, YES, you did ask to be my gaddam student. If you don't want to be a student, then find your own answers, you prick. And you'd be a fscking HostileStudent no matter who provided an answer.

Also, I suggest we stick with programming languages and apps instead of the "real world" to simplify the debate.

Irrelevant. Specific conditions can only make things more complex, not less so. If you want to 'simplify the debate', you take conditions and requirements away; you certainly don't add them. Attempting to limit the debate to 'programming languages and apps' first requires defining 'programming languages and apps', which I believe to be another long and largely pointless venture in the quest to reach a falsifiable definition of types. Like all simple concepts (e.g. relation, number, monad), it is better (simpler) to define types generally, then explore how such a definition applies in the typical context of communications or programming.


(Moved from OpenProblemsInComputerScience)

Without a clear and falsifiable definition of "types" ("foo is not a type"), this discussion appears to be on the express train to LaynesLaw. --top

The number one (1) is not a type; it is a value that corresponds to a type. There is your falsifiable definition. If that type specification happens to allow the number one in, then good. If not, then a run-time error, warning, or compile-time issue. The color "red" is not a "car type". Another falsifiable definition. If red is not just a color but also a style of a car (luxury defined as red), then this can be a type of car. Clear meaningful types would not include such silliness, and red would only be a type of color.

One can model car colors as types/subtypes if one really wants to. Any combo of attributes can be turned into a subtype tree (which is not the only form of types). Or for that matter, integer instances. That's the point: "can" versus "should". Your "definition" is a TautologyMachine. And, how about we move this debate to another spot. --top

I said to use clear meaningful types. carColor := red is not a "car" but a "carColor", TopMind. TCarColor is much more specific than TCar. TCar does NOT equal TCarColor. TCarColor may be how you (confusingly) define what a car is.. but it is just the Color Of The Car, hence TCarColor, not TCar. As for the definition of a type: do some reading on Hugh Darwen and Date: http://www.dbdebunk.com/page/page/1812707.htm Types are very specific, and involve precision. Date/Darwen go as far as wanting to constrain types to the most precise way possible (not allowing Quantity to be added to Weight). In fact, their articles remind me very much of what I described in ConstraintType. And that is exactly what it is all about.. a type that constrains so that we have integrity (as opposed to it being about saving developer time with sloppiness).
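
(A hedged Haskell sketch of the distinction being drawn here, with invented type and field names: 'Red' is a value of a color type, not a car type, and a compiler can check that mechanically.)

 data CarColor = Red | Blue | Green  deriving (Show, Eq)

 data Car = Car { model :: String, color :: CarColor }  deriving Show

 myCar :: Car
 myCar = Car { model = "Coupe", color = Red }   -- fine: Red is a CarColor value

 -- notACar :: Car
 -- notACar = Red   -- rejected by the compiler: the color Red is not a Car
 -- A "red car" classification would be a further type built from Car and
 -- CarColor (cars whose color field is Red); it is not the color itself.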

"Clear and meaningful" is subjective. carType := redCar "compiles". If this is changing from a debate on what is and isn't types into "good types" versus "bad types", then you are both changing and complicating an already complicated topic. --top

Top, are you really indicating that you honestly believe 'red' (the color) and 'redCar' (a classification of vehicles based on their paint job) are exactly the same thing? Or are you just twisting words and playing the pretend idiot to avoid confronting actual argument?

I showed that your statement, 'The color "red" is not a "car type".' can be wrong. If I misinterpreted your statement, it was not intentional. No need to accuse me of spiking the punch. I would suggest rewriting it if it was ambiguous. But, we are drifting from the "falsify" issue: what is objectively NOT a type. You still have not produced something that cannot be interpreted as a "type". You can make up your own little car model and within that model state what is a type and what is not a type based on your own rules and vocab, but that is simply making your own world with its own rules and own vocab saying what is a type in your model and what is not a type in your model. In short, cheating via a WalledGarden HumptyDumpty. It is possible to create a subtype tree of cars based on color, as shown below. But you proved yourself wrong anyhow by saying that "red" is a type of "color". That is not falsifying the existence of types in something. It is existence that matters here, not alleged misapplication. --top

No, you have shown no such thing, Top. What you have shown is that, no matter what Lars produces, you can twist it in your mind to the point you can misinterpret it as a type, but unless you establish that whether something is a 'type' is subject to your interpretation, your argument holds little weight. You can't take 'red' and drive it around... at least not without a much more fantastic imagination than I possess. I won't pretend Lars has a much better grasp on types than you do; IMO, you're both uneducated ignorami when it comes to types.

Irrelevant. The car thing has been misinterpreted. The open issue is that nobody can so far prove that something is or is not a type outside of a custom world with custom vocabulary ("types are whatever the hell I say they are in my model"). Until that's done, my accusation that "types" is not falsifiable and thus not scientific but subjective stands. If you are as smart as your name-calling of others suggests, then barf forth clarity and science and proofs.

The car thing was dreadfully misinterpreted by you, Top. Your claim has essentially been that (('red car' is a 'car type') implies ('red' is a 'car type')), which is plainly false. Further, you've proven too dense to recognize the fallacy in your argument; a more honest, more introspective person might start asking: "If I got that wrong, what else might I have gotten wrong?" But not you. You're neither introspective nor particularly interested in learning about your own imperfections. However, your whole accusation - the whole premise you created in your mind for this page - is in error.

There seems to be some confusion about types existing in a model versus types existing in external reality. But without a solid consensus definition, I cannot clean up this example (unless I make a model up out of the blue). I cannot turn a sow's ear into a silk purse. I cannot fight fuzz with fuzz (unless the end product is fuzz). I cannot be objectively wrong about something that is not even concrete. Slam me with REAL logic, not your pet notion-math. --top

I'll tell you this much: 'Type' only needs to be defined, and thus only need to be 'falsifiable', within a given 'type system' or context. Further, 'types' are mathematical and logical entities, not 'scientific'. But I won't waste (more of) my time trying to convince you of anything, no more than you should try to explain the RelationalModel to a seven-year-old who only cares about red cars. It would be a waste of my time and yours - a fact proven across a variety of other type-related subjects.

If types only exist in a model and the model maker gets to label what are types and what are not types within that model, then using the term "types" outside of a *labelled* model will almost surely be an express train to LaynesLaw, as the opening statement suggests.

What you describe constitutes an insufficient condition for driving a discussion to LaynesLaw. Using the term "types" within an implicit context, such as 'Relational' or 'Programming', is only an 'express train to LaynesLaw' if one of the persons involved in the argument is being contrary just for the sake of being contrary (e.g. playing devil's advocate and trying to pick apart terminology rather than confront the actual argument being presented). So long as the discussion itself is confined to examples that members consider to be relevant types for context, there is no problem - even when 'type' isn't clearly defined by the type system, the context itself often makes it clear what can and cannot logically (mathematically) and pragmatically (in implementation) be utilized as 'types'.

Now, it may be that you consider this an 'almost surely' proposition because YOU will happily take the role of devil's advocate and bastard who fusses about terminology 'rather than' instead of 'in addition to' confronting the actual arguments.

PageAnchor: "routine types"

The original claim that triggered all this was that "types objectively make programs better" (paraphrased). How can you make such a universal statement if the truth of "types" only exists within a given language/model? Either you need a universal model/def of types, or you need to test every language/model to back up that claim. And, "types objectively make programs better" *is* a scientific claim.

First, "types" in the context of "programming" has a fairly clear context, I'd think - it can refer to any type definition for any system ever applied to programming, which necessarily means types applied over a programming language, since every instance of programming is done in a programming language. Second, the claim is obviously not that every possible type-system makes programming better. To be 100% logically correct, the claim that "types objectively makes programs better" can be demonstrated with one example: ONE (arbitrary) property provided by ONE (arbitrary) type system over ONE arbitrary programming language that has clearly makes programs 'better' by ONE objective measure that all parties of an argument agree upon as constituting 'better', and makes it little or no worse by other objective measures. That is sufficient to show that 'types' can make programming 'better'. Your notion that: "Either you need a universal model/def of types, or you need to test every language/model to back up that claim" is, very simply, a rather significant misunderstanding on your part.

And it is true that '"types objectively make programs better" *is* a scientific claim', but it doesn't require that 'types' be defined scientifically. Your logic would be akin to saying that "strings" need to be defined 'scientifically' (e.g. based upon empirical observation) to make any claims about use of strings in programs. It would be dreadfully silly.

Only if both parties agree what "strings" are up front. One does not have to find a universal definition if there is such an agreement. I used to think that types were the same way, but the likes of the "routine type" scenario above has made me gun-shy.

Both parties only need to agree upon the essential properties of 'strings' (i.e. 'ordered sequence of distinct digital values'), not the issues of zero-termination, maximum lengths, representation, or the exact character set, et al. Note, thus, that they don't need to decide upon the exact model of 'strings' any more than, to discuss types, they need to agree upon the exact model of types. Further, note that the 'shared properties' do not constitute a 'definition' of string, merely a set of limits over the possible definition. FURTHER, note that these limits are not 'scientific'. Sheesh. An educated ComputerScientist or mathematician would know this stuff. Are you JustaProgrammer?
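
(A hedged Haskell sketch of agreeing on an essential property without agreeing on a model, using an invented interface: both representations below count as 'strings' for the purposes of such a discussion, even though their details differ.)

 class StringLike s where
   toChars :: s -> [Char]       -- the one shared, essential property

 newtype PlainString   = PlainString [Char]          -- one representation
 data    CountedString = CountedString Int [Char]    -- another, carrying a length

 instance StringLike PlainString   where toChars (PlainString cs)     = cs
 instance StringLike CountedString where toChars (CountedString _ cs) = cs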

Now, my own claim (in OpenProblemsInComputerScience) was slightly stronger, which is that types ALWAYS are better for programming. I didn't say which or 'all' types; the only universal I applied was over 'all' instances of programming. But I stated that, in this context, anything short of treating bits in memory and on HDD as one massive, unorganized, undifferentiated BLOB counts as having types. And I made that context very clear - I stated it very directly in OpenProblemsInComputerScience - which even a devil's advocate like yourself can't rationally deny. Having 'files' as distinct from 'the blob' would represent a type. Having 'file formats' as distinct from 'binary files' would also represent types. This context treats 'types' rather liberally, but, if you wish, I can support my view by pointing at some very common programming languages (e.g. raw machine code) that truly do treat memory and files in exactly that way, thus demonstrating that my view is 'practical' and applies to 'real life'. I can even show this definition of 'types' is falsifiable: clearly 'dogs' and 'the color red' aren't 'types' in this context (introducing something of a fundamental concept mismatch - 'dogs' and 'the color red' can't even be applied in the context of differentiating BLOBs - and, top, before you think such an argument is "silly", keep in mind that any definition only needs one thing it applies to (this is a type) and one thing it doesn't apply to (this is not a type) to prove its use as a definition and to show it is not a 'TautologyMachine', no matter how trivial). Further, "let's treat everything as one big 'blob'" isn't a 'type' in this context, since that's exactly what types were defined to not be. If you wish to oppose this position, you MUST argue that treating everything on memory and on HDD as one massive and undifferentiated BLOB is no worse for performance, correctness, security and various other objective measures than application of 'types'. This, you haven't attempted. You'd much rather fuss about terminology than confront such an argument.
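
(A hedged Haskell sketch of the BLOB-versus-types contrast, with an invented one-byte 'format': against a bare blob every reader must guess, while even one declared structure makes the assumptions checkable.)

 import qualified Data.ByteString as BS

 type Blob = BS.ByteString            -- the "one massive, undifferentiated" view

 data Record = Record { version :: Int, payload :: BS.ByteString }  deriving Show

 parseRecord :: Blob -> Maybe Record  -- introducing even this much structure is a 'type'
 parseRecord bs = case BS.uncons bs of
   Just (v, rest) -> Just (Record (fromIntegral v) rest)
   Nothing        -> Nothing          -- the blob didn't even contain a version byte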

And my point was, and is, that once you agree that having at least two types (e.g. executable machine code vs. other data) is 'good', after that it really is just a matter of degree, and any argument that there is a clear 'point' at which more distinctions aren't better had better be provided with a good bit of logic lest it be a slippery slope argument. I happen to think that making 'the right' distinctions is important, and admit that wrong distinctions can make a system inflexible. But I also happen to think your justifications that 'strings' and 'integers' and 'dates' and whatever selection of types it is you've decided upon happens to be sufficient for all relational programming are woefully inadequate and entirely irrational.

If your definition of "types" is broad enough to include any "atomic unit" one encounters (routines, statements, columns, tables, etc.), then of course I am going to agree that "SOME types are good". But if your definition is indeed this broad, then it cannot be falsified in any practical sense since every language/IT-tool ever invented has at least some kind of atomic unit. You seem to be backing yourself into a corner. It will be entertaining to see how you wiggle out. -- top

If you agree that SOME types are always good then, if you wish to demonstrate that the 'option' is even better, you need to explicitly prove that the 'option' to violate these types either (a) ought to always be included, or (b) ought to become optional under some specific circumstances. It seems to me that puts the ball in your court, and leaves ME with a great deal of freedom - hardly 'backed into a corner'. I'm curious as to exactly how you twisted your perception to the opposite. And the application of 'type' can be falsified in the practical sense by looking at exactly those situations where 'types' aren't applied - BLOBs and raw bitstreams. One can also look at those locations where type-correctness isn't enforced.

(Plus, I would disagree with such a broad definition, mainly because it waters down the concept beyond anything meaningful in the practice.) -- top

I'm quite certain that nothing is 'watered down' by use of this broad definition. Can you provide an example of where the definition is meaningfully 'watered down', or is this just another 'claim'? (remember: "Claiming does not make truth -- top").

Claiming that words in the dictionary exist does not make the dictionary truthful either. Yet people use the dictionary as a reference. Consider a type system as a point of reference for sensible people. The brave person who declared 1 + 1 = 2 was also just making points of reference, and it is not correct math, because in my world 2 + 2 = 1 (two bananas plus two bananas always equals 1 group of bananas)


Example 1:

No one claimed that there couldn't be red cars... what was claimed was RED on its lonesome, was NOT a car itself.

[But a HostileStudent would then go on to claim that someone started up a new car company called "Red", and that HostileStudent recently saw a car outside his window and it was made by the company Red, therefore Red is a car itself. The problem once again is the twisting of words, twisting of logic, and dodging the real issue. The techniques used by the HostileStudent are very clever, but lead to much confusion. What the HostileStudent needs to understand, once again, is the clear distinction between variable, value, type, and operator. Is the equal sign a variable? or is the equal sign an operator? The HostileStudent could go on to create a programming language where the equal sign is in fact a variable, since computers are just a virtual world where anything is possible. The HostileStudent can actually get caught into the trap of thinking that nothing is falsifiable, because with computers, everything can be remapped to do something else. The equal sign can be overloaded to be something else, therefore equality no longer means equality and equality is not falsifiable, so just like ThereAreNoTypes we create a page called ThereIsNoEquality? (which is silly). This is a clever trick used by the HostileStudent to dodge the real issue and GoAroundInCircles?.]

You are putting words in my mouth. I never said the things you accuse me of. And this is not about "knowing" types when they are seen, this is about creating a "mechanical" objective scientific test for types. It's about rules, not what's in human heads. You can't seem to get that.


I declare that TableOrientedProgramming does not exist. It is a fraud. No one can prove that tables or cells actually exist. Despite what the dictionary says, there are no such things as tables, cells, or top. In fact, humans do not exist either. You cannot prove to me that humans exist. Fraud. This is fraud. This wiki does not exist.. you can't prove to me it does. It is just bits.. and even bits don't exist. Falsifiable? Not useful. --DevilsAdvocate

Being a somewhat vague notion is NOT the same as being useless. "Country music" does not have a clear-cut definition, but that does not stop people from using it to describe a notion among themselves. Sometimes they will even argue about the borderline. But it is important for people not to mistake personal notions for universal truths. Further, I don't claim that TableOrientedProgramming is objectively better. It just fits my head better. I'm in the EverythingIsRelative camp. That's why "mind" is in TopMind. I am not claiming universal truths. Well, except maybe BS is a universal truth. --top

A type is actually not a truth, just as Math itself and numbers are not truths.. that is exactly the whole point of a Type. We are humans and we declare things.. we declare words in the dictionary, we declare structures, we declare that a phone is a phone (even though no one can prove that a phone is a phone, nor can anyone prove that phone is spelled correctly.. unless they first prove the dictionary is right).

What I'm doing top is playing DevilsAdvocate here and I'm trying to trigger your mind.. to see the whole type thing.

Even a boolean is not truly FALSE or TRUE if, for example, a computer half catches on fire and the bit gets stuck in between.. kind of like a resistor. Pretend that your bit switch became an accidental resistor and caught on fire. Is it true or false? Nothing can be defined as TrulyTrue, but as humans we have to act sensibly and responsibly. This is why Math was invented and why science was invented. Even though math and science are not true.

How does this relate to the topic?

Try and falsify math. Try and falsify the number 1. Your obsession with falsifying things doesn't serve any purpose. A type system is just a classification system. Look it up in the dictionary, and see TypesAreNotTypesTheyAreTypes. Try and falsify the dictionary. Is anything TrulyTrue?

Are all classification systems type systems?

Yes, but not all of them would qualify for communications or programming type systems.

Well, let's try narrowing the scope for the sake of argument, then. Is there a clear set of rules to fail "programming type system"? --top

Yes. First you must know the difference between a variable, a type, an operator, and a value. Is an equal sign a type? No, an equal sign is an operator. We just falsified. HostileStudent would then argue that actually an equal sign is a type, and it also is a value. Is the less than or greater than sign an integer? No, the less than and greater than signs are NOT integers. The HostileStudent must understand the basics of programming in order to know what types, variables, operators, and values are. This should actually be taught in math class. Unfortunately in math they just start putting x=1 on the chalk board instead of defining what X is, or declaring its type or constraint. If math teachers said "X is an integer type" and they had to declare it on the chalk board, the HostileStudent would then understand types.
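
(A hedged Haskell sketch, using an invented toy language, of the variable/type/operator/value separation described above: once the categories are written down, "is the equal sign a type?" has a mechanically checkable answer.)

 data Type     = TInt | TBool                 deriving Show
 data Operator = Equals | Plus | LessThan     deriving Show
 data Value    = VInt Int | VBool Bool        deriving Show
 data Expr     = Var String
               | Lit Value
               | BinOp Operator Expr Expr     deriving Show

 -- Equals is a constructor of Operator, not of Type, so the claim "the equal
 -- sign is a type" is falsified by inspecting the grammar, not by intuition.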

"I know a type when I see it" is not good enough. Show rules for types, not your guess. I keep stating that and you completely it ignore it, building up strawmen instead.

It is widely known in math that the equal sign is not a number type. Number types are widely known in math:

http://www.purplemath.com/modules/numtypes.htm

Since the equal sign in math is not a number type, we have falsified. Are you going to argue that the equal sign is in fact a number type, and when you write math on paper or on the chalk board, you cannot distinguish between numbers and the equal sign, because they are all the same?

But that's pattern matching of prior encounters, not necessarily a universal truth carved into the universe. For that reason it doesn't scale into the general case, such as domain-specific "types" or making dynamic the language features you are used to being static. "It's an X because it resembles another thing that was called X" is not rigor.

Resemble? Actually programming includes math inside the programming language itself. Addition, subtraction, multiplication, and number types are included in programming languages. ThereAreTypes?, they do exist.

What if they didn't and you had to implement them via importing libraries? How does one know whether it's a "type" or not?


Material moved from NotEvenWrong on June 2012:

(Quoting had to be re-created, so there may be inadvertent errors.)

Unfalsifiable, like "types", eh?

Sure, unless you mean "types" in a specific TypeSystem. In which case it is often clear whether something is a type.

{Why limit it to that? Even for the other definitions of "type", there are clearly things that aren't types. But all of this is just pandering to Top's RedHerring. Definitions can't be wrong in that way.}

Top would deny the "clearly" outside of a local, formal definition. Top is the sort of person who would squint at an eye exam and then argue that the enormous E at the top line of the chart was unclear and fuzzy, just to thumb his nose at the ivory tower that is ophthalmology. And, if pressed further, he'd note that whether it's an E or a 3 is relative to whether he's standing upright. Never underestimate the blindness of someone who believes his interests are served by it.

Fortunately, OCR has created detectors which can be tested and cracked open in order to see the rules/algorithms/formulas for what passes or fails as an "E". Reality slams your lame dig.

Slam dunk, indeed. Let me know when you find OCR algorithms and rules that you don't think are fuzzy.

I'm not sure what you mean. Note that not everyone may agree with the validity of the algorithm or rules, but at least it is something that is subject to objective analysis and scientific testing, unlike notions in one's head.

{I'm unaware of any OCR that is infallible. In fact, they can't be since I know people who write "1" (one) in exactly the same way they write "l" (lower-case l) and no OCR is ever going to be able to always get the right one.}

It can guess based on context and historical probability. "Types" can also, but I don't think most will accept historical pattern matching as sufficient to define "types". Anyhow, the original point is that your analogy is flawed. A definition/model can be "wrong" because it "doesn't compile" (logically inconsistent), and/or because people don't agree it's a sufficient definition/model to convey the concept. If people cared about having an academic/rigorous definition of "E", then the second may become an issue to THEM. So far they don't. But if somebody came along and CLAIMED a rigorous def/model existed, it may be realistic to expect a "runnable" model to be produced, or that existing models of such claims, such as OCR software, be subject to both kinds of scrutiny. This all started when you guys claimed such already existed for "types". But when the scrutiny started you started making excuses not to produce a runnable model.

{Sorry, but definitions that are inconsistent still "compile", they just don't have any referents. It really, really, really isn't a problem. Failing to capture the intended concept certainly is, but nobody that I know of has ever said otherwise. Models (in the logical sense) simply cannot be inconsistent, and since they don't depend on intent, they can't be wrong in the second way either.}

{Once again, you still haven't grasped the fact that well-defined does not imply decidable. There are numerous examples of definitions that are well-defined but aren't decidable (halting on an input for TMs is a rather famous one). Asking for an algorithm (which is the only interpretation of "runnable model" I can come up with) is indeed unreasonable in many cases.}
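
(A hedged Haskell sketch of that point. The 'halts' below is the hypothetical decider, deliberately left undefined because no real implementation exists; the usual diagonal program then shows why the property is perfectly well-defined yet not decidable.)

 halts :: (() -> ()) -> Bool
 halts = undefined    -- assumed total decider; this assumption is what gets refuted

 troll :: () -> ()
 troll () = if halts troll then troll () else ()
 -- If 'halts troll' answered True, troll would loop forever; if it answered
 -- False, troll would halt. Either answer is wrong, so no total 'halts' can be
 -- written, even though "does this program halt?" is a well-defined question.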

{I've made no such claim. That's you, once again, thinking that well-defined implies decidable.}

No I didn't.

{Sure you did, my claim is that "type" is well-defined. You say I'm claiming that my head can determine whether or not something is a type in finite time. I.e. it's decidable. In order for my claim to be equivalent to what you said I claimed, then well-defined must be true if and only if decidable is true. Since it's trivial to show that decidable implies well-defined, this can happen only if well-defined implies decidable.}

You implied it was easy, natural, and common to "identify types", meaning many brains are currently doing "it". Actually, it's difficult to test a definition if it's not decidable (most of the time). How would one otherwise know it matches samples and/or one's "notions"?

It is not difficult. Compare the instances where a decision was made by both parties.

As a "notiony" guess.

Of course. If we're trying to match a definition to one's "notions", it is natural that one of the parties would be making "notiony guesses".

Ideally we want a clear model that multiple parties agree matches their notions of the concept.

{I presume that's the royal "we", because you seem to be the only party whose notions, by and large, seem not to match the rest of ComputerScience and SoftwareEngineering. If you can demonstrate that your views represent some group, maybe it would be otherwise.}

I don't trust that you've done a decent survey. Anyhow, list the notion rules and let's see if we can find a consensus. Example: "Pattern #54: Most types involve constraints". Document it for all to see instead of speculate quietly in your head. -t

{As you are the one making claims against accepted practice, I submit that you are the one who needs to do a decent survey to determine whether or not accepted definitions are acceptable, or your new definitions are needed. Otherwise, with all due respect, why should we pay attention to you?}

What "accepted practice"? There's no standard reference source for such that anybody's pointed out.

I'm sure you've been pointed to standard resources. You probably complained about BookStop. But, in the unlikely case you're more interested in learning than you are in complaining, TypesAndProgrammingLanguages is among the most respected books on this subject in the PL community.

Those are mostly examples, not identification rules.

[Have you actually read that book, or are you just making stuff up and hoping it sticks?]

Based on Top's response, I can only imagine he read the title and thought it a summary of the book. Not unexpected from him.

It's your job to apply whatever grand knowledge you have gained from such books and produce the type-detection algorithm or definition HERE. I'm not doing your goddam homework for you. (An exception would be if the book has a ready-to-go algorithm that fits the context.) If you have an articulation problem, that's not my fault. Read an articulation book if you have to.

{Why do we need to do that for you? We aren't the ones experiencing difficulties with standard ComputerScience and SoftwareEngineering concepts, terms, references, texts and definitions.}

You are experiencing difficulties, you just don't realize it. You can't translate type-ness into a clear, objective algorithm/rules/formulas here. Just admit your failure so you can move on to the next stage.

I'm curious: what do you believe to be the "next stage"?

Re: "This all started when you guys claimed such already existed for "types"."

This all started when who claimed what already existed for "types"?

The Wiki Warlocks of White-Space Handles.

Huh? I don't mean a name, I mean cite a page and a paragraph within it, or provide a quote.

Garrrsh, I don't flippen remember where I saw it. They didn't dispute it, they just claimed I "don't get it".

Are you sure you aren't reading too much into what someone wrote? On WhatMakesSoftwareFlexible, for example, you accused your correspondent of things that were not the case.

When you don't use handles and your behavior matches that of other non-handled posters who have aggressive personalities, those kinds of mix-ups happen. And I said "it appears as if...."; it was not an outright accusation. You have to admit the circumstances look suspicious.

Anyhow, if you don't believe a clear definition of "types" already exists, then just say so. I thought it was a given claim by you no-handlers.

You implied it again above when you talked about "standard ComputerScience terms" etc.


See Also: TopsTypeDeterminatorChallenge


CategoryTypingDebate


AprilZeroEight

