Definitions of certain words, such as "types" and "values", are either vague or arbitrary: arbitrary in that the definition's author selects one specific model or implementation out of the multiple models or implementations that could serve the same purpose. Thus, if one cranks up the specificity knob, the arbitrariness meter also goes up. The relationship is reminiscent of wave function collapse in quantum physics.
Now, it may be possible to find a definition that is not VagueOrArbitrary, but so far no such definition has been found. --top
They're neither vague nor arbitrary, and they have standard definitions in ComputerScience. A type is a set of values and zero or more associated operators. A value is a representation and a reference to a type. You probably find these definitions vague or arbitrary because they're abstract. Some people who are used to dealing only with concrete implementations are not comfortable with mathematical abstractions, and may mistake their discomfort with abstraction for vagueness or arbitrariness in the abstraction itself. If these definitions were genuinely vague and arbitrary, we'd all find them vague and arbitrary. The fact that only you find them vague or arbitrary is evidence that the issue lies with you, and you alone. See VagueVsAbstract.
As shown in TypesAreTypes, others also find such words messy, at least when applying them to the usual tools of our field. As described in related topics, "associated with" is too open-ended to be practical.
On TypesAreTypes, the conclusion is that understanding types is tricky. There are pitfalls to trap the unwary, as your confusions demonstrate, but that doesn't mean the definitions are vague or arbitrary. "Associated with", by the way, is an abstraction. As an abstraction, it's precisely as specific as is needed. Any more specificity would no longer be an abstraction, but a specification or an implementation.
Re: "A value is a representation and a reference to a type." - Where are you getting this from?
It's a trivial conclusion from reading stacks of ComputerScience, ComputerArchitecture and computer technology journal papers and textbooks, though it's most succinctly and explicitly described in ChrisDate and HughDarwen's texts, since they are particularly careful to distinguish types, values and variables. Their books are replete with citations, by the way, so their exposition is neither personal opinion nor unsubstantiated.
PageAnchor bit-sequence-01
It's also logically evident: Given a "value" 1101011111010110 of unspecified type, what is it?
Without a specified type, the answer can only be "unknown". Thus a value is only meaningful if its type is known. Since each value must be associated with a type in order to be meaningful, it makes logical sense to describe a value as being some representation -- some static string of symbols or states, e.g., 1101011111010110 -- and a type.
There are languages that leave it up to the programmer to know its "type". It's in the head, not in the machine.
It's in the machine. If you choose the FADD operator for adding floating point numbers, it assumes each value is of type 'float'. In other words, the association between the representation and the type (float) is asserted by each operator. If you use FADD on integers or character strings, you'll get incorrect or meaningless results. The fact that the programmer chose to use FADD, instead of the machine choosing to use FADD, is irrelevant.
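To illustrate, here is a minimal C sketch of the same point (the integer values 7 and 35 are arbitrary examples): handing integer representations to a float operator yields a meaningless result, because the operator itself asserts the type of its operands.

  #include <stdio.h>
  #include <string.h>
  #include <stdint.h>

  /* The same 32 bits make sense only under the operator that matches their type. */
  int main(void) {
      int32_t a = 7, b = 35;        /* integer representations */
      float fa, fb;

      /* Reinterpret the integer bit patterns as floats ("FADD on ints"). */
      memcpy(&fa, &a, sizeof fa);
      memcpy(&fb, &b, sizeof fb);

      printf("integer add: %d\n", a + b);    /* 42: the matching operator */
      printf("float add:   %g\n", fa + fb);  /* a denormal near zero: meaningless */
      return 0;
  }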
The head can be far away from the machine, such that it's clear that type info can be far away from the value/representation/data/content or whatever you are calling it today. There is no Pope that forces them to always be together. Compilers often do it also: they discard type info when producing the EXE file such that only raw "values" are passed around. Type-ness is thus in the source code, but not the EXE.
That's fine, and it's called TypeErasure. There's no notion of "together" or "apart"; all that matters is that there is an association between the representation and a type, such that given any representation the machine knows its type. How, where, and when that association is made doesn't matter. It might be explicit and "close together" at compile-time, but at run-time the association may be asserted (i.e., assumed, as per the above) only by the machine-language operators that were chosen at compile-time.
You can always disassemble the machine language of any binary executable and trivially use the machine language instructions to determine the types of their operands. This proves that the association between representations and types still exists, even in machine language.
That's not true. Perhaps only the reader of the output knows the intended type/use/context of something and the machine language leaves no clue. Besides, even if there are clues in the machine language, it's humans making that determination, not machines. A network router is an example; it's just moving bytes around and may have no info about the bytes' intended use.
Intended use is something else entirely, and it's irrelevant. The machine language will tell us, unambiguously and definitively, whether values are integers (often of various sizes), boolean values, binary coded decimal (BCD) values, floating point values (in machines that support them natively), or (in machines that support them natively) character strings. Types not natively supported by the machine may be opaque to a greater or lesser degree at a machine-language level, but they're always composed from primitive types that are natively supported by the machine, which can always be trivially identified from the machine language.
No it won't, not in all cases. You are incorrect. And even when clues are available, they may be very indirect; it's more like detective work rather than a simple look-up of associations or attributes.
No, the documentation of every machine language instruction states, unambiguously, what the types of its operands must be. From this, the type of every operand of every operator is trivially identifiable. Obviously, high-level types may not have a direct mapping to some corresponding machine type, but that doesn't matter. It's sufficient that every value must belong to, or reference, a type.
For example, in a type-tag-free language such as ColdFusion, App X may receive the byte sequence "74" and pass it on to another sub-system Y as-is. That sub-system may do type-related stuff to it, such as adding it to something (number) or appending it to something (string), but X and/or I as a developer may not "know" or care about what Y does to "74". We cannot answer the "type" question at PageAnchor bit-sequence-01. From app X's perspective, it's merely a package to be delivered. It's a "value" to be moved, NOT interpreted (in X). It still has "meaning" in terms of being a package we are entrusted to deliver. Thus, having meaning does NOT require having type info "known". And it may have meaning to process Y (which we don't see). The "handler" of info and the "processor" of that info may be closely related or close by, far removed, or something in between. App X is treating it as merely a value devoid of type info.
An operation that copies an arbitrary section of raw data -- like a memcpy operation -- is sometimes considered to have an operand of type string. In another sense, memory copy operations are the only truly "typeless" machine operations. However, by definition, such data is opaque. A "copy" merely locates data in a place where operations can be performed on it. We do not consider the types of data that are passed down a network cable, for example, but it is critical for the software at the network endpoints that send and receive the data to know the type of every value. Do not conflate such bulk data copy operations with operations on values.
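For illustration, a small C sketch (the variable names and the float value are made up): the bulk copy itself is type-blind, but the code at each end must assert the type of the bytes for them to mean anything.

  #include <stdio.h>
  #include <string.h>

  int main(void) {
      float sent = 3.25f;
      unsigned char wire[sizeof sent];           /* "the network cable": just bytes */

      memcpy(wire, &sent, sizeof sent);          /* the copy neither knows nor cares */

      float received;
      memcpy(&received, wire, sizeof received);  /* receiver asserts: these bytes are a float */
      printf("%g\n", received);                  /* 3.25, only because both ends agreed on the type */
      return 0;
  }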
Our systems typically have what may be called "root types" such as "binary", "bytes", "sequence of bytes", etc. It's imposed by our existing hardware. (I challenge you to implement position-free bytes.) But those root types are at a lower level than the issues of contention.
They're normally called primitive types, but they're not called "binary", "bytes", or "sequence of bytes". Those terms refer to arbitrary data. They're normally called integer (of varying sizes), float (of varying sizes), boolean, and string. All higher-level types are composed from these. As I noted above, "high-level types may not have a direct mapping to some corresponding machine type, but that doesn't matter. It's sufficient that every value must belong to, or reference, a type."
The bottom line is that the typical definitions do not dictate specifics, such as data structure, element relationships in memory, explicitness versus implicitness requirements, etc. That leaves open multiple "implementation" and modeling choices such that those definitions will NOT answer most language-specific and machine-specific questions/issues/conflicts. To pretend they do is wrong, period. There are rough/general patterns to what people call what in specific implementations, but they are just that: rough/general patterns (based on a combination of habit, history, and textbook knowledge).
Of course. The typical definitions are abstractions, which appropriately encompass the widest variety of current and possible implementations.
"Associated with" can be explicitly in the machine's data structures right next to each other, on different machines that run at different times, or some of it purely in the mind of the reader or developer. The definitions don't choose among those. Why exclude the human mind from "computation"? Why should such a definition make a distinction between silicon computation and wet carbon-based computation (brains)? Computing is computing. As is, they put NO limit on "associated with", and that includes human brains far far apart in distance and time.
Of course. That's what "associated with" means. It is sufficient that an association of some sort exist in order to say that an association exists.
R2D2 is merely carrying an encrypted message "1101011111010110" (a "help" call from Princess Leia). The reader and interpreter (understander) of that info may be far, far in the future. The definition as given does not preclude that "association". It does NOT say anything remotely like "the type info must be within 7 feet of the value info". Nada, zilch about that.
If anybody clearly sees something in those definitions that restricts time, distance, wall thickness, and/or processor type (silicon, WetWare, etc.) of and between the "parts", please identify it. Otherwise, it's fair to assume that time, distance, wall thickness, and/or processor type are NOT DICTATED by those definitions.
In light of this, "types" can be viewed as information that helps one interpret a "value" (sometimes called a representation, data, content, etc.). That "type" information can be in the machine OR in the mind. The base definitions do NOT specify the nature or position of such "type" information.
That is essentially correct. However, no machine can function usefully with type information that exists only in the mind. Otherwise, how can any machine (or mind, for that matter) meaningfully perform any operation on 1110101010101011101010101010111001101? Of course, even that has a trivially identifiable type -- bit string of finite length -- which, if nothing else, allows some memcpy operator to copy it from one location to another. However, a machine that only performs memcpy is not particularly useful.
- That's at the machine level, such that it's a UselessTruth for our purposes. It's like saying, "all info has to be binary for binary chips to process it, therefore everything in the computer belongs to the type 'binary'". Pretty obvious. Machines crash without rules/conventions, such that a minimum set of rules/conventions is necessary to function practically at all. But I'm talking about higher-level info, such as a sequence of bytes that may later be used as a "number" but where a given specific application or system doesn't contain that eventual usage information. The final interpreter (understander) may be a human -- the reader or a programmer -- with no clue about that usage to be found inside the machine. I agree with you that "type information" is necessary to understand/interpret "1101011111010110" (your example), but I DISAGREE that such info has to be inside a machine to qualify as a "type" (per the definition). I see nothing in the definitions that excludes humans from being the "operators" and/or from being part of the "association". We can work on the assumption that humans are excluded, but that changes things. In colloquial-land, humans can often be the arbiter of what some info "means". Example: "I know that VF23YK is a special date stamp (type=date) because I once worked at that soup canning factory". The dumb computer doesn't know it's a date stamp; it's just passing bytes around and printing them on cans.
- No, it doesn't say "all info has to be binary for binary chips to process". What it says is that all values are typed in order to perform computation. For example, to perform addition on floating point values 01000000010011001100110011001101 and 01000000001000000000000000000000, we have to use a floating point addition operation that recognises the format of its operands. Thus, there is an association between the type 'float' (recognised by the FADD instruction) and the values passed as operands to FADD.
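- A minimal C sketch of those exact bit patterns (written below as hex constants; the reinterpretation via memcpy stands in for the machine simply feeding the representations to a float operator):

  #include <stdio.h>
  #include <string.h>
  #include <stdint.h>

  int main(void) {
      uint32_t bits_a = 0x404CCCCDu;   /* 01000000010011001100110011001101 */
      uint32_t bits_b = 0x40200000u;   /* 01000000001000000000000000000000 */
      float a, b;

      memcpy(&a, &bits_a, sizeof a);   /* assert: this representation is a 'float' */
      memcpy(&b, &bits_b, sizeof b);

      printf("%g + %g = %g\n", a, b, a + b);   /* 3.2 + 2.5 = 5.7 */
      return 0;
  }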
- Who said anything about addition? Addition was not part of my example. True, a later step and/or human may decide to do math on a value, such as manually converting VF23YK to a "regular" date and then subtracting from the current date to find out how old the can in the example is. But it is the human who keeps the "type tag" in their head, not the machine. The information about how to interpret "raw" info does not have to be IN a machine.
- This is not about interpretation of results, but about the inextricable association between values and types. The problem appears to be that you are insistent upon there being some "type tag" -- some explicit datum bound to the value -- in order to indicate an association between a value and its type. There need not be any such thing. The fact that FADD (which is simply an example of an operation on values) works on floating point values and not integers or strings is sufficient to establish an association between its operands and type 'float'.
- My context was the type of the value/variable, and not the general "type" called "floating point".
- The type of the value, in the case of FADD (it's an x86 instruction), is 'float'.
- How do you know "The type of the value...is 'float'"?
- It's an operand of FADD, which only performs floating point addition (hence the name, "FADD" = "Float Add") on float operands, because that is how FADD is defined. If you wish to examine it empirically, you can look at the microcode definition of FADD, or the underlying electronic circuitry.
- What's that have to do with the discussion? You didn't answer the question.
- You changed your question. I've changed my answer.
- I only intended to clarify. I should have noted the text change, my bad. Your description seems to imply that ANY info it processes "becomes" a floating point type. So if blenders are designed for (intended for) processing human food, and I put Styrofoam in the blender, does the Styrofoam then "become" human food? A good description of FADD would say something like, "Processes the input bytes AS IF they were floating point", and not imply they are "turned into" floating point; the second would be misleading, suggesting a "proper" conversion is taking place when in fact no vetting is being done. "Treated as" is not the same as "is". Existing writing is often vague about this sort of thing. I suppose is-ness can be per scope or relative, but again, writing should be careful about stating any such scope assumptions so that the reader is not required to guess.
- You've misunderstood my description. "Info" doesn't "become" a floating point type. There is no conversion involved. Values must be of type 'float' if they're operands to an operator that performs operations on type 'float'. Otherwise, the operator would produce meaningless results.
- Please clarify what you mean by "must" in this context. It's generally a human legal/social construct. I'm not sure if it has a clear meaning for our purposes.
- The operands of a FADD instruction are 'float' -- they can't be otherwise, so we say they "must" be 'float'. If you try to invoke FADD with non-float operands -- say, tiny int (i.e., single byte) operands -- the result will either be meaningless or cause a segfault, because FADD will try to perform a floating point addition on the bits beyond the byte operands -- which could be anything -- because FADD treats its operands as 'float'. A program that tries to do a FADD on non-'float' operands wouldn't be a working program. Therefore, if we find a FADD instruction in a working program, its operands are of type 'float'.
- That's not what the FADD spec says. It typically takes the address given and uses that as the starting point for the N bytes FADD reads. It doesn't care what those bytes are intended for and does no validation. "Must" means nothing here.
- This has nothing to do with validation, and it's exactly what the FADD specification states. It's true it doesn't care (how would it?) what the bytes are, but if it's a working program, its operands must be 'float'. Perhaps it would be more agreeable to think of it as, "... if it's a working program, its operands must have been float."
- That appears to be a tautology. Nor does it say anything of practical value.
- How is it a tautology? The practical value is that by extending the approach used with FADD to every value in a working program, we can trivially deduce the type of every value, and from that we trivially deduce that every value is of some type. In other words, every value is associated with an identifiable type, right down to the machine level.
- Having a set of bytes processed by a FADD does not "make" that set a floating point number (although "make" can be interpreted many ways). We can deduce that sets of bytes it processes were probably intended to be floating point based on usual human behavior patterns, but that's a probabilistic statement, at best. Plus, below you claimed "Intent is orthogonal to the discussion here". I'm puzzled by your use of "deduce". What EXACTLY is being deduced? Ponder this: if there were a bug in the program such that random bytes were inadvertently added up by a FADD, do those random byte sets "become" floating point numbers? The computer does not "know" there is a bug, it's just blindly following given (machine) instructions like the dumb savant it is. The definition says "associated", it does NOT say "processed".
- No, having a set of bytes processed by a FADD instruction doesn't make the bytes a float, they are a float. If the programmer didn't intend them to be a float, then the program probably doesn't work. However, sometimes the same memory may be used as operands for (say) both a FADD instruction (float) and a boolean operation like AND. This is called TypePunning, where a value can be treated as more than one type simultaneously. The uses for this are rare, but they do occur. As for random bytes being inadvertently added by a FADD, it's not that the random byte sets become floating point numbers, it's that the operands processed by FADD already are floating point numbers. It doesn't matter what you might have intended them to be; if you use them as operands to FADD, that's what they are -- by virtue of the association between FADD and its operands -- whether you intended it or not.
- Again, the definition says "associated"; it does NOT say "processed". If they really mean "processed" it should say that instead of "associated with". You appear to be artificially embellishing it with your pet rules/models/interpretation.
- Who said anything about "processed"? The code doesn't necessarily have to run for the association between an instruction and its operands to indicate the type(s) of the operands. Using your model, the "type tag" for the value in the ST(0) register -- at (say) line #398204 in some program -- is FADD. Therefore, at line #398204, the ST(0) register is of type 'float'.
- YOU keep talking about processing. If processing is not the issue, then YOU should stop talking about processing. You imply that processing makes a difference (although your writing is vague, roundabout, and poor), but then turn around and say it's NOT about processing. I'm not following you on the rest. FADD does NOT use a type tag. It just adds bytes AS FOUND at 2 given addresses. Note that I'm not familiar with x86 machine language. In my school, they used Z80s and IBM 360 machine language (but I forgot most of it anyhow). If ST(0) means anything special, I'm not familiar with it. It sounds like some kind of logging register to indicate what operation just took place.
- I'm not talking about processing. I'm talking about the presence of machine instructions in the machine code. You're right that FADD does not use a type tag because FADD is the type tag. A given instance of the FADD instruction references two operands (e.g., ST(0) or ST(1), which are registers); therefore the two operands are of type 'float'.
- The mere presence? Please elaborate. Entanglement? This is getting Quantum Physics like. If I read "stacks of books" on Quantum Physics, maybe I'll finally "get" your convoluted view of types, eh.
- The mere presence is sufficient. In a strict, statically-typed high level language, the definition 'void p(int x, char y) {}' is sufficient for the "mere presence" of 'p(a, b)' to prove that 'a' is int and 'b' is char (assuming no type coercion, since machine language does not do implicit coercion). Imagine FADD is defined as 'void FADD(float x, float y)'. If you see 'FADD(a, b)', what types are 'a' and 'b'? (Again, note that machine language does not do implicit coercion.)
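- A rough C sketch of the same idea (FADD here is modelled as an ordinary function, which is an assumption; the real FADD is an x86 instruction, and note that C, unlike machine language, will silently coerce arguments):

  #include <stdio.h>

  /* Stand-in for the instruction: an operator defined only over float operands. */
  static void FADD(float x, float y) { printf("%g\n", x + y); }

  int main(void) {
      float a = 1.5f, b = 2.25f;
      FADD(a, b);   /* the presence of this call, given FADD's definition,
                       identifies 'a' and 'b' as float operands */
      return 0;
  }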
- In fact, the description could dispense with mention of "floating point". It takes bytes as input and produces output bytes based on transformation rule such and such. That such transformation rule(s) relates to "floating point" is a description of intent of the tool builders. If you bring human intent into the description, then it becomes subjective, or at least very difficult to verify. If you only describe the transformations without any mention of human intent, there is less chance for problems. The flip side is that we humans relate to intent well: it's a nice short-cut to understanding. But sometimes being quick conflicts with being accurate or precise. There is a time and place for both approaches. (If we refer to existing specs with "floating point" in the spec title, then obviously we are going to mention "floating point". We humans often name our algorithms by intent.)
- Intent is orthogonal to the discussion here. An operator that acts on 'float' operands must, by definition, have operand values that are of type 'float'.
- Validation is not required for FADDs. There is no "must" I can see.
- This has nothing to do with validation, and the "must" is simply a convenient word for how we interpret the inevitable relationship between an instruction -- whose operand values are always of a specified type -- and its operands.
- How "we" interpret, or how YOU interpret? Your mind is not the canon of types.
- It's how ComputerScience interprets it.
- Prove it. Further, if CS has to "interpret it", then it's not a self-standing definition.
- See http://www.intel.com/Assets/en_US/PDF/manual/253666.pdf page 3-329. "Source operands in memory can be in single-precision or double-precision floating-point format ..."
- But it does not define "be". My re-interpretation of that passage would be "Source operands in memory can be TREATED AS single-precision or double-precision floating-point format [, per commands given based on your human intent]." Most likely, "be" is a colloquial shortcut for intent and/or "treated as".
- It doesn't have to define "be". In short, FADD accepts 'float' operands and FADD adds them. What else could they be but 'float' operands?
- I'm trying to form a clear-cut written rule based on your descriptions here. Are you saying that if a FADD instruction is likely to process byte sets A and B in the future, byte sets A and B become the floating point type? I'm not asking whether FADD has already processed them, because you seem to be against basing the rule on processing, per above.
- No, if FADD will process A and B as operands, and it's a working program, then A and B are 'float'.
- Future prediction? Let me probe this with some questions.
- #1, what if the code is all set up, but nobody ever runs it. Are A and B still type "float"?
- Yes. Likewise, if I write a little C program here, 'int main(int argc, char **argv) {float p;}', no one need run it for 'p' to be type "float".
- #2, if somebody deletes the FADD instruction that references A and B, are A and B still type "float"?
- If nothing references A and B in a given system, then they are not values in that system. They are not 'float'; they are not values at all. They're unused memory locations or registers.
- #3, if the next instruction after the FADD does a toCaps (case) operation on A and B, are A and B still type "float"? Are they also type "string"? If they stop being Float, at what point in time does this happen?
- If A and B have multiple types based on the instructions that reference them, then TypePunning is in use. Thus, A and B are a union of float and string. See the example on the TypePunning page.
- PageAnchor Unnatural-Dynamic
- This tends to go against how developers typically think of dynamic/multi-use types. Whether they are "officially wrong" or not is not worth arguing, for vocabulary is ultimately shaped by the masses, not ivory towers.
- What are "dynamic/multi-use types"? TypePunning is a very specific technique used to circumvent the type system in order to extract additional information from a value -- such as to examine individual bits in a 'float' representation, for example -- or (mainly historically) as a way to overcome severely constrained memory limitations.
- I meant mental models, not implementation details. For dynamic languages/systems, developers tend to view types as situational. "I want to use X as a Y in this spot". Whether it "is" a Y internally/globally is of secondary importance (assuming we agree on what "is" means; I'm in the Bill Clinton camp on that). Considering the entire pot of "operations" that may/will/can operate on it is unnatural (per WetWare). (It is natural in compiler-land, where you can't compile if one single "signature" is out of place.) The reference definition above forces a kind of "entire pot" view. I want a definition of "types" that fits actual usage and mental models; otherwise it's less useful. I'll even take a UsefulLie over a UselessTruth if it comes down to that; I'm a practical guy and don't give a crap about purity/canons if it interferes with everyday economics. -t
- I'm not clear what point you're trying to make. What does it have to do with TypePunning? Note that it is not the same thing as, say, being able to sum numeric strings and get a numeric result.
- I didn't introduce TypePunning into the topic, you did. But it seems TypePunning is one part of a bigger problem, which is that the definition is inadequate for practical use in dynamic languages, per above, because it requires an up-front GodView? of the entire program to apply effectively. Technically we could apply it, but not practically. This is why actual usage of the language of types is often different or overloaded in dynamic-land.
- The term for "[i]f A and B have multiple types based on the instructions that reference them" (from your text, above) is TypePunning. It isn't anything else. I don't follow any of what you wrote after "But it seems ...", or how it relates to dynamic languages or definitions. What is a "language of types"?
- By "language of types", I mean how types are typically talked about and used in the field (around dynamic languages, in this case). If the rest is not clear to you, I'm at a loss to "fix" the text at this point. I don't know where the communication breakdown is happening between my writing and your mind.
- Ah, "language of types" makes sense now. I still don't see how "TypePunning is one part of a bigger problem", how "the definition is inadequate", why an "up-front GodView?" is needed for anything, or why we "technically ... could apply it, but not practically." TypePunning is rare. You'll rarely see "the next instruction after the FADD does a toCaps (case) operation on A and B, are A and B still type 'float'", if ever.
- In dynamic languages (DL) it's common to use the same given variable AS multiple types. For example, I may do arithmetic on a given object/variable, but then pass it to a screen formatter API that processes it AS-A string. Whether that results in TypePunning at the machine code level I cannot say, because the interpreter hides that implementation detail from me. (Under some models with the same IoProfile it may; under others it may not.) But to ask "is this variable ever used for another purpose/type" is not a realistic question in DL-land, at least not as it typically relates to "type" discussions for DL. Typical answers are situational. It acts as or does X at spot Y; that's all one is typically concerned with the majority of the time. Whether it acts as or does X or Z somewhere else is usually considered moot. Now for compilers the global view matters more, and thus a "static" language programmer may think more in terms of a global view of a given variable's "type". In dynamic langs, taking inventory of the "set of values and zero or more associated operators" (taken from the subject definition) is not a practical reference point.
- That's not TypePunning. Using a variable (or memory location, register, parameter, etc.) to hold a value of type T now, then assigning it a value of type T' later, is the very essence of a DynamicallyTyped language. Note that your question from above, "if the next instruction after the FADD does a toCaps (case) operation on A and B, are A and B still type 'float'?" is very different from "if the next instruction after the FADD assigns new values to A and B, and then the next instruction after that is a toCaps (case) operation on A and B ..." The former is TypePunning, the latter is DynamicallyTyped. What you appear to be hinting at is where a literal of type T is encoded in a string -- e.g., a numeric value "-1.45" in a string. If you have operators that, for example, can add, subtract, multiply or divide such values, it simply means you have numeric operators that accept numeric strings as operands.
- I said "use as" (process as), NOT change. I said nothing about changing the value.
- Then it's TypePunning. I don't know why A would be used as float and immediately thereafter used as a string, but it's conceivable. The usual use of TypePunning is to extract individual bits from a representation, such as determine whether the 17th bit of a 'float' is set, etc.
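- A minimal C sketch of that kind of TypePunning (using a union; calling bit index 16, counted from the least significant end, "the 17th bit" is an assumption):

  #include <stdio.h>
  #include <stdint.h>

  /* The same 32 bits are written as a float and read as an unsigned integer,
     so individual bits of the representation can be examined. */
  union pun {
      float    f;
      uint32_t u;
  };

  int main(void) {
      union pun p;
      p.f = 3.2f;                        /* write as float ... */
      int bit17 = (p.u >> 16) & 1u;      /* ... read as integer */
      printf("bit 17 of %g is %d\n", p.f, bit17);
      return 0;
  }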
Typically you have Thing T (the type), Thing D (the data/representation), and Thing U (the interpreter as in "understander"). The location of these 3, in a brain or a machine, is not restricted by the traditional definitions. "Stricter" languages tend to put more of them in the machine, and/or rely more on the machine to track them.
That is true. The machine needs to know T in order to act as U (to interpret D); otherwise D is opaque.
Note that these 3 "things" can be re-projected into an operator-and-operand view to fit the traditional definition. It's just a matter of how one views the packaging of what one labels as an "operator", etc. The three-things (T/D/U) view is a better projection for dynamic and loosely-typed languages in my opinion, with the operator/operand version a better fit for "stricter" languages. Which you choose is a matter of convenience, for they are both just different views of the same thing, somewhat similar to how polymorphism and IF statements are interchangeable for implementing conditionals.
-t
Sorry, that last one lost me.
I'll see if I can think of a better way to word it. The short version is that just about ANY system can be viewed/mapped/re-represented as "a set of values and zero or more associated operators". However, the human convenience of that view varies widely per tool flavor, usage, etc. Static languages tend to be closer to that view as-is.
No, not any system "can be viewed/mapped/re-represented to be 'a set of values and zero or more associated operators'". How is an IF statement or a WHILE loop "a set of values and zero or more associated operators"? How is the sequence of I/O operations to initialise a printer "a set of values and zero or more associated operators"? How is the 'main' method of a program "a set of values and zero or more associated operators"? How is an arithmetic expression "a set of values and zero or more associated operators"? How is an assignment of a value to a variable "a set of values and zero or more associated operators"? How is a given method of a class "a set of values and zero or more associated operators"? And so on.
I said "system". Most of those examples are not a system as given. We can make a set of IF statements into a system by packaging them into a subroutine or program. As a working definition, I'll define "system" as anything with input, processing, and output. And if something has input, processing, and output, then the input and output can be viewed as "values", and the processing as an "operator". Thus, every system satisfies the definition. -t
How is "input, processing, and output" a "set of values and zero or more associated operators"? It's not sufficient that "input and output can be viewed as 'values'"; the system itself (and a type is not a system, because it doesn't have "input, processing, and output") would have to be a set of values and zero or more associated operators. How does it do that?
Why is it "not sufficient"? Please elaborate.
A type is 'a set of values and zero or more associated operators', so if system <x> is a type, then system <x> must be 'a set of values and zero or more associated operators'. That's not the same thing as system <x> accepts a set of values as input, performs an operation, and produces a set of values as output. To be a type, the system must be a set of values and zero or more associated operators.
To Be Or Not To Be
How is it not the same thing? How exactly is this be-ness applied and objectively verified? This be-ativity thing of yours is confusing the shit out of me. Types are a mental UsefulLie, not an objective property of the universe. The universe does not give a shit about types or sets, those are human mental abstractions (UsefulLies). A program is an operator as much as an ADD instruction in machine language. It operates on source code and input data (per common colloquial usage). It's not any less is-ificated to operator-ness than the machine instruction.
You're operating under the incorrect assumption that types are only some mental construct, and your misunderstandings appear to follow from that. In a given program, 'int' is not a mental construct; it's a programmed definition of what bit patterns are an 'int' (thus forming a set of values) and a set of operators that receive 'int' as operands and/or return 'int'.
Bullshit! Machines are just molecules bumping around; they don't understand integers etc. If a rock slips off a cliff and happens to smash you, it does not "understand" death or killing, it's just blindly doing what molecules do. "Killing" is a concept in the heads of humans, not rocks. Same with integers. If another human purposely positions the rock to fall on you, that does not change what the rock thinks or knows...nothing.
Huh? What does "understand integers" have to do with what I wrote? I wrote that 'int' is not a mental construct, but a programmed definition of what bit patterns are an 'int' (i.e., a set of values) and a set of associated operators. A programming language type is defined with code, not cognition. 'Int' isn't produced by thought; it's a representation (usually 8, 16, 32 or 64 bits) that defines a set of (256, 65536, 4294967296 or 1.8446744e+19) values and operators like +(int, int), -(int, int), etc.
What's a "programmed definition" exactly? The definition is in code? Boggle. It appears to be anthropomorphism of machines. You don't "define with code" you implement HUMAN ideas with code. Using our death-rock example, you don't "define" a death device by putting the rock on the ledge, you IMPLEMENT a death device by putting the rock on the ledge. The machines are just dumb savants.
A "programmed definition" is something like 'class p { ... }' which defines 'p'. There's nothing anthropomorphic about it; it's just a term used to indicate that some code has been associated with an identifier.
I'm having difficulty explaining what I wish to explain. I'll have to ponder it a while perhaps.
See VagueVsAbstract
CategoryDefinition