Continued from ThirtyFourThirtyFour
"Types" are funkier than quantum physics if we go by actual usage. In dynamic languages they tend to be situational such that we cannot realisticly "attach" global declarations of a specific type(s) onto a given variable/constant. Typeness tends to be relative to the operator in dynamic languages. This differs from static languages where they stick closer to the pure OOP/ADT GateKeeper philosophy and reign in who or what fiddles with a given variable/object. The "type" is best defined by what the operator wants to do with a variable/constant, not the variable/constant itself. Attempts to model "types" from the operator's perspective will just lead to confusion and complicated models. There are usually some commonalities between operators, but generally these are by convention in dynamic languages, not hard-wired into the language itself. More specifically, each operator (hopefully) chooses to stick with consistent type conventions, but such may not be based on an enforced "core" definition. So, if we say two operators can accept the same "type", the sameness may not be determinable via identical references, such as a "master class". (It's arguably a violation of OnceAndOnlyOnce.) --top
Wat?
Okay, a rewording is in order...
In dynamic languages, "types" are "defined" by the "observer" -- "observer" here meaning the operation that processes the variables (or "values"). This differs from static languages for the most part.
Huh? Do you mean that some dynamically typed languages treat all values as strings, and operators may interpret their string-valued operands as they see fit?
What exactly do you mean by "treat"? I agree they are "best modeled as strings", but the programmer cannot see what they actually are under the hood, and a language ideally shouldn't be defined by its implementation.
By "treat", I mean all values are strings of characters. What they are "under the hood" is immaterial and -- indeed -- usually invisible. The programmer sees all values as strings of characters.
How does one empirically verify are-ness? The output is typically a string of characters, sure, but that's the format of the output and therefore a characteristic of the output, not necessarily a characteristic of the variables and/or their values.
Check the language specification.
Even if such documentation were well written, which it rarely is, empirical verification is recommended to make sure one's interpretation of it is accurate.
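For instance, a quick probe script along these lines (hypothetical, but it's the kind of spot-check meant here) can be run and compared against one's reading of the manual:

 <?php
 // Feed a handful of arbitrary literals through the interpreter and record what
 // it actually reports, rather than trusting an interpretation of the documentation.
 foreach ([42, 4.2, "42", true, null] as $v) {
     echo gettype($v), " => ";
     var_dump($v);
 }
 // e.g.: integer => int(42), double => float(4.2), string => string(2) "42",
 //       boolean => bool(true), NULL => NULL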
Will empirical verification distinguish between the following two cases?
If you find a difference between the documentation and the implementation, is the documentation wrong, or is the implementation wrong?
It's relative to need and use.
That doesn't answer the question.
Yes it does. EverythingIsRelative. Live with it.
That still doesn't answer the question, and EverythingIsRelative is a non-sequitur. This is a simple question: If the language manual and the language differ, how can empirical verification tell which is right and which is wrong? To put it in more concrete terms: If you find a difference between the manual and the language, how can you predict whether the next version will fix the manual or fix the language?
Predicting vendor behavior is a different question from "which is right", because "which path leads to the most profits" is NOT the same question as "which is right". Dammit Jim, I'm a systems analyst, not an investment analyst! Note that sometimes "flaws" are left in languages because fixing them would break backward compatibility. That's a judgment call by the interpreter makers. For example, PHP's is_bool() tag-centric behavior is a flaw in my opinion, because it's inconsistent with other is_x() functions and redundant with gettype(), but fixing it may break lots of existing programs. It's not necessarily a manual conflict, but it illustrates the kind of decisions interpreter makers have to deal with. The manual could perhaps state what the interpreter makers actually intended, but they may opt to live with the difference (flaw) in the next version for backward compatibility (and hopefully change the manual to document the kept flaw). SuccessHasBattleScars.
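To make the PHP example concrete, the behavior in question is roughly this (actual output as far as I know; whether it amounts to a "flaw" is the judgment call being described):

 <?php
 var_dump(is_numeric("123"));  // bool(true)  -- judges the *content* of the value
 var_dump(is_bool(1));         // bool(false) -- consults only the internal type tag
 var_dump(is_bool("true"));    // bool(false)
 var_dump(gettype(true));      // string(7) "boolean" -- the tag gettype() already reports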
Fine, but in response to my suggestion to check the language specification for information on how a language works, you wrote, "empirical verification is recommended to make sure one's interpretation of it is accurate." If you can't even tell whether it's the specification that's at fault or the implementation, what is it exactly that you're empirically verifying?
Like I said, in a practical sense most developers have to focus on the existing version of an interpreter in terms of its I/O as-is, and so that's my focus (working assumption). Yes, there are exceptions, as always, and developers have to use their good sense to deal with those as encountered. It's not a courtroom where "fault" is being determined; it's closer to "ship the fscker or be canned".
So you agree, then, that you can't rely on empirical verification to determine whether or not one's interpretation of the language specification is accurate?
"Specification"? Correct, that requires reading minds, which I don't claim to have a model for. "Official specification" is an in-the-head social construct, and is perhaps relative. However, one CAN rely on empirical info to model an interpreter's run-time behavior as-is (assuming it doesn't morph during testing period), which is probably more important to a rank-and-file developer anyhow than the "official specification".
See also: NoTypeCanon