From TypesAndAssociations.
This TOPIC IS FULL, triggering a Perl memory error. Use ValueExistenceProofTwo for additions.
Abbreviations Used Within:
LIAS = "Like I already stated"
This is a multiple-choice exam. Please answer all questions.
Question 1: In a statically-typed language, a variable contains or stores a __________ which can be changed by assignment.
(a) Variable (b) Type (c) Value (d) Anonymous variable (e) Tag (f) None of the above
Question 2: The result of evaluating the expression 7 + b * 3 is a _________.
(a) Anonymous variable (b) Value (c) Variable (d) Type (e) Tag (f) None of the above
Question 3: The literals 245, "403", 24.04, and -15 are all examples of _________.
(a) Values (b) Variables (c) Types (d) Tags (e) Anonymous variables (f) None of the above
Question 4: In the statement a = foo(), function foo() returns a _____________.
(a) Tag (b) Type (c) Anonymous variable (d) Value (e) Variable (f) None of the above
Answers are on the other side of this page.
I just see a coffee stain.
Look under it.
It's caffeinated turtles all the way down.
Yes, and the answers are under the turtles.
You don't want to know what's under the turtles. Trust me.
Completed exam paper from Subject T:
- Question 1: I'm not modelling statically-typed languages and refuse to answer.
- Question 2: In colloquial-speak, I'd answer b and d. (Actually, "result" is sometimes also used [added])
- Question 3: f. They are literals.
- Question 4: In colloquial-speak, I'd answer b and d. -t
Assessor's feedback:
Since you refuse to answer Question 1, I shall provide a supplementary question:
Question 1b: In a dynamically-typed language, a variable contains or stores a __________ which can be changed by assignment.
(a) Variable (b) Type (c) Value (d) Anonymous variable (e) None of the above
I don't know what "in colloquial-speak" means, but it seems you do acknowledge the existence of values. That's good, because the entirety of computing and mathematics is dependent on them.
- My model has "values".
- Really? It didn't before.
- You actually missed it?
- I seem to recall something called an "anonymous variable".
- Yes, but that's a different part set/scope than "values".
- Really? So you have both values and anonymous variables?
- Don't you remember <var name="foo" type_tag="number" value="123"> ?
- Variables in dynamic languages don't have types.
- Prove it!
- There is no TypeChecking performed on variable assignments in dynamically-typed languages. Other than assignment, any reference to a variable retrieves its value. Hence, no type needs to be associated with a variable, because the only operations we ever perform on a variable are assigning a value to it, or retrieving a value from it. Any TypeChecking is done on the value. Therefore, it's unlikely that any interpreter developer would associate a type reference with a variable because it would never be used. Types are associated with variables in statically-typed languages, because TypeChecking is performed on assignment to ensure that the type of the value being assigned is compatible with the type of the variable.
- Just because most operators don't "use" the type reference does not mean type references do not exist. You are confusing the nature of the object with the nature of the handler/processor of the object. And for overloaded operators, dynamic languages often use the type reference to choose among the processing options rather than outright reject it, as we often find in static langs. It's TypeChecking, but with different "handling". Witness the different output in JavaScript "a=2;alert(a+a);" versus "a='2';alert(a+a);" versus "a='2';a=2;alert(a+a);".
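To make that concrete, a minimal Python sketch of that kind of tag dispatch (hypothetical names; it mirrors no particular interpreter's internals):

  # Sketch: an overloaded "+" that dispatches on type tags rather than
  # rejecting mismatches. Operands arrive as (tag, representation) pairs.
  def plus(a, b):
      if a[0] == "string" or b[0] == "string":
          return ("string", str(a[1]) + str(b[1]))    # concatenate
      return ("number", a[1] + b[1])                  # add numerically

  print(plus(("number", 2), ("number", 2)))       # ('number', 4)
  print(plus(("string", "2"), ("string", "2")))   # ('string', '22')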
- Please read what I wrote. Your example is dispatching on the value in the variable, not the variable itself.
- In your model.
- You asked me to prove that variables in dynamic languages don't have types, so I did, and I refer to how JavaScript (and similar languages) work to address your example. It has nothing to do with anyone's model.
- I'm not following the exact steps of your alleged proof. You need to show how you are measuring existence or non-existence. Note that something "non-printable" (explicitly) is tracking quote-ness in the JS examples.
- What's "tracking quote-ness" -- in other words, the thing that possesses a type -- is the value stored in the variable, not the variable itself. Again, the only operations performed on a variable are (a) assignment -- i.e., storing a value in a variable -- which does not involve types (in dynamically-typed languages), and (b) retrieval of a value from a variable, which does not involve types. Hence, a type does not need to be associated with a variable, because there is no operation on a variable makes reference to a type. So, it's highly unlikely that types are associated with variables -- though not categorically, absolutely certain, because some misguided developer out there might associate a type with a variable for no reason at all. However, we know from accounts of developers of interpreters, and from looking at interpreter source code, that types are not associated with variables in dynamically-typed languages.
- Too many vague words in there. I cannot evaluate your claim. It's back to square 0.1.
- Short form: In dynamically-typed languages, the only operations we perform on variables are: assign a value to a variable; retrieve a value from a variable. Neither operation involves types. So, variables don't need types or type references. Type-oriented operations deal only with values.
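As a minimal Python sketch of that claim (a toy store, not any real interpreter):

  # A toy variable store for a dynamically-typed language. Variables are
  # bare name -> value slots; there is no type field to check or update.
  variables = {}

  def assign(name, value):      # (a) store a value; no type check occurs
      variables[name] = value

  def retrieve(name):           # (b) fetch whatever value is stored
      return variables[name]

  assign("a", 2)
  assign("a", "two")            # rebinding to a differently-typed value is fine
  print(retrieve("a"))          # any type information travels with the value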
- Even in your model, "values" contain type info for D1 languages. Thus, if variables contain values in your model, then they also contain types, per rules of nested possession. If box X contains egg Y, and egg Y contains ring Z, then box X contains ring Z. I just dispense with the nesting because it adds nothing for the stated goals.
- You asked for a proof that variables in dynamically-typed languages don't have types. I provided it. What your model does, or does not do, is irrelevant here.
- Your "proof" is flawed for reasons you failed to address (box X etc.) Unless, you are playing word-games with "have", which is a damned vague word.
- Let T be a type, V a value, R a variable, and P a value's representation or encoding. In dynamically-typed languages, V = (P, T) and R = (V). R != (V, T) because no operation on R needs to make reference to any T, but operations on V make reference to V(T). If you wish to claim that we can transitively obtain V(T) via R(V), i.e., R gives V gives T, that's fine, but it's not the same thing as R = (V, T).
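The same notation, sketched in Python (illustrative only):

  # V = (P, T); R = (V). Note that Variable has no type field of its own.
  from dataclasses import dataclass

  @dataclass
  class Value:                 # V
      rep: str                 # P -- the representation/encoding
      type: str                # T -- the type reference

  @dataclass
  class Variable:              # R = (V), not (V, T)
      value: Value

  r = Variable(Value(rep="123", type="number"))
  print(r.value.type)          # T is reachable transitively via V, but
                               # no field of Variable itself is a type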
- For one, V=(P,T) is YOUR specific model. I want a universal truth, not a truth just within your model only. That should be obvious and I don't know why I have to explicitly ask. Further, your claim was "variables in dynamic languages don't have types". I interpret "has" (have) to mean "contains". Your own XML model is <var name="..."><value type="..." repr="..."/></var>. This structure has "type=" INSIDE of the VAR tag. The level of nesting does not affect "contains". If the type attribute was 30 tags deep inside the VAR tag, it still satisfies "contains". That is consistent with "contains", as I understand English. We are not proving equivalency, only has-ness.
- V=(P,T) is mine, and that of every popular imperative programming language. Your model is, or was, that <var name="..." type="..." value="..."/> is the sole value-carrying structure, is/was it not? In other words, your R=(P,T). That's quite different from <var name="..."><value type="..." repr="..."/></var> or R=(V), V=(P,T). That means you're using R where I (and everyone else) use V, i.e., you use "variable" to refer to what the rest of us call "value".
- For one, I ignore actual implementation when comparing models. Second, I was talking about YOUR model. Variables contain types (or type references) in YOUR model. Nested-ness does not disqualify contain-ness.
- If you ignore implementation, then what is it that you are modelling? Just I/O? Even then, variables do not need to contain types (or type references). In dynamically-typed languages, in MY model or any popular imperative programming language that I'm aware of, variables certainly don't contain types. Variables contain values. Values contain types. R=(V), V=(P,T). "Nestedness" need not disqualify "contain-ness", but that's not what you meant when you defined your variables as R=(P,T), is it?
- What kind of shenanigans are going on here? If X contains Y, and Y contains Z, then it's rational to say X contains Z.
- There is no interpretation of dynamically-typed languages that describes variables as being typed because they hold typed values. Dynamically-typed languages are sometimes called "value typed" precisely because values have types and variables do not. There is no mechanism in any popular dynamically-typed imperative programming language interpreter that accesses a value's type through the variable that holds it, because there's no reason to do so. So, even though you could say that if a variable contains a typed value then a variable contains a type, it would be meaningless to do so.
- And the nestedness is an artifact of your model, not an observable trait of I/O. (At the least, a nested model is not the only model that matches the I/O.)
- You asked me to prove that variables in dynamically-typed languages don't have types, so I did. This has nothing to do with models or their "nestedness". You're attempting to stretch the notion of "have types" to include some meaningless, purposeless transitive relationship. I presume that's done in an attempt to defend your model's needless association of types with variables?
- No you did NOT prove it. You used equivalency as your "proof" when equivalency is not the proper test. And it's hard to "stretch the meaning of 'have types'" when it's already pretty damned vague and open ended. "Have" and "contains" either mean pretty much the same thing for computer structures, or overlap in English-land. Deal with it. ("Have" is a goofy word to use to describe software parts. It's a human-world-centric word that does not map well to inanimate things. It can mean "legally possess", "in the care of", or "carry" or "grasp". But interpreter parts don't carry and grasp. They don't have arms and teeth. "Contains" is the closest analogy in actual usage I can find for computerized objects or data structures. For example, "table X has the data I'm looking for", which is considered the same as saying "table X contains the data I'm looking for".)
- {Why are you ignoring the definition "To possess as a characteristic, quality, or function", and where did he use equivalency?}
- What does "possess" mean for software objects? I interpret it as equivalent to "contain" and that's consistent with common usage of it, per table examples. And what law of English or the universe makes nested-ness allegedly "de-possess" such?
- {It means it's an attribute of that object.}
- Only the first level of the object or structure? That's not how I'd interpret the phrase. "X has a Y" to me would include ALL levels of nestedness, barring an explicit level limiting condition. And this is consistent with real-world usage of "has", as seen in the ring in egg in box example already given, with "has" instead of "contain".
- That's what "an attribute of that object" usually means -- only the first level. It's curious that you never mentioned transitive inclusion of properties until the only way you could possibly sustain anything even remotely like a criticism of my proof is to invent some notion of transitive inclusion. Notably, however, no language accesses the type of a value in a variable via the variable, so your notion is moot.
- "That" is not automatically implied. That's your (misleading) restatement. The last sentence is based only on your model, not on I/O observation. And why would I need to mention that earlier? To me it's plain obvious that "has" considers all nested parts. I only made that assumption explicit when you appeared to kick common sense and normal English usage in the groin. Communicating with you guys is like pulling teeth while water-skiing blindfolded.
- Of course it's based on I/O observation; I've been programming for decades and sometimes even look at the output. It's also based on something even better -- knowledge about both language semantics and language implementation internals. And it's irrelevant anyway -- no language needs mechanisms to access a value's type via the variable that contains it. Your claims are obviously a desperate attempt to preserve some semblance of a valid position in this debate, but it's obvious you have none.
- What I/O experiments "prove" it? Your allegedly vast knowledge of the guts appears to be clouding your objectivity and you're mistaking the map for the territory. Note that you are correct that such is not "needed" in the strict sense, but it's WaterbedTheory taking place.
- What I/O experiments "prove" that the type of a value is obtained via the variable that contains it? Your lack of knowledge of the guts appears to be (mis)leading you to add needless complexity.
- I never said my model was the only working model. Both gears and computer chips can run planetarium simulations, with various trade-offs associated with each choice. My choice of internal model part representations may indeed be UsefulLies (assuming actual implementation is even the "truth"). As far as complexity, the trade-offs are discussed later in this topic.
- I see no rationale for a model of dynamically-typed languages where R=(P,T) and there is no apparent V, instead of the conventional model of R=(V), V=(P,T). Your claims of simplicity are not compelling, and I think your model's deviation from typical implementations will only lead to confusion. In short, there is no justification for a UsefulLie; your model presents a UselessLie.
- You are wrong. My model decisions are consistent with the stated goals, stated priorities, and stated assumptions about the target audience. The decisions are not random. Mirroring actual interpreter design is not a priority consistent with the givens I laid out. The target audience is NOT building production interpreters. I don't know why you are anal about matching production interpreter design (other than maybe teaching job security). Get over it! The need is to predict type-related behavior, NOT to build a production interpreter. My trade-off decisions are optimized for the first for what should be dirt-obvious reasons. Your obsession is getting very annoying.
- It's not about matching interpreter design per se; it's about matching language semantics. Your model does not match popular imperative programming language semantics.
- Bull! The existing type-related documentation is vague and confusing and inconsistent, partly because English alone is a poor fit for describing types.
- What does your response have to do with what I wrote? Your model does not match popular imperative programming language semantics but mine does. How is that related to existing documentation?
- You are talking about language documentation, no? If not, I don't know how you are measuring/testing "semantics". Your semantics for semantics is vague.
- I am not talking about language documentation. Language semantics are orthogonal to language documentation.
- Please demonstrate the alleged problem in explicit and non-vague details. Nothing in I/O has clear-cut "values"; we've been over that already. Your dead horse has been beaten into powder already such that the neighbors are complaining about the dust and served you an air hazard warrant.
- I/O doesn't have values? What does output like 123 or "fish" represent if not a numeric value or a string value? The problem is that a statement like (for example) "a = b + 3" has a conventional semantic interpretation -- the numeric value represented by the literal '3' and the value retrieved from variable 'b' are added by the '+' operator, and the resulting value is stored in variable 'a' -- that your model describes in a novel way, using terms like "anonymous variable" and "hidden names" and "tag" and "intermediate values". How does that make languages easier to understand?
- Output "values" lack your "type" indicator: they are just strings of characters, and thus are closer to my model's use of values. Thank you for helping my point. Like I keep saying, "value" is overloaded in practice, and you just overloaded all over your panties. They could also be called "literals".
- Please read what I wrote. I wrote, "what does output like 123 or "fish" represent ...?" Indeed, 123 and "fish" are literals, but literals always denote values.
- And "a" in my model is NOT called an "anonymous variable". You screwed it up there. Further, we can say "the result is stored in 'a'"; we don't need the word "value" in that sentence; it's superfluous. You are inventing unnecessary problems.
- Please read what I wrote. I did not write that "'a' in your model is called an 'anonymous variable'". As I wrote before, your model describes conventional semantics in a novel way using terms like "anonymous variable" and "hidden names" and "tag" and "intermediate values". I don't know where "anonymous variable" applies or where it does not, or whether it does or does not apply here, but you've used it on a number of occasions -- usually without explanation. How does that make languages easier to understand? Furthermore, what "unnecessary problem" am I inventing by pointing out that "the resulting value is stored in variable 'a'"? The result of an operator is a value, and in this case it is stored in variable 'a'.
- I'm trying to explain stuff to you the best I know how. The model itself relies on "mechanical" processes to explain action and not so heavily on English. Don't confuse the two. Any interpreter or interpreter-like device is going to have SOME internal vocabulary. The more exacting the model, the more internal vocab it will have. Witness "representation" in yours. And you've ignored my point that we don't need the word "value" to explain your example. Does that mean it can be ignored now?
- We don't need "value"? Of course we do! That's precisely my point about your model not matching popular imperative programming language semantics. The semantics of popular imperative programming languages depend on values. "Values" are what even beginning programmers are told that literals represent, operators return, expressions evaluate to, and variables store.
- I'm not preventing such in my model. They can talk about literals "representing values" just like they did before and it doesn't change anything. It was overloaded before in colloquial-land and it will be overloaded after. Nothing really changes.
- It's not just literals. You also have to deal with what operators return, expressions evaluate to, and variables store.
- The first two can be called "results". "Variable" has a specific representation in the model. In my model it does not matter much because "intermediate values" are converted into dummy/internal explicit variables to improve reference-ability and dissect-ability. (Remember, good models generally uniquely identify the parts of the model rather than rely on implication and pronouns.) I've seen actual interpreters that also do this, by the way, so it does happen in implementation-land. But that's mostly a side issue.
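For illustration, a hypothetical sketch of that conversion (not taken from any actual interpreter):

  # Intermediate results become explicit internal variables. A "$" prefix
  # is assumed to be illegal in user variable names, so collisions can't occur.
  temp_count = 0

  def new_temp():
      global temp_count
      temp_count += 1
      return "$t%d" % temp_count

  # Conceptually, the model rewrites:
  #     a = b + (c * 2)
  # as:
  #     $t1 = c * 2
  #     a   = b + $t1
  # so every intermediate result is uniquely referenceable by name.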
- How do "results" differ from "value", other than being obvious shorthand for "resulting value"? And your model adds an artificial construct, "dummy/internal explicit variables"? How do you map those to I/O? How do you map those to any familiar programming concept?
- It's only short-hand if you pretend it's short-hand. Asking how they map to the I/O is similar to asking how gears in an orrery map to the output (planet position forecasts). Your model similarly has some "awkward" or model-specific parts, such as "representation".
- Data representation is a fundamental element of empirical ComputerScience. "Dummy/internal explicit variables", on the other hand, are highly model-specific.
- External or internal? There can be multiple internal models that manifest the same external behavior. Your "value" XML model is internal to the model, not an objective truism. To borrow your words, it's "highly model-specific".
- I don't know what you mean by my "'value' XML model", but values are an "objective truism". Read any introductory ComputerScience, computer architecture, or algebra textbook.
- Your "value" XML model is where you define a value in terms of XML "<value.../>". As far as "any introductory text", yes yes, there is a general notion of "value", but like "types" it's vague and/or overloaded. Any rigor would rely on a specific model, which may or may not closely match any given individual's notion.
- Can you provide evidence that "value" is vague and overloaded?
- I have multiple times. I guess I'll have to do it yet again. In practice it's often used interchangeably with "literal". And even you said that output has/displays values. However, your "type" attribute is not carried through to the output, only your "representation" (or something similar) is carried to output. Thus, it's something different in output per internals of your model since one has the "type" attribute (internal "value") and one lacks it (I/O "values"). Stated another way, it's a two-part tuple internally but a one-part tuple (or non-tuple) in "output". -t
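A small Python sketch of the point (the structure is from my model, not from any actual language):

  # Internally a two-part tuple; at the I/O boundary only the
  # representation survives -- the "type" part is dropped.
  internal_value = ("number", "123")     # (type, representation)

  def emit(value):
      type_part, representation = value
      return representation              # type_part never reaches output

  print(emit(internal_value))            # prints: 123 -- no type visible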
- Literals denote values, and values can be expressed as literals. There's nothing vague or overloaded about it, and the fact that literals are sometimes called values is merely an indication of their close relationship.
- You didn't address my concerns. We are talking about your characterization of output. And I agree that "literals" and "values" are related and/or are similar "notions", but that fact doesn't contradict anything I've said. You didn't prove that values are something objectively observable and are not model-specific. That something external may resemble a "value" is not sufficient. It seems you got yourself caught in a corner. If you claim output "has values", then those values don't fit (have different sub-parts than) your internal value structure, which demonstrates overloading. If output (I/O) doesn't have values, then it's something only internal to your model. I'm curious to see how you are going to dig yourself out of your logic hole. And, "denote" is not very rigorous.
- "Denote" is precise. In means a literal is a sequence of characters that always represents a value. In popular imperative programming languages, the relationship between "literal" and "value" is bijective, to the point that we can informally use "value" as a synonym for "literal", because a literal always represents a value (and a value can always be represented by a literal.) In a given language, a literal's type can always be determined in correct code; indeed, values of canonical types can usually be identified for conversion to values during LexicalAnalysis. I'm not sure what your "concerns" are such that they need to be addressed, nor do I see any "logic hole" here.
- You use "represents" to define "denote". "Represent" is also vague. Using vague words to define other vague words is not very helpful here. "Represents" is a human head notion. I've already agreed that "literal" is clearly defined in most common languages because the syntax definitions are rigorous so that machines can parse and process the parts and spit out error messages in English about problem parts. But that precision doesn't necessarily translate to I/O (something external to the model) other than the input source code. Fuzzy X being similar to clear Y does not make X clear (X="value", Y="literal").
- I'm using "represents" in its usual English sense. If you'd prefer I not use it, that's fine. A literal is a sequence of characters that when parsed become a value. In short, a literal evaluates to a value. Etc. However, "represents" is the familiar term. See, for example, http://en.wikipedia.org/wiki/Literal_(computer_programming)
- I never claimed "represents" wasn't common; I'm only claiming you haven't shown it's objectively and rigorously defined. It becomes a "value" in your particular model, yes. But that's not the right answer.
- There is a model where literals don't become values? That would be an oddly useless model, because you wouldn't be able to do any calculations, like add numbers, or divide them. Calculations, after all, are done on values.
- I use "anonymous variables" in place of where you use "values" in your model because "value" is already used for another purpose. (Unlike colloquial-land, I don't wish to overload terms in the model.) I know you don't like that, but I won't reinvent that debate yet again here.
- If you're using the term "anonymous variables" in place of "values", then I presume you recognise that values are immutable. So, by extension, I presume you intend that "anonymous variables" be immutable, yes? Then isn't it contradictory to call something that is immutable, "variable"?
- It's only immutable because the programmer can't change it. It's the same structure, but nobody's allowed into the room. "Anonymous constant" may be technically more fitting, but then it doesn't convey the fact that the structure is the same. See below about vocab trade-offs.
- It doesn't matter why it's immutable; it's sufficient to note that it is immutable. Since a constant is a name associated with an immutable value, "anonymous" and "constant" are contradictory.
- Please elaborate on the last sentence.
- "Anonymous" means nameless. "Constant" (in a programming language context) usually means a named value. Nameless != named. Hence, a contradiction.
- Constant usually means a named value? That's not very helpful. Not on Tuesdays? Call them "hidden variables" or "internal variables" then. It's not a big deal because they are rarely actually used in my model directly. They are pretty much a side note.
- When the term "constant" is used properly in the context of programming languages, it always means a named value. If you call them "hidden variables" or "internal variables", then you're back to the contradiction of calling them "variables" when they're immutable. As for being "pretty much a 'side note'", I would expect them to show up every time an operator returns a result, every time source code contains a literal, whenever an expression is evaluated, whenever an operator processes its operands, and whenever you assign to a variable. In fact, when aren't you doing those things?
- Re: '"constant"...always means a named value' - That's your opinion, not a universal truth.
- Whilst it would be rather peculiar to describe any word usage as being "a universal truth" -- given that in every field terminology is sometimes used inconsistently or incorrectly -- in ComputerScience and particularly in programming, a constant is an identifier associated with an immutable value. In short, a programming language "constant" means a named value. Look it up.
- Like I said, value is a general notion. Any "precision" would be per language and/or model. I have no problem using it as a general notion in colloquial-land, but if we apply rigor to it for a particular model or implementation, then we are faced with trade-offs because it's an overloaded concept, as already explained. You keep jumping between precision land and colloquial land. Both lands have different needs and tradeoffs. Please be clear about the context of the terms you use. -t
- Whether "value" is a general notion or not -- and I would argue it's a sufficiently rigorous "notion" to identify reliably -- "constant" means a named value. Use of the terms "anonymous variable" or "anonymous constant" are confusing and misleading. Their use will not add clarity, and will probably reduce it.
- Note that one approach around the anonymity problem is to use "object" instead of "variable" because "object" is probably vague enough to avoid collisions with colloquial-land. But since the vast majority of the kit work would likely be with variables, I'm hesitant to do that.
- A good thing you're hesitant -- use of the term "object" moves even further from conventional terminology, with no reasonable justification.
- OOP is as vague as "types" and "values", at least. How about "thing". Would you also bitch about "thing"?
- You don't have to use overloaded terms. You can use "value" to refer to values, and "representation" to refer to representations.
- "Values" don't objectively exist in a stable form, and "representation" is way too vague. Everything on this page is a "representation".
- Nonsense. Fire up a debugger if you want to see values and representations in their native environment. In fact, it's pretty difficult to make effective sense of a debugger unless you understand both, which is an excellent reason to include values and representations in any model of language TypeSystem behaviour.
- Like I've said many times, I don't define a language by what debuggers show nor actual implementation. Both can change and still run existing programs. Plus it's not clear if what is being shown in the debugger is clearly and truly "values" in a consistent way. Debuggers are an informal convenience. Why do you keep bringing up these sub-debates over and over? I'm tired of repeating my replies. If you mention debuggers ever again on type topics, I'll randomly kill 7 kittens with a rusty knife in your name.
- Why do you feel the need to threaten cats? Debuggers are a good indicator of language reality -- they provide a window into the language semantics -- which is ultimately what any good model should address. The debugger is clearly and truly showing "values", whether it's showing the contents of variables in a source debugger view, or the contents of registers and memory locations in a machine debugger view.
- I don't see anything new about debuggers here that changes my stance on them. My prior replies on the subject still stand. If you disagree, so be it. And I don't dispute that output shows "values", or at least agree they closely reflect "values" (per our two models) because both our models use parts called "values" that can shape output. That can happen because it's a vague/overloaded concept. A value-free model is also possible. Output is output; any meaning given to it is only in the human head or relative to a given model or arbitrary parsing rule. But I am not here to model human heads in terms of vocabulary determination. I'm here to forecast output based on input sources and NOT to label parts of specific models or implementations in a universal sense. I'll try to fit existing vocabulary notions, but only where it doesn't conflict with the other stated goals per their given ranking.
- If your model requires new vocabulary in order to make sense -- or if it omits common vocabulary in order to make a point -- then you're putting a burden of additional learning on your reader in order to apply your model. Do you really think the weak programmers to whom it is targeted are willing to learn new terminology, unique to your model, just to use your model?
- What is omitted? If I use "representation" instead, then it's the same problem in a different spot. Like I've said before, though it isn't sinking into your stubborn forgetful skull, I'd rather put the ugliness in the parts of the model that are used 5% of the time instead of the parts used 95% of the time. There is no free lunch, but we can minimize the damage caused by inevitable compromises. You should be able to anticipate such replies by now. You appear to not be listening. (The reason the compromise is inevitable is because in colloquial-land, people often consolidate similar concepts into single terms so that they don't have to track subtle differences in the language mapping portions of their mind. However, a more rigorous model needs more concrete parts and the overloading has to (or should) be cleaned up, creating a musical-chairs-like situation in which some of the colloquial vocabulary has to be excluded or used somewhat differently than usual.)
- What is omitted is "value". Your model is predicated on the notion that conventional explanations are vague, overlapping and overloaded, and so you choose unfamiliar terminology. Yet, you have no mapping of unfamiliar terminology to familiar terminology, so it appears you expect users of your model to examine their I/O and adopt your terminology. How is that going to happen, when they're taught programming using conventional terminology and there is no such mapping?
- I explained that already. "Value" is overloaded in colloquial-land. I don't want to use it in an overloaded sense because I want a cleaner model such that I have to ditch one or the other. You similarly had to toss one, and thus got the awkward "representation" out of it.
- "Representation" is fundamental to understanding how operators work in the three TypeSystemCategoriesInImperativeLanguages, so I think it's worth including. If you don't want to use "value" in an "overloaded sense", then clearly define the one sense you intend to use. The apparent "overloaded" use of "value" only comes from casual conflation of "literal" and "value", and that's trivially (and usefully) clarified. Avoiding the issue entirely by replacing "value" with unfamiliar terms does not improve clarity, and does nothing to reduce the supposed confusion over "value".
- Only in YOUR model is it "fundamental". You are again mistaking your pet model for the center of the universe. It's a bad habit of yours. I don't think you understand what "objective" means. And I'm not going to "clearly define" anything anymore in English alone because it appears to be a very difficult goal that nobody has perfected. Thus, I'm only going to define it WITHIN my model so that I don't have to ride the Center of the Universe English Pony, which everybody, including you, keeps falling off of, face first. It's as if there is LSD in types in that everybody who tries to write about them ends up sounding like Lewis Carroll on acid. Instead of Woodstock, it's Typestock. -t
- No, "representation" is fundamental to ComputerScience. My "model" simply reflects that unavoidable truth. As for the failings of English (or any human language), yes, that is a problem. It is for precisely that reason that mathematical notations are preferred.
- Without a clear definition of "representation", I cannot evaluate your claim. It's not used very often in programmer colloquial-land in my experience, and thus doesn't contribute to understanding via use of existing common notions/usage (if that was your goal).
- A "representation" is a string of bits (assuming a machine architecture that uses bits) that internally specifies a value. Understanding that a value is a relationship between a representation and a type is fundamental to fully understanding how (for example) low-level, built-in operators like +, -, etc., work.
- Vague claims about vague words.
- How does that counter my "claim"? We'll have to assume that you don't have a counter-argument, and are only commenting because you don't like the fact that I'm right. You do know how operators like +, -, etc., work, don't you?
- I cannot counter vague claims. Can you prove that zarps don't have flibbles? And define "work" for this context. Match I/O? I can do that without your smelly model.
- I don't know what you feel is vague. As for "work", how does an operator like '+' produce an integer value given two integer values?
- Per given model or implementation. The only objective part is the I/O (including source code). The in-between is either in the human head or in the particular model/implementation, and there are different ways to model/implement the same language I/O behavior. This means the in-between (insides) is not objective in a universal sense. Variables and literals are defined externally in terms of the source code syntax rules. "Value" is usually not, at least not in a rigorous sense. (Mere mention in a language manual is not necessarily the same as rigor.)
- Defining language behaviour only in terms of external I/O -- without reference to language semantics -- makes source code indecipherable, like a VoynichManuscript of symbols sans meaning or purpose. Source code is only meaningful by considering semantics, and when we consider semantics, values become fundamental and obvious.
- Semantics are in the head. I'm not modelling heads, at least not yours. If talking about source code, we can often use "result" (of expression/operation) in place of where you use "value". It doesn't really matter because "value" is overloaded in colloquial speak and I don't expect that to change in colloquial speak. Like I keep saying, for a formal model, one cannot keep all the overloading if one wants a clear model, and has to trim usage. Your awkward "representation" is an example of this. Most would call it a "value" if the term was available, not "representation". But you used up that term for another part (or grouping of parts). English is fuzzy and overloaded, and this causes headaches if we try to create models that attempt to use terms from colloquial-speak in clear and non-overloaded ways. That's the real problem here, NOT MY MODEL. I'm not stopping you or anybody from using variations of "value" in colloquial-speak. Just don't have a fit when we cannot pound a square peg into a round hole when we move to a formal model.
- For a formal model, it is sufficient to clearly define your terminology. In the case of overloaded terminology, it is sufficient to unambiguously indicate which use you are employing. As for language semantics, they are not "in the head". They are in the machine, particularly in terms of what values result from calculations and how state changes. The purpose served by a statement like "a = 3;" can be considered "in the head", but its semantics -- the literal 3 denotes an integer value which is assigned to variable 'a' -- is in the machine.
- "Denotes" is a vague word. In my model it could simply be illustrated; we don't have to worry about what damned English is used to describe it. And you cannot prove that is "in the machine" other than a particular model/implementation. You haven't proven that all possible models that produce matching output must match your English description.
- "Denotes" is vague? Really? Source code denotes a program. Is that vague? Obviously, I can trivially prove what is "in the machine" by using a debugger or examining the source code for the interpreter or compiler, but this isn't necessary: We have de-facto standard semantics that apply to all popular imperative programming languages -- no doubt owing to their origins in Algol -- that are referenced in their manuals and described in innumerable on-line and printed language references, tutorials, guides, forum discussions, and textbooks.
- Yes, "denotes" is vague; at least too vague to turn into a imperative algorithm. And a specific implementation is NOT proof of universality; it only proves that one model "works", not that it's universal. And in practice "value" is overloaded, as already described. I even caught YOU using it in an overloaded way. You denied it, but your weaseling is a fail. They should make a movie about you: "Dances with Words"
- "Value" is not overloaded in programming language semantics. The closest "value" comes to being overloaded is that we sometimes informally use "value" when we mean "literal", because a literal always denotes -- i.e., means, represents, and turns into -- a value. In a formal description, it is trivially easy to distinguish "value" from "literal" if necessary.
- The only "formal description" of values I see are model-specific, not externally observable traits of langs. (Although, it is often hard to tell whether its usage is intended to be formal or colloquial.)'
- It doesn't matter whether values are externally observable or not -- though literals are certainly externally observable, and literals always denote values -- but whether "value" is intended to be formal or colloquial, it's an inescapable consequence (a "trait", if you will) of ComputerScience and computer architecture. You inevitably must deal with values in order to answer questions like "what does a variable contain?", "what does an expression evaluate to?", "what does a function return?", "what does a literal represent?" It doesn't matter whether it's formal or colloquial, the answer to these questions is something that must be modelled in order for a model to be complete and useful, and given that we already have an answer to these questions -- value -- it seems reasonable to use it. Distinguishing "value" from "literal" -- which is the only contentious point, as far as I can tell -- is trivial.
- I cannot comment on all possible models, but I do agree that known models need SOMETHING to fulfill the role you talk about. The issue is what to name such a thing in the model, not the need for it itself. Because colloquial terms are overloaded, we have to make a compromise in terms of the colloquial vocab we borrow for a given model. There is no free lunch, there is no free lunch, there is no free lunch, we are still in Kansas, Dorothy. The real debate is about how to weigh the various tradeoff decisions. -t
- Call it a "value", and clearly distinguish it from "literal". That shouldn't be difficult.
- But I don't want something screwy like "representation". I don't want crud like that in my model. Fuck up your model, I don't care, just don't fuck mine.
- Did I write that you should call it a "representation"? And is your vulgarity necessary?
- "Value" is the best fit for the given part of my model. "Representation" is ugly, long, and confusing. If you can find a better fit, I'll consider it. And I am not going to complicate the model just to accommodate informal terms. That's usually not a good reason to add complexity.
- Again, did I write that you must call something a "representation"? If you wish to include a "representation" (or some equivalent and recognised term, such as "encoding") I certainly encourage it, but as I've stated several times, it's not absolutely necessary. You can effectively skip over it by defining a value as <value type="int">3983</value>, for example. That's perfectly reasonable, and it treats a value more as an abstraction (particularly, an expression that can't be evaluated any further) than in concrete terms, such as the pairing of a representation and a type.
- I want to be able to refer to the attribute by name in the descriptions, samples, and documentation. Naming and referencing the parts of a model/engine unambiguously is a key trait of good documentation. This is yet another reason why tuples can be difficult unless you have enough "rows" to be able to group them under column headings. You seem to want tradition to decide such issues instead of here-and-now model design and naming.
- What does this threadlet have to do with tradition, column headings (?!), or naming and referencing? Weren't we discussing whether to include a value's representation or not?
- What?
- You wrote, "I do agree that known models need SOMETHING to fulfill the role you talk about. The issue is what to name such a thing in the model, not the need for it itself." I suggested you call it a "value" and clearly distinguish it from "literal", since those are the usual terms for that "SOMETHING". How did you wind up talking about tuples, tradition, column headings, or naming and referencing? Those are all things you've mentioned. I didn't mention them.
- I have described and explained multiple times the naming and parts design tradeoffs such entails and the reasons why I made the choices I did. A model can work just fine using hidden variables without a "value" middle-man.
- That simply means you've arbitrarily renamed "value" to your PrivateLanguage's "hidden variable".
- No, because I don't have any feature one-to-one equivalent to your "value".
- Really? Then what is it that functions return, variables contain, expressions evaluate to, and literals denote?
- That appears to vary per model, and that doesn't bother me. Objectively we ONLY have I/O (including source code as "input"). There are multiple different techniques to transform the I into the O, and the internal transformation parts can be different between models/approaches. That's just the way it is.
- I don't know what models are included in the set that "appears to vary per model", but I do know that popular imperative programming languages all have the same semantics within their given category, i.e., Category S, Category D1 and Category D2, as described on TypeSystemCategoriesInImperativeLanguages. That's precisely what enables programmers to easily move from one popular imperative programming language to another, with only some effort to learn differing syntax and library routines.
- You are presuming your favorite head model matches everybody else's head model. My experience with developers differs, as I have stated many times. Another AnecdoteImpasse. When I started using ColdFusion, a D2 language, I didn't notice any difference from D1 languages immediately. It was over time that subtle little things popped up when I started caring, and did explicit experiments to tease out its model. I'm more curious about such things than typical developers I encounter. They'll usually just spot-fix any type-related problem and move on to the next task. They usually focus more on learning the latest fad language and frameworks to get richer. Economically, that's probably smarter, but I don't always follow money. They treat programming as a junction (stop-over), not a career, which is probably wise given the history of the profession.
- This isn't about "head models" (whatever they are) but about correct and clear understanding of popular imperative programming language semantics, in which (for example) values are assigned to variables and expressions evaluate to values. I know of no popular imperative programming language where this is not the case.
- Yes, correct and clear is a lovely goal. Achieving it is the hard part. Most languages do NOT clearly define "value". I'm not going to complicate the key part of my model to cater to a relatively weak general notion. My balancing is done in a rational way based on my assumptions of typical developer WetWare and the relatively narrow goal of the model. If you wish to change my mind, you either have to provide more evidence that your view of their WetWare is more accurate (beyond personal anecdotes), or convince me that I am not "processing" my givens (assumptions) correctly (as stated) to tune the model trade-offs. Repeating your point of view over and over won't do it.
- Most language documentation doesn't bother to "clearly define 'value'" because it's consistent across programming languages and consistent with use in mathematics, engineering and science in general. However, you can't successfully teach the first week of an "introduction to programming" class -- which typically includes literals, expressions, and often variables -- without incorporating values. From that point onward, understanding of values is assumed.
- As a fuzzy notion, yes. And you appear to be back-pedalling. First it was allegedly clearly in the documentation, and now it's "just understood".
- I'm not clear what you feel I'm back-pedalling about. Perhaps it's because I've pointed out that "value" is certainly documented -- see http://docs.python.org/2/reference/datamodel.html#objects-values-and-types for example -- but not necessarily defined? Note how the Python reference makes use of the term value in a quite rigorous manner, but does not define it. A definition was apparently considered unnecessary. That doesn't make the notion "fuzzy", only that it's assumed to be understood.
- It's so clear that nobody needs to define it! Haaaaa ha ha haaaaa haaaa ha ha ha haaaaaaa!
- Haaah hah! Silly boy -- language manuals don't need to define it, because it's the same in every language. They don't need to define "bit" and "byte" and "integer" and "number" and "variable", either. Look in any introductory ComputerScience or computer architecture textbook for a definition of "value".
- LIAS, some of those are defined via specific language syntax, and some are general notions or idioms that we "have a feel for" from experience. That does NOT make them rigorous nor canonical. (Some programmers have no feel for bits and bytes, by the way, due to their "shortcut" education. It's good knowledge, but not necessary knowledge for some niches.)
- Values aren't "defined via specific language syntax", and in fact have nothing to do with syntax. Literals are defined via specific language syntax, and are therefore rigorously and unambiguously defined. Whether we're talking about literals or values, I know of no case where ambiguity is so significant as to inhibit clear understanding. You seem to be quibbling.
- You seem to be quibbling. What's so bad about allowing 2 kinds of models to exist? Values are not defined by syntax, but who knows where the hell they ARE defined. Planet Kolob? Your "know of no case" sentence is merely yet another unverified anecdote about field programmer head notions.
- Values are defined in any introductory ComputerScience or computer architecture textbook. Almost all language references assume you've read some of these, and have already learned to program and understand the terminology. As for "I know of no case", it's true. I know of no case where ambiguity is so significant as to inhibit clear understanding. That is my experience. If you wish, you can counter it by relating your experience, but please be specific -- don't simply write, "That's not my experience." I have no objection to allowing many models, but I have specific objections to your model as I've described elsewhere.
- That's a double-standard: I have to give names, addresses, and dates, but YOU don't? (And existing writing on "types" and "values" is vague and/or poor.)
- Extraordinary claims require extraordinary evidence. Your extraordinary claim is that you can model language I/O using your own semantics, rather than the conventional semantics upon which popular imperative language implementations are based. Furthermore, you claim there are no "conventional semantics." That's both bold and extraordinary. Therefore, the onus is on you to prove your claims.
- You've neither shown you use "conventional semantics" nor that such "conventional semantics" is measurable and consistent. And how is "extraordinary" measured? Do you have an Extraordatometer in your garage? You linguistic sluts sure like to sling language around.
- So if there are no measurable and consistent "conventional semantics", how can your model describe any more than a single language?
- It's more of a model-making kit. Kits will target a given set of conventions or common occurrences of usage, but don't necessarily limit you against adding to the parts or skipping parts. An RC airplane kit may include a motor, a propeller, a "raw" wooden body and wings, 3 landing wheels, and an RC control. It's designed around typical or stated usage. However, if somebody wants to make a boat or helicopter or a hybrid vehicle or a toddler-spanking-machine out of it, nothing stops them.
- {What do you mean by "it's more of a model-making kit"? You've described structures (using XML) that appear intended to be model-like, and appear to be intended to match language categories. If it were a model-making kit, I would expect it to consist of rules for mapping observations to structure generators, or some such.}
- I don't see such a matcher in your model. Note that when complete, experiments combined with TypeHandlingGrid can be used to match observations with implementations. A fancier version would take the grid as the input to map type matching, but that's probably overkill. The model/kit is to develop and exercise a conceptual framework, not create actual interpreters.
- That's because there isn't "such a matcher" in it. It's a description of popular imperative programming language semantics, not a "model-making kit". I'm afraid I don't understand the rest of your point from "Note that when complete ..." onward. Sorry.
- A poor description. By focusing on making a forecasting engine in imperative code, I don't have to rely nearly so heavily on English and unfamiliar academic notations. It uses and leverages what programmers have to know to get a paycheck.
Your response to Question 3 is incorrect, but rather than mark you down, I shall give you an opportunity to answer another supplementary question:
- By what standards is it incorrect?
- It was tautologous. Your answer would have meant that the literals are all examples of literals, which tells us nothing. The literals are all examples of values.
- So you say.
- What do you think literals represent?
- To what? Human heads? Interpreters?
- To interpreters. What do interpreters do with literals in the source code?
- You seem to be anthropomorphizing interpreters. The "represent" question makes no sense to me. And my "do" answer is the same as usual. Literals affect I/O, but in conjunction with other elements.
- What does the interpreter turn literals into?
- Since they don't necessarily have any direct or universally-defined I/O representation, that depends on the model/implementation's internal vocab.
- Except we have a standard vocabulary in SoftwareEngineering, ComputerScience, and programming that says a literal is a value.
- It's only (allegedly) standard to interpreter constructor people. Beyond that it is or is equivalent to a PrivateLanguage. You seem to be mistaking your personal life/concerns as the center of the universe again.
- Hardly. It's standard usage. See, for example, a page from the PHP manual at http://www.php.net/manual/en/language.types.float.php and note how frequently "value" is mentioned.
- For one, it doesn't necessarily conflict with my model's usage of "value"; and second "value" is overloaded such that a usage leaning toward meaning X does not preclude it from being used and accepted for meaning Y also. -t
- You claimed "value" was a PrivateLanguage. I showed you an example that demonstrates that it isn't. What does your response have to do with that?
- The way YOU are using "values", not the general use of "value".
- How does the way I am using "values" differ from the general use of "value"?
- Let me restate this all. In general use, "value" sometimes includes the "type indicator" and sometimes excludes it (and sometimes is ambiguous about such inclusion). Thus, a model that excludes it and a model that includes it are both NOT "clearly wrong" when compared to colloquial use. -t
- In dynamically-typed languages, a value always includes a type reference, though in some languages the reference is implicit and always "string". That's why dynamically-typed languages are sometimes called "value typed" languages.
- "Always includes"? So you claim. Note that if every value/representation is a "string", there's no reason to explicitly include that in the model. It's like a light switch that only has "On" and no "Off". You might as well rid the switch altogether.
- Yes, that's why I wrote, "in some languages the reference is implicit and always 'string'". However, every operator in the language that manipulates a value's representation knows every value is a string, so there is an implicit type reference.
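A minimal Python sketch of that, assuming a hypothetical everything-is-a-string language:

  # Every operator "knows" its operands are strings, i.e., there is an
  # implicit string type reference in each operator's definition.
  def numeric_add(a, b):        # "+" parses its string operands on demand
      return str(float(a) + float(b))

  def concat(a, b):             # concatenation treats them purely as characters
      return a + b

  print(numeric_add("2", "3"))  # "5.0" -- parsed, added, re-stringified
  print(concat("2", "3"))       # "23"  -- handled purely as characters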
- The "typeness" is then in human heads, not in the model/notation.
- The "typeness" is in every bit of code that accesses a value's representation.
- If you want to play fast and loose with English.
- Not at all. Operators that work on values of "string" type have to manipulate characters (by the definition of "string"), which is characteristic of a "string" type. String types define character-oriented operations, and a type is a set of values (in this case, the set of character sequences) and associated operators (in this case, those that perform character-oriented operations on character sequences).
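Sketched in Python (the names are illustrative, not from any standard):

  # A type as a set of values plus associated operators.
  string_type = {
      "member": lambda v: isinstance(v, str),   # the set: character sequences
      "ops": {                                  # character-oriented operators
          "length": lambda s: len(s),
          "concat": lambda a, b: a + b,
          "upper":  lambda s: s.upper(),
      },
  }

  print(string_type["ops"]["concat"]("foo", "bar"))   # foobar
  print(string_type["member"](42))                    # False -- not in the set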
Question 3b: A literal is a character sequence that represents a __________.
(a) Value (b) Variable (c) Type (d) Tag (e) Anonymous variable (f) None of the above
- a and c (if delimiters are included as part of the "literal".)
For extra marks, an essay question: Can you explain why you included "(d) Type" in your answer to Question 2, and "(b) Type" in your answer to Question 4?
By some accounts, they contain a "type" indicator (tag).
Good. So for every answer, you've indicated that there are values. Yes?
Yes. I just model them separate or distinct from the "type" indicator, and this is not necessarily different from common usage among developers.
But your answer to question 4 indicated that type and value are returned by foo(). How would they be returned separately?
I model them as a data-structure-like element with 3 parts: "name", "tag", and "value".
Returned values have names? What languages have values with names in their function return values?
In my model, they have internal names (not accessible to the programmer, at least under non-debug modes). I've seen interpreters that do such, by the way. One can even see the dummy/hidden internal names in debugging modes/commands, which is handy because it provides an addressable reference. They may use, say, a dollar-sign prefix, because that's not valid in a variable name in the regular language, so there's never an accidental overlap. It's a handy modelling and model-analysis technique. Most interpreters probably avoid that approach because it's a lot of overhead, but interpreters are designed for machine efficiency, not for human analysis and modelling of the internals.
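To make that concrete, here's a sketch of the two cases in the model's XML (attribute names follow the three parts named above):
 <var name="a" tag="Number" value="123">     <!-- ordinary, programmer-visible variable -->
 <var name="$t1" tag="String" value="moof">  <!-- hidden intermediate; the "$" prefix can't collide with user names -->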
Hidden names! I thought you were defining a model, not an implementation. Surely something that's hidden has no purpose in a model.
Of course models have hidden/internal parts. And "model" and "implementation" are often interchangeable. For example, God could implement planet movement with hidden epicycles, and we'd never know the difference if done well. We could make an entire runnable interpreter using the tag model, but that would be overkill for the stated goals/purpose.
Models don't have hidden parts, otherwise we couldn't see them in the model. A model may represent parts that are hidden in the real world, but they must be exposed in the model or they're not in the model.
Perhaps there's confusion over the scope of "hidden". I meant hidden to the language user, not the model user. "Internal" is probably a better word.
Can you claim that dynamically-typed languages in general have hidden names associated with return values, or is that an implementation detail in some languages? (Personally, I've never seen a named return value in any language, but I suppose there's always a first time...)
"Have"? It's a model. It's a UsefulLie. Whether such actually "exists" in "reality" (however defined in software-land) is another question that may likely not matter for stated purposes/goals. Remember my epicycle analogy? If epicycles predict planet movements and have other nice properties, then using epicycles for such is not a problem even they may not mirror the underlying "guts" of reality. Watches with calendars don't use models of gravity/momentum either, I would note. The watches use a "fake" model and nobody complains (except maybe for anal drunk grouchy astronomers). I expected you would have anticipates this response from me by now.
A model serves a predictive or illustrative purpose by mapping parts of the model to parts of the real world. What part of the "real world" -- and by that I mean real languages -- are illustrated by "hidden names" in function return values?
Re: "A model serves a predictive or illustrative purpose by mapping parts of the model to parts of the real world" -- Bullshit. It's not required of ALL parts. The watch example illustrates this. (We often have to limit/define what aspect of reality we are modelling, however.)
A model that doesn't map parts of the model to the real world isn't a model. It's a toy, or something else. However, it's certainly reasonable for there to be internal parts of the model that are not directly mapped to the real world, but that are indirectly mapped to the real world. Gears in an orrery come to mind -- there are no gears in space, but they allow the model planetary orbits to be mapped to those in the real world. Thus, the model's gears are necessary; they make the model work. I fail to see how "hidden names" in function return values are necessary -- or even useful -- to make the model work.
I don't know how to explain their utility to you. I'm at a loss to communicate that information to you at this point. Needless to say, the model forecasts accurately with the existence of hidden names. Thus, their worst sin is that they add unnecessary hidden parts (if your claim is accurate), NOT that they break the model's forecasting ability. Thus, "unnecessary gears, but not gears that ruin the model's goal".
So, to a model that you've alleged all along is simpler than the conventional explanations -- indeed, that is its main (sole?) reason for existing, it appears -- you add arbitrary and unnecessary complexity? Why not add a fishing reel, a pair of sunglasses and a remote control to your model as well? They'd be just as useful as your function return value "hidden names".
YOU claim it's unnecessary. I don't see that myself.
Please demonstrate its utility, then.
I'm working on that in TopsTagModelTwo.
Re: "because no operation on R [variable] needs to make reference to any T [type]" -- what is getType($x) doing in Php then in your model? -t
{Looking at the type of the value of course.}
Same thing. Just because one has to open an extra door to get to something does not mean it's not being used/referenced. Existence of middle-man doesn't cancel "reference".
{Doesn't mean it is either.}
The getType($x) invocation certainly doesn't need to make reference to any supposed component of variable $x, nor does it need to indirectly find out the supposed type of the value in $x via $x. Given an invocation like getType($x), the interpreter first retrieves the value from $x and then calls getType() with that value passed in as an argument. The getType() function doesn't know whether that argument came from a single variable, an expression, or some other function invocation. All it receives is a value, and it retrieves and returns the name of that value's type.
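A quick PHP illustration of that point (output shown in comments):
 $x = 123;
 echo gettype($x);            // "integer" -- value fetched from a variable
 echo gettype($x * 2);        // "integer" -- value produced by an expression
 echo gettype(strlen("ab"));  // "integer" -- value returned by a function call
 // In each case gettype() receives only a value; it cannot tell (and does
 // not care) whether that value came from a variable or somewhere else.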
That "behavior" is specific to YOUR model, and not a universal truth (based on experimenting with I/O).
It's behaviour specific to MY model, and the implementation of every popular imperative programming language.
For the millionth time, mirroring actual implementation is low on my priority list, per stated goals and rationale. I'm not going to re-re-re-re-argue that fucking point yet again. Arrrrrrg!!!!!
If your model is genuinely simpler than actual implementations, your priorities might have some merit. As it stands, there's no evidence that your model is simpler. Furthermore, because it deviates from actual implementations, you should provide a logical proof that it's equivalent to implementations. Otherwise, how do we know your model is valid?
It's simpler for the heavily-used parts of my model. The equivalent XML is fewer characters, if you want an objective metric. It's common practice and economically rational to simplify the most commonly used features, even if that may complicate some less-commonly-used usages to some degree.
But your explanations are consistently more convoluted, due to the odd conflation of values and variables. Look at TopsTagModelTwo. What's all that code for? For the parts you appear to be modelling, they're more complex than some interpreter internals I've written.
No, they are not. The non-nested XML representations of variables in it objectively have fewer levels and fewer characters, and those are the most common structure(s) being used for the model/kit. I'll agree the explanation of "intermediate values" (or whatever one calls them) would be more complicated, but that's NOT where the majority of thinking and sub-modelling effort and eyeball work will be. I see no reason to complicate 95% in order to simplify 5% here.
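Side by side, as a sketch (the nested variant is the one that appears later in this discussion; the levels and character counts are easy to verify):
 <var name="foo" tag="Number" value="99.95">                 <!-- tag model: one level -->
 <var name="foo"><value type="Number" repr="99.95"/></var>   <!-- nested alternative: two levels, more characters -->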
The non-nested XML representation has "fewer levels" only because you've chosen to use XML in the first place. That doesn't mean it's conceptually simpler. Your "intermediate" values are indeed more complicated, but as they'll need to show up almost everywhere in even trivial programming, I'm not clear how you justify the claim that it's "NOT where the majority of thinking and sub-modelling effort and eyeball work will be."
XML is what the model user works with. They don't work with whatever notions are in your particular head. If you wish to select a better data-structure representation technique (for the purpose at hand), be my guest. And intermediate "values" will NOT show up "everywhere". Your statement is false, or at least misleading. The structure is optimized for this model, not your dinner party or whatever. Embedded "values" are UNcommon in this model. (They are intentionally de-embedded to make them easier to analyze and reference.) If you don't "get" the model, I don't know what else to do at this point. Let the reader choose which model they prefer; you are not the Model Police.
XML isn't a data structure. It's a markup language. The data structure in your model is a tuple. There are better notations for representing a tuple than XML.
That are readily familiar? Or do they have to spend $30,000 to take your courses? (Note I called it a "data structure representation technique", not "data structure".)
Of course they're readily familiar. One of them is used above and you seemingly understood it with no apparent difficulty. I don't know any notation that requires $30,000 worth of courses to understand. A notation is merely a system of symbols used to represent something. The "something" might be difficult to understand -- quantum physics, for example -- and might benefit from $30,000 worth of education, but a notation is trivial.
Your A=B(C,D)-like syntax doesn't have "slots" for actual values/content of the model snippets and "run through" examples. If you added slots, it would look enough like XML that one might as well use XML to avoid the confusion of a PrivateLanguage. It doesn't offer anything objectively new or better over XML. I wish to leverage existing knowledge and representation tools to make the model easier to digest for typical developers.
{What do you think the A, B, C, and D are? Those are the "slots". I'm surprised you even had to ask. The notation appears in early grade school, and has been in use for hundreds of years.}
Not for data-structure-like things. They are not "variables" in the math sense, but "slots" for data, and are used differently. Anyhow, keyword-based structures are more descriptive than positional for "thin" tuples. In examples and pseudo-code, it's easier (more familiar) to reference the parts of XML than positional tuples. Positional only has a documentation advantage when it grows into a table size/shape and we wish to compare "rows". I don't want to invent and describe a new "data description" notation if it gains no notable advantage (nor use an obscure existing one).
Hmmmm, I suppose for bigger examples we could use tables:
 varName  typeTag  value
 -------  -------  ------------------------------------
 foo      String   My Pet Goat
 bar      Number   99.95
 isOn     Boolean  False
 book     String   Types for Complete Brain-Dead Morons
{They've been used for data-structures for as long as we've had data-structures. In addition, they are variables in the mathematical sense (rather than variables in the computer science sense). (And the fact that you recognized them as variables even though I didn't tell you indicates that you knew they were variables, in spite of your protests.) Usually one would use the name of the variable to reference it. E.g., we'd use "A" to reference A. I also find it telling that you use the excuse "I don't want to invent and describe a new 'data description' notation..." as an argument for inventing a new data description notation in favor of using an established one.}
What's "established"? You mean academia, or the real world?
The real world. Simple algebraic notations -- like that used above -- have been taught to every school-age child since the 1800s. XML hasn't.
It's used to represent formulas, NOT data-structures/containers. If you don't like XML, I don't care. Whine all you want about it and cry like a little girl; I don't fucking care.
- {Hmm. I guess http://xlinux.nist.gov/dads//HTML/stack.html is just an illusion.}
- That kind of notation is rarely used in the work-place. You need to get out more.
- It's more descriptive to have fuller names or abbreviations instead of letters anyhow. School math notation is outdated. I'll agree it makes for less writing for formula transformations, but ideal notation for one purpose is not necessarily the ideal for another.
- {Still, it's being used to represent data-structures/containers, contrary to what you said.}
- I didn't say that. Formula-style is not likely a typical programmer's first choice to represent data-structures/containers. If this is going to turn into yet another anecdote fight, then we should end this here.
- {Your exact words are "It's [algebraic notations] used to represent formulas, NOT data-structures/containers." So you did say it, and I provided proof that you are wrong. BTW, I'm not sure why you are worried about anecdotes, I'm not using them.}
- I stand corrected. I should have said "typically used to", but my inner CYA Lawyer fell asleep. My apologies. However, it doesn't change my general argument that it's awkward and unfamiliar for intended use.
Really? What does <var name="..."><value type="..." repr="..."/></var> show that R=(V), V=(P,T) doesn't?
- Shouldn't it be R=(N,V)? Where's the name?
- {No, not all variables have names, and many variables have more than one name. There's nothing to gain, and a lot to lose by trying to force a unique name on each variable.}
- Indeed. Variable names don't participate in the model, so there's no utility in illustrating them.
- How does a model user keep multiple instances straight?
- However he likes.
- For consistency, better examples, and a faster start, I'm including such explicitly in my model. In general, you seem to be lackadaisical with model part identification. If you want clear writing, all model parts and all instances should usually be uniquely and clearly ID'd.
- The part in question (variables) is labelled unambiguously as 'R'. Specific 'R's -- whether they're called p, q, i, MyVar, agentSizeParameterClosureOptionSelection, blah, whatever -- are irrelevant detail, and whether variables are nameable or not in some language is a feature of that language, it's not relevant in the model.
- Because your "model" is poorly documented and skips such. And that's an awkward way to represent "structures", instances, and state change when one actually wants to document properly. Commercial authors rarely use such notation in the better selling programming books. Academic sources often do it out of historical habit, but I ignore those writers detached from the real world (as style guides).
- An introductory textbook intended for programming beginners could explain the model in more detail with coloured diagrams and the like. I'm sure that's not needed here on WardsWiki, for the same reasons it's not needed (and would even be distracting) in academic sources.
- The next best thing is XML, not formula-style notation. Anyhow, I don't want to argue that anymore. I'm using XML, period!
- {That's your choice of course, just don't be surprised if we don't turn out in droves to follow you. After all, you can't seem to articulate any reason to do so.}
- I did articulate good reasons: familiarity to the target audience, better "variable" (slot) naming (not just one letter), and closer visual association between the plugged-in values (content) and the name of the slot. I'd bet money if I could that most programmers would prefer XML for this purpose over your academic-influenced notation. You've made a crappy case for your alternative. It sucks, admit it.
- Algebraic notation is simple, and familiar to every high-school graduate. XML is awkward, verbose, difficult to read, and unfamiliar to many developers. It's generally a poor choice even for the purpose for which it is intended -- data exchange. As a result, your model will be ignored and forgotten the moment this and related threads stop being active on RecentChanges.
- It's rarely used for purposes similar to what we are modelling, except in the post-graduate world. I've never seen it used to represent tuples or tuple-like things in my schooling, at least not on an exam. Plus, we cannot assume programmers are CS graduates. The other claims are addressed below. I would estimate that 90 percent of existing programmers who use scripting/dynamic languages have some familiarity with XML or HTML, and about half use one or the other fairly regularly.
- You're presuming that a markup language used for data exchange, and almost never to illustrate data structures, is going to be as understandable as the notation usually used to describe data structures.
- Data exchange and data structures are fairly close cousins. And there's no better alternative, based on the stated goals and stated assumptions about the target audience. I weigh the tradeoffs. Your attitude seems to be, "If they don't go to my expensive classes to learn the (alleged) right representation for data structures, then f&ck them!"
- Expensive classes? You mean high school? It's free here; I don't know about where you are.
- They didn't cover tuples in my high school (although that was a long time ago, but I've never seen it in my older kid's homework). Matrices are the closest thing I remember. And tuples were barely touched in my CS courses.
- Bad education is never a good excuse.
- I'm not here to spank the world, just to offer a tool that existing humans can use here and now.
- Whether you spank or not, how many existing humans are using your tool here and now?
- You've asked that like 3 times already.
- My question is rhetorical, but pointed: Your model seems to have gathered zero support so far. Why do you think that is?
- It took a while for my "OOP Oversold" writings to gain traction. Eventually they made it into an O'Reilly footnote and were cited all over the web. It may be a marathon. It may be all moot. Maybe hundreds of would-be Christopher Columbuses sank and died before one got lucky. I don't know which kind the model is at this point.
- Further, if one wanted to dig deeper into StepwiseRefinement and actually implement (perhaps on paper) the state-change APIs of the kit, there are existing libraries to access and change XML elements in many common languages. One cannot say the same about your formula-style notation.
- In the vast majority of languages, updating a tuple represented using native features of the language is far easier than using an XML API.
- This is not clear to me. Please avoid "represents" if possible, because that's a non-dissect-able human-head notion. And your metric for "easier" is questionable. Easy to write, yes. Easier to read/grok for the target audience, probably not. -t
- In short, something like 'class variable {string value};' so that you can create examples like 'variable v; v.value="blah";' is undoubtedly easier to understand than library-specific XML parser/generator code.
- I want something ASCII-able (or unicode-able) so that language-specific representation doesn't become an issue.
- What do you mean?
- The rules for delimiters, escape characters, etc. are fairly well established for XML because it's intended to be machine-readable and writable, unlike formula notation. If one wanted to extend the model all the way down to the bottom of the StepwiseRefinement ladder (either mentally or code-wise), they could use existing XML interface libraries, or at least know the text-handling rules.
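For example, a minimal sketch using PHP's bundled SimpleXML extension (the attribute names are the model's, per above):
 // Sketch: mutating a model variable's parts via an off-the-shelf XML library.
 $v = new SimpleXMLElement('<var name="foo" tag="Number" value="99.95"/>');
 $v['value'] = '123';     // assignment: replace the representation part
 $v['tag']   = 'String';  // ...and the type indicator
 echo $v->asXML();        // prints the updated element (plus an XML declaration)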
For one, nobody knows what the second one is without explanation. What does the comma mean, for example? That's not common notation. The XML is quicker to recognize and relate to for most programmers because it's part of their work. Second, XML allows one to show "values" (attribute content) next to the name. That's easier to read than values being assigned to letters, because one doesn't have to do a back-and-forth mental plug-in step of the value to the letters. (Although WetWare may vary.) The formula style is better if one is transforming the formula, which is why it's used in school. However, we are not transforming here, at least not as part of normal usage of the model/kit.
The comma is just a comma, same as in English. It delimits items and/or clauses. What makes you think "XML is quicker to recognize and relate to for most programmers because it's part of their work"? We're talking about programmers who have difficulty with the PHP manual and its ilk. They're probably having just as much difficulty with XML.
- They are far more likely to encounter and/or work with XML or similar markup in their work than your notation. Your argument seems to be that since nothing is perfect, a D- option is just as good as a B- option.
- If XML is a D- option, why would you use it at all?
- You didn't make a case for that, especially in terms of alternatives.
- I didn't make a case for what?
- Read it again and take a guess.
- Is your rudeness necessary? It is not clear what "that" refers to in "You didn't make a case for that, ...".
- I did it to see how you were interpreting it, being that it seemed a clear reference to me (no reasonable alternative). It wasn't intended as rudeness. I'm just probing your phrase analysis techniques. Like I said elsewhere, your communication style is almost alien-like to me, and I grew curious here to examine ET.
- {And yet, even after he's made it clear that he can't find any reasonable referent for "that", you still won't tell us what you meant by "that".}
- That XML deserves a D- for such modelling usage.
- It most certainly does. It's awkward, verbose, and to many developers, completely unfamiliar.
- "Awkward" is a matter of taste. And it would be more familiar than your suggestion to target audience. Yes, it is more verbose than your suggestion, but sometimes verbosity is better for illustration/teaching.
- Do you have any evidence that XML is effective for teaching or illustration? Or is that merely your tacit assumption?
- Nope, and neither does your preference.
- {Actually it does. It's what's actually used by just about every educator.}
- That's your anecdotal claim.
- {It's not an anecdote.}
- Then where is the OfficialCertifiedDoubleBlindPeerReviewedPublishedStudy of tuple usage in education?
- E.g.: http://www.sosmath.com/CBB/viewtopic.php?t=28560
- That's one sample point from a dedicated math blog! That's light-years away from a decent study on student exposure rates. This kind of discussion is exasperating. I'm tempted to say nasty things.
- {Fine. Have a computer science tuple. http://www.cplusplus.com/reference/tuple }
- I'm not going to assume the reader will read that.
- {And your point is?}
- I use a notation they are already familiar with instead of forcing them to learn something obscure, to save them time and money.
- If algebra and simple programming language constructs are obscure, you've got more problems with your target audience than an XML-based model is ever going to solve.
- Algebra is not, but algebra with tuples is relatively obscure. Plus, it's awkward for illustrating data containers in my opinion. Yes, I know, you disagree.
- {If algebra isn't obscure, then variables and values are familiar concepts. After all, algebra uses them extensively.}
- I believe it to be awkward for the target purpose for multiple reasons already given. I'm going to use what I feel is best, and you use what you feel is best. Neither side is going to convince the other, since we are using different assumption "models" of the heads of typical developers. I'm tired of arguing about this notation issue. Unless I see a NEW argument for formula-style, I'm going to ignore it. -t
- Yes, that is one sample point. That's why I put "e.g." in front of it. That's one more point in favour of tuple usage in education than you've demonstrated for XML in teaching or illustration.
- 2 down, 40,000,000 to go. Since we are playing the eye-dropper evidence game. FireFox's spell checker doesn't recognize "tuple", which suggests it's not common enough.
- FireFox's spell checker is notoriously bad. It doesn't recognise "etc.", "stateful" or "math" either, so what words aren't in its obviously-limited dictionary speaks volumes about FireFox, but it's no comment on anything else.
- Mine doesn't choke on "math" and "etc.". Maybe you accidentally added them to the Exclude list.
- Nope, but who knows what dire quirks lurk in FireFox? However, it is true that "tuple" is a mathematical and ComputerScience term, and given that your target audience is apparently so weak on programming as to be confused by the foundation concepts of values, variables and types, perhaps it's best to use a physical model consisting of toy blocks and buckets -- which I have done successfully with first-year programming students -- as even XML is likely to confuse more than clarify.
- You are full of squishy brown stuff. Those words ARE vague. Languages follow certain patterns, and people learn what works and what doesn't via experience. But that's not the same as rigor. Experience can often substitute for rigor. For example, one developer learned to append "+0" to the end of any expression meant to use plus for adding instead of concatenation, for a given language, to "force" the math interpretation. He/she didn't know the rhyme or reason; they just knew it worked from experience. (One can potentially "see" the reason with a series of controlled experiments, I would note.)
- You appear to be assuming that because there are some developers who don't understand values, variables and types, that those terms must be vague. That is not the case. Some misunderstanding is not evidence that misunderstanding is inevitable, or that the misunderstood terms cannot easily be understood. I've encountered a few similar developers over the years -- mostly programmers who have not been formally trained and who have come from other fields. Invariably, their understanding and ability increases dramatically when I explain the distinction between values, variables and types -- just as I've done on TypeSystemCategoriesInImperativeLanguages.
- Well, as usually described, they are vague to me, and vague to most of the developers I talk to and work with. I go by my experience and you go by yours. Anecdote versus anecdote; claimed experience with developers versus claimed experience with developers. I've found the tag model useful in a predictive sense; some others may also. Just because you don't like it doesn't mean others won't either. Different people may prefer different models, notations, etc. Everybody's head is different and there are different ways to skin the mental model cat. You are not a good test subject because you've been too deep in the guts of a particular model style for too long; like the old mainframe guy trying to use Python for the first time. Your presentation style may indeed work on SOME students/developers. But to me it's alien jabberwocky.
- I suspect it's "alien jabberwocky" to you because you've had neither formal training nor practical experience with computer architecture. Have you ever done a course that covered what CPUs do and how they work?
- Yes I have, and they were usually taught in a very straightforward "mechanical" step-by-step manner that was clear. You had bits as I/O, and clearly-described imperative rules to process and transform those bits. Or "wires" with timed steps that can be dissected and shown in a linear time-line, like a movie one could shift forward and backward by dragging the slider. X-ray the "machine" and make it into a movie. (It was not actual movies, but it was presented in a way that would be easy to translate into one.) You could learn from their teaching techniques. Imperative, clear parts, clear steps, clear rules, clear interactions (or non-interactions), and clear sequences. -t
- I'm not talking about the pedagogy used, but about the knowledge that would have been gained. Surely you would have been exposed to the fundamental notions of values (data), variables (memory addresses) and types (data types), and how these are concrete elements of the machine, and how programming languages that run on the machine abstract them?
- Each language defines most of them in its own way, and some not at all. And there is a clear pattern to two different kinds of "types": an explicit tag-like element, and "value analysis/parsing" for lack of a better name. This dichotomy exists, it's just that the entrenched stubbornites won't acknowledge it.
- No, there's certainly acknowledgement about language categories. You're correct that there are different categories of TypeSystems. We have StaticallyTyped and DynamicallyTyped languages, and DynamicallyTyped languages can be divided into two subcategories. That is all covered under TypeSystemCategoriesInImperativeLanguages. However, you seem confused about the basis for these categories. In DynamicallyTyped languages, one subcategory associates types with values. The other subcategory treats all values as strings, leaving operators to interpret strings as required. That's the whole of it, and it's rather elementary stuff.
- No, the granularity needs to be at a per-operator basis. Per-language is not sufficient to represent the differences. Witness the difference between Php's is_bool and is_numeric. One looks only at the type indicator and the other parses the value/representation. (It's a dumb design, but that's another topic. I'm here to predict, not fix languages.)
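- To illustrate the per-operator split concretely in PHP (var_dump output shown in comments):
 $x = "123";                  // type indicator: string; representation: "123"
 var_dump(is_numeric($x));    // bool(true)  -- parses the representation
 var_dump(is_bool($x));       // bool(false) -- inspects only the type indicator
 var_dump(is_numeric(true));  // bool(false) -- a boolean never parses as numeric
 var_dump(is_bool(true));     // bool(true)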
- That too is covered under TypeSystemCategoriesInImperativeLanguages and my description above, "one subcategory associates types with values. The other subcategory treats all values as strings, leaving operators to interpret strings as required", is sufficient to account for it.
- No, it's not "sufficient". For one it uses vague words. Second, it doesn't describe how to tell/test the difference.
- What's vague about it? What is it that you wish to test?
- (It helps that chips are physical things such that physical notions and words from the physical realm (their birth place) can be used to explain how they work. Types are unfortunately far more ethereal.) -t
- They're not so ethereal that they aren't implemented as concrete code in every language that implements types.
- That's why my model focuses on imperative algorithms performed on well-recognized data-structure representations. You fluttered off into goofy English and set-based definitions instead of sticking with imperative algorithms and concrete data representations.
- If TopsTagModelTwo is an illustration of your "imperative algorithms and concrete data representations", it's verbose, complicated, and difficult to verify its correctness. English and set-based definitions are simple and trivial to verify.
- It's only verbose because you didn't understand the descriptive version of the algorithms, such that I have to go down to a more concrete level of StepwiseRefinement. Others not as polluted by academic tradition may just pick it up faster. And one verifies the correctness by experimenting against actual I/O, which is what they'd have to do with your model also. You imply a magic shortcut for verification that does not exist. And the potential ability to do StepwiseRefinement all the way to an actual emulator (for sections) gives one an option to drill down incrementally if they are stuck. If somebody is stumped by a portion of yours, they are stuck; there is no StepwiseRefinement option; they just have to stare at the wall and drool, hoping it clicks eventually. StepwiseRefinement is a powerful dissection tool because you can always break parts open into yet smaller parts and see how those smaller parts work together to make up the larger part. Every black box has a key for it.
- Empirical verification is weak compared to logical verification.
- Bull! What Grand Rule is that? For one, how does one know if their givens are wrong if they only use logic? Plus, logical verification is not something many developers have a lot of practice at. Taking a course or two doesn't have sticking power for most people. I use conventions they use daily in their everyday work so that they don't have to remember back 10 years or so to college.
- Empirical verification is always subject to the possibility that the next result will disprove the hypothesis. Correct logical proofs are indubitable.
- PageAnchor mucked-381
- But actual implementation could deviate from your assumed logic model, so that you are still subject to that risk. And sometimes quirks stick around because of backward compatibility. For example, suppose a small company hires some interpreter builders to add a simple interpreter into their product for task automation. Each math operator is overloaded to also have a string operation. (Bad idea, but it's the fad of the moment.) "a/b" means remove sub-string b from a in its string form. The dull programmer assigned "/" doesn't check with the other operator implementers and uses soft polymorphism (parse-based analysis) to determine whether "a" and "b" are numeric or string. A couple of versions later, a user notices the inconsistency with the other math operators (which use the type indicator only), and the company then has to make a decision: keep the inconsistency and slower performance, or break backward compatibility. Since math or accounting is not the primary use of their product, they elect to keep the soft polymorphism in place for "/". Logic won't find that; only experimentation and empirical analysis will. -t
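- A sketch of that scenario's guts (all names hypothetical; PHP used only as pseudo-code for the hypothetical interpreter's internals, with operands as tag-model structures):
 // Consistent operators dispatch on the type indicator ("tag") only:
 function op_plus(array $a, array $b): array {
     if ($a['tag'] == 'Number' && $b['tag'] == 'Number')
         return ['tag' => 'Number', 'value' => $a['value'] + $b['value']];
     return ['tag' => 'String', 'value' => $a['value'] . $b['value']];
 }
 // The dull programmer's "/" uses soft polymorphism (parse-based analysis):
 function op_divide(array $a, array $b): array {
     if (is_numeric($a['value']) && is_numeric($b['value']))   // ignores the tags
         return ['tag' => 'Number', 'value' => $a['value'] / $b['value']];
     return ['tag' => 'String', 'value' => str_replace($b['value'], '', $a['value'])];
 }
 $a = ['tag' => 'String', 'value' => '8'];
 $b = ['tag' => 'String', 'value' => '2'];
 // op_plus($a, $b)   --> String "82" (tags say String, so concatenate)
 // op_divide($a, $b) --> Number 4   (parse says numeric, so divide)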
- I'm referring to logical proofs about models, not implementations.
- No guarantee your assumptions about the model are accurate. The above scenario is not unrealistic.
- If a model requires assumptions, it's a weak model. The above scenario is about an implementation, not a model.
- No, it's your assumptions about implementation. Plus, I refuse to define languages by implementation for reasons given multiple times.
- I didn't mention implementation, you did. You wrote, "But actual implementation could deviate ..." etc., above.
- Let me back up. Your statement, "The above scenario is about an implementation, not a model", is false in the scenario. It started out (arguably) as an implementation bug, but was incorporated into the "model" to serve backward compatibility.
- You mean the above scenario is not about an implementation, but is about a model? What "implementation bug" have you now incorporated into your model?
- The scenario as written does not make clear whether it was a model mistake or an implementation mistake, but I don't see how it really matters. The point is that oddities and (seemingly) inconsistencies can often only be discovered by empirical examination. The documentation, one's interpretation of it, and/or your supposed model may not match actual language (interpreter) behavior. Road-maps are sometimes wrong. Going to the actual territory is the only way to know for sure.
- If you're concerned about particular language implementations as opposed to a model of language categories, then why have a model at all? Simply refer to the language -- it is its own "model".
- Huh? A programmer cannot realistically see the guts of an actual interpreter. A more digestible stand-in is often preferred. Further, the stand-in only has to model type issues, not every aspect of the language, snipping out distractions and time-consuming complexities.
- Where did I say anything about looking at "the guts of an actual interpreter"? As for a "more digestible stand-in is often preferred", do you really think a struggling programmer wants to learn another language and associated model in order to understand the programming language he/she is already having difficulty with?
- Without an explicit model, one has to juggle fuzzy notions, which is not ideal. And what's this about "having difficulty with"? As explained many times, one can handle most dynamic languages in a "good enough" way using fuzzy notions and experience-based pattern recognition. And my model is conceptually easy; it's just that it confuses YOU because you are over-exposed to another model. (Yes, that's a judgement call on my part, but so far both sides only have judgement calls about others' WetWare, and not "official" studies.)
- I suspect (and that's a judgement call on my part) you find your model "conceptually easy" because you're referencing what's in your head rather than what you've written. The description of your model is distributed over multiple pages without being definitively described anywhere. That, in and of itself, makes your model difficult to appreciate. See TopsTagModelTwoDiscussion.
Why fuss about that? Does XML really bother you that much? My judgement is that XML is better for this purpose. If you disagree, so be it. You are welcome to also show your ivory-tower-ized version of the XML (in a separate area) if you wish.
I have a problem with XML if it leads to bizarre notions like avoiding "nesting", with the result that variables and values are obscured and altered in your model such that they differ significantly -- yet pointlessly -- from popular imperative programming language semantics.
This "semantics" thing is in YOUR head and I am not trying to model your particular head. I've demonstrated multiple times that "value" is used in vague ways in practice. You even did so yourself when you suggested output displays "values", which lacks the "T" (type) part of your data structure representation. Quote from you: "I/O doesn't have values? What does output like 123 or "fish" represent if not a numeric value or a string value?" The "T" is nowhere to be found in the output; only the P comes through. (Actually, a better description would be that the output is a translation of the contents of values/variables through output interfaces/operators (such as Print statements) rather than a direct copy of their contents. In some cases it may be a direct copy of bytes, but that's not universal.)
{Correction. You've claimed multiple times that "value" is used in vague ways in practice. You've never actually demonstrated it. That also applies to "type", "have", "do", etc. (I should also note that you just used "value" to describe your view.)}
Indeed. Literals are the output -- hence the lack of a "T" part -- but literals represent values.
"Represents" happens in the head. The interpreter is done with stuff at that point such that it doesn't have to be in the model.
"Represents" is a physical aspect of interpreters and compilers. Each literal -- a sequence of characters -- is parsed, its type inferred, the literal is converted to a representation, and the combination of the representation and inferred type becomes a value. That's what a value is.
So you claim. I want a universal proof, not a model-specific claim. In colloquial-land, "value" is overloaded or ambiguous.
What do variables store? What do literals represent? What do expressions evaluate to? What do operators return?
These questions are not model-specific; they refer to elements of all popular imperative programming languages. How do you answer the questions without reference to values?
No they are not; at least not in a consistent way.
No, what are not? What does "they" refer to? My questions? The elements of popular imperative programming languages? Values? Variables, literals, expressions and/or operators?
But those are used/notion-ized in different ways by different speakers. There is no objective rigor except in a given semi-arbitrary model.
What does "those" refer to?
Those words.
This is a Wiki, so aside from the occasional link to a picture, it's all words. Which words were you referring to?
Values, variables, literals, expressions, operators, types, etc.
This brings us back to the point we were, several paragraphs up. These are the familiar elements of popular imperative programming languages, using terminology recognised by every beginning programmer. How can you avoid making reference to them? Alternatively, if you do avoid making reference to them, how do you ensure that the elements of your model are recognisable to beginning programmers?
Yes, but the vague and overlapping nature of those words requires various compromises, as described below. It's all about weighing tradeoffs. If I use "value" for part X of my model, I shouldn't also use it for part Y to avoid confusion, for example. You seem to obsess on narrow factors rather than back up and evaluate the whole forest.
Note that the descriptions at the top of TypeSystemCategoriesInImperativeLanguages employ those terms without "overlap", vagueness, or ambiguity.
In TypesAndAssociations, I took some of your text from the above and dissected it word for word. After a long discussion, you came to the following semi-conclusion:
- Quote: "There's an inevitable point where the inherent ambiguity of human language makes further precision possible only by being painfully verbose, and a choice has to be made between a short, simple, but potentially somewhat ambiguous description and a long, complex, precise description. The need for the latter is why academic writing is often described as dense and difficult."
That appears to be an admission that your write-up, as it is, is not rigorous. Now you appear to be back-tracking on that. As it is, it's not long and dense. (Difficult, yes, but all your writing is like that to me.) Words like "has", "in", "represents", "contains", "references", "associated", etc. can have multiple reasonable interpretations.
- Note it has been growing in size since the referenced text analysis such that it may indeed be approaching "dense and difficult". -t
- Actually, the section at the top has remained the same size since it was written, aside from the addition of some XML. The page is in three largely-separate sections. I added two sections since the first part was written, and both are ancillary to the first. The second section is about operator invocation. The third section is a more formal treatment of the first section.
I'm not "back-tracking" on that at all. Whilst the English descriptions unquestionably have no overlap, vagueness, or ambiguity assuming standard English interpretations, I do recognise the potential for misinterpretation that can result with any English description. Hence, in order to avoid misinterpretation of "has", "in", etc., I have provided a more formal treatment at the bottom of TypeSystemCategoriesInImperativeLanguages.
- It's not clear how one translates from the target language to those snippets/patterns, and you are redefining a language with yet another language, which simply stacks another turtle on the existing stack of turtles.
- No, it's simply a (semi-)formal description, using conventional terminology -- so its relationship to any "target language" is implicit and straightforward -- and some standard symbols to avoid the verbosity and potential misinterpretation of English language.
- I find it poorly written, either way.
- I'm curious how you would re-write it, using the conventional terminology but your preferred writing style.
- I already explained my approach multiple times: I'm not going to rely on English nor an academic style. I'm going to instead use imperative Algol-style pseudo-code that works on XML to model the target language, because that's what programmers know; they have to know it for their job. English has proven a poor tool for describing "types", so I give up on it until somebody shows how to do it well. No more reliance on words meant for flesh-land such as "has", or super-vague tautology-like terms such as "associated with". FUCK ENGLISH.
- That's why we use formal notations. XML and pseudocode can unquestionably be used as a substitute for formal notations, but it will be interesting to see if model users can avoid confusing the Algol-style notation of your model with the Algol-style notation of what you're modelling.
- Your use of "formal notations", if that's what it really is (I doubt it), is highly confusing. It's the kind of academic gobbledegook only a "professional student" would love; an academic gym rat, one might say.
- As I've said before, my "more formal descriptions" are formal notation where English is especially likely to be misunderstood, and English where formal notations are likely to be abstruse. Mixing formal notation with English is a common approach; note that ChrisDate's AnIntroductionToDatabaseSystems, and DateAndDarwen's writings (which are popular amongst serious programmers and IT professionals), use this style frequently as a compromise between pure mathematics and pure English. Please let me know what you find confusing, and I'll clarify it.
- Chris Date's writing tends to be difficult for many readers and is not a style I would emulate for the stated target audience. The best-selling book titles on databases (outside of mandated texts) generally don't emulate his style.
- Yet, ChrisDate's AnIntroductionToDatabaseSystems is the definitive textbook on relational databases, far beyond its use as a mandated text. There are some texts that rely more on pictures than formal notations, but these have little traction or credibility.
- And this "credibility" metric is defined by you? Lovely. I'm talking about sales volume. That's a better indicator of what regular programmers prefer.
- AnIntroductionToDatabaseSystems is almost certainly the best selling relational database text. Elmasri & Navathe's "Fundamentals of Database Systems" is probably its nearest competitor, and it too uses formal notations.
- Because they are a required textbook.
- That accounts for a relatively small percentage of sales of AnIntroductionToDatabaseSystems.
- I used to spend a lot of time in book-stores, and that claim does not fit my observation. There were usually only one or two copies of Date's book in bigger stores, and none in smaller ones. And the more popular titles, judged by quantity of books in stock and frequency spotted at different stores, generally don't copy Date's style. Authors/publishers tend to copy the style of the top sellers.
- Popular books aren't on the book shelves because they're sold. The books you see on the shelves are the ones that don't sell.
- The logic of that is suspect. You are stretching things. Shelf-space is a limited commodity, such that stores wish to keep mostly the best-selling titles on the primary shelves. Thus, poor-selling titles tend to get moved to the discount bin of "big-box" stores. Date's book drops drastically in value after a given version is no longer textbook-able. I got mine dirt cheap that way, something like $5.00. If it had big demand outside of education, then it would retain more of its value.
- Previous editions of any book are always heavily discounted, independent of how well they sell. As for limited shelf-space, a book store's goal is to sell out whilst maximising sales, so that no copies have to be returned to the publisher.
- No, they are not. A certain popular Dot.Net book went from around $60 to $35 on prior version, and to $20 on 2 prior versions old. Date dropped to $10 after first version and to $5 after two versions. -t
- That's because the current version of Date's book is considered much more valuable than out-of-date versions, whereas old .NET is pretty much the same as current .NET from a beginner's point of view.
- Dude, you are spinning this faster than Uranus going down a black hole. Other than an OODBMS-related chapter, I haven't seen any significant changes, and nobody cares about OODBMS's anymore anyhow.
- The section on the TransRelationalModel was new, but buyers don't generally do a diff on the chapters to see what's changed. They just buy the latest version because it's the latest version. Except for .NET, of course, because nobody cares.
- You are making up squishy smelly brown stuff.
- I'll assume from the absence of any coherent argument that you agree with me, but lack the courage to admit it.
- "Making up stuff" is incoherent? Making up stuff about Dot-Net popularity and book sales. Is that better?
- I'm making something up? Before the owner retired, one of my clients was a large bookstore. I have some insider knowledge of the field.
- As an occasional shopper of both database books and Microsoft tool books, especially older versions, so do I. I witnessed a sharp drop-off in older versions of Date's book compared with Microsoft developer and database tools. If your experience differs from mine, so be it. We are at another anecdote impasse. And customers do compare chapters between older versions if both are available for inspection. If the new one is $55 and the prior version is $35, then most shoppers would want to compare before making a choice.
- Being an "occasional shopper" is precisely the opposite of having "insider knowledge of the field". I.e., you have none.
- I saw the fucking prices with my own eyes! I'll believe my own goddam eyes before I'll believe your claims that you had lunch with Al Gore while he was inventing the Internet.
- I don't disagree that you saw some prices, but your interpretation of their meaning is naïve.
- Projection. [Removed some text. -t]
- Is your AdHominem necessary? You are aware, I presume, that using AdHominem attacks is an unusually ineffective debating strategy?
- I was venting. You really tick me off sometimes.
- Why? I'm just text on your screen.
- So was my insult.
- Writing "projection" when you lack a counter-argument is a particularly ineffective debating strategy.
- How can I counter-claim a claim of insider knowledge? Smash your head open with a rock and dissect your past experience neurons? With your permission, I'd indeed like to proceed with such an evidence track. If you truly had insider knowledge, then you perhaps could provide a reason why dot-net shoppers would treat book purchases different from Date's book rather than spout a flimsy elitist-sounding insult-like snippet. (I'm not claiming dot-net is great, but that's another topic anyhow.)
- I already explained that. As per the above: "That's because the current version of Date's book is considered much more valuable than out-of-date versions, whereas old .NET is pretty much the same as current .NET from a beginner's point of view." There's a cachet associated with owning the current version of "The Book" -- and a corresponding deprecation of old copies thereof -- that has no equivalent in .NET literature, where the entire genre is generally considered forgettable and interchangeable.
- What experiments or observations did you perform to measure this cachet? I see what books are on real-world developers' shelves and open on their desks. I don't see any particular love for Date's book. And again, other authors don't seem to be in a hurry to copy its presentation style. Publishers copied the "Dummies" style because it sold well, giving us the "Complete Idiots" clones. Why didn't they clone Date? (Outside of textbooks.)
- Elmasri & Navathe's "Fundamentals of Database Systems" is a deliberate copy of Date's style. Obviously the "Dummies" and "Complete Idiots" books are an entirely different category. No one above a certain level of competency gives them more than a derisive glance, but Date's "An Introduction to Database Systems" is a standard at that level and above. I don't know any serious developer who doesn't have a copy, even if they don't work in (or even with) databases.
- The "Idiots" brand did sell well, and I've never heard of Elmasri. Regarding "serious", perhaps the orgs I work for hire and pay "non-serious" programmers, for whatever reason. Non-serious programmers still may want type models that are approachable for THEM. Some don't want to wear racing gloves when they drive to work.
- I'm curious: Do you work for IT organisations (e.g., software houses, IT service providers, large ISPs, etc.) or for organisations that use IT? I wonder if there are fundamental differences in hiring practices between the two.
- The 2nd. I thought we've been over this territory already somewhere. Vuja De. I would point out that the 2nd is probably the most common in terms of IT employment quantity.
Part of a possible solution is to represent the stateful parts in commonly-used data-containing and/or data-representation languages, such as XML or relational tables (as established thru SQL). Then "has", "in", "contains", etc. are more concrete. We can then define terms like "has" in terms of that representation system such that NOW they would mean something precise and commonly inspectable instead of some fuzzy non-sharable hidden head notion.
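For instance (a sketch of the idea, using the model's attribute names): "variable x contains the value 123" would be defined to mean exactly the following, and nothing more:
 <var name="x" tag="Number" value="123">  <!-- "contains" = the value attribute of the var element named "x" -->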
Considering the only stateful part of your model is the change of value in a variable, use of XML in some imperative model is at best overkill, and at worst outright misleading. TopsTagModelTwo appears to be an excellent (albeit incomplete) demonstration of exactly how it can go wrong.
I'll deal with any alleged flaws in that topic, when they are pointed out.
Fair enough. Please feel free to correct the flaws.
Please put any comments in TopsTagModelTwoDiscussion.
 Address  Value
 -------  -----
 1200     F3F4
 1204     0000
 1208     A9A9
 120C     C604
 1210     C3B2
It all boils down to that at execution time.
Yeah, but that's not what a programmer typically sees and cares about. They see I/O, including source code.
I wonder what you are trying to say. Anyway the reason I put in the indisputable demonstration of the existence of what we know as a value is that I cannot for the life of me see what you wish to achieve with this discussion. I have only ever had to think about values when using the pointers so commonly used in CeeLanguage. Is this a value or is it a pointer to a value? And that question is nearly always resolved instantly.
- Static languages tend to be different from many dynamic languages because they don't "keep" the type tag around. The ambiguity that one faces with tag-based dynamic languages generally either doesn't exist in them, or the language has different-looking issues altogether. Thus, it's comparing apples to oranges. I'm only considering dynamic languages in my model. Please don't complain about that; I've heard it before and I won't change it. -t
- In dynamic languages, a variable is a pointer to a value.
- In YOUR model, not per externally observable (objective) language behavior (I/O). Further, the hex dump example above is closer to my model's use of "value" because it doesn't have the type indicator/tag portion in it. Your value "structure" requires a type indicator of some kind, but it's not in the second column above so it wouldn't qualify unless you are overloading "value". (In C, the "tag" is static and not kept in the machine code translation.)
- {Assuming only program input and output, the only externally observable (objective) language behaviour is arbitrary input and output; from that, nothing can be inferred about language semantics (or anything else!) because the I/O can be absolutely anything. If we assume source code can be included, then we inevitably must take into account the semantics of the source code, because otherwise we can only treat code as some arbitrary string -- same as the previous situation. However, if we take into account the semantics of the source code -- and if we assume it's any popular dynamically-typed imperative programming language -- then we trivially observe that a variable contains a value and nothing else, and that the value can change due to assignment.}
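- {A trivial PHP illustration of that observation (a sketch, nothing more):}
 $x = 123;          // the variable now contains an integer value
 $x = "fish";       // assignment replaces it with a string value; no TypeChecking occurs
 echo gettype($x);  // "string" -- the contained value (and its type) changed via assignment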
- I've already said I include source code as part of I/O. Please clarify and/or demonstrate the last sentence. Show the "proof" that's what variables "contain"? How is containment measured exactly? "Contain" is a flesh-world notion and has no uniform rigor in software-land. I don't care how YOUR particular head thinks of stuff; I'm not here to model your head and your head only. And language "semantics" is not rigorously defined, at least not in a way that can be used by common programmers. Your statement assumes a rigorous definition of "contain" and "value". Otherwise, you are defining vague terms using other vague terms, which doesn't solve the original problem. -t
- {Variables in popular imperative programming languages are generally understood to "contain" or "store" values. Descriptions along those lines appear in virtually every imperative programming language user's guide, and in introductory ComputerScience texts. If you disagree with the descriptions, how would you describe variables? Of course, English is sometimes awkward or confusing, so a notation like R=(V) where R is a variable and V is its value might be more intuitive without relying on (or being confused by) English.}
- My variables "contain values" also. That should be pretty obvious. Brain fart?
- {Above, you wrote, "Show the 'proof' [that values are] what variables 'contain'". By that, I presumed you meant you believe they don't contain values. If that's not what you meant, what did you mean?}
- That's specific to models, though. "Value" is not a clear-cut thing. There are a lot of things that people often call "values", but the term has no precise meaning. "Variables" and "literals" have meaning in given programming languages as part of their parse rules, which have to be clear (distinct) so that machines can process them (LexicalAnalysis). "Value" is rarely part of that, and thus doesn't fit in the way those other parts do ("literal", etc.). "Values" don't appear in source code, nor (clearly) in I/O (data parts). Thus, if they formally exist, they exist only in a SPECIFIC emulation model. Values have no objective existence. -t
- {Of course they have an objective existence. What is stored in a variable in every model of programming languages? What is stored in every memory address of a computational machine?}
- Bits. What's considered the "value" varies per high-level language model.
- {Can you show that a value in one high-level language model isn't a value in another?}
- Your model associates (contains?) the type indicator with "value" and mine does not. What law of the universe says it must be associated/contained? (Besides, you de-associated it when you said values are in output. You flip-flopped.)
- {It seems somewhat disingenuous to use your own model -- whose correctness is itself a matter of debate -- as an example of "a value in one high-level language model isn't a value in another". Can you show that a value in one high-level language model isn't a value in another, without reference to your own model? Furthermore, if you don't associate types with values, how do you invoke the right operators? How do you ensure that writeln(f() + p()) outputs a concatenated string when f() and p() return strings and a number when f() and p() return integers? Finally, values can be represented in output in the form of recognisable character sequences that we normally call "literals". That doesn't mean such literals are values, but they always denote values, because that's what literals are for.}
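(The writeln(f() + p()) point is easy to observe in any popular dynamically-typed language. A rough Python illustration -- the function bodies are made up for the example -- where the same + picks its behaviour from the types carried by the operand values at run time:)

  def f(): return "foo"
  def p(): return "bar"
  print(f() + p())     # foobar -- string concatenation selected at run time

  def f(): return 3    # redefine with integer results
  def p(): return 4
  print(f() + p())     # 7 -- integer addition selected at run time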
- We only have 2 sample models presented. As far as "how do you invoke the right operators", that depends on the model. There are different ways to do it, as long as the operator SOMEHOW gets the type info it needs; whether it's via values or floopmucks. And "denote" doesn't mean anything concrete. The output version of "value" has less info than your internal version of "value". The type indicator (T?) is stripped off, or at least is not clearly determinable. You guys keep ignoring that fact. If you want to claim that the output is "literals" instead of "values", please do so in a committed way. But, you will then have to live with the implications of that.
- {In a given language, literals are normally defined to use syntax that unambiguously identifies their type. For example, 120398 denotes an integer, 238913L denotes a long integer, 123124.34 denotes a float, and so on. So, in that sense, the "type indicator" is not stripped off, and is clearly determinable. After all, it is precisely the non-ambiguity of literals that allows them to be converted to values in the first place. So, the output version of "value" has precisely the same information as the internal version of "value". It's represented differently, of course.}
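(A minimal sketch of that claim, assuming a toy grammar -- the regexes and type names are illustrative, not any real language's lexical rules -- in which the syntax of a literal alone is enough to recover both the type and the value:)

  import re

  def parse_literal(text):
      # Toy grammar: the character sequence alone determines (type, value).
      if re.fullmatch(r'-?\d+L', text):
          return ('long', int(text[:-1]))
      if re.fullmatch(r'-?\d+\.\d+', text):
          return ('float', float(text))
      if re.fullmatch(r'-?\d+', text):
          return ('integer', int(text))
      return ('string', text)      # matches no numeric rule: a string

  print(parse_literal('238913L'))    # ('long', 238913)
  print(parse_literal('123124.34'))  # ('float', 123124.34)
  print(parse_literal('120398'))     # ('integer', 120398)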
- Output operators can translate/convert between different conventions such that they often don't follow the language's literal (defined) formatting rules. After all, output is often intended for users, not developers, such that "238913L" is not going to be very friendly to users. Any formal definition or parsing rules for output is a domain-specific or app-specific convention, and not a universal truth. Source code, such as literals, has precise component definition rules because it has to be machine-processable (and unambiguous to readers), but output does not have that requirement by default (nor data input). Thus, I ask you again, is output "literals", "values", something else, or does it "depend"? That issue keeps coming up, so let's tie it down now. -t
- {Literals are sequences of characters, which follow the rules of a particular grammar, that denote values. A literal that matches no grammar denotes a string, by definition. Thus, output is always a literal. What literal or literals the output represents depends on the grammar (or grammars) that interprets it. By default, the output (of a program producing an output character stream) is always a character string.}
- In that case are you okay with: 'values don't "universally" exist in I/O except as a domain-specific grammar.'?
- {That's not what I wrote. The output of a program producing an output character stream is always a character string. That is universal, by the definition of "character string". Identification of other literals within that string is grammar-dependent.}
- The "other" you wrote: "I/O doesn't have values? What does output like 123 or "fish" represent if not a numeric value or a string value?" -- That is saying output "has" values. Or is represent-ness different than have-ness? Was that intended to be formal or informal? If formal, where is the formal definition of "represents"? Is representing in the head, or something objectively observable? If the latter, show the bastard for all to see! Come on guys, spit it out!
- {What I wrote says output "has" literals -- that's what 123 or "fish" are -- but literals denote values.}
- "Denote" and "has" lack enough precision to have usable meaning here. And "denote" generally happens in the mind, not in machines.
- {Do you disagree that literals denote values?}
- Without clearer definitions of "denote" and "values", I cannot answer that.
- {In abstract terms, a value is an expression which cannot be evaluated any further. In concrete terms, a value is the combination of some machine representation -- typically a binary string -- and a type that identifies how the representation is to be interpreted. For example, a representation consisting of the bit string 00110110 and the type 'character' may be interpreted as the ASCII character value '6'. The representation consisting of the bit string 00110110 and the type 'integer' may be interpreted as the decimal value 54. "Denote" means to symbolise. For example, the sequence of two characters 54 is a literal that may symbolise, or denote, the decimal integer value 54. The sequence of three characters '6' is a literal that may symbolise, or denote, the ASCII digit '6'.}
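(That definition is straightforward to make concrete. A sketch in Python -- the interpret function is hypothetical -- showing the same eight-bit representation yielding different values depending on the type used to read it:)

  def interpret(bits, as_type):
      # One representation, two readings, selected by the type.
      if as_type == 'character':
          return chr(bits)         # read the bits as an ASCII code
      if as_type == 'integer':
          return bits              # read the bits as an unsigned integer
      raise ValueError(as_type)

  rep = 0b00110110                 # the bit string 00110110
  print(interpret(rep, 'character'))   # 6   (the character '6')
  print(interpret(rep, 'integer'))     # 54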
- "Abstract" and "symbolize" is in the human head. I don't wish to model specific human heads, I wish to model interpreters. And different humans may likely have different head models such that a statement about one head is not necessarily reflective or useful in another head.
- {That's fine. The interpreter parses literals and turns them into values. We understand that a literal denotes a value, but that's what the interpreter does -- it turns literals into values.}
- In YOUR model, not in all possible models that emulate target language.
- {I'm sure one could conceive of a possible model in which literals do not denote values, but I question whether it would be modelling any real target language. Do you know of any popular imperative programming language interpreter that does not turn literals into values?}
- Arrrrg! You haven't proven that values objectively exist.
- {That doesn't answer the question. And I have deductively shown that values exist. Several times, I've shown that "value" is the only appropriate answer to the following questions: What does a literal denote? What does a function return? What does a variable contain? What does an expression evaluate to? Etc.}
- Models don't necessarily have to answer questions. Your presumption that a question must be answered has not been established as necessary. You keep confusing human notions with external objective truths. Again, I'm not modelling human heads, but I/O. (I may mirror human concepts in the model, but do not claim they are objective external truths, but merely parts of a particular model.)
- {I'm not expecting your model to answer the questions. I addressed the questions to you, not your model. Of course, a model that doesn't address language semantics is rather ineffective. For example, a variable assignment doesn't involve I/O, so a model of I/O can't explain what happens in variable assignment.}
- Please elaborate. Different models can use different techniques to achieve the same results (matching output). There are various trade-offs with each approach in terms of WetWare and machine performance, and the trick is to select the best balance of tradeoff choices (including term usage). And I dispute your claim there is any canonical semantics out there. Again, you appear to be mistaking your favorite head and/or on-paper model for a universal or widely-recognized truth (outside of professional interpreter builders).
- {I'm sure different models could conceivably achieve the same output through different means. Functional and logic programming languages are obvious demonstrations of valid but different models, though implementations of these language categories also (typically) define variables that contain values, expressions that evaluate to values, literals that denote values, and so on. However, popular imperative programming languages all have the same semantics within their given category, i.e., Category S, Category D1 and Category D2, as described on TypeSystemCategoriesInImperativeLanguages. That's precisely what enables programmers to easily move from one popular imperative programming language to another, with only some effort to learn differing syntax and library routines. So, I'm not clear what benefit there would be to having a model of popular imperative programming languages that doesn't reflect their semantics.}
- This alleged canonical "semantic model" does NOT exist on the scale you claim. I addressed the switching language thing elsewhere near the text "When I started using ColdFusion, a D2 language".
- {This canonical "semantic model" certainly exists on precisely the scale I claim, because it's implemented in every popular imperative programming language. I don't know how many programmers are explicitly aware of it, but I've had two conversations with colleagues earlier today that involved recognition of expressions evaluating to values and values being assigned to variables, and they engaged with the discussion in a manner that suggested they understood.}
- Implementation does not matter for the purpose of the model, as I already described and explained many times. I'm not going to revisit that here. And yes, such is often called "value" in colloquial-land, I don't dispute, but it can also be called "result of an expression" and other things. If there were no cost in OTHER areas of the model to model "values" the way you do, I indeed WOULD do it that way. But other factors override it, as already described multiple times repeatedly redundantly in multiple places repeatedly. You ignore these tradeoffs, instead repeating your same point over again in slightly different ways. You exaggerate the importance of having explicit and separate "value" structures in a model (for a given purpose). I don't dispute it helps in fitting fairly common notions, ALL ELSE BEING EQUAL; but all else is NOT equal. I will make 10% of the model worse to improve a different 30%, for example. You keep harping on that 10% like a savant, but fail to address the 30%. -t
- {What is the basis for your 10% and 30% figures?}
- That's merely a hypothetical example. Like I've stated many times, users of the model will be dealing mostly with variables, not "values" of the kind you describe. It's rational to simplify the parts of model used the most often even if it results in complicating (or making more confusing) OTHER parts of the model that are used less frequently.
- {When you say, "users of the model will be dealing mostly with variables", what do you mean? Do you mean your users don't frequently invoke functions, and don't frequently write expressions?}
- I've mostly focused on modelling the built-in operations, not user-defined functions. UDF's are rarely where the puzzles lie. And expressions are broken down into individual assignments to "internal variables" in order to make the parts examinable on a smaller scale, as shown in TopsTagModelTwo. (I realize that in theory doing such could make it produce different results from the original in some language designs, but I've yet to see it happen in practice for dynamic languages.)
- {What's an "internal variable" except your personal moniker for a "value"?}
- Like like I I have have said said many many times times, "value" is overloaded in practice. If we use it for purpose X in a rigorous model, we shouldn't also use it for purpose Y. In colloquial land you can be sloppy, but not in a decent model.
- {So you've claimed many many times. However, you've never proven it to be true. At best, you've shown that "value" is sometimes used where "literal" would be more precise, but as a literal always denotes a value, it's perfectly acceptable. I know, now you'll claim that "denotes" is vague or some such. Don't bother -- we've been over it before.}
- LIAS, we are at an AnecdoteImpasse. Neither side has anything close to an OfficialCertifiedDoubleBlindPeerReviewedPublishedStudy. And "denotes" does not mean anything specific to me. It's a fuzzy head notion. I can't do anything further with it.
- {In this context, "denotes" has a precise meaning. A literal that denotes a value means, precisely, that a given literal is consistently translated by a parser to a specific value of a specific type. E.g., 123L in Java denotes, or is translated to, a long integer. 12.34 in Java denotes, or is translated to, a float. Etc.}
- Where are you getting this definition from, and how does one objectively verify your processing claim? Java perhaps could keep "123L" as-is and nobody would know the damned difference as long as the end result (output) is the same. You seem to be shifting back into the implementation angle, which I generally reject.
- {It's a conventional definition. Do you disagree that literals denote values? It's true that Java (or any other language) could keep "123L" as-is. That would be entirely reasonable. It wouldn't change the fact that literals (i.e., character sequences) denote values (a representation and a type). It's entirely possible for a literal's character sequence to be its corresponding value's representation.}
- Then the conventional definition is fuzzy. "Denote" is a head process such that I cannot make any objective statements about it existing or not existing. Symbols have meaning, but that meaning is in human heads, and human heads vary. Denote does not objectively happen outside of human heads in a universal way. I suggest you stop using that word. It doesn't communicate anything specific enough to use here.
- {How interpreters deal with symbols isn't in "human heads", it's in the code that implements languages.}
- That's "processing I/O", not denoting.
- {Don't you mean LexicalAnalysis and parsing, rather than "processing I/O"? A parser and/or interpreter typically implements denotation, by converting the sequence of symbols in a literal into a value that can be manipulated by the system. If a literal doesn't denote a value, then what does a literal denote?}
- I'm not sure that's a universal truth. But in my model I'll agree that a literal "denotes a value", but it also denotes a type. Thus, technically "a literal denotes a value" in my model also. However, that's not the same as saying "a literal denotes ONLY a value".
- {The notion of "universal truth" applies only to mathematics, and there only in certain contexts. In all others, it is context sensitive. However, there are unquestionably terminology conventions -- which we typically call "definitions", or "usual definitions" if there is some variance -- in every field and ComputerScience is no different. By the usual definitions of "value", a literal denotes only a value, but this is taken to mean that the type is implicit. In other words, 123.43 denotes a value which is of floating point numeric type.}
- I don't deny a value can mean that, but it's overloaded to mean other things also.
- {That's fine. If it's the case that a term has multiple meanings or its single meaning can be interpreted in multiple ways, simply make it clear what definitions you're using and how. That sort of terminological clarification is a keystone of technical and scientific writing.}
- I thought I did. But keep in mind that English is limited in expressiveness because it wasn't meant for virtual worlds. Sometimes you just have to show.
- {No, what you did was replace existing terminology, not clarify your use of it. You didn't use "type" or "type reference", for example, you used "tag" in its place. You didn't use "value", for example, you used "anonymous variable" in its place.}
- No, I used "value" for something. You just forgot. And I called it a "type tag", not "tag", except as a shorthand.
- {In the absence of a definitive page describing your model, it's difficult to determine whether you use "value" or not. I seem to recall a time when you rejected it categorically. As for "tag", whether you call it "type tag" or "tag", it's still a replacement for "type" or "type reference". You are still replacing existing terminology without defining it, and not clarifying your use of familiar terminology.}
- I'm not sure what you'd qualify as "definitive". Your idea of "good technical writing" is oh so very different from my idea of it. It's clear from the XML representations that my model has something called "value". Whether it fits your qualifications for "value" is another matter. And please don't claim there is an obvious definition in ComputerScience books or the like. I'm tired of that worn-out response.
- {I'd consider a single page where your model is comprehensively described to be "definitive". Whilst you may consider my referring to texts as "worn-out", they do provide the definitions that you seem oddly desperate to avoid.}
- So you say.
- {Yes, I do. Take a look. You'll see.}
- I did. I looked, I saw, I read, I found fuzzzzz.
- {How do you distinguish fuzzy writing from fuzzy understanding?}
- To be honest, I can't. But I've seen good technical writing and know what works (at least for my WetWare), and that is NOT it.
- {I think if all the technical writing you've read still hasn't clarified values, types and variables -- pretty fundamental stuff -- maybe you should try some exercises to think about it differently.}
- I did, and came up with the Tag Model and TypeHandlingGrid.
- {I'd be curious to know how you think programming languages actually implement values, types and variables, as opposed to how you'd prefer to model them.}
- Why does it matter? We have enough issues on our plate such that we shouldn't invent more without reasons. I'm not attempting to mirror actual interpreters, at least not as a primary goal. They are designed for machine efficiency, not necessarily human grokking efficiency. Different primary goals.
- {I think it might significantly help me to understand and appreciate your model, and its justifications, if I understand your interpretation of values, variables and types.}
- My interpretation of implementation? Or how I personally prefer to think of them? Those may be two different things. In my opinion, it is probably not good to get hung up on such. The tag model does not require a solid definition of any of those; the names used for the parts are merely UsefulLies for the purpose of the model and are not intended to fully match actual usage (and I doubt any model could match 100%).
- You claimed that the model doesn't exist. If it's what is actually used, it most certainly exists. (BTW, why should we care if it's also called by other names? After all, four also goes by the name "quatre", "2+2", "square root of 16", etc. This doesn't make "four" ambiguous, vague, or whatever.) As for trade-offs, it would help if you actually demonstrated that there was one, rather than simply claiming that there is.
- Which model doesn't exist? There are multiple possible models of I/O, but none are canonical. And I have explained the tradeoff many many many times. If 10 times doesn't do it, then 11 probably won't either. We are going around in circles. I give up. LetTheReaderDecide if I've explained it well or not; I'm not going to care about YOUR opinion anymore. The non-nested structure looks, feels, and tastes simpler to me and is less code volume to MY eyes. If it doesn't appear that way for you, so be it. I don't build models for alien life forms. You view everything so differently from me that mutual communication simply does not seem possible. And our assumptions about and experience with typical developer WetWare are so vastly different also.
- {If you're going to insist on a "model of I/O" that doesn't reflect conventional popular imperative programming language semantics, then you have to prove that your "model of I/O" is equivalent to conventional semantics, because popular imperative programming languages are based on conventional semantics. Otherwise, how do we know the result of using a "non-nested structure" (?), or whatever other deviations your model makes -- like not having values, or not having values in the usual sense -- will still produce the same results as the conventional semantics, and hence, the same results as actual popular imperative programming languages?}
- There is no clear-cut objectively-observable canonical "semantics" of languages. It's a fictional character in your head. As far as comparing models, it's a string comparison: output_model_x == actual_language_output.
- {Of course there is objectively-observable canonical "semantics" of languages. It's implemented in every working interpreter or compiler, and trivially observed to be precisely that described at the top of TypeSystemCategoriesInImperativeLanguages. Is there anything there that deviates from the implementation of popular imperative programming languages?}
- Like I keep having to repeat, most programmers don't care how interpreters are actually built, and the majority never have built one. The model you link to is one possible model (to match I/O), but NOT the only. It's not canonical in terms of regular developers.
- {This has nothing to do with how interpreters are built. If your "model of I/O" is going to deviate from conventional language semantics, you've got to prove logically that your model accurately reflects conventional language semantics. Otherwise, how can we trust your model? All 'output_model_x == actual_language_output' does is prove that a given case works. There might be an infinite number of cases we haven't tried yet that will fail. Without some clear correspondence between your model and conventional semantics, we have to assume that is the case. That categorically invalidates your model until such a correspondence is provided.}
- How can something deviate from conventional semantics when there IS NO conventional semantics? (At least not a single, clear, stable set.) If you come up with a clear test for "matching conventional semantics" besides "I say so", please produce the formula or algorithm.
- {The notion that there might be a formula or algorithm for semantics is nonsensical, but it's obvious there are conventional semantics in popular imperative programming languages. It's part of why we call so many languages "Algol-derived" -- they share both syntax and semantics. It's what makes it possible to trivially move from coding in C# to Java, or from Ruby to Python, or from C# to Python. It's what makes you immediately recognise what an assignment statement like "a = 3 + foo();" does in every popular imperative programming language. You do recognise what it does, don't you? Why do you think that is the case?}
- But there are multiple predictive models that work for things like "a = 3 + foo();". Just because multiple people get the same answer does NOT mean they are using the same model to process it in their head. I would also note that the D1 model works on D2 languages a vast majority of the time (using typical code), and vice versa. Thus it's possible to use the wrong model and get the right answer (prediction) almost every time. And then spot-fix (work around) any differences in production code. Perfection in one's head model is not necessary to get real work done. Many developers may be walking around with approximation models in their head.
- {There are "multiple predictive models that work for things like 'a = 3 + foo();'"? Can you give an example? Where are these documented? It doesn't matter what misconceptions developers may have about language semantics -- the goal is not to model misconceptions, but to model the real world.}
- My model predicts it. What exactly is "real world" here? Haven't we been over that already? I define it in terms of I/O, and I/O either does not have values, or does not have YOUR kind of values.
- {We don't know that your model consistently predicts all such statements, because you haven't proven it to be so. Because your model deviates from convention, you need to prove that it reliably predicts the output of languages built using the conventional model. The "real world" we're talking about are real interpreters and compilers, built according to real language specifications, based on conventional semantics.}
- If you spot a clear-cut flaw, then point it out.
- {You're the one proposing a new model. The default position for any new model is scepticism; it's up to you to prove its validity.}
- Have you done this "proof" thing with YOUR model?
- {There's no need. It follows precisely the approaches used to implement language interpreters and compilers.}
- I do not use implementation as the reference point, for reasons already given. If you stick to that, then you MUST use stacks in your model, or else readers will think you are a stinking cherry-picking hypocrite. (I cannot even tell you what I think of you; it would cause McAfee to shut down the PC.)
- {By "precisely the approaches used to implement language interpreters and compilers", I mean the model is based on the language semantics implemented by language interpreters and compilers.}
- Most regular developers do not care about actual implementation. It's a PrivateLanguage among interpreter builder personnel. Plus, actual implementation is often optimized for machine performance, not human grokkability. Stacks and caching, for example, complicate grokkability.
- {I'm not referring to implementation mechanics -- like stacks and caching -- but language semantics, like variable assignment and expression evaluation.}
- {By the way, why the veiled insult?}
- Because it was less rude than the unveiled one I almost used.
- {Why do you want to insult at all? What good does it do?}
- I would also like to point out that actual interpreters use stacks, yet most head models do not use stacks. Thus, if you use interpreter engine builder conventions as your canonical semantics model, you are already deviating from common notions. My model is closer to how expressions are evaluated in typical Algebra courses: break big expressions down into smaller expressions incrementally. This also helps show the steps so the teacher can see and mark where the student messed up.
- {A stack was only used (below) in an application of a model, in order to illustrate how actual implementations often handle values. Stacks are not part of the model itself; note that TypeSystemCategoriesInImperativeLanguages doesn't include stacks. Note also that the examples below show how conventional semantics can be applied to a specific typical imperative programming language statement, precisely by breaking "big expressions down into smaller expressions incrementally".}
- The two common approaches to fully implement such models are a stack or breaking expressions down incrementally. If you follow the second, like I do, good for you.
- {I don't "follow" either one. I'm not even sure what "follow" means in this context. Whether a stack is used or not, expressions are "broken down" incrementally in every interpreter or compiler; I know of no interpreter that evaluates expressions via some magical gestalt. What I demonstrated below is that interpreters usually use a stack to manage values, but for illustration purposes you can dispense with a stack and label the values.}
- So you are agreeing that a model designed to serve as an illustrative model may deviate from models used to actually implement languages, and thus that WetWare is a driver of model design at least as much as matching actual implementation.
- {An illustration of the application of a model may introduce labels and other artefacts to aid understanding. It's akin to sticking paper labels on a model aeroplane that say "wing", "aileron", "propeller", etc., after it's built. However, these are not parts of the model, and it should absolutely be made clear that real aeroplanes don't have giant paper labels on their parts! The language model itself should absolutely not deviate from that used to actually implement languages.}
- The labels are dual-purpose in my model: they both provide a name with some human meaning, AND serve as a reference ID in the model to the specific part. And I thought we got away from the "implementation" thing as our reference point. Are we back to that again??? Sigh. Do you like the smell of dead horses or something?
- {You are conflating use of the model with the model itself. Externally labelling parts of a model in a specific application of it is reasonable, but actually including labels as part of the model -- particularly in parts that do not have labels in implementations -- is unnecessary. For example, implementations do not label values. Therefore, including a "label" attribute in values in the model would be incorrect. However, a demonstration of how the model works -- used on an example -- might label the particular values used in the example as an aid to following the demonstration.}
- Again, I generally don't use actual internal implementation as the reference to the model for reasons repeatedly given.
- {That's fine, but if you label values (for example), your value labels will serve no purpose in the model itself. Labelling values may be useful in illustrating application of the model -- e.g., "look at value #3 for an example of how blah blah blah happens" -- but your model would not benefit from having (say) a 'value retrieve_value_by_label(labelname)' operator.}
- No, they serve as "part ID's" used by the actual processing itself of the model. I agree such is not "necessary" to produce an accurate (but different) model, but this model is tuned for human consumption instead of machine consumption. Similarly, an implemented model doesn't "need" caching to work, but caching overall speeds up processing by the machine. Grokkability sometimes takes precedence over parsimony. Plus, if I don't label them, then I have to use a nested structure (like yours), which complicates a heavily-used aspect of the model.
- {Why does your model require "part ID's"? As for "have to use a nested structure", isn't that an artifact of your use of XML? Note that the non-XML structures at the top of TypeSystemCategoriesInImperativeLanguages don't require a nested structure.}
- It requires them for explicitness in references. They usually make things clearer. If I see a clearer model that doesn't use them, I'll reconsider. As far as nesting, your tuple version also nested "type". I suppose one could use references (pointers) instead, but that can make the model/state harder to read in my opinion.
- {Are your part IDs used within the model, or are they only used outside the model? For example, at the bottom of this page I use pseudo-code to show application of the model -- e.g., "v1 ← invoke('bar'); v2 ← convert_numeric_literal_to_value("3"); etc." -- in which values are given labels like 'v1' and 'v2'. I merely label them as I use them, but the labels aren't part of the model -- they're only part of the illustration -- so values don't need a "label" property.}
- They are only used within my model. Expressions are broken down into intermediate expressions, and an "internal" variable is created for each intermediate expression. Thus, they can be processed just like "external" variables, allowing conceptual and coding reuse of a concept. (I have the user do the expression breakdown to keep the model simple, but in theory the machine could also. I've seen compilers or interpreters (I don't remember which) that actually do such. A special character in the var name is used to ensure the hidden variables don't overlap with external ones. The external or production interpreter does not allow the special character in var names, but the internal or debug-mode one does allow it. Quite clever, I must say.)
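(A sketch of that breakdown, under the tag model's conventions as described above -- the % prefix for internal variables, the slot names, and the statement being decomposed are all illustrative. The statement a = foo() + b * 2 becomes a series of single-step assignments, each landing in a "data bin" with an explicit tag slot and value slot:)

  # Each variable, external or internal, is a bin with a 'tag' slot
  # and a 'value' slot.  '%' marks internal (hidden) variables.
  bins = {'b': {'tag': 'number', 'value': 5}}

  def assign(name, tag, value):
      bins[name] = {'tag': tag, 'value': value}

  def foo():
      return ('number', 3)                # returns a (tag, value) pair

  # a = foo() + b * 2, broken down incrementally:
  assign('%t1', 'number', bins['b']['value'] * 2)          # %t1 = b * 2
  tag, val = foo()
  assign('%t2', tag, val)                                  # %t2 = foo()
  assign('%t3', 'number',
         bins['%t2']['value'] + bins['%t1']['value'])      # %t3 = %t2 + %t1
  assign('a', bins['%t3']['tag'], bins['%t3']['value'])    # a = %t3
  print(bins['a'])    # {'tag': 'number', 'value': 13}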
- {You mean you actually have an operator in your model akin to retrieve_value_by_name(name)? I'm not sure what you feel is clever about this; it's both trivial and completely unnecessary. You seem to be conflating your model of interpretation with actually creating an interpreter, except your interpreter is going to be unusually (and pointlessly) complex. It's like creating a see-through plastic model of a car engine with added, unnecessary sub-assemblies that aren't found in real engines. It's like labelling the parts, but making the labels necessary for the engine to run.}
- I don't know the precise definition/implementation of your said function, and so won't comment on it. And I agree that a full version of my model would be fairly complex in some areas, but this is the trade-off for making other parts clearer. I stated my goals, their priorities, the purpose of the goals and related WetWare assumptions, and tuned the model to fit these goals and priorities. That's what good and experienced engineers do. Doing such may leave the low-priority areas a bit long-winded or slow. You guys don't seem to get tradeoffs, looking for the One True Factor instead. This suggests a lack of field experience. Good software engineering is the art and science of tradeoff management. Schools barely teach that, focusing on finding The Magic Formula instead. I actually LIKE your engine kit concept despite being an attempted mock. An x-ray view of the model with well-labelled and exaggerated key parts is exactly the kind of thing I'm striving for. The target audience will NOT be using the model to build actual interpreters such that its "toy" nature is NOT A PROBLEM. The alleged "real" engineers will fuss and mock because the sanctity of their One True Model is being challenged, but I say fuck those bastard monks! The 1300's sucked. Time to move on. People need different models for different purposes, and the One True Model view is dying or should be dying. -t
- {I'll ignore your AdHominem attack. I think you've misunderstood my "attempted mock". I have no objection to your "engine kit". An equivalent for interpreters is an excellent thing. To make it clear what I object to, I shall copy-and-paste my point above, but this time with my objections highlighted in bold: "You mean you actually have an operator in your model akin to retrieve_value_by_name(name)? I'm not sure what you feel is clever about this; it's both trivial and completely unnecessary. You seem to be conflating your model of interpretation with actually creating an interpreter, except your interpreter is going to be unusually (and pointlessly) complex. It's like creating a see-through plastic model of a car engine with added, unnecessary sub-assemblies that aren't found in real engines. It's like labelling the parts, but making the labels necessary for the engine to run."}
- My replies addressed all the highlighted parts. There was no need to repeat them. As repeated below and elsewhere, modelling actual interpreters is not the primary goal. The tradeoffs I made are consistent with my stated goals and the ranking of them.
- {I don't see how your reply addressed "all the highlighted parts". It appeared that your reply was mainly AdHominem attack and a general rant about schools, One True Models, and "bastard monks".}
- It's only about 40% AdHominem attacks.
- {The other 60% appears to be a general rant about schools, One True Models, and "bastard monks". It's not clear what those have to do with my highlighted text.}
- I tried my best to answer. I cannot read your mind. Let me triple clarify my response to "unnecessary sub-assemblies that aren't found in real engines". The primary goal(s) is NOT to model "real engines". The model is NOT intended to teach about how actual interpreters are built; it's NOT for interpreter-building courses or students. That is not what it's for. Thus, complaints about "extra parts" compared to actual interpreters are mostly moot. Your complaint simply does not matter much; it does not affect the primary goals, as stated. It does not "ding" the satisfaction of the primary goals of the model and is thus not significantly harming the intended purpose of the tool. This is as clear as I can make it. (As far as whether the alleged "extra parts" are helpful to WetWare purposes of grokking the model, that's where we disagree on the WetWare of typical developers and get stuck in an AnecdoteImpasse. No use rekindling that debate.)
- {What's not clear, then, is what your model is for. You've said repeatedly that it "predicts I/O", but that is a vacuous response. A working interpreter or compiler is all you need to "predict I/O", and given that an infinite number of different programs can produce the same run-time I/O, it doesn't seem particularly useful.}
- It's to model type behavior in a way that one can see the model in action. A working interpreter will transform input and source code to output, but you only see the I/O, you don't see the guts. The tag model helps one create a mental model to predict FUTURE type-related behavior and perhaps remember current. It's along the lines of your transparent engine kit above: one can see the parts moving so as to form a visual/mechanical/mental model. Many people want a mental model of what's going on under the hood, not just look at output alone.
- {What is it that you see the "parts moving" in? What is the input and output in a statement like "a = foo() + b;"?}
- One sees the intermediate steps and state changes in individual "data bins" of the model. There's an explicit slot for the type tag/indicator, and another specific slot for the value/representation for each variable (external and internal). In this case, the "input" is just the source code and the output is the result (which depends on the implementation of foo() in the source code). It has no input file in this case. (You forgot the Write statement; otherwise the program produces zero bytes.)
- {I didn't forget the Write statement; it's at the bottom of my 30,000 line program and "a = foo() + b;" is near the top. How does your model help me understand it? And if the "input" is the source code, where in the source code are the "data bins"? I just see a bunch of ASCII characters, because all syntax rules tell me are what sequences of characters are valid. "Data bins" sound like semantics, not syntax.}
- To empirically examine (test) expressions and the like, an output statement of some kind will eventually be needed. I wasn't sure of the context of your example expression. Do you mean "understand" the expression? In what sense? I'm providing a prediction model (for type-related issues), not math class. The intent of the model is not to explain or understand expressions; it's not tuned for that purpose (and it's merely a bonus if it provides such insight). The so-called "data bins" are an internal part of the model. They are a UsefulLie, as all such models and model parts probably are, since there are multiple paths between a given instance set of input and output. I do not believe there are objective "canonical semantics" of the kind you talk about, barring clear evidence. The "data bins" are a visible internal part. They are made that way so that the UsefulLie of the "fake" guts is visible, dissect-able, clear, and reduced to the smallest practical form, leaving no vague or opaque compound blobs in the model.
- {If your model doesn't help me understand what's happening in line 14 of a 30,000 line program -- where the only output is on line 29,922 -- then I'm not clear what it's intended to do. How does a "prediction model" help me understand why the assignment on line 14 didn't put the value I expected in variable 'a'?}
- (One difference between the car engine model analogy is that we are not modeling performance, only results (I/O). We shouldn't have to worry about modelling performance artifacts.)
- {Using the "car engine model analogy", does that mean your model consists entirely of "rotating shaft", "acoustic vibration", "heat" and "exhaust"? That is, after all, the "output" of a car engine. That's not a model, and there are many, many things that aren't a car engine that produce exactly the same things.}
- That's largely because the analogy is limited in its utility. The purpose of the tag model is to predict output, not mirror actual interpreters. But we still want the parts and workings clear and transparent to the model user so they can see and follow the model in action.
- {If your tag model only predicts output, but doesn't "mirror actual interpreters" -- which I'll take to mean that it doesn't reflect actual programming language semantics -- then how does it help the user to understand actual programming languages? If all we need to do is "predict output", there's an easier way to do it: Just run the input through the language implementation and see what comes out.}
- Oy! First, this "language semantics" of yours has not been scientifically identified and isolated such that we cannot tell whether something is using it or not. And the second statement is just odd, back to square one. The purpose of a transparent model is to be able to "see" what it's doing. You can't do that with a black box.
- {Semantics isn't something that can be "identified and isolated" any more (or any less) than integral calculus. It's not something we discover under a rock, but something we create. The starting point of every language design -- at least those that aren't simply syntax tweaks -- is to define the semantics, i.e., "How is this language going to manipulate the machine?" and "What is this language going to do?" and "What are the parts of the language and how will I use syntax to describe and manipulate them?" These are fundamentally questions of semantics. The answers to these questions are described in terms of grammar -- i.e., machinery to determine valid sentences -- and semantics -- i.e., machinery to implement what we want valid sentences to do. If you're defining a "transparent model to be able to 'see' what it's doing", that's excellent. The "what it's doing" are the semantics of the language, hopefully accurately reflected by the model.}
- We have very different notions of "do". The language is defined and verified by I/O, NOT by processing steps. I do not recognize the processing as canonical (without explicit evidence it is or should be canonical on some certain model). There are multiple legitimate paths between any given I and O to match the reference implementation. To say that any one "do" path is "better" than another "do" path requires clearer (and useful) criteria for "better". So far you have not identified such in a clear way, at least not to me. If your fuzzy words mean something to somebody else, that's wonderful, but they are fuzzy to me. Different heads (thought processes) and different machine-based models can use different ways to match the target I/O profile.
- {So you're saying a program that generates the first twenty prime numbers is the same as 'writeln("2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71");'? If not, why not? They both have exactly the same I/O, unless you consider the sourcecode to be input to the compiler/interpreter. If you do, what is it that makes the prime number generator different from the prime number printer?}
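(For concreteness, the two programs in question might be rendered in Python as follows -- the generator shown is only one of many possible -- and both emit the identical line:)

  # Program 1: computes the first twenty primes by trial division.
  primes, n = [], 2
  while len(primes) < 20:
      if all(n % p for p in primes):   # no previously found prime divides n
          primes.append(n)
      n += 1
  print(', '.join(map(str, primes)))

  # Program 2: just prints them.
  print("2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71")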
- Sigh. I thought I made it clear in ValueExistenceProofTwo that we are not testing just one instance of input against the reference interpreter. Arrrg. If a series of write statements could be made to match the same output for every possible (or tested) input set (data and source code), then indeed I would consider it a valid model. However, I've never seen it done, and weighing the internal grokkability of the model is the second step after demonstrating the model is predictively correct.
- {You are aware that the difference between "every possible" input set -- which is clearly infinite -- and "tested input set" (which must be finite) is infinite, yes? Given that it would take every possible input set to validate your model, and that the "tested input set" must be finite, doesn't that mean -- since any finite number divided by infinity is, in the limit, zero -- that your "tested input set" is, relatively speaking, completely insignificant, i.e., worth zero, in terms of validating your model? Furthermore, how can your model be considered grokkable if it can't even be considered reliable, given that it's not validated?}
- This was discussed in the "two" topic. In short, we use spot sampling as an approximation of testing every theoretical combination.
- Re: "There might be an infinite number of cases we haven't tried yet that will fail." -- That's true of ANY model. Any generalization you form may turn out to be wrong. Remember scenario: mucked-381? Empirical testing of each and every operator is the only way to know for sure.
- {Without a mapping between your "Model of I/O" to the conventional model of semantics, we have to assume your model is incorrect. Furthermore, your model has not been the basis for any interpreter or compiler. Therefore, we cannot assume it matches any existing interpreter or compiler, let alone any of those that implement popular imperative programming languages.}
- There is no reliable "conventional semantics" for "values". You can claim it a millions times and I won't believe because of the volume of repetition alone. You need to isolate and measure it scientifically before claiming it's something objective and not overloaded. And your "actual interpreter" hurdle is an artificial hurdle made up out of the blue. If you can identify a clear flaw, please do. The stack issue (above) would fail your model under such a rule. Further, our models possess essentially the same sub-components, they are just packaged different on the larger scale. You have not identified a FUNCTIONAL purpose of the more complex (nested) structure (other than perhaps machine performance).
- {If there are no reliable conventional semantics for values, then what does TopsTagModelTwo actually model? I presume TopsTagModelTwo is intended to model a category of languages, is it not? If so, there must be some conventional semantics -- which define some language category -- that it's modelling. No?}
- I/O
- {For modelling purposes, user input is indistinguishable from hard-coded constants, so external input is irrelevant. Given that a program consisting of "write('3'); write('4')" produces the same output as "a = 30; b = 2 + 2; write(a + b)" -- indeed an infinite number of programs can produce exactly the same output -- I can't see how modelling output is useful. However, if we treat source code as input -- which I think you've mentioned before -- then I can only assume you're taking into account the language semantics, because if you don't take into account the language semantics, you could provide syntactically correct but semantically invalid source code as input. I presume that's not your intent. So, how does "modelling I/O" differ from "modelling semantics"?}
- Yes, I do include source code as part of "input". I don't know how to measure "language semantics". It appears to be a head thing, not an objectively observable thing. Yes, I do try to select model design paths to match common WetWare (based on my experience with human programmers). Whether that's the same as what you are calling "language semantics", I cannot tell at this point. We need a pass/fail (or score) test to say whether a model fits "semantics" or not if we are going to go beyond personal anecdotes. It's called SCIENCE. Read about it.
- {You appear to be conflating natural science with the mathematical sciences. The semantics of programming languages are axiomatic and languages are derived from them.}
- Math is NOT science (by itself). And some axiomatic ideas are vague. They are fuzzy UsefulLies.
- {Indeed, math by itself is more rigorous than science. Can you give an example of a vague axiomatic mathematical idea?}
- Not at the moment. But one's head model doesn't have to be mathematical (although in theory everything might "be" math). One can build a calculator out of Tinker Toys, and use this mechanical model as a thought model also, even though it's math-related.
- {I don't disagree, but I'm not clear how that's related.}
- One does not have to use "axiomatic math" to predict output (or build a model). One can "reason" about it mechanically, or even a whole group of people, like Charles Babbage's students building their steam-punk computer.
- {"Mechanical" reasoning is a form of mathematical reasoning. I have no idea what you mean about "Charles Babbage's students" or "their steam-punk computer" -- that sounds like something out of a fictional movie -- but mechanical computers are equivalent to electronic computers in terms of theoretical capability; they differ only in physical limitations.}
- Everything is potentially math, and thus it may not make sense to divide processes or assumptions into math axioms and non-math axioms in the first place.
- {I'm not sure what "complex (nested) structure" you mean. Do you mean variables? The "nested structure" (?) is simply a reflection of the fact that variables contain values. However, I don't care whether some "nested structure" is used or not. What concerns me is that your model associates types with variables and not with values, which overtly and explicitly contradicts typical understanding of dynamically-typed languages. Beyond anything else -- such as your PrivateLanguage terminology (e.g., "tag") -- I find that aspect particularly objectionable, because it's particularly likely to cause confusion. It's completely unnecessary, too -- dynamically-typed language semantics (and implementations) do not require variables to have a "type" property of any kind, but Category D1 languages do associate a "type" property with every value.}
- Nesting does NOT de-have-atize (de-contain) types, like I've already explained. I refuse to accept your claim of that usage; it goes against colloquial notions, per egg/ring example.
- {The fact that you can conceptually regard a variable as "having" a type because it contains a typed value does not mean a DynamicallyTyped language interpreter -- or a model of a DynamicallyTyped language interpreter -- needs to examine a "variable's type". In fact, no matter how you define a "variable's type" -- whether indirectly via the value it contains, or directly via a property -- there is never a need to refer to it, and therefore no reason to model it.}
- Re: "contradicts typical understanding of dynamically-typed languages" -- So you claim.
- {It's true. Why are DynamicallyTyped languages sometimes described as "value typed"? Why do DynamicallyTyped languages lack type declarations on variables? Why do my examples -- at the bottom of the page -- not need to make any reference to the type of a variable? Why is it sufficient, in a DynamicallyTyped language, to both model and implement variables using precisely two operators: "void assign(variable_name, value)" and "value retrieve(variable_name)"? Why does language documentation typically contain phrases like, "[this language] is a dynamically typed language. There are no type definitions in the language; each value carries its own type. ... Variables have no predefined types; any variable may contain values of any type." (From http://www.lua.org/pil/2.html)}
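(The Lua documentation's point is easy to demonstrate in any language of that kind; in Python, for instance -- assuming it fits the Category D1 mould of values that carry their own types -- the type travels with the value, and the variable is just a rebindable name:)

  a = 123
  print(type(a))    # <class 'int'> -- the type of the VALUE a currently names
  a = "fish"
  print(type(a))    # <class 'str'> -- same variable, new value, new type
  # No declaration ever mentioned a type for 'a'; assignment and
  # retrieval are the only operations the variable itself supports.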
- Replies continued at ValueExistenceProofTwo
{It proves values exist.}
I never disputed that, as a general rough notion. You should have known that. Words and meanings are in the head. If enough heads generally agree that a word means something similar, then it's safe to say it's a "commonly understood concept". HOWEVER, that by itself does not mean it's clear nor objective.
When we try to gel our models with "common usage", we tend to run into sticky choices such that if we use a given word for one purpose then we cannot use it for another similar-looking purpose (if we want a clear model). The side effect of such choices is awkward parts of both our models, such as "anonymous variable" (AV) in my model and "representation" in yours. The difference is that a user of my model doesn't have to see/use AV very often; it's mostly a side note. But in yours, "representation" would be used and seen all the time. I weigh tradeoffs such as frequency of use, simplicity, similarity to existing usage, etc. and feel that I've made a rational balance between such, knowing there is no free lunch. I don't see the rational reasoning in your approach so far, but rather what seems like an insistence that "that's the way it's supposed to be", as if quoting the Language Scriptures. My assumptions about programmer WetWare may indeed be wrong, but at least I state my working assumptions and show how my tradeoff decisions reflect those assumptions.
{Common usage does not necessarily suffer from the ambiguities you claim. Note that the descriptions at the top of TypeSystemCategoriesInImperativeLanguages employ "common usage" without ambiguity.}
See above near the TypesAndAssociations quote.
This and its associated pages are a particularly unsatisfying discussion. Maybe we could have a page called WhatTopWants? and restrict it to that page.
I want a model of types that is not full of shit and vagueness. The detached academics have controlled the type dialog for too long, and it's time to put some hot pepper sauce down their tenured panties to get them off their rusting chairs. Their wallet depends on future programmers coming to their expensive classes to learn types the hard way. They have a negative financial incentive to simplify type-related training. Their wallets stay fatter if students have to learn the hard way and thus they are biased to keep the old way in place. Money rots objectivity.
Please. Not a page called WhatTopDoesntWant?.
{Speaking as one of the detached academics, I should point out that understanding TypeSystem behaviour is an important part of first year for beginning programmers. The vast majority find it trivial and we quickly move on to more challenging and interesting topics. A few very weak beginners get lost trying to understand values, variables and types, but they're invariably so weak overall that lack of TypeSystem understanding is merely a tiny fragment of an overall picture of misunderstanding. They have as much difficulty with arrays, functions, loops, proper style and formatting, using command line interfaces, using GUIs, using IDEs, and so on. Odd models based on peculiar terminology are unlikely to be of any help, especially if said terminology is to be found nowhere else. By the way, I've never met an academic whose incentive was financial. There are far, far, far better ways to make money if that's the goal. Invariably, at an undergraduate level, our motivation is solely to share our skills and knowledge in a subject that we find utterly absorbing and compelling.}
Like I've said many times repeatedly in multiple places redundantly, programmers perform "good enough" with types using vague notions and pattern learning (experience). They don't need a concrete model to use them in a good-enough way. Most of the programmers I know don't have any concrete models, just rules of thumb based on experience of what works and what doesn't. You talk to students, I talk to those in the actual field. Plus, most academic settings tend to focus on static languages, where type-ness is more predictable and rigid.
{You're inaccurate in at least two assertions, but that's immaterial. I can appreciate that you want to help weak programmers understand TypeSystems in DynamicallyTyped programming languages. I think your model is likely to cause more confusion than it cures, and it does weak programmers a disservice by teaching terminology they won't see anywhere else. You'd do far better to use your experience with weak programmers to develop a nice set of colourful illustrations, perhaps with cute jokes. Kind of like "Head First Java", but covering TypeSystems.}
They are not "weak", they are average, and sometimes above average[1]. But the academic-style descriptions of languages re types are often difficult to digest for some. If you prove the academic models are the only proper universal truth, then you may have a point (although a UsefulLie can still be a substitute). But so far they appear arbitrary, at least outside of implementation speed. You are a model bigot. DIFFERENT MODELS FIT DIFFERENT HEADS BETTER. You seem to be unwilling to accept that fact that your model is arbitrary, like a programmer who doesn't want to change languages because they are so familiar with the existing one.
{The "model" I've presented is simply how popular imperative programming languages are implemented. As such, it's certainly not arbitrary, because it reflects language reality. It's not a matter of being a "model bigot"; it's a matter of rejecting a "UsefulLie" when it appears there's no reasonable justification for a "UsefulLie". It's a matter of rejecting a "UsefulLie" because it doesn't appear to be useful. That means it's just a lie. If your "tag model" made understanding TypeSystems dramatically simpler and obviously easier, then the UsefulLie might be justified. It doesn't, so it isn't.}
A typical developer usually does not care about implementation. I've already stated my goals, their rank, and the reasoning behind the ranks, including why actual implementation is of low importance compared to other factors. As far as the utility of the tag model, your assessment of it as a UsefulLie is your personal conjecture based on your personal observations. That is not more powerful evidence than my personal conjecture based on my personal observations. In fact, my career path seems to have put me closer to actual developers in the field than you. You look down on them unless they attend your expensive classes. I try to help them, not punish them for not fitting my ideal. Further, you financially benefit from them learning your model, but it doesn't change my wallet either way. Thus, I can claim less potential financial bias. (I'm not claiming bias is actually happening, only that there's a conflict of interest in model recommendations.) If a reader doesn't want to use model X, they don't have to. Give them choice. You have not been elected Model Sheriff.
{I have no objection to alternative models in general. I have an objection to a specific model that (for example) uses contradictory or unfamiliar terminology (e.g., "anonymous constant", or "tag" when it would be more helpful to clarify existing terminology), or that confusingly defines variables as having types in DynamicallyTyped languages when one of the well-known distinguishing characteristics of DynamicallyTyped languages is that variables do not have types.}
Your "Representation" is awkward also. There is no free lunch, like I described earlier, because colloquial usage is overloaded. The issue is where one takes the punches when terms are de-overloaded for a formal model. I take the punch in the ass where nobody sees it (but my doctor) and you take it on the face where everybody sees it.
{"Representation" makes it easy to explain low-level operators like +, -, is_numeric() in PHP, etc.}
- You are not a good representative specimen to test "easy" on.
- {Be that as it may, it does nothing to contradict my point.}
- You make claims of simplicity ("easy") without objective evidence of such. Any drooling idiot with an IQ above 40 can make such claims. Such claims are close to worthless without evidence.
- {How would you explain how a + operator works on floating point values or integers or strings, without referring to value representations?}
- There are multiple models for "works". My model doesn't need your version of "value" to explain it. As far as colloquial writing, one can talk about the "result of an expression" instead of "value of an expression". Thus, "the result of expression X".
- {I don't think there are many models for how (say) signed 16 bit addition works, but feel free to prove me wrong. Nothing obligates you to use my version of "value", but it's inevitably going to be pretty similar if it's going to model "value" semantics in popular imperative programming languages. Isn't "result", at best, just a less-technical synonym for "value"?}
- I question the wide presence of a clear canonical semantic model, as described elsewhere. And yes, it may be a synonym, but so is the awkward "representation" (for certain overload paths). Like I keep saying, trying to shoehorn vague colloquial terms into a rigorous model is going to create oddities. It's far easier to map specifics into fuzz than fuzz into specifics. -t
- {Do you feel there are widely varying interpretations of, say, the semantics of "assignment to a variable" or the semantics of "invocation of a function"?}
- I cannot read people's minds. I used to assume people thought about things similar to how I do, but over the years I realize that's a false assumption. While it's hard to extract the details of various head models (most don't like probing interviews about such), the spoken language used to describe such suggests there is wide variation in the population of programmers. I no longer primarily strive for a "best fit" model in terms of common WetWare, instead focusing on something that uses communication methods widely known among programmers: XML, relational tables, and imperative code. It may rub against some people's WetWare and may be a great fit for others. It's nice to have choices. -t
Regarding the second allegation, I believe most would agree that your model's variables "have" types. Something being nested rarely de-has-ifies nouns in the real world, so why should things be different in your description? Your usage/interpretation of "has" is unnatural.
{Working through examples of expression and statement evaluation, most would quickly discover that there is never a reference to a variable's type, regardless of how "variable's type" is determined. If variables have an explicit "type" property, it will never be used. If a "variable's type" can be determined transitively, it will never be used. Therefore, there's no reason for variables to "have" types, and certainly no reason to model it.}
It does depend on the language; each language/operator can do it differently. Some look at the tag, some parse. The only way to know for sure is empirical testing; a rough illustration follows. Possibly related: SignaturesAndSoftPolymorphism. -t
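- For instance, a rough Python illustration of the two detection styles -- consulting the tag versus parsing the representation. This is only a sketch, not taken from any particular language spec:
 # Two ways an operator might "type-test" -- sketch only
 x = "123"
 print(isinstance(x, int))   # False: consults the type tag carried by the value
 print(x.isdigit())          # True: parses the character representation instead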
{In every popular imperative DynamicallyTyped programming language, it is strictly values to which the interpreter will "look at the tag" -- i.e., examine the type reference property of the value -- or "parse" its representation. I know of no popular imperative DynamicallyTyped programming language in which variables have a "type" property. All popular imperative DynamicallyTyped programming languages have variables without a "type" property and values with a "type" property.}
Values have not been proven to objectively exist outside of specific models.
{Considering they're fundamental to applied ComputerScience and VonNeumannArchitecture, I don't think we have to prove the obvious: They exist in every imperative programming language that has functions, variables, expressions, literals and types. That is what we're modelling, isn't it?}
So you claim. And most of the items you mention objectively exist in the grammar of a language. Values don't. See PageAnchor literal_compare_2.
{Grammar is only one aspect of a language. Semantics are equally fundamental. Without semantics, in most languages you can compose an infinite number of syntactically-correct but un-runnable programs, which proves that semantics are fundamental. Note that every (useful) imperative programming language has I/O. Where does I/O objectively exist in the grammar of, say, C# or Java?}
Yes, semantics are useful, but that does not mean they are objective. There are lots of UsefulLies that humans rely on as communication tools even though they are not rigorously defined. That's life. As far as the objectivity of I/O, search for 'agree on what "output" is (luckily)'.
{Semantics are more than useful; without them a language is just a grammar. A grammar only defines correct syntax. With just a grammar, we could write syntactically correct programs but they couldn't do anything because we couldn't write an interpreter. We could write a parser, but once the parser has parsed our syntactically-correct source code, without semantics we'd have no idea what to do with the AbstractSyntaxTree. Semantics tell us what to do with the nodes in the tree. Semantics are most certainly objective -- they describe precisely what the language statements do. Ultimately, they describe precisely what is done to the real values in the real memory locations on real machines, though we usually employ several levels of indirection so we don't have to jump from language statements to bit-twiddling on specific machine architectures.}
Okay, I'll agree that "semantics are required for the tool to be useful". But that's not the same as clarity or singularity being required. There are multiple models for "do". There are multiple different models that can transform the "I" to the "O" with regard to I/O. You have not proven that your model of "value" is required for all possible accurate models. That proof is your job, not mine. You also keep flipping between 1) suggesting your model is the best fit for current WetWare (common notions) even IF there are alternatives, and 2) your "value" model is the only valid model (or a required part of all accurate models). Please be clear about which position you are defending. -t
{If you claim to have an alternative to conventional semantics, then you're implicitly claiming that you have alternative (simpler? faster? better?) ways to build interpreters. That's what "multiple models for 'do'" implies, whether you're interested in interpreter/compiler implementation or not. If so, can you show a simpler, clearer, whatever way -- than that shown later on this page -- to implement a canonical popular imperative programming language statement like "a = b + foo() * (bar() + 3);"?}
Again again again again again again again again, you have NOT established a canonical "conventional semantics". You just THINK you have because you are stubborn by nature. And again again again, I don't claim my model is objectively simpler. It's designed to use communication and modelling techniques that programmers have to use as part of their job, reducing communication problems. I don't have to borrow obscure gobbledygook from academic books that few read or like. Further, it's designed to optimize human grokking, not computing resources. It would probably be slow and memory-hungry if actually implemented.
{I don't know what "obscure gobbledygook" you're referring to -- any and all terminology I've employed is used in precisely the manner it's used in typical language documentation, introductory ComputerScience textbooks (which are hardly academic), and on-line popular and recognised sources like Wikipedia. I've never mentioned computing resources, so I'm not sure why that matters, nor do I expect anyone to implement a language using XML as an internal representation so "slow and memory-hungry" is irrelevant. It's already quite clear that you're using XML as an illustration because you feel it will be more familiar to your target audience. I have no significant objection to that. I object particularly to the following:}
You are only claiming that. Take some snippets and dissect them word-for-word to prove your claim.
{I'm only claiming what?}
Re: "all terminology I've employed is used in precisely the manner it's used in typical language..."
{Type, value, literal, variable, representation, operator, operand, argument and parameter are all terms I've used. I have used all in a conventional manner. If you believe otherwise, please indicate where I have done so.}
I held my Conventional-Manner-O-Meter up to your model, and it gave it a low score of 2.3. If you cannot afford this device, too bad. I'll sell you one for $699 plus tax. It's a good thing these devices exist, otherwise we'd be stuck with annoying repetitious personal anecdotes.
{Your facetious response does nothing to contradict what I wrote.}
You have NOT verified your "precisely" claim. It's just a claim. I claim snarboos are gloffniks, and that makes it so!
{I invite you to find examples where my use of type, value, literal, variable, representation, operator, operand, argument and parameter differ significantly and conceptually from that in recognised sources.}
I can't falsify matches against vague terms/descriptions, but that doesn't true-ify them either. Prove Nibbobs are not Truglumps! And this "recognized" thing smells suspicious.
{What's suspicious about using, say, Petzold's "Code: The Hidden Language of Computer Hardware and Software" as a reference? It's respected and well-known, i.e., "recognised". All the terms I've mentioned have established definitions. What makes you think they're vague?}
- {Modelling DynamicallyTyped languages with variables that have an explicit "type" property. That only causes confusion, particularly by conflating StaticallyTyped languages with DynamicallyTyped languages.}
- How are you measuring this "confusion"? You are not a good test specimen because you are ingrained with one particular model.
- {You don't think it's confusing to claim that variables have types in languages where variables don't have types?}
- They "have" types in your model ALSO. I reject your de-has-ification due to nesting as already explained.
- {You appear to be ignoring the fact that DynamicallyTyped languages never make reference to a variable's type. In DynamicallyTyped languages, outside of assignment, all references to a variable evaluate to the value it contains. It is the type of the value that is referenced, not some derived type of the variable. When a variable is passed as an argument to an operator invocation like typeName(v) where v is a variable, the operator only "sees" the value in the v, not the v itself. Outside of assignment, whenever a variable name is used, the interpreter retrieves the value the variable contains. The value is then passed to the appropriate operator as an argument. It's the same process whether typeName()'s argument is a variable, a literal, or a function call. In every case, the interpreter simply evaluates the expression to obtain a value; it doesn't matter whether the expression consists of a variable, a literal, a function call, or a combination of these. Otherwise, there'd have to be one typeName() implementation for variables, one typeName() implementation for literals, one typeName() implementation for function calls, and one typeName() implementation for expressions consisting of a combination of operators, literals, and variable references. Obviously, that would be unreasonable, and it's unnecessary. Hence, despite the fact that variables can be conceptually regarded as transitively "having" a type because a variable has a value that has a type, nothing in an interpreter will ever make reference to it. Therefore, it is irrelevant to any model, and so need not be modelled.}
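- {To make that concrete, here's a minimal sketch in Python of a toy evaluator. The names -- Value, evaluate, type_name -- are mine, purely illustrative, and not from any real interpreter. Note that there is exactly one type_name, and the only type it ever consults belongs to a value:}
 # Toy evaluator sketch (illustrative names, not a real interpreter)
 class Value:
     def __init__(self, type_ref, representation):
         self.type_ref = type_ref                # type reference, a la "tag"
         self.representation = representation    # e.g., bits or characters

 variables = {"v": Value("number", "123")}       # a variable merely contains a value

 def evaluate(expr):
     kind, payload = expr
     if kind == "literal":
         return payload                          # a literal denotes a value
     if kind == "variable":
         return variables[payload]               # retrieve the contained value
     if kind == "call":
         return payload()                        # an operator returns a value

 def type_name(value):
     return value.type_ref                       # the only type ever consulted

 # One implementation serves typeName(v), typeName(literal), typeName(foo()):
 print(type_name(evaluate(("variable", "v"))))                  # number
 print(type_name(evaluate(("literal", Value("string", "hi"))))) # string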
- I'm not sure I'm following. YOUR reference model has an attribute called "type=" in the XML version (or T in tuple version). Why have that attribute if nothing references it? And typeName (or equiv.) DOES need to reference it SOMEHOW, even if it has to peel 50 layers of your convoluted onion to get to the "type" part. typeName() can NOT run properly using ONLY your "representation" attribute (unless you stuff it with BASIC-like type markers, which is not documented.)
- A variable is only a container for a value. Values have the type attribute, and it is referenced. Variables do not have a type attribute, because it will never be referenced. The only thing typeName ever references is the type attribute of a value.
- So you claim.
- But it's true. Dissect the semantics of popular imperative programming languages, and you'll see that every reference to a variable -- outside of assignment -- retrieves its value. Otherwise, given a variable p, you'd need one implementation of typeName() to handle variables (e.g., typeName(p)) and another for expressions like typeName(3 + 4), because an expression isn't a variable.
- The only way to "dissect the semantics" is pop open a human head. To me, "semantics" means a human idea in a human mind. It does not mean anything concrete outside of the human mind. There may be representations of mind ideas (including diff representations of roughly the same "idea"), but that's not the same as being the idea. You are not saying anything that means anything clear to me. You might as well be talking through a busted Burger King speaker to Charlie Brown. "Wah Wah, waaaaah, wah...".
- [Your ignorance is a good deal of your problem. Semantics, in general, is not about human ideas in human minds. Certainly, the semantics of human languages are about the mappings between the words humans see or hear (or even feel) and the meanings they give those words in their minds. But we aren't talking about human languages, we are talking about computer languages. Those semantics exist in the language definitions and the language implementations. Neither of those occur in the human head. (The former occurs "on paper" and the latter occurs in the computers.)]
- You are ignorant of science. I don't dispute vocabulary issues in "language definitions". Those are clearly-defined, at least in the scope of that particular language. That's because parsing rules have to be clear enough to be processable by machines. You can't use fuzzy bullshit on (most) machines because they can't be bribed or threatened or socially pressured to "work". But "values" are usually not part of the syntax definitions. And I've already addressed the "implementation" issue many times and won't repeat it yet yet yet again. Semantics is a head thing. SEMANTICS DO NOT EXIST OUTSIDE OF HUMAN HEADS. Representations (projections) of semantics may exist outside, but those are not semantics themselves. They are affected by semantics, but are not semantics. TheMapIsNotTheTerritory. Stop abusing English! You are wrooooonnnngggg!
- {Again, you appear to be conflating the semantics of human language with the semantics of programming languages. The former is about "human heads". The latter is about the effects that executing programming language statements have on the state of computational machines. You are correct that "values" are not part of syntax definitions, but syntax is only half of a complete programming language. The other half is programming language semantics, without which a language cannot function. Values are a fundamental part of programming language semantics.}
- [Shouting out on untruth doesn't make it truer. Semantics is the mapping of language structures to their meanings. This is true regardless of where that mapping occurs. For human languages, this mapping does indeed occur in our heads. For computer languages, this mapping occurs in an implementation of that language. Those occur in the computer, not human heads.]
- This is getting really bazaar. "Semantics" to me means human conceptual "meaning". It's not usually associated with computer processes (outside of AI). We call the process that transforms input into output an "algorithm" or "formula", not "semantics". (And there are multiple fitting algorithms in this case). If you wish to argue that all possible matching algorithms must have a part called "value", then say so. Further, what does that have to do with fitting programmer head notions? Why should I or they care if some algorithm technically must have Part X in it? It may be a triviality victory on your part, but it doesn't do the target audience any good. It's like saying "all cars must conform to the 2nd law of thermodynamics". That may be true, but it won't noticeably make for better car drivers.
- {What's "bazaar" about it? Do you mean "bizarre"? It appears you were only aware of "semantics" in the context of human language, and erroneously applied that interpretation to programming language semantics. It's an easy mistake to make, but even easier to correct now that you understand the distinction between human language semantics and programming language semantics. As for values being necessary, yes, they are. Otherwise, you are forced to provide awkward workarounds in order to answer questions like what does a function return? Or, what does a variable contain? Or, what does an expression evaluate to? There are three fundamental, inescapable components of the semantics of popular imperative programming languages: types, values, and variables. Language categories -- as documented on TypeSystemCategoriesInImperativeLanguages -- derive from the relationships between these foundational components.}
- This discussion is a bizarre bazaar. Semantics means something different in IT? No. You are making up English rules again based on your slanted habits in interpreting vague books. (Traditional) computers process algorithms, NOT "semantics".
- [That is correct, "semantics" doesn't mean something different for human languages and for computer languages. Whatever type of language you are talking about, "semantics" is the mapping of the language constructs to their meanings. Notice that this definition makes absolutely no mention at all about where this mapping occurs. It's simply irrelevant. Human languages (note the adjective "human") are human languages because they are the ones we use to communicate with each other. In order to do this, we must be able to map the language constructs to their meanings in our heads. Computer languages (note the adjective "computer") are computer languages because they tell the computer what steps to perform. As for your algorithm remark, notice that in order to execute an algorithm, the computer has to have that algorithm expressed in a language it understands. In order to understand it, it has to be able to map the language constructs to their meanings. I.e. the semantics of the language the algorithm is written in must exist in the computer.]
- The language a computer "understands" is low-level machine language. The interpreter just follows instructions blindly, it does not see "meaning". Plus, there are multiple ways to map the same input and source code to same output; there is no "canonical" way to do it. One can write a matching interpreter in Fortran, COBOL, and BrainFsck, and the machine instructions will be different for each one. Thus, even if we DO call that processing "semantics" (cough), there is not a singular way to define these "semantics". The qualifying paths are all over the map.
- {The language the computer hardware understands is machine language. We can use machine language to build a virtual machine, which understands a higher-level bytecode. We can use that to build an interpreter run-time, and on top of that we can run a high-level language. In each case, a set of language instructions has a direct effect on the environment in which they run. That effect, that "meaning", is what we call "semantics". There are canonical semantics within language categories. For example, in every popular imperative programming language, we recognise that a statement like "a = foo() + b;" means "invoke operator 'foo', add its return value to the value in variable b, and store the resulting value in variable 'a'". We might simplify it to "invoke 'foo' and add it to 'b' and put the result in 'a'", or even "store 'foo' plus 'b' in 'a'", but these all "mean" the same thing -- to us and to the machine -- regardless what descriptive shortcuts we take. What it doesn't mean is "shift left and dim the lights", or anything else. It's strong evidence of consistent semantics across popular imperative programming languages.}
- [Furthermore, a necessary step in following an instruction blindly is understanding the instruction. If I tell you to "Florgh cista biaojks", can you do that without knowing (on some level) what "Florgh cista biaojks" means? Well, neither can a computer. The fact that there are multiple ways to do things is irrelevant. The fact that there are multiple languages is irrelevant. Uniqueness of these things is neither required nor desired (nor even possible).]
- Without something to scientifically measure, this is either conjecture or word-play, or perhaps both. Computers (used for interpreters) do NOT "understand"; they just blindly follow instructions. You are not saying anything concrete or verifiable or falsifiable above. Computers don't think of concepts like "invoke" the way humans do (and humans may not do it consistently either). To describe how computers allegedly think by using English is an awkward pairing. Computers are not human nor do they process English (for non-AI). What's the point of all this anyhow before we go deeper into Alice's rabbit hole? Again, I think you guys are mistakenly projecting your personal thought patterns into machines and/or other developers. Long-term exposure to a certain model flavor has blinded you to the arbitrariness of it all. If you lived in London all your life and then move to Paris later, you'll tend to view Paris with respect to London.
- [Nice straw man. Nobody said that computers think like humans. Nobody said that computers understand English. Nobody has projected their patterns on to machines or other developers. Finally, you can scientifically measure it. Try running an interpreted language program of your choice on a computer that doesn't have an interpreter for it. Then try it on a computer that does. The computer's failure to run your program in the former case and its success in the latter is all the evidence needed to show that the semantics for that language are indeed in the computer.]
- You DID pretend like the computer understands English above. There's no goddam strawmanning going on here! Remember: '"a = foo() + b;" means "invoke operator 'foo', add its return value to...all "mean" the same thing -- to us and to the machine'. That's YOUR guy's writing. And the failure scenario describes "algorithm", not "semantics". You are using the wrong term.
- [Not only did I not say that, but it doesn't look to me like someone pretending that the computer understands English. It looks like someone using English to describe what a computer does. BTW, I did use the correct term.]
- I interpreted 'all "mean" the same thing -- to us and to the machine' to imply that computers think with or using English. Either way, there's no way to verify with certainty (or anything close) that your English description is accurate and/or unique (canonical, or always required to forecast correctly). You guys MUST find a way to quantify this "machine semantics" to make any scientifically testable hypotheses about it. Further, you need to quantify human semantics of these things if you claim that they match the ideas in the machine's head (cough).
- [Why would you think that? The language construct displayed wasn't English. In any case, you can verify that the description is correct by sampling common dynamic programming languages, looking at how they define that language construct, and comparing the results. (You could even use that to quantify, if you so desired, the results.)]
- We've been down this road before. "Values" are not an official part of most language descriptions/specifications because they are not (directly) in source code, and you see certain words, such as "abstract idea", and interpret them completely differently from how normal people would (or at least how I would; I don't have a formal survey). Authors are lazy (or "economical" may be more fitting). They don't put any more into the written description or clarify more than they have to. They describe the formal parts that have to be described, and describe the rest in vague notion-y language that's good enough, compared to competitors. Deep thought or detailed description is not given outside of the required elements. Much is not intended to be rigorous. And, some rigorous models are not necessarily meant to be canonical, but rather are a UsefulLie model to "explain" something. Alternative models may also do the job. They don't claim uniqueness.
- [Your response doesn't appear to have anything to do with what you're responding to.]
- I guess I am not understanding your statement.
- [We were talking about your mistaken belief that we had claimed, "Computers think with or using English." We had branched out a bit on how to verify if our description was correct. You responded with a rant about "values", rigor, and uniqueness. I don't see a connection.]
- {I don't see the connection either. Whether values are "official" or not in language documentation (what does "official" mean?), they certainly appear, and what is language documentation if not a description of both syntax and semantics? I skimmed language documentation for PHP, Python, and Ruby. Reference to "value" or "values" appears frequently in all three, and is consistent with the way I've used the term.}
- That's because it's an overloaded term that can cover multiple variations on the "theme" of "values". Like I said elsewhere, a side-effect of such overloading is that when one tries to apply it to more rigorous models, it's often not practical to use the 2+ versions of the overloaded term, and one is faced with tradeoff choices. I explained the reasoning behind the trade-offs I've made, and as usual, you disagreed, and it cannot be settled without better science on actual programmer WetWare, and thus we are stuck at an AnecdoteImpasse. And I also gave real-world instances where other variations of "value" are found, such as user input forms, and "V" in CSV files. "Value" is sloppily used for multiple/wide purposes in colloquial-land. That's just the way it is.
- {Regardless what informal uses there may be for "value", it can be rigorously used in a model merely by clarifying the definition you're using. Similarly, "function" is often used in a loose manner to mean "an operator with a return value", but if you mean "a mathematical function", it's sufficient to say so.}
- [Agreed. It's a flawed argument to claim a term is vague just because it's overloaded. It's also a mistake to apply a particular definition of an overloaded term to all uses. Not fuzzy, not a trade-off, just a mistake by the one applying the wrong definition.]
- How can one do the second ("all uses")? It's the same word.
- {Indeed. No benefit comes from avoiding "overloading" by renaming (for example) "function" to "named calculator" or some such, especially with neither a definition nor a statement to the effect that you'll use "named calculator" to mean "a mathematical function". Yet, Top, that's exactly what you've been doing with your use of "tag", "anonymous variable", "hidden variable", and so on.}
- It's not "wrong". I'll avoid the discussion about whether overloading makes something "vague". Sounds like pages of LaynesLaw battles in the making. You use "representation" when most people would use "value" if given a choice. However, you used "value" for another purpose, and so it would be unwise to use it in both places because it would confuse the reader. Thus, YOU made a tradeoff. And I don't want the so-called nested structure (value inside of variable). It adds confusion and complicates the primary data structure representation mechanism of the model. My trade-off decisions are tied to and consistent with my assumptions about the target audience WetWare. If you want to change my mind, you would need to challenge my assumptions, not computations based on those assumptions. And you would need more than your personal anecdotes. Otherwise let the AnecdoteImpasse stand and stop mucking with a stuck horse. If you claim "value" has a clear-cut canonical definition, you need to establish that beyond personal interpretations of vague English passages. Please start another topic, such as ProofValueIsClear? or something. This topic is already too long.
- [It's no wonder you have a hard time understanding things, since you think that using the wrong definition isn't wrong.]
- I don't know what you are talking about. I don't see any clear-cut wrong-ness on my part. Just you forcing your brand of interpretation as a universal truth. Please clarify.
- {There are standard definitions in ComputerScience and SoftwareEngineering. You mainly don't use them, but when you do, you use them incorrectly. For example, your use of "anonymous variable" to refer to a value.}
- Like I keep keep keep saying, some are only semi-standard and overloaded, and so we need to make tradeoffs for a more formal model, leading to oddities. This also goes for your awkward "representation".
- {Like I keep saying, if a term is overloaded, simply make clear what version of it you're using. That addresses all the apparent problems with using overloaded terminology, and causes none of the problems with coining new terminology. As for a "more formal model", the best solution is to use a formal notation -- which does not mean a programming language with side-effects, because formal provability of such is enormously difficult -- because that eliminates the ambiguity of typical human language.}
- I do try to make it clear, but English is limited and so I try to show instead of just tell. And suitable "formal notation" does not exist yet for the target audience. Your try failed. Thus, I use imperative algorithms and XML, something they have to know to get a programming job. And "formal provability" is not one of my primary goals because regular programmers don't care about that. If you find a way to make it trivial and approachable, I'll re-evaluate, but I won't bet on that.
- {My "try" that apparently "failed" was not intended for your target audience but for a WardsWiki target audience. I think it's rather condescending, bordering on insulting, to assume that the former is not as capable as the latter. I think it's especially condescending to assume that your target audience will be confused by clarifying "value" or "type reference", but will immediately understand "anonymous variable" or "tag". Or, to assume that XML will be clear and some simpler notation will not.}
- If you are not targeting regular developers, then why complain about a model that does? You don't even seem to have much contact with regular developers, being that you have odd assumptions about how they think and what they think about.
- {I am in almost continuous contact with "regular developers", and formerly had a number of them working for me. I still work with "regular developers" on a daily basis. Your continuous deprecation of their abilities says more about you than it does about them.}
- PageAnchor wetware_ban_7
- I didn't deprecate their abilities, YOU did. I don't expect them to know arcane notations to do their programming job. That doesn't make them dumb; it just means they don't meet YOUR expectations that they should know such notations. Anyhow, I'm satisfied to LetTheReaderDecide if their needs or personnel fit the profiles stated by both sides. I don't want to bicker that point anymore; we just go around in circles replying the same replies over again in a grand AnecdoteImpasse. Repetition may work on your mother, but not me. You got that? No more bickering about the nature of typical developer WetWare unless you have new evidence. And I suspect your academic connections result in you more likely to encounter academic-centric developers. You may not even be aware of this "life filter", and perhaps that applies to my life also. I only report what I observe. I encounter a lot of what I call "street programmers" of marginal educational background. "Can you read and fix our shop's code" is the primary hiring criterion (and not piss people off). The people making the hiring decisions usually do NOT have a CS degree, such that "Can you read and fix our shop's code" is their key ruler. If my personality or life choices tilt my observations or encounters a certain way, I'm not aware of it, but that can happen to anybody.
- {That's overtly insulting. E.g., "'street programmers' of marginal educational background". Your tacit assumption appears to be that your "street programmers" (whatever they are) aren't capable of understanding anything more abstract than XML. That borders on offensive. If you shared this notion with your colleagues, how many of them would become angry?}
- I don't know if they are capable. It's more about their past exposure to such styles of documentation rather than their ability. I'm trying to leverage their existing experience to make digesting the model easier. It's an economic decision rather than a declaration of their mental abilities.
- {If you "don't know if they are capable", why are you insisting on a model that assumes they are not?}
- It's my estimate it would take many of them longer to grok your approach (assuming it's even clear, which I'm not ready to agree) even IF they are capable. Thus capable versus not capable is not really the issue; it's the AVERAGE grokking time that matters the most. And yes, it is an only estimate, and yes you'll probably disagree with my estimate; but without real surveys, it's only an AnecdoteImpasse.
- {I think it would be interesting to revisit this after you've had a chance to demonstrate your model to your colleagues. Let us know how that goes.}
- {Use of PrivateLanguage terminology like "hidden variable", or "anonymous constant" or "tag". That will only result in confusion. For the sake of simplicity and reducing the possibility of confusion, it is always better to clarify existing terminology (or disambiguate your use of it) than introduce new terminology.}
- You should be able to predict my reply to this by now. I've explained like 15 times. Hint: "tradeoff".
- {If you're going to allege "tradeoffs", then you should prove their validity. You have not done so.}
- You haven't proved your choices are objectively better either. We have an AnecdoteImpasse on the target audience WetWare. Further bickering over what regular programmers think about is probably pointless without real studies and survey field work. Stating your view of their WetWare repeatedly is not going to convince me, and vice versa with my assessments of WetWare.
- {I find it unlikely that undefined, novel terminology like "hidden variable", "anonymous constant" or "tag" could possibly be less confusing than defined, familiar terminology like "value" or "type".}
- Tradeoffs. "Representation" is also ugly, and a two-layer structure complicates the primary part of the model.
- {As I've said before, if you're going to allege "tradeoffs", then you should prove their validity, or at least demonstrate them. Isn't considering "representation" to be "ugly" an aesthetic judgement that has no role in what should be a logical model? Likewise, isn't it illogical to avoid describing a variable as what it is -- a container for a value, or a time-varying association between a name and a value -- because of an aesthetic concern over apparent (and unproven) complexity?}
- Double standard. You haven't scientifically proven your characterization of "value" either.
- {I don't have to -- it's documented in innumerable ComputerScience and computer architecture textbooks. Please answer the questions.}
- Bull. You hallucinate stuff that's not in there, and you hallucinate about some canonical semantics that does not objectively exist. I'm not going to argue with hallucinators anymore; it's pointless.
- {I'll ignore your AdHominem attack. Have you checked ComputerScience and computer architecture textbooks?}
- I'm targeting typical programmers. Most don't know or care how interpreters are built. IIFFF there is a clear definition buried in those books, typical programmers will NOT have seen and/or remembered it and thus will NOT take their "semantics" (cough) from them. Thus, it's a moot point even if those sources had the golden goblet definitions we seek. Further, the architecture is built around MACHINE EFFICIENCY, not necessarily matching human notions.
Footnotes
[1] Fully understanding the guts of interpreters or computers is only part of what makes good programmers good. Understanding team WetWare, getting along with people, understanding user and org needs and WetWare, having discipline, etc. also count.
Note that the phrase I used was "weak programmers", not "weak developers". A weak programmer can be a good developer, for precisely the reasons you mention.
My point is that being slow with academic-centric materials does not make them useless or unproductive as programmers/developers. They may not be the best person to answer the most technical of questions or tricky real-world puzzles, but that's only part of the job. Your implication was that if they don't absorb your pet materials, they should find a different career. It smells of StuckOnPetFactors. Some of the best teams I've been in have a variety of minds such that each collaborates and contributes based on his/her personal strengths. It's comparable to a basketball team where a variety of talents work together to target the other team's weak-spots. You have short but quick guys (point guards), and slow but muscled giants to grab rebounds and defend close to the basket. You also have those who are not so athletic nor tall, but are dead accurate shooters from the outside. And you have skilled defenders who are sub-par shooters, but can stop the best guy on the other team. On rare occasions some players can fill two or three of these roles, and are the superstars of the sport. But they are also expensive and rare.
My descriptions are not "pet materials", they're simply how popular imperative programming languages actually work, are typically documented, and are generally understood by strong programmers. If your model was obviously simpler, clearer, and more effective than conventional explanations of types, variables and values, then it would be worthwhile. I see no evidence that it's obviously simpler, clearer, or more effective than conventional explanations. What would unquestionably benefit weak programmers is better-written conventional explanations, not whole new models that differ from conventional explanations without being any simpler or easier to understand.
I've already discussed why actual implementation is not relevant and I won't reinvent those arguments here. I believe them to be sound and rational, and if you disagree, so be it. Some debates are never settled despite 200 repeats. I've had political debates like that with relatives. Please stop bringing it up unless you enjoy repetition. And being a "clearer" model may depend on WetWare. Obviously it's a poor fit for you. Even if we ignore "simplicity", my model takes a very different angle such that if one presentation approach fails for an individual, perhaps another will work. Some believe intelligence level is "linear" in that either one is smart or stupid or something in between on a linear continuum. You hint at such a view when you divide programmers into "strong" and "weak". This is a poor model of intelligence in my experience because some people just need input (lessons) in the right format to click for them. I came from a family of professional artists and am a visual thinker. I like visual and "mechanical" models. Many in IT are also "linguistic thinkers"; they like to mentally process symbols and relationships between symbols. That's an over-simplification, but it's a rough pattern I see among techies, and the visualites often don't see eye-to-eye with the linguites. My brother is also a visualite. He started out in electronics engineering in school, but switched to mechanical engineering because he had trouble relating to the non-physical nature of electricity and its "odd" math. I was really good at math problems that had a (known) visual counterpart, but clumsier at those which didn't. (There are also "social thinkers", who are more often in sales. The best sales people are "social rocket scientists", one could say.) -t
I accept that maybe a completely different model will work for some people. My problem with your model is that it isn't completely different. It's almost the same as the conventional explanations, differing only in small (but important, maybe even crucial) details. That can only lead to confusion.
Only if they are first indoctrinated with your style of model. A side note can show the nested XML version. I don't want to keep repeating the nested version; it's bloat:
State 1: Sample sample blah blah blah
.
. <var name="foo" type="number" value="123">
.
State 2: Sample sample blah blah blah
.
. <var name="foo" type="string" value="123">
.
State 3: Sample sample blah blah blah
.
. <var name="foo" type="string" value="moof">
.
State 4: Sample sample blah blah blah MultiValued // [not added by -top]
.
. <var name="foo" type="string">
. <value="boof">
. <value="woof">
. <value="doof">
. <value="moof">
. </var>
. // You ruined my example, guys. This was NOT about arrays. --top
. // SEE toward the bottom for a restoration of the original. --top
. // By the way '<value="moof">', that's not valid XML.
Versus:
State 1: Sample sample blah blah blah
.
. <var name="foo">
. <value type="number" representation="123">
. </var>
.
State 2: Sample sample blah blah blah
.
. <var name="foo">
. <value type="string" representation="123">
. </var>
.
State 3: Sample sample blah blah blah
.
. <var name="foo">
. <value type="string" representation="moof">
. </var>
.
State 4: Heterogeneous MultiValues
[ I did NOT add this 4 thing. I don't know what it's for or why it's here. --top]
[ "State" was meant as a sample step, not "example number". --top]
[ Please be careful about attribution. --top]
[ And a good many arrays don't allow mixing explicit types, which ]
[ below is poorly factored for. --top]
[ A good many do allow array elements of mixed types, for which the following is entirely reasonable. ]
{I hate to do this, but in fairness to those who do not like heterogeneous arrays, I have added a State 4 homogeneous array above.}
.
. <var name="foo">
. <value type="number" representation="123">
. <value type="string" representation="123">
. <value type="string" representation="moof">
. </var>
.
(There is no state 4 for the first although it can represent homogeneous MultiValues)
(Dots to work around wiki formatting glitch.)
The first is much less clutter and easier to read to my WetWare. I optimized the structure for the documentation and usage in the model kit, NOT some future purpose (YagNi). Most readers won't give a shit about actual interpreter building. Why should I burden 95% for the sake of 5%? That doesn't seem rational to me. Spock kicks you. I suspect you are again going to argue for formula/tuple notation instead to "solve" the bloat. But I won't repeat that fight here. I'm sticking with XML and if you don't like it, suffer in the twisted anguish you deserve for being stubborn and detached from the field.
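- For what it's worth, here's roughly how the two shapes render as plain Python dicts (field names illustrative only); note the extra layer in the nested form:
 # The two shapes as plain Python dicts (sketch; field names illustrative) --top
 flat   = {"name": "foo", "type": "string", "value": "moof"}
 nested = {"name": "foo",
           "value": {"type": "string", "representation": "moof"}}
 print(flat["type"])              # one hop
 print(nested["value"]["type"])   # two hops through the extra layer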
But variables in DynamicallyTyped languages don't need types and don't benefit from having them. Try working through execution of the statement "a = b + foo() * (bar() + 3);" using both approaches. Assume foo() and bar() return numeric values and b contains the string "Result: ".
- I agree dynamic languages don't "need" types, but that's a language design decision, which is off-topic. One could argue that tag-based polymorphism is quicker than parse-based polymorphism though from a performance stand-point.
- That's not what I wrote. Dynamic languages do not have variables with types. They have values with various types (Category D1), or (Category D2) a single (string) type.
- Bull! That's specific to your model only, not a universal objectively verifiable truth. And nesting does not de-have-atize "types". A jury of peers would likely agree that nesting does not de-have-atize inanimate objects. I'd bet money on it. We've been over this already.
- Working through examples of expression and statement evaluation, a "jury of peers" would quickly discover -- as is demonstrated at the bottom of this page -- that there is never a reference to a variable's type, regardless of whether a "variable's type" can be determined directly from a property or transitively from the value it contains. The type of a value is frequently used, but the type of a variable is never used. Therefore, if variables are given an explicit "type" property, it will never be used. If a "variable's type" can be determined transitively, it will never be used. Therefore, there's no reason for variables to "have" types, and certainly no reason to model it.
- What is "below"? A specific execution model? It's one model among multiple possibilities, not a universal truth. Colloquially, programmers OFTEN refer to a "variable's type". Stop mistaking your pet head model for universal truth.
- No, the below is not a "universal truth", but I'm not sure "universal truth" is a phrase that applies here. What's below is a demonstration of expression evaluation and variable assignment based on popular imperative programming language semantics. I'm not aware of any popular imperative programming language that deviates from it. If you feel your model is simpler, clearer, produces exactly the same results, and requires variables to have a "type" property, I encourage you to demonstrate it on my "a = b + foo() * (bar() + 3);" example -- following the same style I've used, in order that we may compare the two approaches on an equal footing.
- That's YOUR representation of "semantics". It's not anything objectively tied to objectively observable properties of interpreters as programmers can see them. And why must it match your model? That's a rigged constraint. Note that something resembling hidden variables could fill a similar role to your "values". If modelling a full interpreter as a lab-toy, I'd probably just use an "object" structure in which attributes such as "name", "type-tag", "value", "isReadOnly" (constant), etc. can be used or NOT used as needed. It would be a dynamic structure. And I wouldn't have to classify anything beyond "object". If you need an object to have attribute X, then you simply add and use attribute X. Simple simple. I wouldn't have to get my classification panties in a knot like you love to do. The trickier decisions would be the use of containment/nesting, such as 1-to-N relationships. -t
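- A rough Python sketch of that attributes-as-needed structure (names purely illustrative, not a spec):
 # Attributes-as-needed "object" structure (sketch; names illustrative) --top
 obj = {"name": "foo"}            # start with only what's needed
 obj["type-tag"] = "number"       # add a tag if this model step uses one
 obj["value"] = "123"
 obj["isReadOnly"] = False        # a "constant" would just flip this
 print(obj.get("type-tag", "untagged"))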
- That's not my "model" but a conventional view of semantics in popular imperative programming languages. If you reject it, an overwhelming majority of programmers will simply note that you're being wrong.
- Most don't use or care about your notation. The better-selling books use AbstractSyntaxTrees to represent expression reduction, I would note. The intermediate parts would then be called "nodes" or "leaves".
- I think you've misunderstood something. My "notation" is deliberately equivalent to what I understand you've attempted on TopsTagModelTwo. I've used imperative pseudo-code to illustrate, in a step-by-step fashion, exactly what an interpreter does to process language statements assuming they've been parsed already. In other words, I've done the "interpreter modelling" that I recall you mentioned before. What I've shown is what would be the result of traversing an AbstractSyntaxTree, performing the appropriate operation at each node. Perhaps some comments in the code would help clarify it:
// Illustration of a = b + foo() * (bar() + 3);
// execute bar(); the returned value is v1
v1 ← invoke('bar')
// convert the literal "3" in the source code to value v2
v2 ← convert_numeric_literal_to_value("3")
// add v1 and v2; the result is v3
v3 ← add(v1, v2) // use numeric addition for +, because v1 and v2 are numeric (i.e., v1(t) = v2(t) = numeric).
// execute foo(); the returned value is v4
v4 ← invoke('foo')
// multiply v4 and v3; the result is v5
v5 ← multiply(v4, v3)
// put the value in variable 'b' in v6
v6 ← retrieve_from_variable('b')
// convert numeric value v5 to string value v7
v7 ← convert_to_string(v5) // use concatenation for + because v6 is a string (i.e., v6(t) = string), but first convert v5 to a string
// concatenate v7 onto v6; the result is v8
v8 ← concatenate(v6, v7)
// store value v8 in variable 'a'
assign_to_variable('a', v8)
.
- The named values are for illustration, of course. In an implementation, they don't have names -- they're often just pushed to and popped from a stack.
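- For instance, here's a bare-bones operand-stack rendition of the same steps in Python. It's only a sketch: plain ints and strings stand in for (representation, type) values, and the foo/bar results are hard-coded to match the example below.
 # Operand-stack rendition of the steps above (sketch; values simplified)
 stack = []
 stack.append(7)                       # v1: result of invoke('bar')
 stack.append(3)                       # v2: the converted literal "3"
 v2 = stack.pop(); v1 = stack.pop()
 stack.append(v1 + v2)                 # v3: add(v1, v2) -> 10
 stack.append(3)                       # v4: result of invoke('foo')
 v4 = stack.pop(); v3 = stack.pop()
 stack.append(v4 * v3)                 # v5: multiply(v4, v3) -> 30
 stack.append("Result: ")              # v6: retrieve_from_variable('b')
 v6 = stack.pop(); v5 = stack.pop()
 stack.append(v6 + str(v5))            # v7, v8: convert, then concatenate
 print(stack.pop())                    # assign_to_variable('a', v8) -> Result: 30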
- See comments below Example 376.
- Yes, it's true that programmers often use the phrase "a variable's type". There are two reasons for that: 1. In statically-typed languages, every variable does have a type. 2. In DynamicallyTyped languages, variables don't have types, but values have types, so "a variable's type" doesn't refer to some property of a variable. It's simply a shorthand way of referring to the fact that the appearance of a variable in an expression always evaluates to its value, and its value has a type.
- In dynamic languages, one can use operators such as PHP's getType() to determine a "variable's type". One doesn't have to go through some middle-man structure that resembles your nested model. No objectively observable feature of typical dynamic languages needs the middle-man structure. Your model's choice to use it is arbitrary (or perhaps tied to efficiency or to leverage existing interpreter building kits). Granted, one may be able to use getType() on an expression, but there are different ways to model what's happening under the hood to produce that.
- getType($x) doesn't do anything with $x, and it's no different from getType(foo()) or getType(3) or getType(3 + foo()). In all cases, the expression passed to getType() as an argument is evaluated to obtain a value. getType() returns a string containing the name of the type of the value. So, getType($x) doesn't determine the variable's type, because there's no such thing. The expression $x is evaluated -- i.e., the value in $x is obtained -- and the value passed to getType().
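- Python behaves the same way, and it's directly checkable (here, Python's built-in type() plays the role of getType()):
 # type() receives whatever value its argument expression yields (real Python)
 x = 3
 print(type(x).__name__)       # int -- the type of the value retrieved from x
 print(type(3 + 4).__name__)   # int -- same implementation, expression argument
 print(type("3").__name__)     # str -- same implementation, literal argument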
- You claim there is a middle-man named "value". It's not objectively observable though. Using most of your model even, we could say that every expression returns two things: a type (indicator), and a representation. We don't have to package those two things together.
- "Value" is most certainly objectively observable, by deduction: What does a variable contain? What does an expression evaluate to? What does a literal denote? What does a function return? And, yes, expressions do return two things -- a type and a representation. We certainly don't have to package them together, but they appear together so frequently that it's convenient to treat them as a unit, which conventional language semantics terms a "value".
- No, because "value" is overloaded. If we use it for one overloaded purpose in our model we cannot use it for another overloaded purpose. I've explained that already like 4 times, but you don't listen/remember. Why are you so forgetful? Plus, I don't want to complicate the primary structure used in the model to fit some notions of "value". The worth of the simplification in the model outweighs the goal of matching some colloquial usage of the term. I am rationally weighing the tradeoffs and giving the reasons for the weights. You ignore the counter arguments and just repeat your side of the story over and over with slight mutations each time.
- There is a repeated discussion pattern here, in which you claim there are no values and I point out that there are. Then you claim "value" is overloaded so it can't be used, at which point I note that it's a simple matter of clearly distinguishing values from literals. That usually turns into some quibble over whether or not literals denote values, at which point the debate starts over at whether there are values or not.
- [I will inform you that the whole confusion arises from a lack of understanding of "strings", which is to say quoted literals. But for that you have to take MarkJanssen's ComputerScienceVersionTwo. Proof: make a language that auto-quotes non-lexed characters found in source code, such that A=abc would turn abc into a [quoted] string literal.] --AnonymousDonor
- You will inform me? That sounds very helpful, but I'm afraid I don't understand your comment. "Auto-quotes non-lexed characters ..."??? What does that mean? What are "non-lexed characters"?
- When your language goes to translate your program, it has to parse and lex your source text, putting the characters into meaningful language elements. I was suggesting inventing a language that takes strings of characters that aren't otherwise numbers or operators and automatically turns them into quoted strings. You'll see that there's a difference between "403" (a quoted, lexically-defined element, delimited on both sides by the ASCII double-quote character) and 403 without quotes -- UNLESS you, the language designer, explicitly define what it means. --AnonymousDonor
- You're proposing a language that turns strings into strings? Sorry, not following you. Can you give an example?
- You are inconsistent. Objectively observable "values" (overload #2) do not match your model, and your internal "values" (overload #1) are model-specific.
- How am I inconsistent? What are "objectively observable 'values' (overload #2)", and how do they not match my model? As for my "internal 'values'", they're quite consistent with use throughout ComputerScience. See, for example, http://en.wikipedia.org/wiki/Integer_(computer_science)
- You said output has values, and then appeared to backtrack. Excluding alleged clear-cut values in I/O, values are not an objective thing, except per model. And your Wikipedia link describes an abstract model mostly for the head, lacking sufficient precision for our needs. Plus, we are talking about "values", not "integers". Why bring integers into this?
- Where did I backtrack? Output does have values, denoted by literals, and they are most certainly objective. They are a fundamental component of mathematics, computer hardware, and programming languages. The "integer" type is a set of values and associated operations that is familiar to every programmer. Thus, it provides a quintessential example of values.
- PageAnchor literal_compare_2
- As a fuzzy human head notion, not as a rigorous concept, at least not in a way that average programmers can use. "Literals" are clearly defined by language grammars, such as via BackusNaurForm or syntax charts per language. Most "algol-style" dynamic languages have something called a "literal" defined that way (and usually fitting a general syntax pattern). It's clear enough that a digital machine can identify such parts. There is no equivalent for "value". Again again again, I don't want to model head notions because every head is different. I want to model something objective. "Value", as you (hazily) describe it, does not qualify so far. (My model(s) may use certain human notions internally, but I don't claim them to be external truths. They are model-specific truths.)
- As noted above, a feature's absence from language grammar does not mean it is absent from language semantics, and a grammar without semantics is insufficient to define a usable language. Values do not appear in language grammars, but neither does I/O in most popular imperative programming languages. Yet, I/O is indubitably there.
- Grammars are clearly defined, but your "semantics" is not, at least not in a way that average programmers can objectively test the existence of. (I'm not dismissing the idea that a PhD can perhaps generate a mathematical proof of objective existence, but mortal programmers are not going to understand it or be able to use it.) And I don't know if your comparison to "output" is useful. Whether "output" is clear or not is moot if it's not subject to dispute in a given context. In other words, we agree on what "output" is (luckily), but that doesn't mean "output" is clear/unambiguous. Perhaps I should say, "things we agree on" instead of "objective". Ultimately, EverythingIsRelative, but that shouldn't stop us from creating mutual UsefulLies.
{It would fail. Typed or not, adding "Result: " to any number is difficult to achieve. I obviously don't understand what you are getting at.}
Assume the + operator is overloaded, and can represent either numeric addition or string concatenation depending on the types of its operands. This is found in Javascript, Java, C#, Python and many other languages.
{It would fail, overloaded or not. What is it you are trying to say?}
Why would it fail? Imagine that foo() returns 3, bar() returns 7, and variable b contains the string "Result: ". Let's examine how a real interpreter might process it:
a = b + foo() * (bar() + 3);
a = b + foo() * (7 + 3);
a = b + foo() * 10;
a = b + 3 * 10;
a = b + 30;
a = "Result: " + 30;
a = "Result: 30";
{You are right. It would work, but it does demonstrate the folly of overloading.}
Perhaps, but that's a language design decision. The concerns here are about language implementation and modelling.
We could model it like this, where v1 .. vn are values such that v=(r,t) where r is a representation (e.g., a sequence of binary digits) and t is a type reference.
v1 ← invoke('bar')
v2 ← convert_numeric_literal_to_value("3")
v3 ← add(v1, v2) // use numeric addition for +, because v1 and v2 are numeric (i.e., v1(t) = v2(t) = numeric).
v4 ← invoke('foo')
v5 ← multiply(v4, v3)
v6 ← retrieve_from_variable('b')
v7 ← convert_to_string(v5) // use concatenation for + because v6 is a string (i.e., v6(t) = string), but first convert v5 to a string
v8 ← concatenate(v6, v7)
assign_to_variable('a', v8)
Note that the only operations we need on variables are value ← retrieve_from_variable(name) and assign_to_variable(name, value). Variables are merely an association between a value and a name; we don't need variables to have a "type" property. Only values (v1 .. vn) need a "type" property v(t), which we use to decide (as shown in the above example) whether to use add(value1, value2) or concatenate(value1, value2).
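Before moving on, here is a minimal runnable sketch of the above in JavaScript. It is an illustration under the assumptions above, not a copy of any real interpreter; names like makeValue, plus, and invoke are invented for the sketch:

// A value is a (representation, type) pair; makeValue is an invented helper.
function makeValue(r, t) { return { r: r, t: t }; }

// The only two operations ever needed on variables:
var variables = {};
function retrieve_from_variable(name) { return variables[name]; }
function assign_to_variable(name, value) { variables[name] = value; }

// Overloaded +: the operand values' types choose between addition and concatenation.
function plus(v1, v2) {
  if (v1.t === "string" || v2.t === "string")
    return makeValue(String(v1.r) + String(v2.r), "string");
  return makeValue(v1.r + v2.r, "numeric");
}
function multiply(v1, v2) { return makeValue(v1.r * v2.r, "numeric"); }

// Hypothetical foo() and bar() from the example:
function invoke(name) { return makeValue(name === "foo" ? 3 : 7, "numeric"); }

// a = b + foo() * (bar() + 3);
assign_to_variable("b", makeValue("Result: ", "string"));
var v1 = invoke("bar");
var v2 = makeValue(3, "numeric");     // from the literal "3"
var v3 = plus(v1, v2);                // numeric addition: 10
var v4 = invoke("foo");
var v5 = multiply(v4, v3);            // 30
var v6 = retrieve_from_variable("b");
var v8 = plus(v6, v5);                // concatenation: "Result: 30"
assign_to_variable("a", v8);
// At no point did anything ask for the "type of a variable".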
In a real implementation, it's not uncommon for a stack to be used to hold values so the above might be implemented as:
push(invoke('bar'))
push(convert_numeric_literal_to_value("3"))
push(add(pop(), pop()))
push(invoke('foo'))
push(multiply(pop(), pop()))
push(convert_to_string(pop()))
push(retrieve_from_variable('b'))
push(concatenate(pop(), pop()))
assign_to_variable('a', pop())
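Under the same assumptions, the stack-based variant is equally mechanical. A sketch, reusing makeValue, plus, multiply, invoke, and the variable operators from the sketch above, with the explicit convert_to_string step folded into the overloaded plus:

// Sketch only; depends on the definitions in the previous sketch.
var stack = [];
function push(v) { stack.push(v); }
function pop() { return stack.pop(); }

push(invoke("bar"));
push(makeValue(3, "numeric"));   // from the literal "3"
push(plus(pop(), pop()));        // 7 + 3 = 10
push(invoke("foo"));
push(multiply(pop(), pop()));    // 3 * 10 = 30
push(retrieve_from_variable("b"));
push(plus(pop(), pop()));        // arguments evaluate left to right, so the
                                 // string on top of the stack is the left operand
assign_to_variable("a", pop());  // a = "Result: 30"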
If we want to be pedantic, we can note that add(), multiply() and concatenate() are almost certainly low-level operators that expect and produce representations rather than values (as defined here). So, in the interest of stricter accuracy we can add a value ← create_value(representation, type) operator and re-write one of the above examples as:
// Example 376
v1 ← invoke('bar')
v2 ← convert_numeric_literal_to_value("3")
r1 ← add(v1.r, v2.r) // use numeric addition for +, because v1 and v2 are numeric (i.e., v1.t = v2.t = numeric)
v3 ← create_value(r1, numeric)
v4 ← invoke('foo')
r2 ← multiply(v4.r, v3.r)
v5 ← create_value(r2, numeric)
v6 ← retrieve_from_variable('b')
v7 ← convert_to_string(v5) // use concatenation for +, because v6 is a string (i.e., v6.t = string), but first convert v5 to a string
r3 ← concatenate(v6.r, v7.r)
v8 ← create_value(r3, string)
assign_to_variable('a', v8)
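As a sketch under the same assumptions, the unbundle/rebundle discipline is a small change in JavaScript: the low-level operators (here invented as add_r, multiply_r, concatenate_r) take and return bare representations, and create_value re-attaches the type. This reuses invoke and the variable operators from the earlier sketch:

// Sketch only: low-level operators act on bare representations;
// create_value re-bundles a representation with a type reference.
function create_value(r, t) { return { r: r, t: t }; }
function add_r(r1, r2) { return r1 + r2; }          // numeric representations only
function multiply_r(r1, r2) { return r1 * r2; }
function concatenate_r(r1, r2) { return r1 + r2; }  // string representations only

var v1 = invoke("bar");                                      // 7
var v2 = create_value(3, "numeric");                         // from the literal "3"
var v3 = create_value(add_r(v1.r, v2.r), "numeric");         // 10
var v4 = invoke("foo");                                      // 3
var v5 = create_value(multiply_r(v4.r, v3.r), "numeric");    // 30
var v6 = retrieve_from_variable("b");                        // "Result: "
var v7 = create_value(String(v5.r), "string");               // convert_to_string
var v8 = create_value(concatenate_r(v6.r, v7.r), "string");  // "Result: 30"
assign_to_variable("a", v8);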
You know, those look a lot like hidden variables to me. And "v6.t" suggests a flattened structure instead of your nested structure, which would be addressed more like "v6.value.t". -t
If they look a lot like "hidden variables" to you, then it seems you're using the term "hidden variable" to mean "value". That's fine, but nobody else is going to recognise your use of the term "hidden variable" unless you're careful to point out that you say "hidden variable" where most others say "value". And "v6.t" is correct, because v6 is a value so v6.t is the value's type. Keep in mind that the above is an illustration, rather than a copy of some implementation. The values that are numbered for illustration purposes above would not have names or numbers in any real implementation; they'd most likely (as is shown above) be pushed to and popped from a stack.
But that's for machine efficiency. A given model doesn't need to care about machine efficiency, especially if it's for humans to digest instead of (only) machines. And they look and act more like variables to me THERE than "values". Your own examples and descriptions of them are helping me demonstrate my point. Thank You. This is especially telling: "The values that are numbered for illustration purposes above would not have names or numbers in any real implementation."
Yes, it is telling: It clearly distinguishes the model of language semantics from an application of the model. The former does not require names to be associated with values. In the model (and in conventional semantics), variables and constants have names, do not have types, and contain values. Values do not have names, and they have types. This is the conventional "model" that we use to understand popular imperative DynamicallyTyped programming language semantics, and it is sufficient to create illustrations like that shown above. In illustrations (like the above) that use the model, we might wish to label the values under consideration, just as we might wish to number the individual lines of pseudo-code. Obviously, the line numbers are not part of an interpreter at run-time! Likewise, the named values are not components of the model, and they certainly aren't components of what we're modelling -- they are only notational conveniences that we might use to illustrate how the model works in a particular scenario. In another scenario, I might not label values. Hence, values do not possess a "label" or "name" property.
However, the main point of the illustration above is not whether elements of an illustration of the model have names and types, but whether what is being modelled -- i.e., variables -- has both name and type properties. The above clearly shows that the only operations performed on programming language variables -- e.g., 'a' and 'b', above -- are to retrieve a value or store a value. Thus, there is never a need to obtain the "type of a variable", and so there is no need to model programming language variables as having a "type" property.
- I dispute your use of the word "have", as already described. If we ignore nesting depth, then vars in D1 languages DO need to "have" a type indicator of some sort. In colloquial land, nesting does not de-have-itize two parts, as the box/egg/ring example illustrated. You violate this with your awkward usage. I see no rational reason to use "have" the way you are using it.
- Actually, no, "vars in D1 languages" do not need to "have" a type indicator of some sort. That's precisely my point. In D1 languages, there are only two operations that are ever performed on variables. The first is to retrieve a variable's value given its name. The second is to store a value given a variable's name. There are no other operations on variables in D1 (or D2) languages.
- Further, the above does not show the specifics of assignment, or assignment-like operations. My model does show that down to the detail of where XML structure attributes are changed via accessors to the XML structures, as seen in [i forgot the topic name].
- Assignment is trivial -- a value is associated with a variable name so that a subsequent retrieval given the variable's name returns the value. The two operations on variables are shown above: retrieve_from_variable(varname) and assign_to_variable(varname, value). The former retrieves a variable's value given a variable name. The latter assigns a value to a variable given the variable name and the value.
- It's passing a structure of some kind, but doesn't define that structure nor illustrate the details of changing that structure. I got down to the attribute level.
- What structure is being passed?
- In your model? You should know that.
- The question was rhetorical. I expect you know the answer.
- In YOUR model, that structure is a dual-attributed structure called a "value".
- Not just MY model, though; it's what happens in every popular DynamicallyTyped imperative programming language.
- Bullhonk! "Values" are not clearly part of objectively observable features of programming languages from a programmer's perspective. If you don't like the truth, that's not my problem.
- Values are not part of the grammar of programming languages, but they are certainly part of the semantics of programming languages. For example, given a variable assignment like "a = _______;", what do you think -- from a programmer's perspective -- most programmers would say the _______ is?
- Estimation: 50% "expression", 20% "value", 30% other.
- Excellent. Now, if I might ask you to speculate a little further, what do you think the 50% who answer "expression" (which is entirely reasonable) would say is stored in variable 'a' after the assignment statement has executed?
- I don't know. I'd answer "a value and a type", at least for tagged (D1) languages. Many programmers would probably say "value" at least. If you asked them if a "type" is ALSO stored, roughly half would probably say "yes". If you asked them if the "type" is distinct from the "value", I would guess roughly half, but that's stretching my estimation abilities because I've rarely probed developers' thoughts on such things that deeply. Most don't ponder it that deeply either, in my experience. They'll likely say, "do whatever's needed to make the thing work; I don't care what you call it." Most are get-it-done practitioners; they don't philosophize on such. -t
- If "many programmers would probably say 'value' at least", it sounds like whether or not values are "objectively observable features of programming languages" doesn't matter; a significant percentage of programmers already recognise values. If your model ignores values, or calls them something unfamiliar, isn't that going to cause confusion and contradict what many programmers already know?
- "At least" doesn't conflict with my model. Thus if a programmer answered just "value", it would not contradict my model because expressions do generate values in my model (and other things). My model does not ignore values, it just defines them partly different than you do.
- Good, we're making progress. This page started because you appeared to be unwilling to acknowledge values at all.
- Possible existence and mandatory existence are not the same thing. Variables and literals objectively exist per syntax rules of the given language. The steps to transform I into O (I/O) do NOT need something objectively called "value". Most models will probably have something that resembles the vague notion of "value", but it's not precise enough to narrow down to which is "proper".
Perhaps there is an alternative interpretation (pun intended) of popular DynamicallyTyped programming languages that has no values and relies on variables having a "type" property. If so, I'd like to see it using the same approach used above. That way, we could compare the approaches side-by-side. You've claimed, several times, that your model is simpler and more intuitive than the conventional model illustrated above. If that's the case, I'd like to see an illustration to back up your claim of simplicity.
The key data structure is simpler, for one, as normal people can see below in the "Original XML comparison example". Whether the rest is "simpler" or "more intuitive" may depend on WetWare. Again, you are not a representative specimen to test that, in part due to your heavy exposure to a particular model. I find your writing and documenting style difficult and obtuse.
Yes, I recognise that you find my writing and documenting style difficult and obtuse. That's why I suggest you demonstrate the application of your model using pseudo-code, just as I've done. That way, we can compare models without getting lost in verbiage.
And I agree that a model may not "need" to label certain things to function (match I/O), but that doesn't mean we cannot do it. A model meant to optimize human consumption may not be optimized for machine consumption, and vice versa. If labels improve the model for human grokking, I'll use them EVEN if they bloat up OTHER areas. This should go without saying and be obvious to rational people. Further, it's not a minimalism-only contest. Many factors need to be balanced. We don't "need" your model's values and nesting either. (There may also be trade-offs between speed and size.)
You appear to be conflating application of a model -- for which (e.g.) labelling is entirely appropriate -- with the model itself. As for not needing values -- as I've suggested, please demonstrate application of your model using pseudo-code, as I've done above, without reference to values.
- You mean a head-notion of a model versus an implementation? The head version is not very testable, unfortunately, because it lacks objective rigor. See TopsTagModelTwo for some draft pseudo code for an example expression.
- Your pseudo-code appears to be incomplete. I suggest you apply your value-less (!) model to my example, so that we can compare them side-by-side.
- So is yours. Mine appears closer to executable than yours. If there is a particular function or API you want fleshed out, just let me know.
- I'd say mine is closer to being executable. I don't know what language you're using, and it appears there's still a lot to be written. I'm not sure why your variables have a "type" property, either -- it doesn't appear to be needed -- and there's weird stuff going on with quotes (around strings?) that I've never seen in a language runtime. Is that part of the lexer?
- Quotes? Please elaborate, AT that topic. Your model has a "type" slot for D1 languages also; I just consolidated your nested structure into a flat structure because the nesting (value inside of var) serves no purpose of the model itself.
- I don't see what purpose there is in giving variables a "type" property, nor do I see the purpose in dealing with quotes. (I'm happy to continue this threadlet on the appropriate page -- feel free to move it there.)
- Moved.
- My illustration is about thirty seconds of editing away from being valid Javascript. With at most an hour or two of effort to flesh out the operators like retrieve_from_variable(varname) and assign_to_variable(varname, value), I'd have a simple, but runnable, language runtime engine.
Further, you admitted that we don't HAVE TO package the type indicator and the representation together. You claim the packaging has nice design features, but putting that aside for the moment, your admission means that a proper model (matching I/O) CAN exist without your two-attribute "value" thingy. True?
In general, no. Unless the type attribute is implicit (e.g., all representations are character strings, so an explicit type attribute is unnecessary), it's impossible to perform operations on values without the interpreter having access to both a value's representation and its type. Without a representation, what do operations use as operands? Without a type, how does the system know what operation to perform? That said, when the interpreter knows the type of a value, sometimes the representation is unbundled from it in order to perform a low-level operation, but the result is then bundled back into a value.
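A short sketch of the point, reusing the (representation, type) value shape from the earlier sketches: strip either part from an operand and the overloaded operator either has no way to choose an operation or nothing to operate on.

// Sketch: + must consult the type to choose an operation,
// and the representation to perform it. Remove either and it cannot proceed.
function plus(v1, v2) {
  if (v1.t === undefined || v2.t === undefined)
    throw new Error("no type: cannot choose between add and concatenate");
  if (v1.r === undefined || v2.r === undefined)
    throw new Error("no representation: nothing to operate on");
  if (v1.t === "string" || v2.t === "string")
    return { r: String(v1.r) + String(v2.r), t: "string" };
  return { r: v1.r + v2.r, t: "numeric" };
}
plus({ r: 2, t: "numeric" }, { r: 2, t: "numeric" });   // { r: 4, t: "numeric" }
plus({ r: "2", t: "string" }, { r: 2, t: "numeric" });  // { r: "22", t: "string" }
plus({ r: 2 }, { r: 2, t: "numeric" });                 // throws: no type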
You seem to be presuming that operations "happen on values" and that values exist. It's circular logic. That's your personal preference, not a universal truth. We can model operators as acting on the type indicator and on the representation without pre-packaging them.
Conventional popular imperative programming language semantics define operators as operating on values, so it seems rather odd to arbitrarily deviate from conventional understanding of language behaviour. Furthermore, type references and representations will almost invariably be used together, so it seems reasonable to treat them as a unit. However, yes, you can act on the "type indicator" and the representation independently -- and there are implementation reasons, as I noted above, for sometimes doing so -- but from a modelling point of view, what would be gained by doing so?
I agree it would be "reasonable" to group them (at least, possibly with other attributes). But above was a hypothetical proposal to tease out possibilities, not a design choice in terms of software engineering quality. It's a for-the-sake-of-argument scenario to build on something you agreed on rather than start from scratch. As far as your alleged "standard semantics", it either does not exist or is one of many overloaded vocabulary kits in colloquial land. You put too much stock in such an alleged verbal "system". You seemed to agree elsewhere that "value" was overloaded in colloquial speech, or at least sloppily used for multiple purposes. You seem to be cherry picking which usage instance is deemed "official" versus casual (sloppy).
Conventional semantics are found in every first-year programming and/or ComputerScience textbook, and the majority of language reference manuals for popular imperative programming languages.
Such materials are not careful with vocabulary. Or at least one cannot tell the difference between terms meant to be precise/clear/rigorous and those meant colloquially. You have agreed the online documentation for Php is poor, and it's one of the top dynamic languages (for good or bad). How does one know if they are looking at good documentation instead of Php-like documentation unless there are clear ways to test on one's own?
Whether they're well-written or not -- which is a separate issue entirely -- the semantics they describe (once you allow for terminological blunders) are consistent. Popular imperative programming languages all have the same semantics within their given category, i.e., Category S, Category D1 and Category D2, as described on TypeSystemCategoriesInImperativeLanguages. That's precisely what enables programmers to easily move from one popular imperative programming language to another, with only some effort to learn differing syntax and library routines.
I disagree with this assessment, as stated elsewhere on this page. One can use types "good enough" with just fuzzy notions and pattern-based experience.
When developers move from C# to Java, or Java to C#, do you think they find it difficult?
Those are generally considered statically-typed languages. Static typing is generally clearer than dynamic with regard to types because of the explicitness required and the (mostly) lack of run-time type mutability. And people can move around dynamic languages fairly easily also; I've done it myself without knowing the typing details of each specific language. There are various rules of thumb and tricks to work around type-related issues in dynamic langs that don't require a rigid/clear model of their type system.
Why do you think people can "move around dynamic languages fairly easily"?
I know I've answered this at least 5 times already. To hell with #6! Your forgetfulness is becoming concerning.
Really? People can "move around dynamic languages fairly easily" for an obvious reason: Popular imperative DynamicallyTyped languages share common semantics. If they didn't, "a = foo() + b;" might mean "invoke operator foo and add its return value to the value in variable 'b', and assign the resulting value to variable 'a'" in one language, whilst in another language it might mean "categorise tuple generator 'foo' according to the relationship defined by 'b', using Shlaer's Algorithm to rotate the '+' spindle (partial, it's degree-n only!) against the convolution (secondary, but beware of functional tropisms to isomorphism) of partition 'a'." Fortunately, similar syntax across all popular imperative programming languages implies almost identical semantics. Variations are minimal, and fall into the three categories described at the top of TypeSystemCategoriesInImperativeLanguages.
Some may indeed use the second Shlaer model. People use all kinds of different head models that are GoodEnough to finish their tasks. That's one of the GREAT LIFE LESSONS I have learned the hard way. I've talked before about suspected differences between "linguistic" thinkers versus "visual" thinkers. Similar syntax only means similar syntax, NOT similar syntax processing. Syntax is a poor reference for head models other than the starting input. I bet monkeys could recognize syntax if coaxed with food for long enough. -t
The "Shlaer model" is fictional, an illustration of hypothetical alien semantics for familiar syntax. It is only because language semantics are consistent across languages that it is possible to recognise that similar syntax -- such as expressions, variable assignment, operator invocation, etc. -- reflects almost identical semantics. This has nothing to do with "all kinds of different head models". There is clearly a consistent and conventional "head model", or it would not be possible to "move around dynamic languages fairly easily."
Bull. You have not proven that your version of "semantics" is the only possible model to transform input (including source) into output. And like I've said elsewhere, many start using D2 languages such as CF without ever knowing it's a D2 language or that D2 languages even exist. The differences are relatively subtle and can be worked around on a case-by-case basis without overhauling one's head model (change D1 to D2 in head). And many adopt defensive programming so that they don't have to worry about "drifting" types. If something goes awry, they often shore up their defense, not their head model. I often do that with new languages, but over time I start to probe type behavior to avoid defense bloat. -t
Why have you switched to arguing that I have not proven that "my" version of semantics is the only possible model? It obviously isn't, as proven by the existence of FunctionalProgramming and LogicProgramming. What does what programmers don't know about Category D2 languages have to do with this? The discussion here was about why people can "move around dynamic languages fairly easily". The obvious answer is that it's because dynamic languages share a common semantics associated with similar syntax. The differences are precisely what are documented at the top of TypeSystemCategoriesInImperativeLanguages.
You don't need a common head model to match external behavior. TuringEquivalency. Using an AI illustration, one may match behavior using neural nets, and another using genetic algorithms. Both "work" for prediction, despite having very little in common. (The human mind probably uses multiple AI-category-like techniques to triangulate answers, but diff people probably lean on some techniques far more than others.)
Regardless how individuals may construct mental models, the semantics they are modelling are consistent across popular imperative programming languages. That allows people to "move around dynamic languages fairly easily". If language semantics varied significantly from language to language, that wouldn't be possible, independent of what mental models people happen to construct.
Your "semantics" is not measurable in any objective way, and so I cannot verify your claim. You don't need a common head model to match external behavior, period!
Measurable or not, consistent semantics exist. The fact that programmers can move from Python to Ruby, for example, without re-learning from scratch, is ample evidence.
That's weak evidence to me, and I've given examples from AI to demonstrate that prediction matching is not equivalent to process matching. Repeating it over and over does not make it strong evidence. But I'm tired of re-debating this "switch" thing. LetTheReaderDecide if it's strong evidence. Cap it off.
That does nothing to contradict my point. I don't know what your "AI illustration" is intended to show. What does the trivial recognition that "a = foo() + 2" in Python has the same semantics as "a = foo() + 2" in Ruby have to do with neural networks or genetic algorithms?
Yes it does contradict. Your statement implies that same results implies same process, which I have demonstrated to not be a universal truth in a general sense. And you have not measured "semantics" to demonstrate that identical semantics are being used for different expression processors (humans or machine). If both machine x and machine y take inputs a and b and produce the same output c, that is not definitive evidence that machine x and y are the same or use the same "semantics" (cough).
And like I keep pointing out, one can use the wrong model and still be GoodEnough to get work done. I used D1 assumptions on CF (D2) at first and for quite a while and never noticed the difference, spot-fixing any problem areas (typically by adding explicit conversion, explicit validation, or fixing the source data of input parameters, such as forms, to make sure they come in as expected.)
How does this answer my question? Again, what does the trivial recognition that "a = foo() + 2" in Python has the same semantics as "a = foo() + 2" in Ruby -- and, indeed, the same semantics in every popular imperative programming language where "a = foo() + 2;" is a syntactically-valid statement (which is most of them) -- have to do with neural networks, genetic algorithms, or (from your latest response) the "same process"?
In input->processX->output, if input and output matches for different instances of processX, that is not solid proof that the instances of processX are the same. Different models (both head and machine) can match OTHER models in terms of predicting output sets based on inputs sets. And until you scientifically isolate and measure this "language semantics" of yours, it's pointless to keep talking about it. You might as well be talking about unicorns and bigfoot.
As I've pointed out above and elsewhere, languages cannot be implemented in real interpreters and compilers without language semantics. Yet again, you seem to be ignoring (or evading?) the indubitable fact that "a = foo() + 2" in Python has the same semantics as "a = foo() + 2" in Ruby -- and, indeed, the same semantics in every popular imperative programming language where "a = foo() + 2;" is a syntactically-valid statement.
I'm skeptical. Can you prove mathematically or scientifically this is the case? I will tentatively agree that SOME kind of semantics are needed to implement languages, but I don't agree they have to be the same per implementation. Maybe some degree of similarity is necessary, but I don't know whether it's 5%, 50%, 90%, etc. Without a way to isolate and scientifically study such "semantics", it will be very difficult to do. Syntax and I/O is something we can objectively analyze. The "in between" processing is not (barring making implementation define a language, which I don't). Thus, please be careful about the distinction between 1) the existence of semantics being necessary and 2) semantics being "the same" across implementations/models/languages/heads.
(Note that the I/O characteristics of an implementation defining a programming language is different from the actual implementation defining the language.)
Can I mathematically prove it's the case? No, that would be like mathematically proving "there is integral calculus", which isn't how it works. Can I scientifically prove it's the case? Sure, and you can too. Read the documentation for every popular imperative programming language you can find. In how many of them is there a close syntactic equivalent to "a = foo() + 2 + c;" that means "invoke function foo() and add the resulting value to the value represented by the literal "2" and the value in variable c, storing the result in variable 'a'"?
Like I've said before, one could also just say "results" instead of "resulting value". Further, I've rarely heard it described the way you worded it. That's a projection of your mind's model, not necessarily a canonical view. I'm not saying nobody shares that view, but that it's not the only possibility and not the only way others would likely describe it. One could just as well say, "take the result of foo, add the number 2 to it, and then add that result to what's in 'c'."
Yes, you could say "take the result of foo, add the number 2 to it, and then add that result to what's in 'c'", and that would be an appropriate shorthand. Obviously, "result" is just shorthand for "resulting value". The semantics are still the same, just less detailed. You could also say, "the expression is evaluated and the result stored in 'a'" or "the expression's result is stored in a variable." These are all semantically equivalent, differing only in detail.
- How are you measuring "obviously"? It's "obvious" to you because you are mentally biased toward a particular model. Claiming something is "obvious" does not make it so. And how are you measuring that those others are "semantically equivalent"?
- What else would a "result" be, in this context, except a value? If it's not a value, the only alternative is that "result" is an action like turning on a light or activating a solenoid.
- Huh? Who cares what label it's given? I've seen "result" used many times (IIRC) and nobody seemed to complain or stop buying such books. You obsess over labels. Perhaps that's what linguistic thinkers do. We visual thinkers generally don't care as long as we know what's being talked about. That's fine if you really need to settle such matters because labels are important to you, but probably no model will make everybody happy: different model styles for different WetWare.
- Are you saying "result" is a synonym for "value"?
- No, I'm not defining value. I'm only saying "result" works in this particular scenario and am not making any claims wider than that.
- So, could the "result" of a function be that it turns on the lights or activates a solenoid?
- In a certain model, perhaps indeed. One can build a Rube Goldberg-like model and it could in theory pass the I/O-match test.
And "how it works" has no objectively canonical form. Maybe there are multiple paths to how integral calculus "works".
Why assume there is One True Path without objective evidence? And what exactly does "works" mean anyhow? We use human-world words to describe virtual or abstract worlds, but it is often just a poor fit. Reading other references you give reinforces this.
"Works" simply refers to what the language does with the machine, which is described in every programming language reference manual, and which is known precisely by the author of every language implementation, and by everyone who reads and comprehends its source code.
So you claim. And you agreed that Php's online documentation was poor, at least in areas related to types. Thus, at least for SOME languages, one is left to their own devices and experiments to model type-related behavior. The rest are vague also in my experience. Nobody has figured out how to do it well, so the authors all copy prior garbage's traditions, keeping the garbage and fuzz alive.
Replies continued at ValueExistenceProofTwo, PageAnchor manuals_472
Original XML comparison example.
State 1: Sample sample blah blah blah

  <var name="foo" type="number" value="123"/>

State 2: Sample sample blah blah blah

  <var name="foo" type="string" value="123"/>

State 3: Sample sample blah blah blah

  <var name="foo" type="string" value="moof"/>

Versus:

State 1: Sample sample blah blah blah

  <var name="foo">
    <value type="number" representation="123"/>
  </var>

State 2: Sample sample blah blah blah

  <var name="foo">
    <value type="string" representation="123"/>
  </var>

State 3: Sample sample blah blah blah

  <var name="foo">
    <value type="string" representation="moof"/>
  </var>
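In JavaScript object terms, the two structures being compared are roughly these shapes (a sketch of the shapes only; the field names follow the XML above):

// Flat structure (tag model): the variable record carries the type directly.
var flat = { name: "foo", type: "string", value: "moof" };

// Nested structure (conventional model): the variable contains a value,
// and the value carries the type and representation.
var nested = { name: "foo", value: { type: "string", representation: "moof" } };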
{A value is the data. A value makes an individual attribute the attribute that it is. A value is the attribute.
The value's type is a small part of the metadata which may accompany the value. The type is one of the things about the attribute.
Is the "tag" meant to represent more than "type"?}
I've never seen evidence that "tag" represents anything other than "type", or more specifically "type reference" (as opposed to "type definition"). What's an "individual attribute"?
{I have never seen evidence that "tag" represents anything. I am still wondering what Top means by tag. That is why I keep throwing in my little statements - hoping for an "ah-aah" moment when I think "Now I know what a tag is - we can define it in a page and get rid of all this thread-mode discussion"}
(The context appears to have been messed up or lost here. I don't know what this is about.)
It appears to have originated as a self-contained comment.
See Also: ValueExistenceProofTwo
CategoryMetaDiscussion
NovemberThirteen