Value Existence Proof Two

Continued from ValueExistenceProof, which is getting TooBigToEdit

Re: "contradicts typical understanding of dynamically-typed languages" -- So you claim. -t

{It's true. Why are DynamicallyTyped languages sometimes described as "value typed"?}

I've rarely heard that term. There are a lot of pet terms related to types and values floating around out there. It's a poorly policed vocabulary district. -t

{No one "polices" vocabulary; it is organic. If usage appears, like "value typed", it's probably a reflection of typical understanding.}

By that measure, if it's uncommon, then it reflects uncommon understanding. You fell on your own sword here.

{Not at all. I offer "value typed" only as an additional point of evidence.}

You have not associated it with high frequency of usage. (See PageAnchor Ratio_01.) If it's low frequency, then it's not of concern to typical developers. I've also given point evidence, such as it being used interchangeably with "literal".

{I never claimed "high frequency", nor would I. How high is "high"? How low is "low"? I'm fully aware that "value" is used as a synonym for "literal", but as I've explained elsewhere, that's entirely reasonable if not entirely precise. Literals always denote values.}

In the head.

{Yes, and in programming language semantics and the interpreters and compilers that implement them.}

But it's not necessarily one-to-one. Bob can turn his head model A into implementation X. Fred can study X, and create an ALTERNATIVE head model B that mirrors implementation X. Both A and B can model the behavior (I/O) of device X.

{I've no doubt different semantics are possible for the same syntax, but how do you know models A and B are correct? For example, "a = foo() + 4" can mean "invoke function foo and add its return value to four, and store the result in variable 'a'", but it can also mean "store the expression 'foo() + 4' in variable 'a'". Will they both produce the same results if we assume writeln(p) always evaluates any expression it is passed, including an expression stored in variable p?}
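For illustration, the two readings can be sketched in Python (foo() is a hypothetical function; the names are illustrative only). A writeln/print that always evaluates whatever it receives, including a stored expression, cannot tell the two readings apart for this program:

```python
# Sketch of the two readings of "a = foo() + 4"; foo() is hypothetical.
def foo():
    return 10

# Reading 1: evaluate the expression now; store the resulting value.
a_eager = foo() + 4           # a_eager holds the value 14

# Reading 2: store the unevaluated expression; evaluate it on use.
a_lazy = lambda: foo() + 4    # a_lazy holds a thunk

# A writeln(p) that always evaluates its argument -- including a stored
# expression -- produces the same output under both readings here:
print(a_eager)     # 14
print(a_lazy())    # 14
```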

Experience and testing. And in many cases people's head models are probably less than 100% perfect.

{"Experience and testing" are far less reliable than logical deduction.}

We've had this argument already in ValueExistenceProof; I'm not going to repeat it here.

{Fair enough. We'll deal with it there.}


{Why do DynamicallyTyped languages lack type declarations on variables?}

Because if a literal contains type info, then it's usually redundant. Dynamic languages tend to value brevity compared to static ones, which value explicitness.

{I assume you mean "variable", instead of "literal"? There was no mention of literals. DynamicallyTyped languages lack type declarations on variables because their variables can be assigned any value of any type at any time. Therefore, there is no explicit need to associate a manifest type with a variable. As it turns out, there's no need to associate a type with a variable -- in a DynamicallyTyped language -- at all.}
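In Python, for instance, the same variable can hold values of different types over its lifetime; only the current value carries a type:

```python
# The variable x has no declared type; only its current value does.
x = 42
print(type(x).__name__)    # int
x = "forty-two"
print(type(x).__name__)    # str
x = [4, 2]
print(type(x).__name__)    # list
```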

I'm mirroring existing languages as they are, and their behavior usually does reflect a "type indicator" of some kind. I agree the indicator is a dumb design (except possibly for efficiency), but it's the current style for good or bad. Example of effects in a JavaScript-like language:

   a = 3; write(typeName(a)); write(a + a); // result: Number 6
   b = "3"; write(typeName(b)); write(b + b); // result: String 33

Here above, "b" has something extra or different from "a".

{Actually, no. Scalar variables in popular DynamicallyTyped languages are undifferentiated. Variable "a" differs from variable "b" only by the values they contain. More accurately, it's the value 3 that has "something extra or different than" the value "3". In DynamicallyTyped languages, values have type references. In Category D1 languages (see TypeSystemCategoriesInImperativeLanguages) the type references are explicit.}

{When a variable is passed as an argument to an operator invocation, the operator only "sees" the value in the variable, not the variable itself. Outside of assignment, whenever a variable name is used, the interpreter retrieves the value the variable contains and passes that value to the appropriate operator as an argument. It's the same process whether typeName()'s argument is a variable, a literal, a function call, or an expression combining these: in every case, the interpreter simply evaluates the expression to obtain a value. Otherwise, there'd have to be a separate typeName() implementation for each kind of argument expression -- one for variables, one for literals, one for function calls, and one for compound expressions. Obviously, that would be unreasonable, and it's unnecessary.}
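A minimal Python sketch of the point (type_name here stands in for the hypothetical typeName()): one implementation suffices because every argument expression is evaluated down to a value first:

```python
# One type_name() handles every kind of argument expression, because the
# interpreter reduces each expression to a value before the call.
def type_name(value):
    return type(value).__name__

def f():          # an arbitrary illustrative function
    return 3.5

x = "abc"
print(type_name(x))          # variable reference  -> str
print(type_name(42))         # literal             -> int
print(type_name(f()))        # function call       -> float
print(type_name(x + "d"))    # compound expression -> str
```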

    // Example loopy-72 -- Comparing I/O profiles
    for d = each possible data input file/stream
      for s = each possible source code input file
        x1 = outputFromRun(modelFoo(d, s))  // the target test model
        x2 = outputFromRun(actualLanguage(d, s))
        if (x1 != x2) then 
          write("Model Foo failed to match I/O profile of actual language.")
          stop
        end if
      end for s
    end for d
    write("The I/O Profile of model Foo matches actual language.")

{The interpreter simply evaluates the argument expression and passes the resulting value to typeName(). Then typeName() retrieves the name of the type associated with the value, and returns the name of that type as a string value.}

You have not proven that the "type" should be "inside" the "value". It's only that way in your particular model. You are describing your model, not a universal truth of all working models. I don't dispute that YOUR model does that, so there is no point in describing how your model does it over and over.

{"Type" doesn't have to be "inside" the value. It only has to be associated with the value, i.e., a value has a mapping to a type. In a model, a simple and intuitive way to demonstrate an association is by inclusion (i.e., "has-a"), but it's by no means the only way for implementations to work.}

I realize that references and inclusion can be interchangeable implementation choices. But for clarity, it's best that a model pick one or the other and stick with it, to avoid confusing the poor reader. I'm using your reference XML samples here, and they are inclusion-based.

{Yes, for modelling purposes inclusion is clear and intuitive.}

I'm assuming inclusion when interpreting your words such as "have" based on your XML. If a variable "has" a value, and this value "has" a type, then it's common to say the variable "has" a type. It should be dirt simple, why is this derailing on the Communications Express?

{There's nothing in a DynamicallyTyped interpreter that will ever make reference to a "variable's type". In DynamicallyTyped languages, only values' types are ever referenced. Therefore, in a DynamicallyTyped language, why model a variable as having an explicit "type" property?}

You keep saying that, but you don't demonstrate it. Repetition will NOT convince me of diddly squat. You cannot even demonstrate that "values" objectively exist in dynamic language interpreters.

{I demonstrated it at the bottom of ValueExistenceProof, using code. As for "values" objectively existing, I've deductively proven they exist multiple times. As further evidence, try defining a model of imperative programming languages without values. (Obviously, that requires exclusion of values renamed to something else like "anonymous variables", etc.)}

They are "renamed" because you don't like the name? That's not scientific, that's human whim.

{No, it's because values aren't variables. They aren't variable.}

"Value" has no clear-cut definition such that we cannot verify that claim.

{The most general definition is that a value is the result of evaluating an expression. A variable is never the result of evaluating an expression.}

"The" result or "a" result?

{The result. The result of completely evaluating an expression is a value, always.}

In my model, a "value" is a result of evaluating an expression, but not the only result (because a tag indicator also results).

{Always?}

Further, if we don't need part X in a given model, why include it? We're not obligated to model every computer notion anybody ever had.

{"If we don't need part X in a given model, why include it?" is my argument for not giving variables a "type" property in DynamicallyTyped languages. However, values are needed. Otherwise, what (for example) does a function return?}

That's an internal issue, not an objective issue. Functions can return pligznoogs inside the model. In my model, the result of a function et al is two-part: the type tag and the "value" (aka "representation").
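A hedged sketch of that two-part model (the parse rule and names are illustrative only, not any particular implementation):

```python
# Hypothetical sketch: evaluation yields a (type tag, representation) pair.
def evaluate(literal_text):
    # Illustrative parse rule: quoted text is a string, otherwise a number.
    if literal_text.startswith('"') and literal_text.endswith('"'):
        return ("string", literal_text[1:-1])
    return ("number", literal_text)

print(evaluate('"3"'))   # ('string', '3')
print(evaluate('3'))     # ('number', '3')
```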

{I don't know anyone who'd claim that functions return pligznoogs either inside or outside of a model. However, if your functions return a type and an associated representation, they're returning values. Good.}

If it makes you happier to call them "values", go for it. But, it doesn't serve (help) my target needs.

{It makes me happier if you call them "values" as opposed to some made-up terminology. Actually, that's not quite true: You can call them whatever you like, including made-up terminology, as long as I don't have to see it. If I see your misuse of terminology here or elsewhere, I will object.}

And I'll object to the ugly trade-offs going your route entails. Don't ask what they are again. I've explained them an estimated 10 times; an 11th won't change anything.


{Why do my examples -- at the bottom of the page -- not need to make any reference to the type of a variable?}

I don't know how your model works exactly at that level. I cannot comment on that. Specifically, your implementation of "add" is not fleshed out. In your model, eventually you'll have to make a reference such as: variable --> value --> type. It's just a longer way to do what I do, not having the middle-man layer.

{"Add", in most languages, is binary addition, acting on the representation of a value. In most languages, assuming canonical types, it is implemented by a CPU's machine language "add" instruction. "Add" does not operate on variables, it operates on values. In my model, there is never a "variable --> value --> type" operation. There isn't one in any DynamicallyTyped language. There is only a "variable --> value" operation and a "value --> variable" operation.}

You have not objectively demonstrated your claim about what "Add" does.

{I'm not sure what "objectively demonstrated" means in this case. I do demonstrate how interpreters invoke "Add" at the bottom of ValueExistenceProof.}

As far as the second part, in your model reference, you have the following structure, and it has "type" as a component of "value".

 <variable name="splat">
   <value type="int" representation="3423"/>
 </variable>

{Sure. In Category D1 languages, values have a type reference property but variables do not. Variables simply contain values.}

Nested-ness does NOT de-have-ify in typical usage! How many damned times do I have to repeat this??? In YOUR model, variables contain values and values contain types. Thus, it's fair to say that "variables contain types" using typical usage of "contains" in IT. Nesting does not diffuse such a relationship.

{Nothing in the implementation of a DynamicallyTyped language ever makes reference to a "variable's type", so there's no reason to model it. Therefore, your model of variables having a "type" attribute is at best redundant, on average misleading, and at worst incorrect.}

It appears to serve the same general function as the "type=" attribute in your XML model for D1. And it is needed because type info is NOT kept "in" your "representation".

{Nor does it need to be. "Type info" is kept in values. A value is a pair consisting of a representation and a type reference.}
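A minimal sketch of such a value, modelled as a (representation, type reference) pair:

```python
from collections import namedtuple

# A Category D1 value modelled as a representation plus a type reference.
Value = namedtuple("Value", ["representation", "type_ref"])

v = Value(representation="3423", type_ref="int")
print(v.type_ref, v.representation)   # int 3423
```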

That's specific to YOUR model, not a universal need in order to match I/O (input includes source code).

{I'd be curious to see how you can match I/O without it, given that every language implementation requires it. In short, how do you (or the interpreter) know what a representation -- i.e., a string of bits, usually -- is, without an associated type reference? For example, what does this string of bits represent: 1 1 0 1 1 0? Should we use string operators on it? Integer operators? Floating point operators?}

{Now, what if I tell you it's an ASCII character or 'char' type? Or an integer? Or a monochrome image? It's only when I tell you the associated type that you (and the interpreter) can recognise that it's ASCII character '6', or the integer 54, or a tiny picture with six pixels. Without an associated type reference, it can be anything.}
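The point can be illustrated in Python: the same six bits are a different "thing" under each type interpretation:

```python
# The bit string 110110 means nothing by itself; a type gives it meaning.
bits = 0b110110

print(bits)                  # as an integer: 54
print(chr(bits))             # as an ASCII character: '6'
print(format(bits, "06b"))   # as a row of six monochrome pixels: 110110
```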

Keep the type tag in the variable, that's how it can know.

{What variable? How does that work in a variable-less statement like writeln(3 + 4)? If you claim that 3 and 4 are "variables", then all you're doing is renaming "value" to "variable". (Erroneously, I might add, because 3 and 4 aren't variable.)}

The parser determines the type, and stores it with the variable. In my model, it uses two hidden variables, which are then processed to assign to the programmer-visible variable.

{Again, what variable? There's no "programmer-visible" variable, and "hidden variables" can't be variables, because they're not variable.}

They are internally defined in the model that way. If you don't like it, then think of them as snazzjoomps.

{Then you're using PrivateLanguage for values. Regardless what you call them, you surely must recognise that whatever-it-is-called has some representation -- usually bits -- but they mean nothing (to humans or machines) without an associated type, as explained above. Yes?}

So what if I use PrivateLanguage parts inside the model. That's legal. You use "representation", which is also a PrivateLanguage.

Actually, "representation" is not a PrivateLanguage. It's a word. But, assuming you meant that "representation" was part of a PrivateLanguage, then you are simply wrong. He's using the standard computer science definition, which makes his use of it part of a PublicLanguage.

These "standards" of yours are figments of your self-centric imagination. I'm tired of this bullshit! Stop with the goddam phoney standards already! I'm tired of these games; it's raw stupidity.

I bet next you claim the "standards" are clear if one reads enough obscure Phd books from mumbly professors in expensive universities. Even IF true, the problem with that logic is the target audience is for the most part not exposed to those sources such that it means nothing to them. Thus, they are not "common" standards, but rather WalledGarden standards (if your book claim were true).

{That's a rant, bordering on AdHominem attack, rather than discussion, debate, rhetoric, logic, or evidence. It deviates from the key question, which is this: Regardless what you call "values", you surely must recognise that whatever-they're-called has some representation -- usually machine bits -- but they mean nothing to humans or machines without an associated type, as explained above. Yes?}

Since I can go to just about any website that specializes in computer science and find that it uses "representation" in exactly the same way it's used here, your vulgar denial of it being part of a PublicLanguage is clearly false. Since your target audience also has free access (and in most cases had formal training that includes this use of the word "representation"), your claims that it would mean nothing to them is likely to be false as well.

No they don't. Representation usually relates to output, not INTERNAL "storage"; or at least it's overloaded to refer to both internal representation and output's formatting.

{"Representation" can refer to output, storage, input, transfer, display, etc. That's not "overloading" in the sense that one word has different meanings. It's one word, with one meaning, that can be used in a variety of contexts.}

Almost everything in coding-land is "representation" such that it doesn't distinguish anything and/or is redundant.

{It's distinguished by context. If I say my numeric values are represented using the IEEE 754 standard for binary floating-point arithmetic, then we know exactly what is meant.}

Because the standard defines such within or in relationship to the standard. But that by itself doesn't make it a universal truth; it may just be a standard-specific (part-specific) truth similar to the way each programming language gets to define what a "literal" is per its language syntax charts/rules.

{That doesn't contradict what I wrote.}

At best you have a UselessTruth. It doesn't convey anything definitive to the model user. No context of the type you speak of is given in the model itself.

{I'm not clear what you're debating, here. Every value has a representation. That is indubitable.}

EVERY attribute in your XML (or tuple or any XML) is a "representation" of something. Calling one in particular "representation" doesn't tell us anything new. "Value" would be the most natural name for it, but you already used that name for something else.

{As I noted above, the context is important. In context, the "representation" attribute is the bit string (or whatever) from which, along with a type reference, a value is constructed.}

Well, the "context" doesn't do anything for me. Sure, you can describe what the attribute is for elsewhere, but you could do the same if you called it a nazzblerf.

{That wouldn't make sense. Without a representation, when an operator manipulates a value, what is it manipulating? Is it operating on bits? Individual ASCII characters? Arrays of Unicode characters? Something else?}

I don't know. I won't attempt to re-label your particular model.

{My question has nothing to do with labelling or "my particular model". My question applies equally to every programming language. Of course, as I've mentioned before, a model can gloss over the representation by skipping any mention of it. For example, you can describe a value as <value type="int">1234</value>, but every interpreter must handle 1234 via some representation, whether it's a string of ASCII characters, a string of bits, or something else.}
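For example (a sketch; actual interpreter internals vary), the value denoted by 1234 can be handled via several concrete representations:

```python
# The value denoted by 1234 must live in SOME representation.
as_chars = "1234"                      # four ASCII characters
as_int = int(as_chars)                 # a machine integer
as_bytes = as_int.to_bytes(2, "big")   # a 16-bit big-endian bit string

print(as_int)      # 1234
print(as_bytes)    # b'\x04\xd2'
```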

It may be true that "1234" will be processed somehow somewhere in the various models, but that doesn't mean we should model and/or label such processing/handling in the same way for each model. Consistency would be nice, but sometimes different models have different purposes and are thus built different.

{There's nothing that suggests "processing/handling in the same way for each model". However, every interpreter must represent values somehow. A value without a representation is meaningless; it has no data for operators to manipulate.}

That PREsumes it needs your version of "values" to begin with. A model can use sniggs and klorps for the purpose of what you call "value" if it wants. Your version of value is arbitrary (or at least weakly guided by colloquial notions.)

{My version of value is not arbitrary. It's a concept fundamental to how modern computers and computer languages operate, whether you call it a "value" or a snigg or a klorp. Of course, if you call it a snigg or a klorp, no one will know what you're talking about.}

By "operate" do you mean actual implementation? If the answer is "yes", then you should know my likely reply by now. If you say "no" and use the word "semantics", likewise. Note that my model has something called "value", but it's not identical to your version of "value". I could instead call that "representation", but then "no one will know what you're talking about".

{Except that I define my use of "representation" at the top of TypeSystemCategoriesInImperativeLanguages, so that it's clear what I mean, and I provide examples to illustrate what I mean.}

But that's do-able for any "private" term/usage. It appears to be a double-standard to me.

{It would be a double-standard if I were advocating a completely wayward term -- say, "intubation" -- instead of "representation", and didn't define it but used it as if it was already familiar. Instead, I've defined the term and I am using it in the same manner that "representation" is used in numerous texts. If you find it particularly objectionable, an alternative for "representation" -- specifically where I've used it -- is "data".}

How is the level of wayward-ness or resulting confusion measured? See also PageAnchor tradeoff_rant_48.

{By "wayward", I mean it's largely unrelated to the concept to which it refers. Would the alternative, "data", be more acceptable to you than "representation"?}

No. I don't know of a better term at this point. "Value" would be the best term in my opinion, but you used it for a different purpose, and thus are stuck with a goofy trade-off. It happens. I doubt that ANY model can both work (predict) AND match well to existing colloquial usage.

{It's hardly goofy. "Representation" or "data" is conceptually and practically necessary, regardless how it's named.}

Fine, you satisfied the necessity part, but it's still a goofy name. Just about everything in all our models is "data" and "representation" such that it's superfluous.

{In the context in which it is used, "representation" is the usual term, though "data" is also used.}

So you claim.

{It's consistent with convention. See, for example (I chose this at random), http://www.ntu.edu.sg/home/ehchua/programming/java/DataRepresentation.html }

Because it's an overloaded term. It's hard to be "wrong" with an overloaded term (unless one claims it's not overloaded, perhaps.) And the type indicator also has a "representation" in RAM I would note (based on at least one of the overloaded versions). So why are you NOT calling that a "representation" or a "type representation"? It's not any less "representationy" than the other attribute.

[It's easier to be wrong with an overloaded term. If there are no overloads, then you know which one to use since it's the only one available. Anyway, why do you think that we don't consider a type representation a representation?]

How is that easier to be wrong? Perhaps you mean easier to be ambiguous. If you "considered" it, it's not showing up in your model (per attribute labels etc.)

[Did you not read what you responded to? The explanation is part of it. And what makes you think it didn't show up in the model? In the case in question, the type would be one of the values which would consist of the type "type" and a representation of said type.]

Yes, Sir, I did read it. It didn't address my question the way I interpreted it. And if the explanation/documentation is sufficient, then why bitch about my attribute labels? The explanation can be used to "take care of it" there too. You guys have a double standard.

[I didn't respond to a question. I responded to your claim, "It's hard to be "wrong" with an overloaded term (unless one claims it's not overloaded, perhaps.)". I don't recall anyone (except you) saying your model couldn't "take care of it". The reason we've complained about the attribute labels is that it appears to make using your model more complex while obfuscating the actual semantics.]

The non-nested structure is blatantly simpler to me; slam dunk; but I will agree that some people have some really odd and unpredictable WetWare, at least relative to mine such that "simple" is largely subjective. Objectively, it's less characters, if you care. And "actual semantics" has yet to be scientifically isolated such that verifying your "obfuscating" claim is not possible at this time, as already discussed. As far as the overloading issue, perhaps an example would help illustrate what you are trying to convey.


{Why is it sufficient, in a DynamicallyTyped language, to both model and implement variables using precisely two operators: "void assign(variable_name, value)" and "value retrieve(variable_name)"?}

That's specific to your model, not necessarily a universal truth. Your model can name the internal parts whatever the hell it wants to.

{This has nothing to do with names, and everything to do with the semantics of popular imperative DynamicallyTyped programming languages. In all of them, variables are implemented using some equivalent to the two operators above.}
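A minimal sketch of that claim, with the variable store modelled as a Python dictionary (names are illustrative):

```python
# Variables need only two operations: assign(name, value) and retrieve(name).
_store = {}

def assign(name, value):
    _store[name] = value

def retrieve(name):
    return _store[name]

assign("a", 3)
assign("b", "3")
print(retrieve("a") + retrieve("a"))   # 6
print(retrieve("b") + retrieve("b"))   # 33
```

Note that nothing here ever consults a "variable's type"; the store maps names to values, and the values carry whatever type information the operators need.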

You have not given clear evidence for that claim.

{I encourage you to examine the examples at the bottom of ValueExistenceProof, which demonstrate that variables require no additional operators.}

Sorry, I don't see it. Your notation is not clearly defined as given.

{It's no more a special notation than you've used on TopsTagModelTwo. It's simply pseudo-code, using a Javascript-like syntax and arrows to indicate return values, as is commonly done in descriptions of algorithms.}

I'm only passing around scalars, not structures. You seem to be passing (poorly defined) structures, which hides details.

{The two structures -- value and variable -- are defined at the top of TypeSystemCategoriesInImperativeLanguages.}

It's not fleshed out enough to clearly know what it's doing with the structures.

{Exactly what's being done with the structures is described at the top of TypeSystemCategoriesInImperativeLanguages.}

That's not "exactly", that's fuzzy. I already dissected individual words from there one by one and demonstrated multiple valid interpretations. You conveniently forgot?

{No, but your "dissection" was trivially refuted at the time.}

That's not the way I remember it ending. I remember you admitting that English is inherently vague, and that's why other notations are needed and why you added more sections to your model writeup. And you haven't proven that your model is the canonical model of most human programmer heads. I don't dispute it may "run", I dispute it's THE proper model.

{My additional sections used a semi-formal notation to eliminate any possible misunderstanding of the English descriptions at the top of the page. Thus, they trivially refuted your "dissection".}

It's not clear to me, and appears to be an arbitrary model instead of a necessary model.

{I'm not sure what you mean by "an arbitrary model instead of a necessary model". As for being unclear, I can only suggest reading it again, and if you find something unclear please ask and I'll explain, clarify, or improve it.}

Asking you for clarification is like asking Ted Nugent for lessons in manners.

{What do you mean by that?}

Somewhere I described the usual pattern of what happens when I ask you for clarification. I'll see if I can re-find it.

{Yes, please do.}

FractalVagueness is similar to it.

{FractalVagueness is fiction, and so not particularly helpful. If you could show what happens when you actually ask me for clarification, that would be more helpful.}


{Why does language documentation typically contain phrases like, "[this language] is a dynamically typed language. There are no type definitions in the language; each value carries its own type. ... Variables have no predefined types; any variable may contain values of any type." (From http://www.lua.org/pil/2.html)}

Typically? Why did you have to use obscure Lua to get an example instead of, say, Php or JavaScript or Python? I can't tell if that's common from one sample. And like I mentioned elsewhere, "literal" and "value" are often used interchangeably. Literals *do* carry type-related info. And maybe Lua was implemented using something close to your model, and the writers reflected that. But that doesn't make it the only valid prediction model.

{Lua isn't obscure; it's commonly used for scripting, particularly in videogames. For references related to languages, I've been choosing them entirely at random. I find it personally amusing to choose a random language, and then (always!) discover a reference in its documentation that supports the point I want to make. I know I can do that because I know that the semantics (and hence the documentation) will invariably follow convention. Except for the PHP documentation, which confusingly (and peculiarly) conflates values and variables in a number of sections.}

I've never seen Lua used in the many companies I've worked for or consulted in.

{I guess you've never worked in the videogame industry. See http://sites.google.com/site/marbux/home/where-lua-is-used }

"Value" is overloaded such that it's used multiple ways. If it's used 2 different ways roughly 50% of the time for each way, then 1 out of 2 usages will fit your pet version and be relatively easy to find an occurrence of. You'd have to demonstrate that roughly 3/4 of all occurrences use your pet version to have a realistic claim of it being canonical. CSV has "value" in the very name, and it lacks a type indicator component, for instance. (It should be "comma-separated literals" if we use terms usually associated with programming languages, but "value" is a loose term. That's just the way it is.)

{I've not seen any evidence that "value" is used for anything except values. "Value" is sometimes used to refer to literals (as in CSV), but that's reasonable -- literals always denote values so it's an appropriate shorthand. Literals, in context, always have a type indicator component which is determined by context. The type indicator depends on the grammar of the system that reads the literal. For example, in most imperative programming languages, 123 denotes an integer value, 123L denotes a long value, 123.5 denotes a floating point value, and so on. In short, the syntax unambiguously denotes the type of the literal, and hence the type of the value it denotes. (By the definition of "string", every literal is a string.)}
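In Python, for instance, literal syntax alone fixes the type of the denoted value (Python has no 123L literal, so a float and a string literal stand in for the extra cases):

```python
# The grammar of the literal determines the type of the value it denotes.
print(type(123).__name__)      # int
print(type(123.5).__name__)    # float
print(type("123").__name__)    # str
```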

There's a difference between saying "Doing X works" and "Doing X is the ONLY way to make it work". There's also a difference between saying "some languages define X as Y" (or document as) and "all languages define X as Y".

{True, but popular imperative programming languages are consistent in their semantics, and I know of no alternative semantics that offer any beneficial alternative to the conventional model described at the top of TypeSystemCategoriesInImperativeLanguages.}

Your version of "semantics" is only in your head. It hasn't been objectively isolated and rigor-atized. Stop assuming your head is the reference head of the universe.

{I suggest you randomly sample the documentation for a variety of popular imperative programming languages, read some introductory ComputerScience textbooks, and examine various on-line sources (e.g., http://www.cs.umd.edu/class/spring2003/cmsc311/Notes/Data/repr.html or http://en.wikipedia.org/wiki/Computer_number_format etc.) for evidence that my version of "semantics" is in a lot of heads, not just mine.}

From your "repr" link: "Representations allow us to write values, which are an abstract idea, in a concrete, preferably, compact way." (Emphasis gleefully added.)

"Abstract idea" == Head stuff! Kapowww! Right in the kisser! Plus, it contradicts your usage of "representation" because it's not meant to be "external" in your model. Furthermore, it does not say types are "inside" values.

{As noted above, types need not be "inside" values. It is sufficient that they be associated with values, i.e., that there be a mapping between values and types. Whether this is done via inclusion or composition (i.e., "has-a") or some other mechanism is an irrelevant detail, though inclusion or composition is convenient and intuitive for modelling purposes. Note that many "abstract ideas" in programming have concrete manifestations. "Variable" is also an abstract idea, but a typical imperative program is full of them.}

PageAnchor Ratio_01

And "a lot of heads" may not matter. It's the ratio that's important. For example, if 50% of typical programmers use head model X and 50% use head model Y, then the choice of X selected over Y, or Y over X probably won't make much difference (all else being equal). It only becomes notable if it approaches around 70%. And even then, other trade-offs and conflicts might come into play such that it's a weighing game to make the least evil compromise. It would have to approach around say roughly 85% before other tradeoffs should probably be ignored and popularity alone dictates the model. -t

{You claimed that "[my] version of 'semantics' is only in [my] head," so I demonstrated otherwise. What percentage of heads "my" semantics happen to appear in is irrelevant. It is sufficient to note that they are by no means unique to my head.}

Representations can be "outside" the head, but they are only lossy projections. A painting of "anger" is not anger itself.

{I'm afraid I don't understand your response. "Lossy projections"? "A painting of 'anger'...?"}

Where explicitly did you "demonstrated otherwise"? Yes, notions impact objective things, but that's not the same as saying notions ARE objective things.

{I "demonstrated otherwise" in the paragraphs above. You claimed that "[my] version of 'semantics' is only in [my] head," so I demonstrated otherwise by providing links to documents and sources that share the same semantics. Thus, they are not "only in [my] head".}

No they don't. They call it an "abstract idea", meaning it's a concept in human heads.

{You appear to be inferring something from the document that isn't there. There is nothing in the document to suggest that values exist only in human minds. Values are certainly an "abstract idea", but they're also concrete, just like "number" is both abstract and something implemented by machines.}

When something is turned into an implementation, it's RARELY referred to as an "abstract idea". YOU are the one who is "infer[ing] something from the document that isn't there". You read wrong! Your reading is out of step with average humans.

{A "number" is commonly referred to as an "abstract idea", yet every computer implements them.}

Like I said, a painting of "anger" is not anger itself. The computer model is a projection of an abstract idea, not the abstract idea itself. Everybody knows this except YOU. Symbols represent abstract ideas, but symbols by themselves are not equivalent to "abstract idea". A cave painter can paint a scene depicting "hunting". Hunting is an "abstract idea", but the painting is not hunting itself; it's merely a representation of hunting.

Similarly, if the hunter draws hash marks to track kills, he/she is turning an abstract idea (kill count) into a representation of that abstract idea. The hash marks represent the count, but they are not equivalent to the count.

{So you're arguing that computers don't implement numbers?}

Where did I say that? I'm saying implementations are not abstract ideas themselves but a representation of abstract ideas.

{Yes, of course. That's trivially true. Values are both abstract ideas ("think of a number") and concrete. Functions in C#, for example, return values.}

They are? Can I see them?

{Sure. Fire up a debugger, call a function and watch as the return value is pushed onto a stack by the callee and popped by the caller. If that's not a value, what is it? Ignoring the debugger, look at the semantics: What does a function return?}
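
For what it's worth, the same observation can be made without a debugger. A minimal sketch in Python (CPython-specific; exact opcode names vary between interpreter versions): disassembling a function shows the interpreter explicitly handling the returned value.

```python
import dis

def foo():
    result = 42
    return result

# Disassembly shows the value being loaded and RETURN_VALUE executed --
# the value is pushed for the caller to receive.
opnames = [instr.opname for instr in dis.Bytecode(foo)]
print(opnames)
print(foo())  # 42 -- the returned value
```

Whatever one calls the abstract idea, *something* concrete is loaded, returned, and received here.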

It appears we interpret that passage differently. I interpret it to mean that the representation is a concrete manifestation of a mind concept, which is referred to as an "abstract idea" in that passage. It's rare that a computer is said to "have an idea", but it is common to say that "computers can represent ideas" the same way that hash marks on caves (can) represent numbers.

{Yes, this appears to be a matter of interpretation. I saw nothing in the article that suggested values don't have reality in a computational system.}

What do you mean by "have" here? A specific representation of ideas, yes. But there are usually multiple diff ways to represent the same idea. And even if the writeup doesn't directly contradict your interpretation of "abstract idea", it doesn't shore it up as the ONLY interpretation either. It does NOT provide an unambiguous definition of "value". Thus, the writeup may not contradict your notion of "value", but it doesn't contradict mine either (that it's overloaded).

{I think this has diverged far from the original point, which was simply this: You claimed that "[my] version of 'semantics' is only in [my] head," so I demonstrated otherwise by providing links to documents and sources that share the same semantics. Thus, they are not "only in [my] head".}

I don't see the share-ness. I will agree that your interpretation is often not ruled out in such sources, but neither are other interpretations. English often sucks for tech doc; that's just life.

{If my "interpretation is often not ruled out in such sources", I'll take that as recognition of the fact that the semantics in question are not "only in [my] head".}

No, that does not follow. It does not establish a CLEAR meaning of "value" nor "abstract idea". Not ruling out X does not mean X is true. I cannot rule out that Santa Claus exists, but that does NOT mean he does.

{What "does not establish a CLEAR meaning of 'value' nor 'abstract idea'", and why does it matter? The sources I chose were intended only to demonstrate that I am not alone in my interpretation of programming language semantics, contrary to your original claim.}

It doesn't contradict EITHER of our claims because it's fucking vague! To me "abstract idea" usually means something in human heads, not machines. But it's not 100% established in the writing such that I cannot claim it with 100% certainty. But it is a good illustration of how crappy existing writing is about type-related issues. Perhaps it was meant to convey a general notion, not scientifically valid vocabulary and related tests for matches (falsification). Without the rigor, it's like a Rorschach test: each of us projects our world view into the fuzzy gaps in the writings.

{You obviously found it more vague than I did. No matter, it's merely one example among thousands I could cite to show that I am not alone in my interpretation of programming language semantics.}

No, the actual text is the evidence, NOT your interpretation of it. And the text itself does not clearly back your interpretation of it.

{Your interpretation of that particular article doesn't matter. It's merely one example among thousands I could cite to show that I am not alone in my interpretation of programming language semantics.}

The article does NOT back YOUR particular interpretation of the phrases and terms it uses. The text in the article is the objective subject of analysis, NOT your head's version of it. The fact that it does not rule out your interpretation is not the same as verifying your interpretation to be fully accurate. It doesn't rule out my claim that "value" is overloaded or vague. Further, some sources may use it one way, and some sources another. "Value" currently lacks a clear-cut scientific definition.

{"Value" is defined at the top of TypeSystemCategoriesInImperativeLanguages, and its use is consistent with convention. Again, your interpretation of that particular article doesn't matter. It's merely one example among thousands I could cite to show that I am not alone in my interpretation of programming language semantics.}

The first sentence I agree with. The rest I don't. The cited sources do not validate that YOUR interpretation of the sources is canonical. They only show it's possible. Possible != Canonical. And this can happen with overloaded terms. A thorough analysis would look something like this:

  Total occurrences of "abstract idea": 983
  Total occurrences that can be interpreted as happening on machines: 684
  Total occurrences that can be interpreted as happening in human minds: 594
  Total occurrences that are clearly happening only on machines: 85
  Total occurrences that are clearly happening only in human minds: 92
  Total occurrences that are clearly happening in both: 36
And then each number can be back-tracked to the source document via detailed reports.

{There is no need for such a detailed analysis, unless you're doing research on descriptions of programming language semantics based on literary analysis. (Which might be an interesting piece of work, by the way.) All that is required is one document not written by me, that shares my interpretation of programming language semantics, in order to demonstrate that I am not alone in my interpretation of programming language semantics. As it turns out, the majority of popular imperative programming language manuals would qualify. That, of course, has nothing to do with what is canonical or not, but then this section had nothing to do with what is canonical or not. It was solely about disproving your claim that "[my] version of 'semantics' is only in [my] head."}

{As for determining whether or not there are canonical popular imperative programming language semantics, that is also trivially done: Simply note in how many popular imperative programming languages the statement "a = foo() + 3 * c;" has the same semantics. (Hint: it's all of them. Those with differing syntax still possess a semantic equivalent, i.e., there's some syntax that allows you to specify, in one statement, "multiply the value in variable c by 3, add it to the result of invoking function foo, and store the resulting value in variable 'a'".)}

Same syntax is NOT the same as same semantics. You have not established a way to objectively measure semantics. They appear to be a figment of your imagination.

{Indeed, same syntax is not the same as same semantics. I never said it was. What I wrote is that in every popular imperative programming language, the same Algol-derived syntax has the same semantics. In every popular imperative programming language, the syntax of "identifier <assignment symbol> expression <end-of-line>" means "store the value resulting from evaluation of the expression in the variable named by the identifier." I don't know of any popular imperative language where that syntax means anything else.}
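
The shared reading described above can be sketched concretely using the page's own example statement. Here 'foo' is a hypothetical function, invented for illustration: the right-hand-side expression is fully evaluated first, and only then is the resulting value stored in the variable named on the left.

```python
# "identifier <assignment symbol> expression" means: evaluate the whole
# right-hand-side expression, then store the resulting value in the
# variable named by the identifier.
def foo():
    return 4

c = 5
a = foo() + 3 * c   # RHS evaluates to 4 + 15 = 19, then 'a' stores 19
print(a)  # 19
```

Every popular imperative language's equivalent of this statement produces the equivalent stored result, which is the sense in which the semantics are shared.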

{If semantics are a "figment of [my] imagination", then interpreters must be imaginary too, because without semantics all we have is syntax recognition. If all we have is syntax recognition, then we can successfully recognise that syntax like "identifier <assignment symbol> expression <end-of-line>" is correct, but the interpreter won't be able to do anything with it. Semantics are, quite simply, what the interpreter does to the bits and bytes in the machine, given statements that have syntax like "identifier <assignment symbol> expression <end-of-line>".}

Yes, semantics ARE figments of imaginations. They are an "abstract idea" in human minds. Computers process algorithms, not semantics directly. Computers don't have ideas and notions (outside of AI). And even if I agree that semantics may exist inside machines, for the sake of argument, the claim that the semantics for language A are the same as those for language B is not scientifically measurable at this point. There is no clear and useful metric for the same-ness of two or more instances of semantics. SUCH TESTS DO NOT CURRENTLY EXIST. We can measure and compare syntax and grammar rules for computer languages and identify similarities via pattern matching criteria, but there are no known comparable tools for "semantics".

{You are confusing "semantics" as used in the context of human language -- which is about what human language means to human brains -- with "semantics" used in the context of programming languages -- which is about what programming language "means" to computational machines. Programming language semantics are about what the code does with the machine. The tools we use to compare semantics are formal languages, set theory, DenotationalSemantics, etc., or even pseudocode and human language.}

Thus, even IF you pass the machine-existence hurdle, you still need to pass the sameness-measuring hurdle (equivalency?).

{What does that mean? "Machine-existence hurdle"???}

You first need to demonstrate that semantics can exist in machines (interpreters). And then you need to find a way to objectively compare two I/O-matching interpreters and say whether their semantics are the same or different. Thus, if semantics exist in software, how can we compare two instances of software to see if they have the same semantics?

{Without programming language semantics, all your interpreter can do is tell you whether the source code is valid syntax or not. Semantics are what valid language statements do with the machine.}

That's called "algorithm", not "semantics". Anything at a higher abstraction than "algorithm" is in human heads and only human heads (barring an AI-based interpreter, which perhaps we can't rule out either). It's true that algorithms can be influenced by human semantics, but that's not the same as being semantics. Influence != Being.

{An "algorithm" is a specific arrangement of operations intended to perform a specific task. "Semantics" are what the operations do, individually and/or as a whole. A "sort" algorithm is the operations required to put a set of items in a specific order. The semantics are "puts a set of items in a specific order." Each operation within the algorithm has its own semantics, like ":=" in Pascal has the semantics "store the value on the right hand side in the variable on the left hand side."}

To me, that's clearly the human head side of things. It's ridiculous to say that those concepts ARE IN the machine. Those concepts may shape/influence the algorithm, but that doesn't mean they are actually in the machine any more than "hunting" IS IN my cave sketch of a hunting scene.

{It's ridiculous to say that those concepts ARE IN the machine? Really? A variable isn't in a machine? There isn't a right-hand-side and a left-hand-side in a typical assignment statement? We'd better tell the compiler/interpreter people about this; it appears they've been coding ghosts.}

It's a representation and/or projection of variables "in the machine". A painting of a hunting scene is not hunting itself. And the left-hand/right-hand thing is a syntax issue. We present ASCII bytes in a certain order by convention, and can use this convention to say a given symbol is "to the left" of another. We have syntax conventions that we can use to describe syntax, but I think most would agree that syntax is not semantics itself. (It's shaped by semantics, but influence is not equivalence.) It is generally acceptable in colloquial-land to say a variable is "inside the machine", but that's probably not suitable for rigorous analysis. When their OS crashes, people often say that "my PC is sad". But that's not intended to be a rigorous or objective statement. People use social and physical terms or analogies to describe equipment and virtual worlds at times, but it would be a mistake to take those literally.

{What is a "representation and/or projection of variables 'in the machine'"? Presenting "ASCII bytes in a certain order by convention" is syntax. What that syntax describes in terms of machine actions, are semantics. "Variables" are defined by the semantics of a programming language. Syntactically, there are no "variables", there are only identifiers (which are character sequences that follow certain rules), symbols, literals, keywords, and the various allowable arrangements of these.}

Re: '"Variables" are defined by the semantics of a programming language.' -- Where exactly? A syntax processor can distinguished between sub-classes of "identifiers", including "variables". The processor can usually say whether a certain byte in the source string belongs to a variable or a function name, etc. (I should point out that some interpreters delay making that distinction until it actually uses an identifier, but informally one can usually tell by reading the source code.) The "semantics of a programming language" is not an objectively observable thing. You might as well talk about unicorns and bigfoot: things we have yet to catch and study. Maybe you are religious and worship "semantic" spirits? Your descriptions are a lot like religious descriptions. Maybe if we approach them like a religion, this whole thing will start to make sense, or at least have a common reference point with something else we are familiar with. Otherwise, I'm a loss to do anything concrete with your claims. To me "semantics" are not a concrete thing, just human thoughts. Human thoughts cannot be measured or observed directly, but only by their impact on other things, and so far we have no universally accepted model of thoughts/ideas/notions by which to communicate about them effectively and rigorously.

{Semantics are described in language documentation and the code that implements compilers and interpreters. That is objectively observable. However, unlike syntax -- where we have semi-standardised notations like EBNF to describe it -- aside from formalisms like DenotationalSemantics and HoareLogic, we don't have standardised notations to express programming language semantics. Nonetheless, semantics exist. Note that "a syntax processor" as you've described it does not deal with notions of "variables" -- except to the extent that some syntax parsers do a certain amount of groundwork on behalf of the subsequent semantic processing, which only occurs once an AbstractSyntaxTree has been created. A token like <variable> in a language grammar isn't recognised as a variable by the parser; its name is only for our benefit. In most cases, <variable> in a grammar specification is merely a synonym for "identifier" (which is itself a synonym for the rules defining an allowable or valid sequence of characters) which only appears in certain places in the grammar.}

Language documentation is vague and overloaded when it comes to "types" and related. If there is no standard notation for language semantics, then it's very difficult to compare and contrast, including demonstrating universality or canonical-ness. It's almost like doing science before math. We need some kind of notation to demonstrate anything objective, or at least mutually agree about some characteristic of the thing being analyzed. I don't dispute that semantics exists, but there is no universally agreed representation for it. "Love" and "hate" and "sorrow" also exist (as ideas or notions in the mind), but we have no universal representation or notation for them to measure/prove exactly where they are and where they are not, nor equivalency; and thus are stuck with messy spoken language as a consolation prize.

{As I wrote, DenotationalSemantics and HoareLogic provide standard notation for language semantics. Would you prefer that we continue this discussion using the DenotationalSemantics? It might help to read http://people.cis.ksu.edu/~schmidt/705s13/Lectures/chapter.pdf and http://users.encs.concordia.ca/~grogono/CourseNotes/COMP-7451-Notes.pdf first.}

I'm not going to get into a definition fight over "variable". One can generally informally recognize one by looking at the source code (at least in terms of syntactical representation of one) and distinguish between function names, language keywords, etc. Now in dynamic languages sometimes names can be reused such that syntactical analysis may not be sufficient to classify accurately. But colloquially, one can identify most from the source patterns and peers will agree, at least if they know the purpose of the program and get hints from intent.

{In other words, you can recognise the semantics of a language by looking at its source code, because similar syntax invariably implies the same semantics?}

As already explained, different AI techniques can identify the same patterns, demonstrating that same output does not imply same processing. You have not demonstrated that all correctly matching outcomes prove a consistent semantics framework behind them. "Single path theory", I shall call it. Same-outcome != same-semantics.

{Can you demonstrate these "different AI techniques" creating an alternative but viable interpretation of programming language semantics? It is certainly conceivable that alternative but viable interpretations of programming language semantics might exist. I have never seen such, and even your model describes conventional semantics with added redundancy to (apparently) avoid "nesting" in your choice of notation.}

No, it's an analogy, not an implementation. I cannot X-ray human minds in sufficient detail and neither can you. What exactly is "redundant" about it? Maybe "analogy" is the wrong word. It's proof that matching I/O profiles does not automatically imply matching processing/thinking. (I consider neural nets "thinking".) It takes only one case to bust a rule that says matching I/O profiles always implies matching semantics (thinking). I/O profile matching may be considered weak evidence to back a particular case, but not sufficient by itself. It may be a prerequisite, but not a sufficient condition by itself to claim equal semantics.

{What's redundant about your model is that when your valvarueiables are used as values, the "name" attribute is redundant, and when your valvarueiables are used as variables, the "type" attribute is redundant unless you're describing Category S languages. I have no idea what neural nets have to do with this, or why you consider them to be "thinking". The rest continues to smack of conflating the semantics of human languages with the semantics of programming languages.}

Sorry, I don't see redundancy. You'll need to be more specific on where exactly the duplication is. As far as neural nets, yes, I consider them to be "thinking". Oh shit, not a LaynesLaw mess over "think". Noooooooo! You guys are the ones using English to convey "semantics", not me. True, there may not be a lot of choice, but that's because semantics has yet to be scientifically tamed. That's not my fault. Until you capture bigfoot, few will or should believe you.

[Formal semantics were "scientifically tamed" before you were born. It's your fault that you don't know this already.]

You haven't demonstrated uniqueness. The existence of an (alleged) semantic model is not proof of uniqueness.

{What is this "uniqueness" you're insisting on, and why?}

Search "prove equivalence".


PageAnchor indirect_types

If all values must include a type component, then your model of variable and its parts in D2 language is wrong, because there is no "type" component for D2. If the type association is so strong in human notion-land (semantics?), then that notion would supersede language categories.

In category D2 languages, the type is the same for all values. There's no need to put a reference to "string" in any value when a reference to "string" implicitly applies to every value.
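
A rough sketch of that point, using hypothetical structures that are not any particular model's notation: a "D1"-style value carries an explicit type reference, while a "D2"-style value needs none because "string" implicitly applies to every value.

```python
# D1-style value: an explicit type tag travels with the representation.
d1_value = {"type": "number", "repr": "42"}

# D2-style value: no per-value tag; the type "string" is implied for all.
d2_value = "42"

def d1_type(v):
    return v["type"]   # read the explicit, per-value tag

def d2_type(v):
    return "string"    # the same answer for every value, by definition

print(d1_type(d1_value))  # number
print(d2_type(d2_value))  # string
```

The mapping from value to type exists in both cases; in D2 it is simply a constant function, so storing it per value would be pointless.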

One could say the same for just about any casual usage of "value". It's basically a wiggle-room clause to make anything be or have a "type" when you need a type to exist for your pet definitions/models.

You need a type to know what value a representation denotes. For example, given a binary representation of 110110, what is the value?
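
That point can be made concrete: the same bit pattern denotes different values under different type interpretations. A sketch using Python's struct module (the byte layout chosen here is illustrative):

```python
import struct

# One bit pattern, several values, depending on the type it's read under.
raw = bytes([0b110110, 0, 0, 0])           # four bytes, little-endian

as_unsigned = struct.unpack('<I', raw)[0]  # unsigned 32-bit integer
as_float    = struct.unpack('<f', raw)[0]  # IEEE-754 single-precision float
as_text     = raw[:1].decode('ascii')      # first byte as an ASCII character

print(as_unsigned)  # 54
print(as_text)      # 6 -- since 0b110110 == 54 == ord('6')
print(as_float)     # a tiny denormal float, nothing like 54
```

Without the type, 110110 is just bits; the type is what makes it denote a value.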

Plus, I can use your own wiggle weapon against you and say my usage of "value" has an implied type of "string".

In that case, you're only modelling D2 languages.

In my model's case, we are talking about the parts INSIDE the model, not the language being modeled. Different scopes.

And that doesn't matter anyhow if you are talking about UNIVERSAL definitions. If it varies per "language type", then it's not a universal rule, but rather a situational rule.

And most programmers (and form users) barely notice a difference between D1 and D2 in practice such that it's not going to significantly shape their vocabulary interpretations. I suspect you may argue professional interpreter writers care about such subtlety, but PIW's are NOT the target audience.

I don't see how that addresses my point.

Let's see if I can restate this. My model tends to use D2 conventions internally (to keep things clear), but it can model D1 languages themselves. Thus, I am not modelling only D2 languages, as you appear to claim.

If your values don't have explicit type references, you're modelling D2 languages and not modelling D1 languages, whether that's what you intend or not.

Wrong. That value thing is your specific model's requirement.

It's more than that: It's a programming language requirement. I simply document it.

Wrong.

What does a function return? What does a variable store? What does an expression evaluate to? If your answer is anything other than "value", you're wrong.

They DO return "values" in my model, but they also return other things. Also, I don't necessarily claim such statements are a universal truth; they could be model-specific. Without an established and clear definition, it's hard to say.

I don't claim such statements are a universal truth either. Note how I've always been careful to use the phrase "in popular imperative programming languages" because in those, functions always return values, variables always store values, and expressions always evaluate to values. I don't know of any imperative programming languages (popular or otherwise) that don't embody this, but if you feel "popular imperative programming languages" is too vague, think of it as being the top 15 programming languages on the TiobeIndex as of November 2013.

Even in the top dynamic languages, "value" is not objectively and clearly defined. Source code has no direct "values" (per syntax diagrams etc.), and so if they exist, they are either internal to the implementation (which is somewhat arbitrary), or an abstract idea in human heads.

Remember, every language requires both syntax and semantics. Without both, there is no language. "Values" may not have specific syntactic elements (though literals denote values), but they have specific semantic elements.

I would test that claim, but I cannot find where the "semantics" are to yank them out and watch it "run" without them. That's not sarcasm either: one of the best tests for the claim "X requires Y to work" is to yank out the Y and see if it stops working.

This can be done trivially. Download the RelProject, install the source code under Eclipse and verify that it runs by launching Rel3DBrowser:DBrowserWithoutMonitor. Then go into each of the low-level virtual machine operators in Rel3Server:ca.mb.armchair.rel3.vm.instructions.* and replace the current definition of "execute(Context context)" with "public final void execute(Context context) {}". That eliminates the semantics by turning every language statement into "NoOp". Run it and see what happens. Hint: When you run scripts, no databases will get updated, no files will be opened and read or written, no output will be produced, nothing will happen. Programs will be recognised as syntactically valid (or not), but they won't do anything.
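
The same experiment can be sketched in miniature without installing anything. This toy is not Rel; all the names and the one-statement grammar here are invented for illustration. Stubbing out execute() leaves syntax recognition intact while all behaviour vanishes:

```python
import re

ASSIGN = re.compile(r"^([a-z]+)\s*=\s*(\d+)$")  # syntax: identifier = integer

class OpStore:
    def __init__(self, name, value):
        self.name, self.value = name, value
    def execute(self, env):
        env[self.name] = self.value      # the semantics: store value in variable

class NoOpStore(OpStore):
    def execute(self, env):
        pass                             # semantics removed: valid syntax, no effect

def run(src, op_class):
    env = {}
    for line in src.splitlines():
        m = ASSIGN.match(line.strip())
        if not m:
            raise SyntaxError(line)      # syntax recognition works either way
        op_class(m.group(1), int(m.group(2))).execute(env)
    return env

print(run("a = 5\nb = 7", OpStore))    # {'a': 5, 'b': 7}
print(run("a = 5\nb = 7", NoOpStore))  # {} -- program "ran" but did nothing
```

With NoOpStore, malformed input still raises a syntax error, but well-formed programs accomplish nothing, which is the claimed effect of removing the semantics while keeping the syntax.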

You are yanking out the algorithms, not semantics. We already agree they need algorithms to work. Your arrow is bigger than the target. Can you leave the algorithms in and yank out the semantics separately? (And I won't let you trick me into installing RelProject. Nice try.)

Algorithms are used to implement language semantics. In the RelProject, algorithms implement the TutorialDee language semantics. By "yanking out the algorithms" that implement the semantics, we remove the semantics. The language semantics are reflected in the interpreter code by the names of the machine instructions, e.g., OpAdd, OpRestrict, OpJump, OpConcatenate, and so on. The semantics of OpAdd or "+", for example, are "numeric addition on integer values" in English (that's what it means to us) and it's implemented in the "OpAdd" class of the ca.mb.armchair.rel3.vm.instructions.core package by the following (simplified, for illustration) algorithm: "v1 ← pop(); v2 ← pop(); v3 ← do_integer_addition(v1, v2); push(v3);" That algorithm is what "+" "means" (I'm using a deliberate anthropomorphic term here) to the machine.

I thought "semantics" were at a higher abstraction level than PUSH, POP, and JUMP instructions. That's arbitrary (non-unique) implementation detail. Let me put it this way; the kind of semantics were are interested in for this topic are those that typical programmers share and use, not chip-level issues. Let's not drag chips into this if we don't have to, it's messy enough as is. And why do you have to deliberately anthropomorphize it? Perhaps because (non-AI) machines don't have notions and ideas? And if they did, how do we measure it, such as detecting its presence or absence or equivalence?

Semantics are independent of the abstraction level. Semantics describe "what"; algorithms describe "how".

We've been over this already, "What" is vague in this context.

In this particular context, I've been deliberately vague. In the context of actual language statements, it's not vague at all. "while (p) {}" is a loop. That's "what" it is; there's nothing vague about that. "How" involves details of the mechanisms of expression evaluation, branching, etc.

"Is" implies some kind of equivalency, or at least membership to a designated class/category/type. What objective equivalency/membership-verification device are you employing here?

[Looking at the language definitions, of course. Can you find a popular language where "while (p) {}" isn't a loop? I tried to and couldn't.]

If one builds a regex-like tool to classify such syntax patterns as "loops", does that mean it's using the "same semantics" as programmer heads (or your head)?

{I don't understand the question. What does "programmer heads" have to do with programming language semantics?}

What is performing your "isn't a loop" test above?

{I'd expect it to be the test code that verifies whether or not the code inside '{}' is repeated as long as 'p' is true, but you can also examine the language documentation or the source code for the interpreter or compiler to see what it's meant to do.}
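
A check of the kind described can be sketched directly (a toy, with made-up variable names): run code of the shape "while (p) { body }" and observe whether the body repeats as long as p is true and stops once it is false.

```python
# Toy verification of loop semantics: does the body of "while (p) { ... }"
# repeat while p is true, and terminate once p becomes false?
count = 0
p = True

while p:
    count += 1
    if count == 3:
        p = False   # falsify the condition; the loop should then terminate

print(count)  # 3 -- the body repeated, consistent with loop behaviour
```

If the construct were not a loop, the body would run once (or not at all) regardless of p, and the count would not reach 3.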

An objective loop detector algorithm? That'd be cool if you can pull it off without relying on syntax. But if we are talking about "typical Algol-style languages", then syntax is probably sufficient, and humans can also use such "historical" syntax patterns to say "yes, it's a loop". They don't necessarily need higher-level thinking to pull it off. In other words, certain syntax patterns are usually associated with loops out of tradition and convention. To "fail" your test above, somebody would have to decide to violate convention. There is no rational purpose to do such, other than jack around with people or hide something.

{It's not a loop detector, but simple code -- written in the language under test -- to verify whether it's looping or not. The only reason "typical Algol-style languages" are "typical" is because they share both common syntax and common semantics. If they shared only common syntax, we'd be talking about how exactly the same statements do completely different things. We'd be asking why "a = foo() + b" assigns the result of invoking 'foo' plus 'b' to 'a' in C#, but rotates object 'a' about the 'foo' axis by 'b' degrees in <hypothetical but imaginary popular imperative programming language 'x'>. Therefore, the convention to which you refer is both syntactic and semantic.}

But this "semantics" is a notion-y human head thing. It's not an objectively dissect-able entity. There are also processing algorithms/rules that are typically associated with such syntax patterns. But to call that "semantics" may be a reach. It could be merely tradition/habit. A tool is built that way because somebody started building it that way in the past and people liked it enough to keep using the same syntax-to-algorithm patterns. It's an association between syntax patterns and processing algorithms that humans either like or live with due to comfort or something else. To say it "means" something may be giving it too much credit. It may merely be a useful association habit. To say it "means" moving foo to bar or whatever may not be necessary or accurate: people see the pattern and associate a reaction based on that pattern and past experience. It does not mean they are translating it into a clear set of steps or higher-level meaning any more than we recognize a cat when we see a cat even though we are at a loss to precisely and completely describe how we mentally do that and tell it apart from possums or mountain lions.

{No, you're insisting that "semantics" is a notion-y head thing. Semantics defines, for example, that a variable is a language construct that can be assigned values and that has a name. A programmer implements the semantics of variables in code by constructing mechanisms (using algorithms, yes) to assign values to variables, retrieve values from variables, and give variables names, and then associates aspects of those mechanisms with language syntax.}

Like I said, human mental semantics SHAPES such algorithms, but that is not the same as semantics "being in" the interpreter. Influence != Equivalence. And I don't see how your (non-unique) English version of an assignment statement supports your claim. You keep giving those English snippets, as if repetition will convince me. To me it's just a lossy translation of your personal pet head model into English. That's all; nothing more. -t

{Would you prefer that I used some formal notation instead of English? A formal notation would eliminate possible ambiguity, but for informal discussions -- as opposed to mathematical reasoning -- English is generally friendlier. Semantics are "in" the interpreter; there is invariably (out of necessity -- semantics become code in real interpreters and compilers) a correspondence between semantics and the code that implements them. My English version of an assignment statement is the only way assignment is defined in every popular imperative programming language. If you disagree, please provide a counter-example.}

I did provide counter examples already. You ignored them, multiple times. Your English version is not the only way to describe such. If you mean it's the only way "defined", then show where this canonical definition is, beyond anecdotes. And if you use a formal notation, you will need to demonstrate it's truly capturing "semantics" and not something else (such as an algorithm or observations about an algorithm).

{What "counter examples" did you provide to show that there are different interpretations of assignment? In particular, how do they contradict my descriptions?}

{Algorithms that implement a language aren't visible in a language, so we can't capture those or observe their behaviour except as behaviour in the language itself. Therefore, if we're not capturing syntax, we must be capturing semantics, because a language is defined by its syntax and semantics. There isn't anything else.}

But you could be creating an alternative algorithm or portions of an algorithm, but calling it "semantics". You don't have to see the original (implementation) algorithm to make such a mistake.

{I don't follow you. Could you give an example?}

Dr. Fobiz could create a notation that he/she claims represents "semantics". But really it's just another way to define algorithms (of, say, an interpreter), or parts of algorithms. How does one know if such a notation is actually representing "semantics"? We shouldn't just take Dr. Fobiz's word for it.

{If defining algorithms or parts of algorithms is what a language lets us do, then it involves (only) syntax and semantics.}

How does this relate to the scenario? And how does one detect/verify "only"?

{Language consists of syntax and semantics, by definition. If you're not talking about the former, you're talking about the latter, and vice versa.}

I still don't see how this relates to the Dr. Fobiz example. You lost me.

{Regardless what Dr. Fobiz creates a notation to do in a language, if it's a language, it's syntax or semantics.}

I meant testing if it represents the semantics the Doctor claims it does, not that it contains some random semantics regardless of source or purpose.

{You mean, what if the Doctor lies about the semantics of her language? Then the statements that programmers write in the Doctor's language won't behave as the Doctor described.}

It's not a programming language. I repeat, "Dr. Fobiz could create a notation that he/she claims represents 'semantics'."

{I don't know what that means. Do you mean a meta-language designed to describe the semantics of a programming language, itself (necessarily) using syntax and having semantics? That sounds like DenotationalSemantics.}

That's what influenced the Dr. Fobiz question/scenario to begin with.

{Sorry, I'm not following you. This "Dr. Fobiz" section has lost me.}

How do we know that DenotationalSemantics is really denoting semantics? It may just be converting between programming notations and the author merely claims the new version is "semantics". It looks almost like an Algol-style to Prolog converter, but I am probably not reading it right.

{If it makes reference to a given language and it's not denoting syntax, it must be making reference to semantics.}

What exactly do you mean by "making reference to"?
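For what it's worth, the flavour of DenotationalSemantics being debated can be caricatured in a few lines of Python: each syntactic form of a tiny invented expression language is mapped to the value it denotes, rather than to machine steps. Whether such a mapping captures "semantics" or is "just another algorithm" is exactly the question at issue; the mini-language and the function name are invented for illustration:

```python
# A toy denotational-style definition for a tiny expression language.
# Each syntactic form is mapped to its denotation (a plain number),
# rather than to interpreter machinery.  Tuples stand in for parse trees.
def denote(expr, env):
    if isinstance(expr, int):      # a literal denotes a number
        return expr
    if isinstance(expr, str):      # a variable denotes its env lookup
        return env[expr]
    op, left, right = expr         # e.g. ('+', e1, e2)
    if op == '+':
        return denote(left, env) + denote(right, env)
    if op == '*':
        return denote(left, env) * denote(right, env)
    raise ValueError("unknown form: " + repr(op))

# denote(('+', 'a', 2), {'a': 3}) denotes the number 5
```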


 Dynamic language -

   a = 3
   b = "4"
   c = "C"

   add(a, b)    returns 7
   concat(a, b) returns 34
   add(a, c)    fails

Type "tag" is not needed.

Feed the wrong type to a process, it fails.

Those particular operations don't necessarily need or use a type tag. However, that does not mean that the entire language has/uses no tags. Other operators may potentially expose the existence of a tag. In short, it's an insufficient test set. Overloaded operators are often a source of difference. What are you trying to illustrate anyhow? -t

I should point out that one may never know for sure if a language uses type tags, barring dissecting the interpreter EXE. ColdFusion acts like it has no tags (for scalars) so far, for example, but that does not mean that a case may someday be found that can only be (or best be) modeled/explained with tags.
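To make the "insufficient test set" point concrete, here is a hedged Python sketch: add and concat alone can be modelled without any tag (coercion suffices), yet a single additional operator can expose one. The names add, concat, and typename are hypothetical:

```python
# add/concat can be modelled tag-free via operand coercion, but a
# language may have OTHER operators that expose a tag's existence.
def add(x, y):
    return int(x) + int(y)    # coerce both operands; fails on "C"

def concat(x, y):
    return str(x) + str(y)    # coerce both operands to strings

def typename(x):
    return type(x).__name__   # an operator that DOES expose a tag

a, b, c = 3, "4", "C"
# add(a, b) -> 7, concat(a, b) -> "34", add(a, c) raises an error.
# typename(b) -> "str": one extra operator is enough to reveal that the
# implementation carries a tag, even though add/concat never show it.
```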


Procrast-A-Tron

As a thought experiment, it may be possible for the interpreter/model to not evaluate expressions until it absolutely has to. In other words, store the expression itself (or partially evaluated sections of it), and collect up (append or prepend, perhaps with parentheses) such expressions until the last possible moment, perhaps at the output operation. There may be no need to finish each statement at the end of a given statement. LazyEvaluation? -t
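A minimal sketch of this "Procrast-A-Tron" idea in Python, assuming thunks (zero-argument functions) stand in for stored, unevaluated expressions; the helper names assign and force are invented:

```python
# Assignment stores a thunk instead of a finished value; nothing is
# evaluated until output (or some other consumer) demands it.
env = {}

def assign(env, name, thunk):
    env[name] = thunk          # store the unevaluated expression

def force(env, name):
    return env[name]()         # evaluate at the last possible moment

assign(env, 'a', lambda: 2 + 3)                  # 'a' holds an expression
assign(env, 'b', lambda: force(env, 'a') * 10)   # builds on a's thunk
# Nothing has been computed yet; force(env, 'b') finally yields 50.
```

This also shows the side-effect hazard mentioned below: reassigning 'a' before forcing 'b' would change b's eventual result, which is one reason LazyEvaluation is trickier in imperative languages.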

Sure, LazyEvaluation is fine; it's fundamental to certain FunctionalProgramming languages like HaskellLanguage. It's trickier in imperative programming languages, because side effects may require the developer to consider the effects of LazyEvaluation. Otherwise, it's invisible to the programmer.

For example, many DBMS engines use a form of LazyEvaluation to only retrieve rows or tuples as requested by the query result consumer, so that (for example) "SELECT * FROM VastTableOfDoom" will not retrieve any more rows than are asked for by the process that sent the query. If that process never looks at the rows in the resulting RecordSet, the DBMS won't retrieve even a single row. If that process only displays ten rows (say, on a Web page), then only ten rows will ever be retrieved from the DBMS and its VastTableOfDoom. That's also true of a query like "SELECT * FROM VastTable1, VastTable2", which is a cross join that could produce a RecordSet of COUNT(VastTable1) * COUNT(VastTable2) rows, except that if the query process only ever displays five rows, the cross join will only ever process and produce five rows.
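The DBMS behaviour described can be mimicked with a Python generator; vast_table_of_doom is an invented stand-in for a table scan, and the counter shows that only the requested rows are ever produced:

```python
from itertools import islice

fetched = {'count': 0}   # records how many rows are actually produced

def vast_table_of_doom():
    # Pretend scan over an effectively unbounded table; each yield
    # corresponds to one row actually "retrieved" from storage.
    n = 0
    while True:
        fetched['count'] += 1
        yield ('row', n)
        n += 1

# The consumer only asks for ten rows, so only ten are ever retrieved,
# no matter how vast the table is.
page = list(islice(vast_table_of_doom(), 10))
```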

However, in general LazyEvaluation does not change what a statement like "a = foo() + b;" does; it's an optimisation rather than a change in semantics.

Without an objective way to compare instances of semantics, I cannot verify that claim.

You can objectively compare semantics using DenotationalSemantics, HoareLogic, or other formalisms.

Can you use those to prove there is some kind of universal or canonical "semantics" in typical dynamic languages? Remember, you cannot just show one possible "implementation" (for lack of a better word), but must also demonstrate your claimed patterns are either the only possible semantic patterns, or are those commonly used by regular developers (many of whom don't know or care how interpreters are actually implemented, I shall remind you).

In principle, yes. I can use DenotationalSemantics to formally demonstrate the same semantics for a set of languages. It would be a lot of work, though, and it would merely be a formal presentation of what can be shown trivially: Allowing for (slight) syntactic variation, a statement like "a = foo() + 3 + c;" or constructs like "if (p) {} else {}" or "while (q) {}" have the same semantics in every popular imperative programming language. Do you disagree? If so, can you show how any of these examples varies in semantics from language to language?

Regarding DenotationalSemantics, see PageAnchor Hoare_Horror. And the bottom question is trying to flip the burden on me, which isn't going to work. The default is not that semantics are the same. I have no way to rigorously compare "semantics" any more than I can compare "hate" in one man's mind to "hate" in another man's mind. Human ideas have not been universally codified so far. If you claim they are the same, then rigorously show they are the same. Otherwise, the default is "unknown". (External notations, such as programming code, are "lossy" representations of human ideas, or at least have not been proven to be complete representations of human ideas.)

The bottom question isn't "trying to flip the burden" on you, but simply to emphasise the point immediately before it: Allowing for (slight) syntactic variation, a statement like "a = foo() + 3 + c;" or constructs like "if (p) {} else {}" or "while (q) {}" have the same semantics in every popular imperative programming language. That is indubitable. I have no idea how "human ideas" is relevant, unless you're still erroneously believing programming language semantics exist only "in the head".

Semantics outside the head cannot currently be scientifically analyzed in a rigorous way, REGARDLESS of whether it exists or not. (The same probably applies inside the head also.) Thus, to say anything specific about these possible in-machine semantics is likely futile. IF you find a way to scientifically isolate and analyze and objectively compare the in-machine semantics of interpreters, you may deserve a Nobel/Turing prize. Until then, stop pushing this fuzzy issue. We are perhaps wandering off topic anyhow; what does this have to do with "values"?

Semantics can currently be scientifically analysed, in a rigorous way, using DenotationalSemantics. However, in this case, it's so obvious as to be unnecessary -- there is no need for DenotationalSemantics to recognise the inescapable fact that "a = foo() + 3 + c;" or constructs like "if (p) {} else {}" or "while (q) {}" have the same semantics in every popular imperative programming language. If you disagree, please identify a popular imperative programming language where the semantics of any of the three quoted examples differ.

You said the same thing earlier re "obvious". I won't repeat my response to it. If you can demonstrate commonality using DenotationalSemantics, please do. Fuzzy words aren't cutting it. And remember that the existence of a DenotationalSemantics for such by itself does not establish commonality; it may be one "encoding" among many possible solutions for the same translation/problem.

[The existence of other possible solutions is a non-issue. If there exists a single semantic for all of them, that is sufficient to prove (you know, that item at the top of the EvidenceTotemPole) that there is a common semantic. All of the languages that I know of (c, c++, c#, php, etc.) use the same semantics for those constructs. Can you come up with any that don't?]

For the sake of argument, I'll agree for now that you showed that a common semantic CAN represent our target language set. However, that does not by itself prove that there are NO ALTERNATIVE semantic models that can do it ALSO. Thus, if your semantic model has your kind of "values", that alone does not mean that all possible semantic models have your kind of values. This is what I mean by the "uniqueness requirement". Put a slightly different way, showing that semantic model A is valid for set X does not rule out the existence of another semantic model B that is also valid for set X. What you need to show is that ALL POSSIBLE (valid) semantic models for language set X have feature V (V = your style of "value", or whatever else you claim is canonical or necessary). -t

[We haven't claimed that there are no alternative semantic models. There are, in fact, infinitely many. But, they will all be equivalent to the common semantic (just like 1+1 is equivalent to 0+2 by virtue of evaluating to the number 2). So your insistence that we prove uniqueness is just a RedHerring.]

Arrrg. Fine, then prove equivalence. I shouldn't have to ask. We have an objective (or agreed-upon) way to do it with real-number expressions, so make/find/invent the equivalent equivalence-finder for "semantic math". You accuse me of RedHerring, so I'm accusing you of delay tactics. You act like you opened Pandora's Semantic Box, but don't know how to use it right, inventing layers of excuses and what-ifs to delay. (Technically, "Feature V" could be "equivalence of your Value thingy". I didn't rule that out above, so technically I didn't exclude equivalence. If you want to get anal...) Let me offer an improved version:

   What you need to show is that ALL POSSIBLE (valid) semantic models for 
   language set X have feature V or its equivalent.

[Actually, we don't have a unique way to do real-number expressions. The two most common are Cauchy sequences and Dedekind cuts. There are infinitely many other ways to do it that we haven't bothered to name as well. They are all equivalent in that they assign the same truth values to sentences in the language of real numbers. This is, in fact, exactly what is happening with the semantic models for computer languages. As for your proof, in order for two semantic models to be models of the same language, they must give the same results. Hence, they must be equivalent. QED.]

What is "same truth values"? I've been out of college for a long time, so have to ask such things. Re: "In order for two semantic models to be models of the same language, they must give the same results." -- I don't disagree, but how does that relate to our discussion? We need to demonstrate equivalent semantics, not equivalent languages. Or perhaps I don't know what "they" refers to. Anyhow, for arithmetic, most humans with some education have agreed on a standard notation and "processing rules" (for lack of a better term) for this notation. And we've agreed on what "equivalence" means in that notation, such that we have an agreed-upon "algorithm(s)" that tells us that "a + a" is equivalent to "2a". More specifically, the notation's rules allow us to convert a+a into 2a and back unambiguously. To do the same thing for semantics, first you need to establish or find an agreed-upon notation and its rules, and then use it to show that all target languages have equivalence, or at least equivalent parts (such as your version of "value").

[The "same truth values" means that the truth values are exactly the same. If one model says 1+1=2 is true, then the other model, to be equivalent, must also say 1+1=2 is true. Similarly with false statements. (Since there aren't any other truth values, we are done.) As for your notational rant, we haven't all agreed on a standard notation; we use a wide variety of notations. E.g. I + I = II. 1 + 1 = 2. One plus one equals two. Etc. In addition, there is no need for an agreed-upon "algorithm" that tells us that "a + a" is equivalent to "2a". "2a" is defined, by fiat, to be "a+a". It is, to bring this back to the larger discussion, the semantics of "2a" that allow us to go back and forth between "2a" and "a + a". To show that different semantic models are equivalent, we only need to show that they give the same answers to questions about semantics. I.e., what I showed in my previous response.]
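The "same truth values" criterion can be illustrated with a toy Python sketch: two deliberately different models of arithmetic (machine ints versus unary tallies) that nonetheless assign the same truth value to every sentence of the form "x + y = z". All names here are invented for illustration:

```python
# Model 1: numbers as machine ints.
def m1_num(n): return n
def m1_add(a, b): return a + b
def m1_true(a, b, c): return m1_add(a, b) == c   # is "a + b = c" true?

# Model 2: numbers as unary tallies, e.g. 3 -> ('I', 'I', 'I').
def m2_num(n): return ('I',) * n
def m2_add(a, b): return a + b                   # tuple concatenation
def m2_true(a, b, c): return m2_add(a, b) == c

# The two models are "equivalent" in the sense above if every sentence
# gets the same truth value in both.
def agree(x, y, z):
    return (m1_true(m1_num(x), m1_num(y), m1_num(z))
            == m2_true(m2_num(x), m2_num(y), m2_num(z)))

# agree(1, 1, 2) and agree(1, 1, 3) both hold: the models assign the
# same truth value (True and False respectively) to each sentence,
# despite entirely different internal representations.
```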

There seems to be ambiguity about whether we are comparing equivalency of semantic models, languages (syntax), or algorithms. Maybe we need to invent a pseudo-notation where we can state rules, something like:

   if semanticsAreEquiv(A, B) And languageHasSemantics(lang01, A) And Foo(lang02) Then
      print("Languages Are Equiv:", lang01, lang02)
   end if


PageAnchor manuals_472

Continued from ValueExistenceProof. It got so full it caused a server-side Perl memory error. (I've never seen that before on this wiki.) -t

"Works" simply refers to what the language does with the machine, which is described in every programming language reference manual, and which is known precisely by the author of every language implementation, and by everyone who reads and comprehends its source code.

So you claim. And you agreed that Php's online documentation was poor, at least in areas related to types. Thus, at least for SOME languages, one is left to their own devices and experiments to model type-related behavior. The rest are vague also in my experience. Nobody has figured out how to do it well, so the authors all copy prior garbage's traditions, keeping the garbage and fuzz alive.

Whilst PHP's manual is notoriously bad, most online documentation for popular imperative programming languages is quite good. However, it almost invariably assumes prior knowledge of both programming and computer architecture. Lacking one or both prerequisites, I expect some might find language references vague, and lacking the latter would make almost every technical description seem vague, obscure or opaque.

Then mine them for the best definitions of words such as "type" and "value", and bring them forth to this wiki to be analyzed. Generally my experience is that most people like concise crisp foo-bar/hello-world style examples and don't dwell on explanations, especially for something as ethereal and overloaded as types.

That's what I've done at the top of TypeSystemCategoriesInImperativeLanguages.

I see a model(s), not samples, at least not thorough samples. I've created some code snippet example pages on this wiki also about dynamic types. But, I'm striving for a prediction model also.

The samples I've provided are illustrative of all possible cases. More samples would be repetitive. There's no need to provide a sample of both "1234" and "1235", to indicate that it works with both even and odd numbers, for instance.

I thought you admitted that you only target the most popular dynamic languages, and thus ignore possibilities such as "soft" polymorphism in D1 languages? And then I pointed out that I do consider such possibilities because domain-specific and product-embedded dynamic languages are relatively likely to be encountered even though they don't make the top of the charts. (I'm not certain the popular D1 languages don't use soft polymorphism, but won't make an issue of it here.) If you want to agree that our models are for different purposes or language sets, that's fine. I have no problem with plurality of models.

By "all possible cases", I meant in the context of popular imperative programming languages, and I don't limit it to DynamicallyTyped languages. If I see evidence that any popular imperative programming language employs "soft polymorphism" as part of its operator dispatch mechanism, then I'll include it. I've not yet seen such evidence. I know of no programming language, popular or otherwise, that explicitly provides "soft polymorphism" as default behaviour of its built-in operator dispatch mechanism, though it can be added explicitly via the subtyping-by-constraint mechanisms of TutorialDee.

That's fine, we'll agree that our models are optimized for different territories, or at least the territories have different borders.


{How do you distinguish fuzzy writing from fuzzy understanding?}

To be honest, I can't. But I've seen good technical writing and know what works (at least for my WetWare), and that is NOT it.

{I think if all the technical writing you've read still hasn't clarified values, types and variables -- pretty fundamental stuff -- maybe you should try some exercises to think about it differently.}

I did, and came up with the Tag Model and TypeHandlingGrid.

{I'd be curious to know how you think programming languages actually implement values, types and variables, as opposed to how you'd prefer to model them.}

Why does it matter? We have enough issues on our plate such that we shouldn't invent more without reasons. I'm not attempting to mirror actual interpreters, at least not as a primary goal. They are designed for machine efficiency, not necessarily human grokking efficiency. Different primary goals.

{I think it might significantly help me to understand and appreciate your model, and its justifications, if I understand your interpretation of values, variables and types.}

My interpretation of implementation? Or how I personally prefer to think of them? Those may be two different things. In my opinion, it is probably not good to get hung up on such. The tag model does not require a solid definition of any of those; the names used for the parts are merely UsefulLies for the purpose of the model and are not intended to fully match actual usage (and I doubt any model could match 100%).

{If your "interpretation of implementation" and "how [you] personally prefer to think of them" differs, then it would be most helpful -- and appreciated -- if you would discuss both.}

My description of the tag model is pretty close to how I think of the processing now. Whether it was always that way, I don't know. I don't remember. But in general I mentally use or start with an open-ended structure(s) something like this:

  <object name="..." scope="..." type_tag="..." value="..." etc="...">
A given situation uses attributes it needs and doesn't use those it doesn't need. That way, I don't have to pre-classify everything. Trying to classify and label everything can drive one nuts and is often confusing, unnecessary, and inflexible. (They may nest as needed, such as for arrays.) If I was going to implement it with machine speed and efficiency in mind, THEN I'd try to classify the entities involved into distinct categories and make hard-wired structures. And the resulting structure/entity names and layouts may differ per language.
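A dict-based Python rendering of the open-ended structure above, assuming the same hypothetical attribute names; each entry carries only the attributes a given situation needs, and entries nest for arrays:

```python
# No pre-classification required: an "object" is just whatever
# attributes the situation calls for.
def make_obj(**attrs):
    return dict(attrs)

x = make_obj(name='x', scope='local', type_tag='number', value='42')
s = make_obj(name='s', scope='local', value='hello')      # tag-free entry
arr = make_obj(name='a', scope='global',
               value=[make_obj(type_tag='number', value='1'),
                      make_obj(type_tag='string', value='two')])
# 'x' carries a type_tag, 's' omits it, and 'arr' nests sub-objects --
# each uses only the attributes it needs.
```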

{That certainly addresses "how [you] personally prefer to think of them". How about the "interpretation of implementation"? Do you believe that is how interpreters and compilers work internally?}

I'm not sure I understand the question, because I didn't describe an actual implementation, but rather the process of going from a draft design or a non-resource-optimized design to one that is. In practice they probably push your style of "value" onto stacks, but I am not an expert in that area.

{I originally said, "I think it might significantly help me to understand and appreciate your model, and its justifications, if I understand your interpretation of values, variables and types." You replied, "My interpretation of implementation? Or how I personally prefer to think of them?" I answered, "If your 'interpretation of implementation' and 'how [you] personally prefer to think of them' differs, then it would be most helpful -- and appreciated -- if you would discuss both." You responded with your "<object name=...>" example. I said, "That certainly addresses 'how [you] personally prefer to think of them'. How about the 'interpretation of implementation'? Do you believe that is how interpreters and compilers work internally?" Does that clarify the question? I'm asking you how you think interpreters and compilers implement values, variables, and types.}

I don't think about it much. I'm happy with a predictive model.

{That's fine. Think about it for a moment, and then let me know how you think interpreters and compilers implement values, variables, and types.}

Why should it matter? They are designed and tuned for machine efficiency and thus may not represent a model convenient for human dissection. In fact, there is no reason for an actual interpreter to follow human notions whatsoever if mirroring such notions results in a performance cost compared with the alternative. (Assuming error messages are still useful to humans.) I've written lab-toy stack-based expression interpreters in college (integers only, to keep it simple), and they didn't match the tag model, but again I don't see how that matters. Stacks are not "natural" to humans for expression evaluation, and so I don't see anybody arguing to keep them in a type-analysis model, unless one's goal is to learn how to make production interpreters.
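For concreteness, a lab-toy stack-based integer expression evaluator of the sort described might look like the following Python sketch (postfix input, to keep it simple); note that nothing in it resembles a tag model:

```python
# Operands push onto a stack; operators pop two operands and push the
# result.  Input is postfix (RPN), e.g. "3 4 + 2 *" for (3 + 4) * 2.
def eval_rpn(tokens):
    stack = []
    for tok in tokens.split():
        if tok.lstrip('-').isdigit():       # integer operand
            stack.append(int(tok))
        elif tok == '+':
            b = stack.pop(); a = stack.pop(); stack.append(a + b)
        elif tok == '-':
            b = stack.pop(); a = stack.pop(); stack.append(a - b)
        elif tok == '*':
            b = stack.pop(); a = stack.pop(); stack.append(a * b)
        else:
            raise ValueError("unknown operator: " + tok)
    return stack.pop()

# eval_rpn("3 4 + 2 *") evaluates (3 + 4) * 2
```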

{Why should it matter? As I wrote above, it might significantly help me to understand and appreciate your model, and its justifications, if I understand how you think interpreters and compilers implement values, variables and types.}

Like I said, I don't think that much about how they actually do it. I will agree they probably model "values" similarly to how you do it, because it's machine-resource-friendly to do it that way, as already explained.

{How about variables? How do you think they're implemented?}

I don't give them much thought either. Maybe I did in college many moons ago, but I don't remember my thought patterns back that far.

{Why are you evading giving an answer?}

I have no known way to turn my speculations into words at this point, and won't make a stronger effort without a sufficient justification.

{I suspect your evident lack of concrete understanding is precisely why you feel variables and values should be conflated.}

Yes, I possibly do lack a concrete understanding of how interpreters are ACTUALLY built. But we are not discussing production interpreter construction. It's only possibly an issue if my model produces wrong output.

{As I wrote above, it might significantly help me to understand and appreciate your model, and its justifications, if I understand how you think interpreters and compilers implement values, variables and types. Whether your model produces wrong output or not is something I shall determine when TopsTagModelTwo is complete. Until then, I would only be speculating about the as-yet unwritten parts of your model.}

What unwritten part are you most curious about?

{I'm not curious about any unwritten parts of your model. How can I be curious about something that isn't there? What I meant is that in order to determine whether your model produces wrong output or not, I have to make assumptions about the parts that haven't been written yet.}

Whatever. If you have a question about "missing parts", then ask.


Assumption of Notion Existence

For the sake of argument, even if there were a very common and solid notion of your flavor of "value", what's wrong with breaking that notion to simplify a particular model? We can add disclaimers to make sure the deviance is known. Is it too "sacred" or important to dispense with? Road maps quite often dispense with actual stuff to simplify their model of roads. Light-colored cement roads are still printed in the same dark ink as dark roads, for example. And the road width is often not to scale. Plus, it's a compromised projection of a round surface onto flat paper. (Rounded paper is not impossible, just impractical.) The map reader knows the map lies about many things, but accepts those lies to keep the map easier to read for the purpose of navigating roads. The road map lies to tune it to a specific purpose. Why can't a "type" model do the same?

{Your model's conflation of values and variables is complex (your model demonstrates StampCoupling), confusing, contrary to common understanding, and implies that your value/variable conflation can be assigned to in contexts where assignment is pointless.}

You keep claiming it's more confusing etc. than your model, and I call bullshit. Repetition of bullshit doesn't make it non-bullshit. You just got used to a particular (machine-centric) model and mistake it for "common understanding". As far as StampCoupling, it's only a suggestion, not a "hard" rule, and needs to be weighed against other design "suggestions" and techniques. (See also WaterbedTheory and CodeChangeImpactAnalysis).

{Even if we accept StampCoupling, it's still confusing to treat values and variables as the same, and it implies that your value/variable conflation can be assigned to in contexts where assignment is pointless.}

But I don't have your kind of "values", so there is nothing to treat as the same. Thus, there is no stamp to couple. An analogy would be Dell using its own computers for its own employees. It "externally" sells PC's to the public, but it can use those same PC models internally for its own employees and operations. (Sometimes this is called EatYourOwnDogFood.) The public can't touch Dell's internal PC's because they are in locked or secure buildings. This kind of variable usage has the advantage over your "value" of having a model-user-viewable reference name, which makes examining sub-parts easier. And it avoids the need for a nested structure; which appears to be poor normalization (at least in this model, which has no stacks etc.). As far as "it implies", that happens in your head, not reality, and probably not in others. I have no idea what whacky notion made you dream up the "foo()=x" scenario. And even in the rare case the model user smoked the same mushrooms as you to get the same idea and jury-rigged it in, it doesn't do anything; it's a glorified no-op. (Well, I suppose with enough mushrooms one could define and implement reverse assignments to mean "reformat the hard-drive and display an animated GIF of green monkeys flying out of Batman's ass". Oh wait, it's a reverse assignment, so the monkeys should fly into his ass. Semantics.)

{You may not have my kind of "values", but you use the same structure to represent variables and what is returned by functions and what expressions evaluate to. Since variables can be assigned to, and since you're using the same structure to model variables as you use to model both what is returned by functions and what expressions evaluate to, then it follows that you can assign to that which is returned by functions and you can assign to that which expressions evaluate to. If you can assign to that which is returned by functions, and assign to that which expressions evaluate to, then it follows logically that the assignment "foo() = 3;" and the assignment "1 + 2 = 5;" are both valid.}

You are lying or delusional. My model is not forced to do that. Stop being stupid already; I'm tired of debating such a preposterous and distracting claim. There is no gun to its head to force it accept and/or process such syntax, nor God of Logic forcing it to via lightning bolts. Your head is playing tricks on you. Reboot your head and try it again.

{No one said your model is "forced" to do anything. What your model does is permit assignment in places where assignment wouldn't normally be permitted. It permits building languages where the assignment "foo() = 3;" and the assignment "1 + 2 = 5;" are both valid. It's true that there is "no gun to its head to force it [to] accept and/or process such syntax", but there's nothing to prevent it, either.}

   class Variable {
       private String name;
       private Value value;
       Variable(String name, Value defaultValue) {
          this.name = name; 
          setValue(defaultValue);
       } 
       void setValue(Value value) {this.value = value;} 
       Value getValue() {return value;}
   }
   class Value {
       private bitstring representation;
       private Type type;
       Value(bitstring representation, Type type) {
          this.representation = representation; 
          this.type = type;
       } 
       bitstring getRepresentation() {return representation;}
       Type getType() {return type;}
    }

    // Example: fancy-object-39
    <object name="..." scope="..." mut_type="<var>|<internal-var>|<const>" type="..." value="...">
Another thing: if I replaced my XML structure with yours, the kit APIs would do pretty much the same thing they did before, such that if they have a flaw that allows bad assignments (for the sake of argument), it's not due to the data structure.

{Then the flaw in the data structure extends to the API.}

As a semi side note, I believe that some OO languages do allow something similar, as a shortcut for accessors whose logic can be redefined. But I don't remember the language. Thus, "myObject.setFoo(x)" and "myObject.foo=x" would be equivalent, and how the setting is done is up to the class coder such that it can trigger a function (setFoo method) to run. But I'm not necessarily endorsing such.

{The language is C#. The semantics are those of typical accessors (i.e., get and set methods), but with syntax that makes them look like references to member variables. E.g., given "class C { public int V { get; set; } } C c = new C();", you can write "c.V = 3; Console.WriteLine(c.V);", which is effectively shorthand for "c.SetV(3); Console.WriteLine(c.GetV());". It isn't the same as assigning to expressions or function invocations -- e.g., "3 + 4 = 2;" or "foo() = 3;" -- which your model and its underlying API would apparently permit, because variables, expression evaluation results, and function invocation results are all represented by the same mutable structure, accessed via the same API, which permits assignment.}

I'm not sure that's the language. The language I remember allowed one to redefine the "set" method to be anything they wanted, and would be roughly equivalent to "foo()=3".

{Perhaps you're thinking of C++. It allows you to redefine the '=' operator to be something you want, including things that aren't assignment, but then that's a change to semantics. (Strictly speaking, C++ even allows "foo() = 3;" without any redefinition when foo returns a non-const reference, but the assignment then targets the referenced object, not the invocation itself.) There's no way to redefine '=' to assign something to an arbitrary expression like "1 + 2".}

{By the way, given that C++ is a language where you can redefine most operators to change their semantics, an expression like "a = foo() + b" can be made to mean something like "rotate object 'a' about the 'foo' axis by 'b' degrees". Many argue that doing so is questionable, because it no longer reflects canonical semantics.}

Anyhow, my model only "permits" it if you break it or don't finish all parts of an interpreter built with it. Your model can also be jury-rigged. I'm tired of arguing over this; it's a silly obsession of yours. Do Not Feed The Obsessive. ENOUGH!

Please, find something more interesting to obsess on.

{It's not my obsession, but your model's serious flaw. It can be fixed easily by distinguishing variables from values.}

Sorry, you are full of it. And using your nested structure probably wouldn't fix the "problem", because it seems to lean toward the API instead, but I'm not certain because I don't even know where this alleged problem is; I'm just guessing based on your vague description. The parser would typically reject such statements because expressions are not allowed/expected on the left side of an assignment statement. There's no reason to allow parentheses or plus signs on the left of assignments. In fact, one would have to explicitly tell the parser how to handle such; it wouldn't accidentally show up. That's at least two hurdles it has to pass through: 1. syntax checking, 2. translation into API calls. And even IF those two hurdles break down, it would act like a no-op instruction anyhow, not affecting the output. Do you know what a no-op instruction is? I don't even know why I'm entertaining your silly notion. I should ignore it instead of feeding stupidity, making it grow. That's why you can't talk to regular programmers: you get mentally caught up in dumb little things that regular people don't care about. "Ooooh, but this bicycle helmet doesn't prevent you from sticking a pickle in your ear! Oohhhhh ohhhh ohhh!"
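For what it's worth, the first hurdle mentioned can be sketched in a few lines: an assignment parser that accepts only a bare identifier on the left-hand side rejects "foo() = 3;" and "1 + 2 = 5;" before any model structure or API is ever touched. This is illustrative only; no real kit's parser is being quoted:

```java
// Sketch of hurdle 1: syntax checking. The LHS of an assignment must be
// a bare identifier, so invocations and expressions are rejected up front.
public class LhsCheck {
    static boolean isValidAssignmentTarget(String lhs) {
        // identifiers only: a letter or underscore, then letters/digits/underscores
        return lhs.trim().matches("[A-Za-z_][A-Za-z0-9_]*");
    }

    public static void main(String[] args) {
        System.out.println(isValidAssignmentTarget("a"));      // true
        System.out.println(isValidAssignmentTarget("foo()"));  // false
        System.out.println(isValidAssignmentTarget("1 + 2"));  // false
    }
}
```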

Is your claim this?: My model (allegedly) puts more burden on the syntax/parsing side of things, whereas yours can catch such in a later stage if the syntax/parsing side breaks down or is omitted?

{Essentially, yes. My model cannot implement a situation in which values or variables are assigned to values, or where variables are assigned to variables, or where expression evaluation results in a variable, because values and variables are distinct.}

The issue is accepting such statements, not the name of the intermediate parts. I see nothing in your model that would prevent the "foo()" from being converted to an intermediate value. (It may be ignored, but the equivalent is ignored in my model also, if parsing lets it through.) Anyhow, it's a non-issue because the parser and API converter would normally not even recognize such a configuration; one would likely have to go out of their way to implement it (depending on the algorithm used). Enough about this red herring.

{Even if the result of foo() is an intermediate value, you can't assign to it.}

In a section above, I've asked specifically what prevents such.

{The definitions of variable and value. The former is mutable and therefore permits assignment. The latter is immutable and therefore does not.}

Search for "say what?"


Do you agree that for debugging and/or experimentation, having mutable and addressable (ID'd) "values" can be useful?

{No. A value is such a trivial structure that requiring it to be "mutable and addressable" is unnecessary. If a value needs to be "addressable", it is normally contained in a named or indexable structure. That's the basis for constants in programming languages. I can't imagine why a value would need to be mutated. How often do you need to turn a 3 into a 4?}
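The point about addressability can be illustrated: a value that needs a name gets one through a binding (that is what a constant is), and "turning a 3 into a 4" is a rebinding of the name, not a mutation of the 3. A minimal sketch, with illustrative names:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: values become "addressable" via a named binding table;
// the bindings are mutable, the bound values are not.
public class BindingDemo {
    public static void main(String[] args) {
        Map<String, Integer> bindings = new HashMap<>();
        bindings.put("LIMIT", 3);   // bind the name to the value 3
        bindings.put("LIMIT", 4);   // rebind: the 3 itself is never mutated
        System.out.println(bindings.get("LIMIT")); // prints 4
    }
}
```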

During debugging to test alternatives. "Contained in a named structure"? I thought you were against that because it violated some secret Mason's semantics code or something. Anyhow, in ANY interpreter source code, the implementation of "value" will be potentially mutable. One can put an object wrapper around such to "lock it" from changing once it's created, but one can alter that code and remove the lock if they wanted. I chose to keep the implementation simple, but if you want to put locks on YOUR version, that's perfectly fine. I'm not dictating the absence of such features. A model/kit user can do all the SafetyGoldPlating they want. I don't forbid it. It's quibbling to complain about such.

{What kind of "test alternatives" involving mutating a 3 into a 4 might you need to do that can't be done by using the 4 value instead of the 3 value?}

There have been many times where I wish I could test alternative values/inputs without altering the source code or data to fudge it during debugging. I cannot name a specific UseCase that's easy to describe at the moment. But that's moot anyhow because one can put locks on any part of my model/kit they want. I don't forbid it. Lock away! if you feel that kind of thing is important. I doubt the vast majority of programmers would give a fudge, but if it's your thing, do it. You can even modify it to have "representation" instead of "value". Rename the parts and shape it to make your mind comfy and happy. You can even glue tits onto it if you are feeling horny.

{Your model makes it impossible to "lock away" values without simultaneously having to "lock away" variables, because you use the same structure for both. Re "glue tits onto it", etc., you're being childish again. Stop it.}

No it doesn't. Add another attribute to the Variable XML structure that the accessor APIs use to prevent changes. We've been over such already. One could also create two different "kinds" of variables (objects/things), but that complicates the model too much in my opinion. I'll value simplicity over "safety" since it is not meant for production interpreters. Plus, I want to leave in the ability to fiddle with internal values for experimentation.
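The "another attribute" fix described above can be sketched as follows: the single shared structure gains a mutability flag that the accessor API checks, so the lock lives in the API rather than in the type system. The attribute and method names are illustrative, mirroring the XML attributes, not quoted from any actual kit:

```java
// Sketch: single-structure model with a mut_type-style flag that the
// accessor API enforces at run time.
public class Slot {
    private final String name;
    private final boolean mutable;   // e.g. mut_type: "var" vs. "const"
    private String value;

    public Slot(String name, boolean mutable, String value) {
        this.name = name;
        this.mutable = mutable;
        this.value = value;
    }
    public void setValue(String value) {
        if (!mutable) {
            throw new IllegalStateException(name + " is not assignable");
        }
        this.value = value;
    }
    public String getValue() { return value; }
}
```

The trade-off being debated is visible here: with the flag, a "const" slot fails at run time inside the API, whereas the separate-Value design fails at compile time.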

But you are welcome to adjust your copy any way you want. It's a free country. Instead of complaining about it, build your own. Unless you come up with solid evidence that it violates "common semantics" or something similar in a big and clear way, there's no justification to keep complaining about my version. You've failed to clearly measure "common semantics". Your version/interpretation of "common semantics" comes from your personal interpretation of your favorite written sources, NOT a clear and obvious and measurable pattern. You're being insistent based on weak (if any) evidence, which I interpret as stubbornness.

{As long as you make no claim that your experimental interpreter is a model of anything, I shall not object. That includes not injecting commentary into discussions based on an assumption that your experimental interpreter is a model, such as using "type tag" in place of "type reference" or claiming that a value is an "anonymous variable", and the like.}

Please elaborate. This seems to assume some canonical model as a reference point. I already explained in bold that it may not reflect the actual implementation of most interpreters. Why is this not good enough? Or are you back to making actual machine-oriented implementation the reference point? (You seem to waffle on that.) Otherwise, my arbitrariness is no more arbitrary than your arbitrariness. There is NO ONE RIGHT MODEL. And I don't formally define "value" anywhere. And "type reference" is not an official certified term; thus, I'm not violating anything real. This again seems to be a case of MentalModelOverExtrapolation into a (perceived) objective truth of model, vocabulary, and usage thereof. Further, "type reference" implies a pointer to some central look-up table or the like, but there's no such table in the model (as originally given; you can add your own if you really want one). Thus, it might introduce confusion without sufficient, if any, benefits to make up for that confusion.

{As long as you do not use the term "type tag", "hidden variable" or "anonymous constant" outside of discussions about your model; argue that values or programming language semantics do not exist; or treat terminology or aspects of your model that are unique to your model as if they should be familiar terminology and commonly-recognised aspects, then I shall not object. I recall a couple of instances where you've used "type tag" or just "tag" -- in discussions unrelated to your model -- as if they should be familiar and understood. If I see you do that or its ilk, I will object.}

So you can use working terms like "D2", but I can't? Sounds like a double standard to me.

{I have no intention of using "Category D2" outside of either discussion of your model, or discussion of TypeSystemCategoriesInImperativeLanguages. I wouldn't expect it to be recognised outside of those contexts, so I wouldn't mention it outside of those contexts.}

That's fine. If either one of us slips and uses something outside of context, we can kindly remind each other.


(Diagram moved to IoProfile)


CategoryMetaDiscussion


NovemberThirteen


EditText of this page (last edited January 28, 2014) or FindPage with title or text search