Continued from ValueExistenceProofTwo, which got TooBigToEdit.
(On values being allegedly immutable)
So the bucket itself is not mutable, we both agree (for practical purposes). Right? But the "contents" of the bucket ARE mutable, as we can put in and take out different content to and from the bucket. Agreed? However, rocks are a problematic analogy. In the physical world, you typically have to remove the existing rocks to put in new rocks. It's not practical to smash the rocks into dust and then rebuild new rocks (unless maybe we are recycling pottery). But with electronic flip-flops, we don't have to remove any existing physical "thing" to put in new/different physical things. It's more equivalent to switching a gate (or gates) to change the flow of fluid (or flowing electrons). Signals change the direction or "state" of "fluid gates" to alter the flow to a different path (gulley), as needed. Values (content) are NOT solid "things" in the chips, such that your rock analogy is a poor fit. There is nothing clearly equivalent to your rocks in chips. Any such mapping requires indirect and contrived fittings or associations, or additional layers of pretend or abstract hardware. (I don't believe common notions about languages to be heavily influenced by hardware, but I'm entertaining your claim to see if it provides some insight into the origins and nature of your rather curious and seemingly alien notions about common beliefs among programmers.)
It's a casual analogy, intended to clarify. It's not meant to be a precise "rock and bucket" model of VonNeumannArchitecture. It's better to skip analogies entirely, and recognise simply that slots (memory addresses or variables) contain values, slots are mutable in that a different value can be stored in a slot, but values are immutable. With this simple model, the distinction between values, variables and types can be easily explained and understood. Without this simple model, it's likely that values, variables and types will continue to be unhelpfully misunderstood as an indistinct muddle, resulting in a poor grasp of programming language semantics which leads to poor programming.
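The "slots hold values" model above can be sketched in ordinary Python, where integers happen to be immutable objects. This is a minimal illustrative sketch, not a claim about hardware; the variable names are invented for the example.

```python
# A "slot" (variable) holds a value; assignment stores a different
# value in the slot rather than altering the old value in place.
x = 3          # the slot x contains the value 3
old_value = x  # keep a separate reference to that original value
x = 4          # the slot now contains a different value

assert x == 4          # the slot's content changed...
assert old_value == 3  # ...but the value 3 itself was never altered
```

Under this model, "changing x" never means "changing 3"; it means the slot now refers to a different member of the type's set of values.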
I cannot objectively verify that "values are (or must be) immutable".
It's self-evident, and correct by definition. A "mutable value" isn't a value; it's a variable, by definition. It's a variable because it has a time-varying aspect, which is -- inevitably -- a value.
If that's the case, then values can never exist in existing RAM-based hardware because every bit/slot of RAM is potentially mutable. Mutability is relative. My "values" are not mutable to the app code writer, but are mutable to a model fiddler. Same with your "values": they are mutable to a model fiddler, or at least to a RAM hacker.
Certainly values exist. They're what are stored in every RAM address. Changing a bit replaces a value; it doesn't alter the value itself. I know of no architecture where 3 becomes 4; in every architecture, a slot that holds a 3 now can hold a 4 later.
It does not appear to be "replacement" in terms of physical replacement as typically observed during "replacement" of physical objects, as I already explained. No material can be observed to be removed, for example. It doesn't fit the profile of physical observations of replacement in the everyday world. It again appears as if you are mistaking your personal head model for universal or objective truths. Either that, or your notion of what "replace" means in English is far different from mine.
Again, beware of taking illustrative analogies too far and trying to treat them as models. Computational machines are unique, such that analogy inevitably fails because there is nothing else precisely like a computational machine. Whilst the notion of "replacement" is a helpful device to aid casual understanding, it is not precise. In order to be precise, it is necessary to dispense with analogy and accept the pure abstraction: That slots (memory addresses or variables) contain values, slots are mutable in that a different value can be stored in a slot, but values are immutable. This simple model makes it possible to understand values, variables and types without any blurring between them, and at the same time allows us to use those concepts to predict and explain all behaviour in (at least popular imperative) programming languages.
Words like "replace" come from the physical/social world of "common" humanity. That's the usual frame of reference for such words, for good or bad. As as far a non-physical abstraction, that's fine as a situational working term, but your version is not universal nor universally required. And, in terms of the tag model, not helpful for illustrative purposes per my judgement of target audience WetWare. Yes, I know you disagree about the target audience WetWare, so there is no need to rekindle that sub-debate for the 13th time (approx.). And I don't "blur between them" as I define "value" in the model.
Further, it was you who suggested we look at "computer architecture" for examples of explicit "values". Now you seem to be back-tracking. It turns out to be not any more concrete at the hardware level than it was at the language or interpreter level. We are just reinventing the same arguments at a different level of architecture.
Of course it's concrete, at all levels. What does a machine register store? What do you push onto a stack? What do you pop from a stack? What is stored in a memory address? In all cases, the answer is "a value".
Whatever a register or memory address stores is not immutable, whatever we call it. If "value" requires immutability, then it's not what memory addresses directly store: the content of a RAM address is mutable. You can envision a pretend or virtual middle-man object called a "value" that's not mutable, fine, but that's merely a UsefulLie at best (and a distraction at worst), not a universal objective truth. The content is mutable. Perhaps sub-components of content may be modeled as immutable, but such is not the only working model.
Think of it another way: At a given point in time, any variable (or register, or memory address) has a particular state. Whilst a variable may take on various states at various times, a particular state is -- by definition -- immutable. "Changing" state means referencing a different state. The usual term for a possible state of a variable (or register, or memory address) is a "value".
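The "a value is a possible state of a variable" view can be illustrated by recording a variable's successive states: the variable varies over time, but each recorded state, once taken, never changes. A minimal sketch (names invented for the example):

```python
# Record the successive states of a variable. Each snapshot is a
# value: the variable "changes state", but no past state changes.
history = []
counter = 0
for _ in range(3):
    counter += 1             # the variable takes on a new state
    history.append(counter)  # snapshot the state at this moment

assert history == [1, 2, 3]  # past states remain what they were
assert counter == 3          # only the variable's current state varies
```

In this framing, "changing state" means the variable now references a different state; the states themselves are the immutable values.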
Every change and zero changes can be viewed in such terms, if one so chooses. Sometimes it's helpful to model something that "changes state" as being "swapped out" for a different one, or model it as being mutable. Both can often be made to work, but the real question is whether using such a view is "helpful".
Absolutely it's helpful. It clearly distinguishes values from variables, and makes it possible to recognise a type as a set of values. Treating values as mutable (despite being contrary to any meaningful definition of "value") forces a conflation of values and variables, and blurs the otherwise clear distinction between types, values and variables. Treating values as immutable makes values, variables and types trivially distinct, which simplifies understanding of programming languages.
- The tag model does not "conflate" values and variables. It just does not use YOUR definition of "value". Different issue. And defining types as a "set of values" has some conceptual problems in many cases. In dynamic languages, it's best to say that each operator does what it wants to with type-related info and skip defining a global view of any given "type", because there often is no One Right global view in such languages. A "Boolean" for one operator may differ or be inconsistent with "Boolean" for another. Global consistency is nice when it occurs, but is not guaranteed per dynamic language conventions (or lack of). Perhaps we can define it as a "set of values" within EACH operator, but nobody has found a nice way to present such so far. That's why I do it imperatively in the tag model: developers know that presentation style. See, it's all done for a rational reason. -t
- I suspect you feel "nobody has find [sic] a nice way to present [an explanation of DynamicallyTyped languages] so far" because of your lack of understanding, not because it's unclear (except in the PHP manual, which is notably dire). The fact that literals of various types can be encoded in string literals -- so operators can interpret string operands as they see fit -- is so obvious as to generally not be worth mentioning.
- I already know you believe me to be a stupid dummy; repeating that over and over won't change anything. It's not new information about your assessment of me. My personal observations of how other developers mentally manage dynamic types can be found in MentallyManageDynamicTypes. Yes, I know you disagree. And again, the issue is not that such parsing MAY happen, the issue is knowing and predicting where it actually will happen specifically. ["Found" typo has been fixed.]
- Regarding "knowing and predicting where [parsing] ... will happen specifically", since it only occurs within given operators, how can you "predict" it except by making reference to specific language documentation? There is no general model that will predict that PHP's "is_bool()" will behave one way and "is_numeric()" will behave another.
- I didn't claim there was. And, using your style of "values" won't help that either.
- My descriptions -- including my "style of 'values'" -- are based on the semantics of popular imperative programming languages, the understanding of which is likely to benefit programmers in general.
- So you claim. Until we have a way to objectively measure "semantics", a fitness computation ("based on X") is so far elusive. My model does NOT contradict typical dynamic language documentation; so if you are somehow measuring "semantics" based on the documentation, then we are both in the same boat. (Some terms are overloaded or vague such that it's possible to have different interpretations without being "wrong".)
- We don't need a way to "objectively measure 'semantics'", and I've no idea what you mean by "a fitness computation". Again, your "different interpretations" are irrelevant. All that's needed is to see what language semantics are intended by reading language documentation. Your assertion that "some terms are overloaded or vague" reflects only on your lack of understanding, not any general misunderstanding about language semantics. The proof is that there are working programming language implementations written by thousands of programmers being successfully used by millions of programmers.
- You have NOT done a formal textual analysis of such to provide solid evidence the written material backs your model/notions and contradicts mine (outside of I/C builder docs). You just make claims out of the blue about those writings. And you admitted English (as usually used) is indeed insufficient in TypesAndAssociations near PageAnchor human_languages_26. Are you back-tracking now?
- Why would we need a "formal textual analysis"? English is indeed insufficient for mathematically-rigorous analysis -- it's the very reason we have DenotationalSemantics and other formalisms -- but we're not doing mathematically rigorous analysis here. If you can't recognise and understand the simple and common semantics of popular imperative programming languages, the problem is yours and yours alone.
- Like I've explained many times (see WeakProgrammersRelyOnBadDocumentation), rough and approximate notions are "good enough" for production work for typical programmers because dynamic languages are rarely used in mission-critical or bug-sensitive applications. However, if somebody wants a more rigorous model, for whatever reason, there are few options unless one is skillful with or familiar with post-graduate academic approaches and writing. Instead of relying on the difficult-to-digest academic approach, I've created a model using imperative techniques that programmers use in their daily work to make it more digestible. Here's a different way to restate the choices:
 // example choices_382
 IF approximate "type" notions are good enough THEN
     use approximate notions.
 ELSE IF more rigor or x-ray-ability is desired THEN
     IF you find the academic approaches difficult to digest THEN
         try top's tag model; it's based on imperative algorithms and XML,
         which are things you likely use often at work.
     ELSE
         dig into the academic materials, or fork over 70k for more school.
     END IF
 END IF
We could model human aging as swapping out each person with a different version of the person every morning, and say that people are not mutable, but rather being replaced by older-styled variations. But, is that thought model actually useful? That's the real question. Yes, we
can view memory addresses in a similar way, but in many cases it serves no purpose or provides no net improvements to a model in terms of the model's intended purpose.
Again, by introducing "human aging" as an analogy, you're trying to stretch analogies to fit where analogies will not fit. Analogies are ultimately unhelpful, because the only appropriate analogy for a computational machine is another computational machine. Computational machines are unique; no non-computational analogy will fully fit.
It was not an analogy, it was an application of your "state swap" view (for lack of a better term) to demonstrate how the mutability view and the state-swap view are generally interchangeable. The benefits or weaknesses of each depend on circumstances and the WetWare of the model user, and neither is absolutely right or absolutely wrong (except maybe if we gained a God View of the guts of the universe). Maybe the universe really is generating a different instance of everything per "click" of time. We can't rule it out. (That idea is not any weirder than some other quantum-theory-related hypotheses, such as the Many Worlds hypothesis.) In the everyday world, we generally only consider state-swap happening if the "prior version" can be seen and typically has to be moved out of the way or discarded/recycled. But in virtual worlds, we don't need this step, such that one can use either model interchangeably. To insist that either one is "the only right view" is silly. The benefits of either are more about weighing model description/grokking/implementation tradeoffs, NOT absolute truths. -t
I don't follow your pseudo-philosophical ruminations, which still seem to be attempting analogies. It would help to think solely in terms of computation and computational machines, not things which are neither.
It's the same issue with chips: both the state-swap model and mutation model can fit observations. The actual physical model of such chips/transistors is "flow redirection" as already discussed such that both are merely UsefulLies. We don't want to model electron flow systems at a higher level, and it's not "natural" to most human thinking. Perhaps we should study electro-mechanical computers so we don't have to delve into how transistors work. As I roughly understand them, they generally had the equivalent of a toggle switch for each "bit". A current on the left side of the switch will either switch the toggle "on", or leave it "on" if it's already there, and the current on the right side will either switch the toggle "off" or leave it off if it's already there. To me, this process is clearly NOT "replacing", and therefore does not fit your description/view of immutability. -t
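The claim that both views fit the same observations can be made concrete with two hypothetical one-bit stores: one that "morphs" its state in place, and one that "replaces" an immutable value in a slot. The class names are invented for the example; externally the two are indistinguishable.

```python
# Two models of a one-bit store. Observationally identical, so
# "mutation" vs "replacement" is a modelling choice, not a fact.

class MutatingBit:
    """Model: the bit's state morphs in place."""
    def __init__(self):
        self.state = 0
    def set(self, b):
        self.state = b
    def read(self):
        return self.state

class SwappingBit:
    """Model: a slot holds an immutable value that gets replaced."""
    def __init__(self):
        self._cell = (0,)      # an immutable tuple acts as the "value"
    def set(self, b):
        self._cell = (b,)      # the old value is swapped for a new one
    def read(self):
        return self._cell[0]

for bit in (MutatingBit(), SwappingBit()):
    bit.set(1)
    assert bit.read() == 1     # same observable behaviour either way
```

No sequence of set/read calls can tell the two implementations apart, which is the interchangeability argument in miniature.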
A physical model that matches what you describe would be more like the old-fashioned gas-station price boards: One has a special long stick to grab the old digit card, take it down, and replace it with a new (more relevant) digit card. But that's not how computer hardware typically does it.
It doesn't matter how computer hardware or gas-station price boards do anything. Think solely in terms of computation and computational machines, not electronics or analogies.
I'm not following. Please clarify.
You're still trying to use analogies. Try not using analogies.
I'll try to cut down, but please explain exactly what a "computational machine" is.
A programmable computer that can run programs.
That's what we went over above when we looked at RAM and "replace" etc. There is no physically observable "replace" happening. You seem to be projecting your personal mental models into physical things. The mutate model is a better model of what's actually observable. If you disagree, so be it. Let's LetTheReaderDecide and move on.
There is a physically observable "replace" of values happening in computers, because we can't observe any transition from one value to another. There's no point at which we can observe a memory address partly containing the old 3 and partly containing the new 4. It's either 3, or it's 4.
- Please elaborate. Physical "replace" always involves old material having to be removed. We don't see such. And I don't see how apparent instantaneous-ness backs "replace" over "morph". Both can be viewed as either instant or gradual, such that that factor is not a determiner of one over the other. Further, gradual versus instant is usually moot because RAM observers are usually "locked out" during the change-over. Thus, the slot user sees: 1) State A, 2) Lock-out, 3) State B. (I don't know if each bit actually changes at the exact same time, and I doubt regular programmers know or care, such that it doesn't affect their view of RAM. I doubt it's at the EXACT same time because that's probably impossible if time is truly continuous. It only has to switch within a given range of time: generally inside the period of the lock-out session.)
- I don't know where this "lock-out" comes from. We're not talking about concurrency issues. Even if we are, the slot user sees State A, then the slot user sees State B. There is no intermediate state. There is no gradual change of state. The notion of "old material having to be removed" is a confusion resulting from your adherence to analogies; a machine state isn't "material" to be removed.
- There is an intermediate state. The user is just not allowed to see it via a lock-out mechanism, such as timing techniques. The "lock-out" is a real feature to prevent so-called "dirty reads". Perhaps "lock-out" is the wrong term/phrase, but lock-while-write is a real feature of memory and disk design. Still, instantaneous-ness is not a distinguishing factor between the two views anyhow. And the removal stage is HOW we know "replacement" is happening. It's a symptom of replacement, NOT an analogy. Without such symptoms, we can't tell if "replace" is happening. We shouldn't just take your word for it. If "replace" has any useful meaning, then characteristics of replacement need to exist. Otherwise, it's either the wrong term OR indistinguishable from alternative descriptions/interpretations. See PageAnchor English-Universe.
- If the user can't see an intermediate state, then it isn't there from the user's point of view.
- That's poor reasoning. AbsenceOfEvidenceIsNotEvidenceOfAbsence. If they can't observe the state change happen, they should NOT make assumptions about it when making conclusions intended to be universally true. You should not conclude it's instantaneous or not instantaneous if you cannot see it. It's perfectly okay to make a working assumption for certain kinds of models (UsefulLie), but one should make those working assumptions clear. If I'm not allowed to see what happens during the "lock out" period, I am not going to make any universal conclusions (guesses) about it. That would be irrational behavior.
- No, that's reasonable technical reasoning. I'm not making any claims about universal truth, only about the truth observed by a user, which in this case is the only relevant truth.
- No, sorry, it's not reasonable. You cannot conclude it's "instantaneous" just because the transition is not observable. The answer to the "instantaneous" question is "unknown" or "null" from the chip user's perspective.
- I never claimed anything was "instantaneous". What made you think I did? From a user's point of view, values are immutable and slots are mutable. That, and that alone, is observable.
- I won't revisit where that perception came from as long as it doesn't come up again. And to most users, "slots" (and similar terms) and "values" are quite often used interchangeably in colloquial descriptions of such, creating a contradiction in your statement. Plus, "value" is often used in a mutable way in general, indicating immutability is NOT an idea strongly cemented with the term in people's heads. But I live with such contradiction and inconsistency because English as usually used for such things is ambiguous and overloaded. I don't pretend it's rigorous because I know better.
- [Can you give a verifiable example of someone using "slots" (and similar terms) and "values" interchangeably? I've never heard anyone do so. (Not that it would make the rest of what you said true. Changing definitions in mid argument, like using a colloquial definition to contradict a technical definition, is called equivocation. It's a fallacy.)]
- I already did. In spreadsheets and CRUD forms, people and developers often talk about "changing the value" in a cell or field. Also, programming spot example: http://stackoverflow.com/questions/18561836/change-the-value-of-button-onclick
- How is this an example of "using 'slots' (and similar terms) and 'values' interchangeably"? In each case, the slot (or equivalent) and value are clearly distinguished.
- How exactly are they "clearly distinguished"? THEY say, "I'm changing the value".
- Sure, that's shorthand for "I'm changing the value from one to another in a slot." How is this an example of "using 'slots' (and similar terms) and 'values' interchangeably"?
- No, that's not what it's shorthand for. You again appear to be heavily injecting your personal interpretation into statements not actually made nor remotely even implied. When you read English from tech docs, it honestly appears to me you hallucinate stuff that's not in the actual words.
- What do you think it's shorthand for? Even if it's shorthand for "I'm editing the value" or some such, how is it an example of "using 'slots' (and similar terms) and 'values' interchangeably"? Do you really think even spreadsheet users confuse spreadsheet cells with what's in spreadsheet cells? Do you really think programmers confuse variables with what variables contain?
- "Change the content of cell" probably means "change the content of (inside) a cell". Yes, it's probably thought of as "morphing", for lack of a better word. But I will admit I have not x-rayed user minds and only go only indirect clues. But such evidence is all either of us has.
- "Change the content of (inside) a cell" sounds like the same thing as "I'm changing the value from one to another in a cell", but regardless how it's interpreted, I still don't see any evidence of "using 'slots' (and similar terms) and 'values' interchangeably".
- "Sounds like"? They are not equivalent to me. Very different. The second phrases comes across as very unnatural and uncommon. Same goes for "inside". And "slot" and "cell" are generally interchangeable as far as I'm concerned. Needless to say, we both appear to interpret English very differently. I feel my way is "right" and common, but there is probably no use in arguing about it without formal studies. Let's LetTheReaderDecide. I'm confident most would agree with me and would bet money on it if I could.
- How does "Change the content of (inside) a cell" differ from "I'm changing the value from one to another in a cell"? Indeed, "slot" and "cell" are generally interchangeable. "Content" and "value" are generally interchangeable, too. What we're discussing here is "using 'slots' (and similar terms [like cell]) and 'values' interchangeably", which still appears not to be the case.
- They differ in that the first does not assume replacement. It could be morphing. It's not giving an implementation.
- Who said anything about implementation? Do you not think everyone would agree that the result of "change the content of (inside) a cell" or "changing the value from one to another in a cell" is a different value in a cell? Would anyone say, after having edited a cell, that cell contains what the cell contained before except for what the cell contains now?
- If we are not measuring implementation, then we are modelling abstract models. If abstract model A works just as well as abstract model B at explaining the OBSERVABLE stuff, then one should NOT say that model A or model B is the canonical model. And I don't know for sure what people would agree with, but my best estimate is that you'd get a roughly even split between morph-leaners and swap-leaners, probably with more morph-leaners.
- You really think there are those who would say the cell now contains an altered old value, rather than a new value? Of course, it doesn't matter one way or the other -- we're not talking about user perception of spreadsheets, we're talking about the most effective means for modelling programming languages. In that case, if we're considering how "abstract model A works just as well as abstract model B at explaining the OBSERVABLE stuff", then immutable values represent a simpler, clearer model than mutable values.
- So we are back to "model design" issues rather than "requirements"? Okay. However, we've already been over "simpler, clearer" in terms of grokkability, and it ends up we both have different working assumptions about the WetWare of "typical developers" in terms of grokking, based on our experience; AnecdoteImpasse. I see no reason to revisit that yet again repeatedly, barring new WetWare evidence.
- It sounds like you're making assumptions about the "WetWare of 'typical developers'" based on your own "WetWare", rather than that of typical developers. I find it notable that you never mention how your colleagues have reacted to your model.
- We've been over this already. I have described type tags (type indicators) before associated with variables, and I didn't hear any complaints. However, I have not done any wide surveys, but neither have you.
- You didn't hear any complaints? You mean there were no comments at all, but at least the silence didn't give way to complaints? That hardly sounds like glowing support. Or any support, for that matter.
- The body language was kind of, "Okay, that explanation seems to work, for now at least". I wouldn't expect it to take a strong hold immediately. It's a model you have to ponder and apply for a bit to see how it rolls.
- So your target audience didn't embrace it wholeheartedly when you were there to promote it? What, then, makes you think they'll embrace it at all in your absence?
- It's not the kind of thing I'd expect to "click" instantly. It's something one has to roll around in their head a while and test against a sufficient set of encountered field scenarios. One cannot do all that in one sitting. It didn't "click" instantly with me either. First, I just relied on vague notions and pattern learning, but eventually wanted a more concrete model(s), and so kicked around a few draft candidates for a while, selecting the tag approach as the best based on experience trying them all (in the head and on paper).
- We've been debating this topic for months. How long do you expect it to take to "click"?
- Who are you talking about? You? You are not a representative sample of humanity.
- Anyone. By now, if your "tag model" was going to gain traction, I would have expected someone other than you to have spoken up in its favour.
- When OO was the "in thing" and I questioned the hype against the OO tide, I used to get private emails by people thanking me for saying stuff they were afraid to say themselves. I've grown a mostly thick skin over the years of debating, but not everybody is ready for the RudeWeb?. Why publicly side with an unpopular guy and receive insults for such? Socrates would probably be hated in today's world also: going against popular or authoritative notions is a rough business. (I'm not comparing myself to Socrates necessarily, only saying that popularity, openness, and accuracy have a complex relationship.) --top
- What does that have to do with my point? Are you getting private emails from people thanking you for your tag model?
- I don't publish my email address anymore. To reiterate my point, people tend to avoid public association with or defense of controversial identities even if they may agree with them privately. That's what my experience with OO extremists taught me.
- Re: "using a colloquial definition to contradict a technical definition" -- If we are talking about "common usage" of terms, then colloquial matters. The accusation was that my model violates commonly-used terminology. Plus, I'm not sure there are official/technical/canonical definitions of "value" etc. such that it would be hard to commit such a sin even if I wanted to.
- Relying on colloquial terms is bound for confusion. Note, for example, that "infer" and "imply" are colloquially (and incorrectly) often used interchangeably. Does that mean confusion is reduced by using them incorrectly? More likely, using them incorrectly perpetuates confusion. It should be self-evident that distinguishing mutable variables from immutable values is clearer and less likely to cause confusion than not distinguishing them, regardless what colloquial use (or misuse) might abound.
- You claimed my model violates "common semantics", and common semantics can be looked at by analyzing colloquial language. If you are talking about formal definitions (if such fuckers exist), then you are NOT talking about common semantics. And confusion may indeed occupy the everyday field.
- If confusion occupies the everyday field, then a good model in that field should reduce confusion, not perpetuate it. As for "common semantics", they're self-evident in popular imperative programming languages. What popular imperative dynamically-typed programming language has a "while" loop, "for" loop, variable assignment or expression evaluation that differs semantically, in any material way, from the other popular imperative programming languages?
- Re: "then a good model in that field should reduce confusion" -- too bad nobody has one yet. And we are not talking about loops here.
- If we're talking about the relationship between types, values and variables, from a "model" point of view it's so trivial that it's pretty much self-evident. It's what's described at TypeSystemCategoriesInImperativeLanguages.
- I've stated many times that I find TypeSystemCategoriesInImperativeLanguages to be poor writing. I won't rekindle that debate here.
- Yes, you have stated that many times. You've never indicated why you think it's poor writing, though.
- Yes I did. We've been over it like 4 times already. I'm not gonna 5 it here. Fuck that dance. Repetition is getting repetitious.
- That's an unhelpful (and rude) response. I suggest creating a new page to host your criticisms. Then, when someone claims that you have failed to articulate your criticisms, you can simply write, "Yes, I have. It's at TypeSystemCategoriesInImperativeLanguagesCritique?."
- Maybe later I'll repeat #5 here. Not today. Repetition can also be unhelpful. (An eventual general summary of criticisms of both models would be nice.)
- And, I've granted that your replace view is a valid model, but not the ONLY model that fits observations about RAM. (Nor is it the best, per fitting of physical symptoms, per above.)
- It's the best fitting model in terms of reflecting mathematical reality -- in which values are unquestionably immutable -- and in terms of clarifying models of programming languages. Having variables that are variable, and values that are variable, is confusing.
- You do realize "mathematical reality" is an oxymoron. If my model clearly violates some principle of math, please provide the mathematical proof. As far as "clarifying models", your idea of "clarify" is highly different than mine, as we both should know by now, per "WetWare battles". Re: "Having variables that are variable, and values that are variable, is confusing." I agree! Fortunately, my model doesn't do that.
- "Mathematical reality" is a perfectly reasonable phrase, e.g., "2 + 2 = 4" is a mathematical reality. By making (say) the result of expression evaluation mutable, given that the result of evaluating an expression is a value, it would appear you are courting a violation of mathematical reality.
- If somebody gets inside the guts of any interpreter, the "results of expression evaluation are mutable". As a reminder, the source code (input) cannot do such a thing. There is no source code that can be fed into the tag emulator/model that can change the intermediate results in the way you keep fearing (unless somebody adds a backdoor). As we discussed before, a production interpreter would probably want to use stricter designs/classes to reduce the chance of "internal" changes to such, but this is a "toy" (learning) model that intentionally allows fiddling with the interpreter/emulator guts. But even with stricter designs/classes, RAM is always mutable via hacking etc., such that "immutable" is relative. Most people who know the basics of computer hardware would agree that "the contents of RAM are mutable". That's the way it is. A theoretical 100% immutable value is purely an abstract idea (that may or may not be useful for modelling etc.) that does not objectively exist in the absolute sense, at least not without special hardware. It's more practical to talk about immutability from the perspective of something: the app programmer (input source code), the interpreter model, a hacker, a chip designer, etc. -t
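The point above can be sketched as a toy fragment (hypothetical names; not the actual tag model): intermediate expression results are ordinary mutable structures inside the model's guts, even though no interpreted source construct can reach them.

```python
# Hypothetical sketch: a toy evaluator whose intermediate results are
# plain mutable dicts -- "fiddle-able" from the model's guts, but
# unreachable from the interpreted source code.

def evaluate_add(left, right):
    result = {"type": "Number", "repr": str(int(left["repr"]) + int(right["repr"]))}
    # A "gut fiddler" (debugger-style) could mutate the intermediate here:
    # result["repr"] = "999"  # possible internally, impossible from source code
    return result

a = {"type": "Number", "repr": "3"}
b = {"type": "Number", "repr": "4"}
print(evaluate_add(a, b)["repr"])  # → 7
```

Whether one calls such internal dicts "values" is exactly the dispute above; the sketch only shows the mutable-guts/immutable-surface distinction being claimed.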
- Anyhow, I thought we were talking about "common notions" (or common semantics), not abstract theoretical objects such as fully immutable values. It is NOT a common notion that "values are immutable", at least when "values" is used interchangeably with "content", as it frequently is. If you claim immutability is a common notion of "value", then please cite a sufficient number of sources, including those that don't support your claim. In other words, if you survey texts on the subject to gain evidence, you cannot report back JUST on those that match your claim; you have to report all that were examined, because "failures" also convey info, and that's proper statistics, given that "common" is the issue at hand. If you don't wish to do such footwork, then we'll leave this as yet another AnecdoteImpasse. -t
And the gas station analogy was meant to show what your version of "value replace" would actually look like. It's NOT intended as an analogy, but a
proof of concept for your viewpoint (as the word "replace" is typically used in the real world). Such could be implemented/automated in computational hardware. I provided it to contrast with existing hardware and to demonstrate how most would interpret your wording in terms of hardware implementation. In fact,
punched cards were like that. They were rarely directly changed, but rather the old one discarded and a new copy made with the changes in place. -t
Hardware implementation is irrelevant. This still appears to be an attempt to appeal to analogy, where analogies are ultimately inappropriate.
It's NOT an analogy, as explained. I thought you said hardware was relevant; you appear to be flip-flopping again. Confusion abounds. If we are talking only about abstract (virtual) views, then abstract view A is not more relevant than abstract view B as long as they both produce the same results (IoProfile). Your implication that started this hardware discussion is/was that "common notions" are shaped by computing machinery. -t
Ironic that you are flip-flopping about RAM flip-flops. -t
I suggested looking at computer architecture to recognise the basis of immutable values and mutable variables, but you appear to have misunderstood the relevance.
Apparently so. You are not describing your position well. Observation of RAM does not provide clear evidence of "immutable values". I don't know where this clear evidence is.
Understanding VonNeumannArchitecture and the simple mathematics upon which it is based does, however, provide clear evidence of immutable values. Note that there is no arithmetic that mutates a 3 into a 4. We do not, for example, define simple numerical addition as mutating any operands of the '+' operator; instead, we define it as producing a new value as a result.
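The arithmetic point can be illustrated with a minimal Python sketch: evaluating "a + 1" produces a new value and rebinds the slot; the 3 itself is never mutated.

```python
# Addition produces a new value; it does not mutate its operands.
a = 3
b = a        # b names the same value, 3
a = a + 1    # the slot 'a' is rebound to a new value, 4; the 3 is untouched
print(a, b)  # → 4 3
```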
- We looked at VonNeumannArchitecture, and you are wrong, or at least using one possible model among multiple that fit observations. Physical observations do NOT back your immutability model as the only fit. I'd argue current computer architecture intentionally blocks the stage where we could otherwise answer the immutability question, as part of its fundamental design philosophy to avoid "dirty reads". This applies to disks also, not just RAM: disks typically don't read and write the same sector at the same time. And we are talking about computers, not arithmetic.
- Physical observations from a user's point of view (which is all that matters, here) unquestionably support a view of mutable slots and immutable values. I'm not sure why you think they don't.
- Sorry, you have not given decent and clear evidence for such. We've been over that, and there are no hints of anything equivalent to physical swapping as seen in the everyday world, such as replacement debris. You'd have to demonstrate debris at least, to convince me. I will only agree that "replacement" is one of multiple matching ways to model it.
- We've been over this notion of "replacement debris", and I pointed out that "replace" is commonly used in ComputerScience with no "replacement debris". See http://www.w3schools.com/jsref/jsref_replace.asp or http://uk1.php.net/str_replace or http://docs.oracle.com/javase/7/docs/api/java/lang/String.html and so on. However, regardless what word is used to describe mutation of a variable, it is mutation of a variable (by definition!) not mutation of a value. Note that I wrote, "physical observations from a user's point of view (which is all that matters, here) unquestionably support a view of mutable slots and immutable values." No mention of "replace".
- I also pointed out that many systems use "substitute" instead of "replace" for such. But I'm not sure string processing is representative of the issue here. I'm trying to explain that actual observations do not point to replacements EXCLUSIVELY. I agree that "replacement" is one interpretation/model that "works" in terms of matching observation, but is NOT a unique interpretation/model that works. There is no observation that EXCLUSIVELY makes or demonstrates "immutable values". If you find such an EXCLUSIVE observation, please carefully describe it. In fact, in physical materials, there probably is a transition stage where a "'3' morphs into a '4'" in a gradual fashion (per small time scales) to borrow your example. The only place that comes to mind where your model of values existed physically in computers is punched cards: the old card was tossed or archived, and a new one created with the record changes on it. But I do NOT believe that kind of thing is a common head metaphor used by most developers when thinking about changing content in spreadsheet cells, table cells (row and column), form fields, or variables. You'll probably disagree, but it's merely yet another AnecdoteImpasse about developer WetWare. -t
- Whether it's called "substitute" or "replace" or something else, from a user's point of view there is never an observable mutation of values.
- That is indeed true. However, there is no observable "replace" either. Neither model is confirmed as the one-and-only explanation.
- If there's never an observable mutation of values, then values are immutable. What we call it when a slot holds one value now, and another value later, matters not. The important thing is that slots are mutable -- which is certainly observable -- but the values that slots hold are immutable.
- Again again, AbsenceOfEvidenceIsNotEvidenceOfAbsence. Your logic is flawed. One CAN observe that a change has happened, but cannot witness the actual change, and thus cannot make any statements about the mechanism. (The actual physics support morphing, not swapping.) If one observes a chair in a room, closes the door for a short while, then opens the door and sees a chair that appears different from the chair observed earlier, one cannot say with certainty whether it's the same chair morphed somehow, or a different (swapped) chair. All one can confirm is that it has different characteristics. You are over-speculating about the mechanism.
- You're mis-applying AbsenceOfEvidenceIsNotEvidenceOfAbsence. We're not talking about a possibility of value morphing that would be observable if we just looked long enough, we're talking about a supposed mutation of a value that is simply never observable.
- Neither "explanation" is directly observable. Thus, why favor or canonize one over the other?
- Mutation of a value is never directly observable, but changing the value in a slot is certainly directly observable. Imperative programming languages depend on it, in fact.
- Can you clearly demonstrate "directly observable"? And I don't mean your personal interpretation of what may be happening; I mean direct evidence of "replacing". Note that we can observe that it "has changed", but we cannot observe the moment of change. That's why I question the word "replace". The chip user can NOT directly observe the "value" (content) at the very point of transition. Observing that it "has changed" does not provide evidence to distinguish between the two models.
- Sure. In Javascript: "a=3; b=3; alert(a==b); b=2; alert(a==b);" We should note that the first alert displays 'true', the second displays 'false'. Thus, we note that changing the value in a slot is directly observable. Now try "3=2;" It fails because we cannot mutate a 3 into a 2. Furthermore, there is no construct we can conceive that would allow us to observe a 3 mutating into a 2.
- Like I said above, observing that something "has changed" does not give us enough info to select between the two competing explanations. You are not providing new information here. You have not proved that the content of "b" has not mutated. (No language experiments will differentiate between the two explanations, I'm pretty confident.) And "3=2" is invalid syntax. Such a test tells us nothing to help select. The models and interpreters shouldn't run "3=2". It's meaningless to interpreters and humans (unless they invent a personal meaning for it in their head). Might as well be "glap#&@^@*".
- Why is "3=2;" invalid syntax? Why do you say, "models and interpreters shouldn't run '3=2'"? It would be trivial for language designers and implementers to allow it. Why don't they?
- I can trivially write a script that prints 10,000 pages with the word "foo" on them. But just because it's easy to implement doesn't mean it's useful or wanted. Second, there are at least two ways to interpret that statement. The first is that variable names can start with digits, and "3" is a variable name. The second is that all occurrences of the literal, value, and/or constant "3" will be treated like or replaced by a "2".
- There are a variety of possible ways to interpret "3=2;" What are they, and why don't we allow any of them?
- I don't know why, any more than I can explain why BrainFsck is not more popular.
- I'll give you a hint: One of the reasons is that values aren't mutable.
- Or, nobody has found a practical utility of such a statement construct.
- Indeed, there is no practical utility to having mutable values.
- Within a debugger or similar tool, perhaps there is. But those things you are complaining about are not "values" anyhow.
- A debugger or similar tool is not at a programming language level. At a programming language level, values are not mutable. What makes you think "those things [I am] complaining about are not 'values'?"
- Again again again, the "mutability" is NOOOOOOT at the language side of my model. The "input" language app ITSELF cannot change the internal variable content, PERIOD. Geez, how many times do I have to repeat this? And whether they are "values" or not is an ugly definition debate with no likely ending. LaynesLaw. Until an official rigorous and clear definition comes along, it's a waste of time to obsess on the definition of value. Those who do probably require medication. (And I question my sanity for sticking with this stupid "value" debate. I probably need meds also. I'm just protecting the tag model from fastidious zealots who use fake rigor to attack it.) The debugger comparison stands as a pretty good comparison.
- Even if mutability is precluded by some name-based mechanism, you are still conflating two distinct concepts: variables and expression evaluation. The former is mutable and named. The latter is immutable (via your name-based mechanism) and anonymous. Thus, they have no structural commonality. So why make them a common structure?
- It's an internal concept to the model, so it can be whatever it wants. It is not obligated to match anything external. You seem to be making up rules again. Bad habit. It's like complaining about the kind of gears used in an orrery.
PageAnchor English-Universe
You seem to interpret English differently from most other people. "Replace" means something very different to YOU than to me. I don't know how to remedy that. It's like talking to an alien from another universe with different physics. In MY universe, "replace" almost always leaves detectable remnants/waste, such that I expect detectable remnants if "replace" is happening. If there are no remnants, it's indistinguishable from a "message" to change state, such that I would call such a "message" in my universe. In your universe, "replace" ODDLY has no such side-effect. If "replace" means something completely different in cyber-space than in the physical world, then that's news to me, and I need a cyber-space dictionary that's been cleansed of the everyday physics and processes that originally shaped English. Such a dictionary would give clear cyberland definitions of common words like "has", "is", "contains", "change", "replace", etc. (Granted, a "message" can leave behind debris in the physical world, but unlike "replace", the debris resembles the message device, not the "fixed" item containing the content. If the message and the content item are of the same construction, then "change" versus "replace" are indistinguishable anyhow, barring some marking technique, making them interchangeable viewpoints.)
Your difficulty with language reminds me of bookkeeping students who get completely hung up on what "debit" and "credit" mean, to the point of being unable to grasp simple concepts because they persist (and perhaps insist) on tripping over what the word implies to them, rather than what it actually means. Note that the term "replace" is not essential; it's more important to recognise that variables are mutable and values are immutable. Whether we call it "replace" or "change" or <whatever> matters not, as long as we understand what it means.
MY difficulty with language? I propose YOU have the difficulty, in that you mistake your personal interpretation for a universal truth. And my "values" are no more or less immutable than yours. I'm not entirely sure what your complaint is; you seem to flip-flop. Per the naming actually used in my model, my "values" don't demonstrably violate such. You appear to be quibbling about some extremely minor and obtuse philosophical nit. As for your accounting anecdote, I don't know those students or their issues and so cannot comment. Generally, as long as one can guess how future accounting coworkers and auditors are likely to label something, that's good enough to get work done. It doesn't matter if they are "wrong" as long as you can predict their classification and match it. It's a form of IoProfile, per matching the "output" of people and auditors instead of interpreters. You only have to match them, not philosophically agree with them, to get work done and paid. One of the jobs of legal experts is to predict how a judge or jury may interpret a given law. Whether the judge or jury is "right" is moot: one is paid to predict them, not "fix" them. Accountants have similar responsibilities when classifying transactions. If they have a disagreement with how something "should" be classified, they can take it to an accounting association or a professional discussion forum. But don't gum up your current employer's operations by forcing your classification into the existing system/environment when it will create a likely flag (mismatch) with auditors etc.
You appear to have misunderstood my example. The problem with the accounting students who get hung up on what "debit" and "credit" mean is that it inhibits their understanding of bookkeeping and accounting. All that "debit" and "credit" really mean are the left and right hand columns, respectively, of a bookkeeping ledger. Those who try to infer greater meaning to "debit" and "credit" -- such as whether money is being added or subtracted or loaned or deposited or going in or out or whatever -- inevitably wind up confused. Similarly, getting hung up on whether "replace" or "change" or <whatever> is the best term to represent what happens to variables will inhibit understanding rather than helping it.
I'm just trying to analyze your claims, as given in English by you. Careful dissection of the meanings of YOUR words is the only way I know how to analyze your claims. That's all I have to go on with your claims: your words. It's your English glowing there on my screen, nothing else. If terms like "replace" mean something different in "computer science" (cough), then please give the sources and glossaries or usage statistics that back your interpretation of such words. Otherwise, I'll analyze such words based on "everyday life". -t
Rest of reply moved to LearnPatternsIfLogicNotFound
Look at str_replace() in PHP or string.replace() in Python. In both cases, a given substring is replaced with another. Notably, neither "leaves detectable remnants/waste" (your words) of the string that was to be replaced. Such use of "replace", in ComputerScience and SoftwareEngineering, is commonplace.
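The `string.replace()` usage just cited can be demonstrated directly: Python's `str.replace` (like PHP's `str_replace`) returns a new string and leaves the original untouched, with no "remnants" of the replaced substring anywhere observable.

```python
# "Replace" in the ComputerScience sense: a new string is produced;
# the original is unchanged and no debris of the old substring remains.
s = "value"
t = s.replace("v", "V")
print(s, t)  # → value Value
```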
True, but the word choice may be because nobody found a better name, or it's good enough. Many systems call it "substitute", which is more technically accurate in my opinion, but longer. Further, that usage does NOT forbid the existence of debris, for the debris could be dumped in an internal garbage can (GC?) without the function user having to be concerned with who empties it. English usage is not rigorous or consistent when applied to the technical world. And this allows two or more approaches to be "right" even though it may create a potential inconsistency. As I mentioned before, "change the value" is frequent in form and spreadsheet discussions, implying values are mutable. For these reasons, one shouldn't put too much stock in English, especially in arguing that there is OneTruePath. Because you are arguing for OneTruePath in type modelling, you have a higher burden of evidence than I do. I only claim my model does not contradict a significant portion of common notions/usage; I don't claim it matches the OneTruePath of type models, or the OneTruePath of English, or that there even is a OneTruePath of type models (if we exclude implementation, which is geared for machines, not human grokking).
I'm not arguing for OneTruePath, only against your model which is flawed.
No, it's not. You have not shown that it violates any clear-cut rule other than rules you made up in your head but mistake for external universal truths. RAM is not immutable by most accounts, thus no "value" (cell content) in any fucking interpreter is 100% "immutable" in an objective and measurable way using physical tools and photons. QED. The RAM model you use in your head is an arbitrary model with arbitrary rules (or at least unnecessary rules).
I have made no claims of "external universal truths", only that your model is flawed because it conflates immutable values and mutable variables and explicitly associates types with variables. Your expressions evaluate to a mutable thing -- that is both practically wrong (why would we mutate the result of an expression?), and theoretically wrong per mathematics (values are immutable by definition). Explicitly associating types with variables is wrong per conventional descriptions of dynamically-typed languages.
"Immutability" is relative, as I already described. Anything in RAM is potentially mutable. I only need things to be as immutable as they are in the reference interpreter in order to be "correct". The intermediate internal parts are NOT changeable via the input source code, and thus I satisfy the requirements. Any fussing beyond that is bitching over personal internal design preference. SafetyGoldPlating the internal parts against internal gut-fiddlers is a PersonalChoiceElevatedToMoralImperative. As a training system, I want the guts to be fiddle-able. LearningByBreakingStuff? is a valid way to learn. Toddlers do it all the time. Ideally they would be photographic-minded geniuses like little Sheldons, and we could just plop them in the library and they'd absorb everything without breaking anything. But most humans are not like that.
Re: "values are immutable by definition" - In that case, common speech and RAM physics kick the definition in the nuts. But you over-apply it, in that mutability is generally considered relative to something. Further, the things labelled "values" in my model are no more or less immutable than your "value", based on the info you've given. If you stated in your model, "they must be implemented using a write-only structure (create only)", then I might give your model credit for making them immutable from the interpreter's source side, but that STILL does not prove "common usage/semantics". That would only be a feature specific to your model.
What you happen to call "values" in your model is irrelevant. What are the values in your model (again, whatever you call them matters not) -- the constructs returned by function invocations and which are the result of expression evaluation -- are mutable. That is mathematically incorrect.
- I believe you to be full of dripping brown squishy stuff. However, for the sake of argument, suppose it was "mathematically incorrect". What is the practical downside of this alleged violation?
- Let me guess: you are going to say the practical downside is that it violates "commonly understood semantics". But you have not given strong evidence of the nature of "commonly understood semantics". You just claim the same claim over and over.
- No, the practical downside is you're creating a model where the constructs returned by function invocations and which are the result of expression evaluation are mutable. That invites error.
- Show me an example of it inviting this error thing you speak of.
- Your conflation of values and variables is the error.
- Sigh. Back to square one. My model does NOT conflate them.
- Of course it does. You use the same structure to represent variables as you use to represent the result of evaluating expressions. That is a conflation of variables and values.
- I didn't call it a "value", you did. Don't call it a "value" if it bothers you personally. Think of it as a flizzlemoof. (Reminder: "value" is not formally defined anywhere in a universal sense.)
- It doesn't matter what it's called, and even if we ignore "value", using the same structure to represent variables as you use to represent the result of evaluating expressions is a conflation of expression results with variables. In popular imperative programming languages, the result of an expression is never mutable. A variable (by definition) is always mutable.
- They are never mutable to the language user. The guts themselves can be mutable. The guts can be mutable in debuggers also, and nobody complains about such a feature except you, because it violates some odd personal obsession of yours. I don't give a fuck about your peculiar personal obsessions. They are stupid and silly and so very irritating.
- If they are never mutable to the language user, then they are never mutable. Why model something that is not observable?
- PageAnchor Gut-Fiddler-24
- The guts are always mutable if you dig deep enough from the perspective of a "technician" who can get into the guts. And there may be utility in such, using the debugger analogy: sometimes one wants to fiddle with the contents of stuff. It's perfectly acceptable to be able to change the run-time "value" of a constant in a debugger, for example, even though it "violates" the "true" meaning of "constant". But it's still "constant" from the language user's perspective, and that's all that really matters in terms of "matching the language". Again, languages are not defined by their debuggers. Similarly, languages are not defined by what technicians can tweak inside the model/interpreter. -t
- We're not interested in "the guts" -- which I take to be the machine language or even hardware level -- if we're talking about popular imperative programming languages. We're interested in language behaviour in terms of language constructs, not how the language constructs are implemented. At the language level, as you said, values are never mutable to the language user. If you want them to be mutable at the machine level, that's always possible if you debug deeply enough, but why incorporate it into a model of language behaviour?
- Because 1) it simplifies the model (fewer parts), and 2) it makes it easier to "fiddle" with the guts, such as altering constants mid-stream. The down-side is less TypeSafety or similar protections inside the model itself.
- Why do you think a lower quantity (in this case, one) of complex parts is simpler than a higher quantity (in this case, two) of simple parts?
- Well, it's a WetWare call. But objectively, it is less code. I see very little reason to have a nested relationship for 1-to-1 associations between structures. But again, this is similar to the "thin table" debates about partitioning tables and I don't think this is a good place to reinvent those fights.
- There's nothing that says it has to be nested per se. That appears to be an inclination pushed by your use of XML as a notation. There's nothing that says a variable couldn't be equally well described as (say) "Variable = Name --> Value", and Value as "Value = Representation --o Type". Or whatever. As long as the appropriate associations are shown, the notation doesn't matter. It doesn't matter whether they're shown with nesting or arrows or some other symbols, as long as the relationships are clear.
- Good, then why fuss about a particular representation of a given model?
- Because your model has awkward relationships. Your model suggests "Variable = Name --> Value, Type" and "Value = Representation". It either forces you to use variables for things that are not variable, or forces you to deal with representations that have no type. The latter is a show-stopper.
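The two structural alternatives under dispute can be sketched side by side (field and class names are hypothetical, chosen only for illustration):

```python
from dataclasses import dataclass

@dataclass(frozen=True)   # frozen: a Value, once built, cannot be altered
class Value:
    representation: str
    type: str

@dataclass
class VariableA:          # "nested" view: a mutable slot holding an immutable Value
    name: str
    value: Value

@dataclass
class VariableB:          # "flat" view: the variable carries both attributes directly
    name: str
    representation: str
    type: str

x = VariableA("x", Value("3", "Number"))
x.value = Value("4", "Number")   # the slot mutates; no Value object ever changes
y = VariableB("y", "3", "Number")
y.representation = "4"           # same observable effect, different structure
print(x.value.representation, y.representation)  # → 4 4
```

The sketch shows why the argument is about structure rather than observable behaviour: both layouts produce the same end state, but only the nested one enforces value immutability structurally.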
- Like I keep saying, they are not variable to the language user (app source code). They are only "variable" to a "guts fiddler", and ANY value/content in RAM is "variable" at a low enough level in ANY interpreter/model. Immutability is relative. My immutability level is appropriate for the intended use of the tool. You are over-extending an immutability "rule" beyond the applicable scope.
- Immutability and mutability are at precisely the appropriate scope -- the language and its elements. How those are implemented, or what is implemented using them, is irrelevant. As such, (im)mutability are not relative. They are absolute within the context of a given language.
- The "language" as "run" by the model fits all the necessary rules of immutability and mutability. And if implementation is irrelevant, why are you bitching about the mutability of implementation parts? You are contradicting yourself. (ALL implementation is mutable if you dig deep enough into the guts, such that all runnable models fail an absolute test.)
- The model you've defined does not fit all the "necessary rules of [i]mmutability and mutability", as it uses the same mutable structure for both mutable and (supposedly) immutable structures. What in your model turns mutable structures into immutable structures?
- That's only INSIDE the model. You said "How those are implemented...is irrelevant". Every model known has that same "flaw" at SOME level in the implementation.
- Again, what in your model turns mutable structures into immutable structures? Of course they can all be mutable internally, but by using the same structure to handle both mutable and immutable constructs, you make them both mutable and immutable externally.
- No! As I have explained many times, the naming convention of internal variables makes them inaccessible to "external" source code. There is no way "regular" source code can change the content of those internal variables because they are invalid names to external code. And again, I did NOT invent this approach, I "stole" it from an old system. (A fully built model would generate these internal names automatically.)
- What part of your model is the "naming convention", and how does a "naming convention" prevent conflation of variables and values?
- See PageAnchor internal-variable-naming in TopsTagModelTwo.
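A minimal sketch of that internal-naming idea (the prefix character and helper names here are hypothetical; see the linked page for the model's actual convention): internal slots get names that are syntactically invalid as user identifiers, so "external" source code can never reference them.

```python
import re

# Names that can legally appear in user source code.
USER_NAME = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

slots = {
    "total": {"type": "Number", "repr": "42"},
    "%t1":   {"type": "Number", "repr": "7"},  # internal: '%' never parses as an identifier
}

def resolve(name):
    # Called for every name appearing in user source code.
    if not USER_NAME.match(name):
        raise NameError("not a legal identifier: " + name)
    return slots[name]

print(resolve("total")["repr"])  # → 42
```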
Re: "Explicitly associating types with variables is wrong per conventional descriptions of dynamically-typed languages." -- Prove it using statistical sampling of written descriptions of languages. (One language does not prove it. There are different ways to build/model variables.)
See http://perldoc.perl.org/perlintro.html#Perl-variable-types -- "Perl has three main variable types: scalars, arrays, and hashes." -- given we've been talking primarily about scalars, and given that "scalar" is essentially a catch-all for any scalar type, it implies (and the examples illustrate) that any scalar value can be assigned to any Perl scalar variable at any time. Hence, it is (within its scalar, array or hash domain) untyped.
- It "implies"? Implying is weak evidence. People often fill in "implies" with their preconceived notions. And "type" is often synonymous with "category". Being under one category does not preclude a thing also being under another category. Further, Perl scalars don't have a (detectable) type tag, and so some say its scalars are "untyped", which is a phrasing I don't necessarily disagree with. (To model such in the tag model, one removes the type tag altogether.)
- In Perl, we know any scalar value can be assigned to any Perl scalar variable at any time. Hence, no type-checking on assignment, therefore Perl scalar variables do not have types.
- Your logic is off. But in my model, variables don't have types anyhow for Perl.
See http://en.wikibooks.org/wiki/Python_Programming/Data_types -- "It is important to understand that variables in Python are really just references to objects in memory. If you assign an object to a variable [...] all you really do is make this variable [...] point to the object [...] which is kept somewhere in memory, as a convenient way of accessing it." -- i.e., a variable is a named pointer; no indication of a 'type' property associated with it.
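That description can be demonstrated directly (a small sketch): assignment rebinds a name to an object, with no type check and no type attached to the name itself.

```python
# The type belongs to the object, not to the variable (name).
x = 3
print(type(x).__name__)  # → int
x = "three"              # no type-checking on assignment; the name is rebound
print(type(x).__name__)  # → str
```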
- We've covered Python already somewhere. It's a "pure" OOP language, such that everything is an object. (And the tag is probably in the object.) But that does not make its variable design a universal model. Different languages may indeed need different models (if we want to, say, capture their OO-ness properly). No news there. (Note that if we don't care to model the OO aspects and just study "simple" scalars, we don't necessarily have to introduce the OO modelling aspect if no tests reveal it's necessary. Just because you want to model the position of the planets in the sky doesn't mean you also have to model their brightness to viewers.)
- If "the tag is probably in the object" (whatever that means) then it isn't in the variable. Also, in Python we know any value can be assigned to any Python variable at any time. Hence, no type-checking on assignment, therefore Python variables do not have types.
- I agree in Python's case, at least in terms of the first sentence. But perhaps it never matters for scalars such that my structure may still work in terms of matching results with scalars. If we are not interested in modeling objects, then why model them unless it impacts what we do want to model?
- Your structure has a redundant attribute, the "variable type", that will never be used. Why model it unless it impacts what you want to model?
- Flat False. It clearly changes the behavior if removed for tagged languages. You are just bitching because you personally don't like the structure used in the model. You have not identified a clear and objective flaw, just round-about esoteric and evasive mumbo jumbo.
- The clear and objective flaw is that you include an attribute in your variable that is not reflected in the semantics of variables in dynamically-typed languages. In dynamically-typed languages, there are only ever two operations associated with variables: Store a value, and retrieve a value. That being so, what is the "type" attribute used for?
- You are making up allegedly formal rules about variables out of the blue. Knock it off. Bad habit.
- I'm not making up anything. I'm simply observing what happens in popular dynamically-typed imperative programming languages. I've never seen anything done with variables in Perl, Python, PHP, Javascript or Ruby other than "store a value" and "retrieve a value".
- What about "getType" kind of functions? Both our models generally associate one way or another two "attributes" with each variable: the "type indicator" and "representation" (naming decisions aside). The packaging/nesting and "accessors" for those two attributes may vary, but that's just implementation/model specifics. I don't see clear and consistent indicators in most writing that dictates such packaging. There are simply multiple ways to skin the packaging cat.
- The "getType" functions retrieve a value from a variable, and then return the value's type. Note that you can pass any expression to getType() in PHP. Regarding packaging, since a type reference is always paired with a value's representation (though the pairing may be implicit if the type reference is always the same, e.g., "string"; otherwise, the representation is meaningless), why would you not always package the representation with its type reference?
- I *do* package the representation ("value") with the type reference. They are in the same XML tag. (And they are not always paired, by the way.)
- Yes, but you use that same XML tag to represent variables, so you wind up conflating variables with the result of expression evaluation (which you don't want to call a value.)
- Okay, I am using the same structure to represent both "variables" and "result of expressions", I will acknowledge. But "internal" variables follow certain naming traditions to avoid confusion with "regular" variables. I could introduce an additional different "kind" of structure to make sure they are clear, but that's "double clarity" because the described naming already is sufficient. The extra structure complicates the design without offering sufficient benefits in exchange. It's mental SafetyGoldPlating.
- Isn't it more complex -- and less likely to be easily understood -- to rely on naming conventions to disambiguate an unfamiliar programming concept (I doubt your audience of programmers will have ever heard of an "internal variable"; indeed, I never heard of it until you used the term), than simply use two structures to identify the familiar programming concepts of "value" and "variable"? Wouldn't "double clarity", as you call it, be something worth having? If "clarity" is good, isn't "double clarity" better?
- It's a model-specific term, not unlike "representation". And no, double clarity is redundant. If the additional clarity layer complicates the model, then its downsides outweigh the benefits. The "mental crashes" and abandonment caused by a more complicated model become more likely or more severe than those caused by lack of an extra assurance mechanism ("double clarity") by my estimate.
- "Representation" isn't a model-specific term; it's used in the conventional manner. How does distinguishing "value" and "variable", which language documentation typically does already (whether it defines these terms or not is something else), potentially result in "mental crashes" (what are they? sounds scary!) and abandonment? (Abandonment? Really?) You make it sound like the difference is between simple algebra and tensor calculus, when in fact it's like the difference between some simple algebra and some other simple algebra, except your simple algebra doesn't distinguish variables from values.
- The way your model uses representation is model-specific. Generally "representation" in terms of app programming means the formatting a particular operation uses to present content to a human reader or external device. As far as why it would complicate the model, I'd probably have to create 2 sets of API's, one set for internal variables (whatever they are called) and one set for regular variables. I may be able to factor some of the implementation to a single spot, but that creates an extra layer of indirection or call levels. And it would increase the code base by roughly 30% to 50%. If a model is too complex or "long", people are more likely to abandon it (give up). To try to put proportions on that, 40% more code means grokking takes on average roughly 40% longer (linear is perhaps the wrong scaling model, but good enough for very rough estimations). And I roughly estimate it would increase the abandonment rate by 40% also. On the other hand, issues caused by confusing internal variables with external ones would run roughly around 5% for the same metrics. Remember, we already have one "protection" or self-documenting internal-ness-labelling mechanism in place due to the naming conventions of internal variables. Thus, the prevented "problem incidents" resulting from a second such mechanism will be small. -t
- No, my use of "representation" is not model-specific. It's consistent with what "representation" means, which is "how something is represented." In my model, "representation" is how a value is represented in RAM. In "app programming", it's how content is represented to a user. I'm not sure why you think my approach would be more complex for distinguishing values from variables. It's trivial to model using, say, Java, and seems considerably simpler than your pseudo-code:
class Value {
    private BitString representation;
    private Type type;
    public Value(BitString representation, Type type) {
        this.representation = representation;
        this.type = type;
    }
    public BitString getRepresentation() {return representation;}
    public Type getType() {return type;}
}
// Variables are a hashmap defining mutable relationships between names and values.
HashMap<String, Value> variables = new HashMap<String, Value>();
.
- Most developers don't overly concern themselves with bit-level RAM "representation" such that it's the least common interpretation of the two kinds described. And call it "RAM bit pattern" or the like if that's what you REALLY intended.
- Sure, "RAM bit pattern" is fine, except it could be something other than a bit pattern and could be (say) on disk instead of RAM. The name doesn't matter. What matters is that it must exist, regardless what it's called, or even whether it's shown or not.
- And if you think the above makes a better imperative model, then go ahead and code one up. It's an open wiki: code away, dude! (Personally, I think Java is too verbose and fastidious for an experimental model, but different heads like different things for different purposes.) You could have been done by now if you coded instead of complained about my model. -t
- I did. I coded it above. That's it. As far as I can tell from reading TopsTagModelTwo, it covers exactly the same territory with less code.
- It's barely more than a data structure. It doesn't show where and how to use it.
- That's true, it's only an illustrative core so I've turned it into a simple but usable interpreter. See http://shark.armchair.mb.ca/~dave/Sili
- It didn't appear very straightforward to me. Its source code size is far larger than the tag model, for one.
- That's because it's a working interpreter, including a parser, types, operators and error-checking. I've provided sample scripts, and you can run them. Admittedly, it's not very sophisticated, probably quite buggy, and there are some quirky limitations -- a function 'return' statement can only be the last line of the function body, and recursive function calls don't work. It also has some cute features -- it supports nested functions, for example -- and referencing variables has been optimised. So, there's a lot more to it than your tag model. The code that performs the equivalent functions to your tag model, however, is considerably simpler than your tag model.
- Do you have a reference to or snippet of such equivalent sections?
- uk.ac.derby.lpt.sili2.interpreter.Value up to /** Perform logical OR on this value and another. */ and defineVariable(), setValue() and getValue() from uk.ac.derby.lpt.sili2.interpreter.FunctionInvocation appear to do everything that your model's pseudocode does.
- It's just accessor-like API's. It tells me very little about usage or context. Anyhow, I'm happy to LetTheReaderDecide which model they find easier/simpler.
- Perhaps it seems like "just accessor-like API's" because it's so simple. When the parser recognises sequences of characters as string, integer, float or boolean literal, it creates a Value of an appropriate representation and Type. When the parser sees a variable assignment, it calls setValue() with the appropriate value. When the parser sees a variable reference, it calls getValue(). That's pretty much it.
- No, you still need API calls for variable processing, operators, and the relationships between them. I believe overall it's still more complicated, but I don't wish to present evidence today, and for now am happy to LetTheReaderDecide which model they find easier/simpler. (How the heck did it get into such a deep file tree path anyhow? Java conventions? Eiiig.)
- Built-in operators are defined in uk.ac.derby.lpt.sili2.interpreter.Value after /** Perform logical OR on this value and another. */ The relationship between values and variables is maintained (at run-time) via setValue() and getValue().
- The deep file tree path is a result of the Eclipse IDE translating Java package names into file paths. The Java package name is derived from the domain name in reverse -- derby.ac.uk; the name of a course I taught a few years ago in which I used a predecessor of this code for an exercise -- LPT (Languages, Platforms & Tools); the name of the project (there was an earlier version, hence the '2') -- Sili2; and finally the package hierarchy within the project. It's awkwardly long, but it creates a (hopefully) globally unique naming hierarchy that reduces the chances of namespace collisions.
- When do you have a representation that is not paired with a type? (Other than implicitly, if the type is always "string".)
- getType(). And maybe Write statement implementations: sometimes they can just grab the "representation" (cough) as-is.
- I don't understand. PHP's gettype() certainly makes reference to the type attribute associated with a representation, and I don't know of any Write statement implementation that doesn't make reference to both a representation and a type. Otherwise, how would Write know how to emit the representation? Without a type reference, is the representation 101001010111001101001101 supposed to be written as a string of three characters, a long integer, a floating point value, a date, a timestamp, or something else?
- The first part is not clear to me. The second part seems to be talking about C, not typical dynamic languages.
- I'll come back to the first part another time, but the second part is true in every programming language. Every value in every programming language has a representation, which is how it's stored in memory. In modern computing hardware and software, it's almost invariably a string of bits. How do you (or, more importantly, the interpreter) know whether a given string of bits represents a character string, a date, a pointer, a floating point value, or a value of some other type?
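For what it's worth, the point is easy to demonstrate: below, Python's struct module (used purely for illustration; the byte pattern is arbitrary) reads one and the same 32-bit pattern as three different values depending on the type applied to it.

```python
import struct

raw = b"\x41\x42\x43\x44"  # a single 32-bit pattern

as_text  = raw.decode("ascii")          # four ASCII characters: 'ABCD'
as_int   = struct.unpack(">i", raw)[0]  # big-endian integer: 1094861636
as_float = struct.unpack(">f", raw)[0]  # IEEE-754 float: about 12.14

print(as_text, as_int, as_float)
```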
- But that's from an interpreter/compiler-builder's perspective. The target audience is NOT interpreter/compiler-builders, and thus the audience is more likely to think of the "formatting" interpretation. This is another indication that your head has been "in the gears" for too long such that you don't understand the target audience's perspective.
- As I've pointed out several times before, you can do some HandWaving and avoid any mention of representation. The usual trick is to treat, for illustration purposes, all values as strings. However, inevitably somebody asks about it, and a discussion about representation is inevitable. Best, I think, to incorporate it from the beginning.
- That's right, the "discussion" you speak of clarifies its meaning/use in your SPECIFIC model.
- It has to exist in any model of values, types and variables, whether it's shown or not. A value is meaningless without both a type and a representation. Without a type, a representation is a meaningless bit string. Without a representation, all values of a given type are the same (unknown!) value.
- I'll agree it probably has to exist in some form. But hopefully without a goofy name.
- I'm not sure your personal opinion about what you consider to be a "goofy name" is relevant, especially as using "representation" for a value's representation seems rather appropriate.
- Your opinion is noted. Let's LetTheReaderDecide. We've been round and round on this already.
- That seems reasonable.
See DynamicTyping -- "Another term that is sometimes used is 'value typed' opposed to 'variable or reference typed'."
- "Sometimes"? That's not exactly damning. And I don't hear those terms much in the field. There are multiple ways to view and describe "variables" floating around. Types are the wild wild west of documentation.
- I know of no popular dynamically-typed imperative programming language where variables are typed. Do you?
- It depends how you define "have" per such languages. Once it's clearly defined, then we can talk about how to measure it (having versus not having).
- I didn't use the word "have", but let me rephrase the question: I know of no popular dynamically-typed imperative programming language where variables have an explicit "type" property. Do you?
- I could have sworn it used to say "have" instead of "are". Maybe I replied to the wrong spot for a similar statement. Anyhow, "are" is equally vague and doesn't really change the nature of the problem. As far as what variables "explicitly have", I don't have enough info to say yes or no. I don't know of any objective and universally-accepted test to measure "explicitly have" beyond modelling with and without and comparing to the results; i.e. empirical science. I do know that modelling it that way often "works" in terms of IoProfile, or approximations thereof (for scalars). I do not believe most developers ponder such issues that deeply, and swap between mental models in a cavalier fashion because most documentation either doesn't clearly nail down what's in what, and/or contradict each other. -t
- It never said "have". It doesn't matter whether it says "have" or "are" or anything else. This is conceptually trivial: Either a variable is defined with a structure that has an explicit "type" property, or it isn't. You can define it as "class Variable {Type type, Value value}" or "class Variable {Value value}". In dynamically-typed languages, the appropriate model is the latter.
- You have not demonstrated that it's the "appropriate model", other than perhaps matching actual implementation, which is not a key goal. (And the "have" appears to be from the Perl discussion. I mis-pasted. My apologies.)
- No, it matches actual semantics. As I noted above, there is no operation on variables in PHP, Perl, Python, Ruby or Javascript outside of "store a value" and "retrieve a value".
- You only claim it matches. You have not done the rigor to demonstrate such. And the "store" thing has been addressed above.
- What does "done the rigor" mean? Where has "the 'store' thing" been addressed? It matches because it's trivial to demonstrate that everything you need to do with a variable in a dynamically-typed popular imperative programming language can be expressed with "store a value" and "retrieve a value".
- Depends on how one defines "value". And "can" and "canonical" are not the same thing.
- It is canonical. I know of no dynamically-typed language that requires -- or even would benefit from -- any semantics for variables other than "store a value" and "retrieve a value". Do you?
- I'd have to compare two+ models side by side to score the net benefits. No, it's not absolutely "required" because there are alternative models. It's not absolutely required to nest a "value" structure inside of a "variable" structure either, like you do. Thus, required-ness of a feature and net worth of model are not necessarily related.
- I'm not asking you a question about models, but about popular dynamically-typed imperative programming languages. Do you know any popular dynamically-typed imperative programming language where the semantics for a variable are not either "store a value", or "retrieve a value"?
- Sure, it's possible to define a model in which values are not nested in variables, but I don't know of any popular dynamically-typed imperative programming language where a variable doesn't store a value. Therefore, it makes sense to "nest" values in variables. It would be equally reasonable to employ any mechanism that shows a time-varying relation between a variable name and a value, because that's what a variable is. Of course, in that case a value should be shown as an immutable relation between a representation and a type, because that's what a value is.
- That's your personal description of how variables "work". That's not an objective statement. You have not produced any formal proof, just round-about and obtuse word-play. The model "works" withOUT using your preferred version of variable/value (per IoProfile testing). Now, you probably claim it doesn't match descriptions given in manuals, but most such manuals do not formally or clearly describe "values". They are mentioned mostly in passing.
- Manuals rarely formally or clearly describe variables, values, or even types because it's assumed these things are understood. My description consists of my words for that understanding, but you'll find similar words -- describing the same concepts -- over and over again in introductory ComputerScience, computer engineering, and computer architecture textbooks.
- Yes, but often they are vague, overloaded, or contradictory across languages/authors. As rough, fuzzy notions, yes they are indeed common. This is partly because there are different ways to model or explain the same thing (results).
- They may be vague, overloaded, or contradictory in dynamically-typed language documentation, but they aren't vague, overloaded, or contradictory in introductory ComputerScience, computer engineering, and computer architecture textbooks.
- A similar such claim was made a bit above the "[Brilliant]" comment below.
- Perhaps you haven't read many (any?) textbooks.
- If they say what you claim they do, then quote the damned things! It's not my job to find and present YOUR case.
- Haven't we been through a round of this already?
- Yip, like most of the stuff here. You say, "Statement X obviously means Y" when to me it either means Z or can mean both Y and Z. -t
- That's probably due to your lack of background in fundamental ComputerScience. You are applying inapplicable interpretations.
- Bull. You are making up what is "fundamental" out of your caboose. Otherwise, you'd be able to provide a coherent and clear source and/or proof for your claims, such as ItemizedClearLogic. "If you were as smart as me, you'd JUST KNOW what values really are" is malarkey logic. If all your ComputerScience books had all the evidence, you could cite it all and put the quotes together on one page, with labelled references, showing how they all clearly reinforce each other and have an unambiguous chain of logic toward proving your claims. Solid logic is solid logic. Grow some.
- Sure. Here are some references:
- http://en.wikipedia.org/wiki/Data_segment -- "A data segment is a portion of virtual address space of a program, which contains the global variables and static variables that are initialized by the programmer. The size of this segment is determined by the values placed there by the programmer before the program was compiled or assembled, and does not change at run-time. The data segment is read-write, since the values of the variables can be altered at run-time."
- What is this illustrating exactly with regard to our discussion or proofs?
- That the terms "value" and "variable" are neither ambiguous nor overloaded, and are used here and elsewhere in the conventional manner.
- You've claimed that probably a dozen times, at least, but have not given decent evidence, only ArgumentFromAuthority. I don't see clarity and rigor in these sources. See PageAnchor canon-level.
- The evidence is in the linked content. It does not contradict my descriptions.
- Sorry, I don't see such evidence in there. And it doesn't contradict my model either.
- http://en.wikipedia.org/wiki/Binary_data -- "A discrete variable which can take only one state contains zero information, and 2 is the next natural number after 1. That's why the bit, a variable with only two possible values, is a standard primary unit of information."
- What is this illustrating exactly with regard to our discussion or proofs?
- That the terms "value" and "variable" are neither ambiguous nor overloaded, and are used here and elsewhere in the conventional manner.
- See PageAnchor canon-level.
- The evidence is in the linked content. It does not contradict my descriptions.
- Sorry, I don't see such evidence in there. And it doesn't contradict my model either.
- http://www.fastchip.net/howcomputerswork/bookbpdf.pdf -- Search for "value", but it wouldn't hurt to read all of it.
- What is this illustrating exactly with regard to our discussion or proofs?
- That the terms "value" and "variable" are neither ambiguous nor overloaded, and are used here and elsewhere in the conventional manner.
- See PageAnchor canon-level.
- The evidence is in the linked content. It does not contradict my descriptions.
- Sorry, I don't see such evidence in there. And it doesn't contradict my model either.
- http://docs.oracle.com/javase/specs/jls/se5.0/html/typesValues.html -- "... every variable and every expression has a type that is known at compile time. Types limit the values that a variable ... can hold or that an expression can produce, limit the operations supported on those values, and determine the meaning of the operations."
- "Every variable...has a type...". Hmmmm, I thought you were against this view. Granted, it's probably talking about compilers, not interpreters, but it is saying that in some languages, variables can "have" types.
- The reference is to Java, which is a statically-typed language. Therefore, its variables have types, in order to perform type-checking prior to execution. Popular imperative dynamically-typed language variables do not have types, because they can be assigned any value of any type at any time. This is orthogonal to whether the language implementation is a compiler or interpreter.
- PageAnchor canon-level
- If it's talking about Java, then why reference it? We are discussing dynamic languages. And your description there is your personal model, not the canonical model. If you found say 3 dynamic language manuals that directly said "variables have no types", you may have a decent case. But, you don't. FAIL. One or two may be author-specific descriptions/notions such that it's too weak of a frequency to "count" in terms of canonical-ness. Three establishes it as at least a "common notion". But do note that being common and being "the canon" are not the same thing. For example, perhaps half agree with your model of variables and half agree with mine. That would mean that half the manuals could end up backing your favored view. But half does not make a canon. But since you've yet to crack the 0.0000001% mark, we are not even close to a canon-level debate.
- That variables are untyped is conventional understanding about DynamicallyTyped languages, and so does not need to appear in language documentation. It appears elsewhere. See, for example, http://stackoverflow.com/questions/1517582/what-is-the-difference-between-statically-typed-and-dynamically-typed-languages -- "... In dynamic typing, types are associated with values not variables. ..." However, the most compelling evidence lies in language semantics. The appearance of a variable name in an expression denotes retrieval of the variable's value. The appearance of a variable name in the left-hand side of an assignment denotes assignment of the variable's value. There is nothing in DynamicallyTyped language semantics that denotes retrieval or storage of a variable's type independent of its value. Contrast this with StaticallyTyped language semantics, in which many languages employ ManifestTyping to explicitly associate a type with each variable, independent of its value.
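The quoted claim ("types are associated with values not variables") is easy to observe in, say, Python:

```python
# The reported type changes as the variable is rebound, which is
# consistent with the type belonging to the value, not the variable.
x = 42
print(type(x).__name__)   # int
x = "forty-two"
print(type(x).__name__)   # str
```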
- That's not a "manual", it's some guy's blog, more or less. But I'll compromise and count it as half a point. Re: "There is nothing in DynamicallyTyped language semantics that denotes retrieval or storage of a variable's type independent of its value." -- getType()-like functions do it. You use words like "compelling" and "obviously" so casually. "Compelling" would be like 20 manuals that explicitly stated such.
- No, getType()-like functions do not "do it". If they did, they would only work on variables. However, every implementation of a getType()-like function accepts an expression as an argument. Every reference to a variable in (or as) an expression retrieves its value. Therefore, an invocation like getType(p) evaluates its argument expression to obtain a value -- i.e., it obtains the value in variable 'p' -- and getType() returns the name of that value's type.
- That's one of multiple possible explanations/models/implementations. That's not an objective and singular truth.
- Among popular imperative programming languages, it is.
- So you claim.
- Feel free to present evidence to the contrary.
- I already did: it's not the only possible working model.
- It may not be the only possible working model -- though I doubt there are others that would be as simple and efficient -- but it is the only model used by popular imperative programming languages.
- You have not proved the "used by" claim. It's only a claim. I will grant you "efficient"; I do NOT make any claim to machine-wise efficiency in the tag model. You get that particular prize. Congratulations.
- It may only be a claim (my claims to having read the language reference manuals and looked at the language source code would, I suppose, also be claims), but it's an obvious one, especially as it's highly unlikely that any production language would use something less efficient.
- The vast majority of language reference manuals do NOT contradict the tag model (except maybe some OO-centric languages, but both models need adjustments to fit). And machine efficiency is a low priority on my model design criteria such that other factors trump it. Let's not repeat all our trade-off fights again.
- How do the descriptions at TypeSystemCategoriesInImperativeLanguages need adjustments to fit some OO-centric languages?
- You'd need to rework the structure to be something like:
<variable name="foo">
<object>
<value ...>
</object>
</variable>
.
- Granted, it may not matter much or at all in terms of IoProfile for scalars in either model.
- Further, many authors' descriptions are shaped by implementation because that's what they are familiar with from "compiler class" or whatnot. However, that doesn't necessarily mean most developers share that view.
- The views of "most developers" are irrelevant. The view of most 5-year-old children in the western world is that there is a Santa Claus, but that doesn't mean the rest of us should believe in Santa Claus.
- As I keep having to explain repeatedly, the target audience for my model is "typical developers". You keep talking about "commonly understood semantics", but then ignore the "common". You appear to be flip-flopping again, Mr. Romney. Please clarify the scope of "commonly understood semantics" once and for all.
- "Commonly understood" appears to be your phrase, not mine. Canonical semantics are based on what is written. How what is written is (mis)understood by "most developers" is irrelevant.
- If many or most don't accept it, it's not canonical. Anyhow, that's moot because the tag model does not directly contradict the vast majority of written manuals either.
- Whether something is canonical (or not) is not determined by a popularity contest. As for the fact that your tag model does not "directly contradict the vast majority of written manuals", I don't recall the last time I saw a programming language manual that mentioned "internal variables", "anonymous constants" and "type tags". It may not contradict written manuals, but it doesn't appear to reflect them, either.
- In our industry generally such is determined by popularity. And a (complete) model may use stacks, but manuals usually don't mention stacks. Double standard. Those are model/implementation-specific details or design choices. -t
- "Popular" is not the same as "canonical". The former does not determine the latter, and vice versa. If your "internal variables", "anonymous constants" and "type tags" are only your personal terms for parts of an implementation, that's fine. However, it appears you've tried to introduce these as general terminology, suggesting that not only are "internal variables", "anonymous constants" and "type tags" parts of your "tag model", they are -- by virtue of having corresponding parts in the tag model -- parts of DynamicallyTyped languages in general. Is that not the case?
- I am not aware of such. If you have specific issues with the wording of the documentation, please point out these specific spots, suggest alternatives, and I'll be happy to consider them and make changes if they improve the docs. As far as "canonical", it came about in association with religious organizations. If and how it applies to the IT world, I cannot say with any certainty. It's another case where regular English is not rigorous enough to address specific issues. Designating the manual that came with the programming language as the canonical authority may be an arbitrary designation. The "notions of the masses" matter, in my opinion, similar to how definitions are determined by actual usage, not by the creator of the term. I consider the audience of the model as they currently are, not as they "should be". You seem to be should-ifying again here. Typical programmers have no "solid" and/or consistent model or notion about the association between values, variables, and types based on my field experience. They have or use rough notions or just rely on experience-based habits. This may differ from your experience with typical programmers such that we'll have to deal with the AnecdoteImpasse and LetTheReaderDecide which best fits their experience or situation. No use arguing about such without new evidence. Repetition of your opinion will NOT change my mind.
- I notice that you've changed some of the wording in TopsTagModelTwo to clarify your use of terminology. That's good. However, if "[t]ypical programmers have no 'solid' and/or consistent model or notion about the association between values, variables, and types", then they should be encouraged to learn based on documented language semantics and perhaps how interpreters and compilers actually work, rather than some arbitrary alternative.
- Yes, I did review the disclaimer on that page and expanded it. And the tag model does NOT contradict the vast majority of such manuals (except maybe some OO-centric languages, which require refitting in both models). As far as matching how interpreters "actually work", that's not a primary goal of the model. The reader/user is allowed to select a model that best matches his or her goals. Even you agreed that implementation-matching may complicate a model (caching techniques, etc.). And all implementations are "arbitrary". There are many ways to skin a cat.
- Dude, your repeated anti-cat quips are worrisome, even when employing well-worn clichés.
- http://www.arduino.cc/en/Reference/VariableDeclaration -- "A variable is a way of naming and storing a value for later use by the program ..."
- That does NOT contradict my model. You should have realized that before posting it, but it appears you don't know how to process English with a critical eye. If you can force it to fit your favorite head model, you don't seem to bother considering ALTERNATIVE meanings it may legitimately convey to some. I notice this general pattern in your quoting and interpreting. For example, saying that "expressions return values" does not preclude them from returning other additional things. But you never seem to realize this, injecting an "only" into it in your mind to make sure it fits your preconceived notions. --top
- I have no objection to "ALTERNATIVE meanings" as long as they are not clearly inferior to conventional explanations.
- It's only "clearly inferior" in your head that makes up fake canons.
- What's a "fake canon"?
- A claim that one viewpoint is a canon when in fact that's either not true, or has not been established via evidence.
- ...etc...
- Hopefully not. The first batch was utter crap.
- Without some qualification to "utter crap", your opinion is worth precisely what I paid for it.
- You are an inferior thinker because you mistake your personal viewpoints or interpretations as universal truths outside of your own head.
- You continue to misuse the term "universal truth". I allege no "universal truth". I reference only conventional understanding. You are attempting to introduce a non-conventional explanation for DynamicallyTyped language behaviour, but it offers no benefits over conventional explanations. Your claim that it benefits from avoidance of "nesting" has no basis in evidence, and is trivially refuted by the fact that programmers, even weak ones, inevitably employ "nesting" on an unavoidably continual basis.
- You provide insufficient evidence of your "conventional" claims. Repetition does not persuade me; I'm not your mother. You STILL don't get the cookie you've requested 83 times. If there is a one-to-one relationship between two groups of attributes, it's usually bad factoring to make them a nested relationship. Factoring 101. If there were a "common notion" that expects such, as you claim, you might have a palatable argument from that angle, but looking at the design by itself, independent of "programming language semantics" (including history and tradition), one sees no parsimony-related reason to nest.
See http://www.php.net/manual/en/language.variables.basics.php -- "By default, variables are always assigned by value. That is to say, when you assign an expression to a variable, the entire value of the original expression is copied into the destination variable. This means, for instance, that after assigning one variable's value to another, changing one of those variables will have no effect on the other." -- No evidence of a 'type' attribute associated with variables.
- AbsenceOfEvidenceIsNotEvidenceOfAbsence. Just because "type" is not mentioned doesn't mean it doesn't exist. That statement is just clarifying that PHP is not using object references or something similar, since some OO languages would have such side effects. It's setting the stage for the "&" discussion that comes just after, by contrasting regular behavior with reference-based behavior. It's not defining nor discussing a type system there. Why you think it's talking about types, I don't know. Obviously, you interpreted it differently than I did. I also suspect it's using the common terminology "by value" that comes with the "by value" versus "by reference" pairing. It's thus borrowing common phrases for their familiarity alone and is not attempting to define "value" in any rigorous way.
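The by-value versus by-reference contrast being alluded to can be sketched in a few lines of Python (a stand-in here, not PHP; the list is used only to illustrate reference-like sharing, an illustrative assumption rather than a claim about PHP's "&" mechanics):

```python
# Plain by-value behavior: 'b' gets its own copy of the value,
# so later changes to 'a' have no effect on 'b'.
a = 10
b = a
a = 20          # 'b' is still 10

# Reference-like behavior: both names share one underlying list,
# so a change through one name is visible through the other.
xs = [1, 2]
ys = xs
xs.append(3)    # 'ys' sees the appended element too
```

This is the side-effect contrast the PHP manual passage is setting up before its "&" discussion.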
- In PHP, we know any value can be assigned to any PHP variable at any time. Hence, no type-checking on assignment, therefore PHP variables do not have types.
- It doesn't have to check if it copies the type tag over from the expression results as-is. Lack of checking proves zilch.
- What do you mean? Do you mean it copies the "type tag" from the expression results into some attribute of the variable?
- Yes. See how "plus" calls "updateVar" at the end with a tag parameter and a value parameter? Note your statement made me realize I was missing a function from the API that allowed direct copying of variables because most examples focused on operators. I just added "copyVar", and clarified the usage of the "assign" function. Thank You for helping make the tag model/kit more thorough. I'll call you one less mean name as compensation :-)
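For concreteness, here is a minimal runnable sketch of the kit functions named above (updateVar, copyVar, and an operator such as plus), assuming each variable is a simple record holding a type tag and a value. The dict-based representation and the "Number" tag are illustrative assumptions, not necessarily how the actual kit is coded:

```python
def makeVar():
    # A variable in the tag model: a record with a type tag and a value.
    return {"tag": None, "value": None}

def updateVar(var, tag, value):
    # Store both the type tag and the value into the variable.
    var["tag"] = tag
    var["value"] = value

def copyVar(dest, src):
    # "a = b;" in the model: the tag is carried over as-is,
    # so no type check is needed on assignment.
    updateVar(dest, src["tag"], src["value"])

def plus(result, x, y):
    # An operator finishes by calling updateVar with a tag
    # parameter and a value parameter, per the description above.
    updateVar(result, "Number", x["value"] + y["value"])

a, b, r = makeVar(), makeVar(), makeVar()
updateVar(a, "Number", 2)
updateVar(b, "Number", 3)
plus(r, a, b)   # r now holds tag "Number", value 5
copyVar(a, r)   # a now holds the same tag and value as r
```

Note how copyVar needs no type-checking: it simply propagates the tag along with the value, which is the point made above about lack of checking proving nothing.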
- Why would you need to copy variables? There's no such thing in the semantics of popular imperative programming languages.
- "a=b;". Granted, there is a wide range of different ways people describe it. But, "copy a variable" is typical in colloquial programming in my experience. Other variations include "copy the contents of a variable", and "copy a variable's value" and "copy the data from...". Content, data, and value tend to be used interchangeably. "Content" tends to be a bit more general in that it may be a structure(s) also, whereas "value" usually refers to a specific singular number, string, date, etc. -t
- You don't need a "copy variable" operation, because it's retrieving the value from 'b' and assigning that value to 'a'. At best, it might be an optimisation, but that would be an implementation issue.
- You said "there is no such thing...in popular [semantics]". Whether it's "needed" or not is a different issue. Many would agree with my description of it.
- If it's not needed, why have it? As for "many would agree", that's a fallacious argument, per AdPopulum.
- Trade-offs. We've been over that already. And "common semantics" depends on AdPopulum by definition. Whether YOU personally like it or not, regular programmers often say "copy a variable" for "a=b;" kind of statements. Perhaps they meant something different, who knows, but we cannot x-ray their heads at this point. If you can go and spank all the developers who have the "wrong" head notions about variables and values and fix them up "right", I'll reconsider my design. Let me know when your grand Value Inquisition is complete.
- What is the "trade-off" benefit of having a "copy" operation for variables? Isn't that needless complexity? "Regular" programmers may say "copy a variable" as a convenient shorthand for "a=b;", but I'm sure even "regular" programmers know that such a statement always means evaluating the expression to the right of '=' and storing its value in the variable specified on the left of '='.
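That conventional reading of "a=b;" (evaluate the right-hand side to a value, then store that value in the left-hand variable) can be shown in a few lines of Python:

```python
b = 42
a = b      # evaluate 'b' -> the value 42; store 42 in 'a'
b = 99     # re-assigning 'b' afterward has no effect on 'a'
# 'a' still holds 42: only the value traveled, not the variable
```

Under this reading, no distinct "copy variable" operation exists in the language semantics; at most it could be an implementation-level optimisation.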
- Your thought process is not the standard reference thought process for the universe. People have a variety of different models and prediction techniques in their heads. Stop telling people how to think. As far as "needless complexity", it's back to weighing the net tradeoffs, per above. I find your nested-ness to be "needless complexity". It has more cons than pros for the stated purpose of the model and stated developer WetWare assumptions. -t
- I'm not telling people how to think. How people think is irrelevant.
- That kind of dismissal is probably the main reason why your model description sucks.
- So you say, but until you complete TypeSystemCategoriesInImperativeLanguagesCritique? (or some such), we'll have to assume you think it sucks either because you don't understand it, or because you know it provides a better basis for explaining programming language behaviour than your model but you're too proud to admit it.
- It would likely overlap with material in TypesAndAssociations in terms of your "English" parts, and repeat the "Chris Date" debate in [i forgot topic] for the set-based part. And I readily admit I don't understand many parts of your write-up, and doubt a good many other developers would also.
- TypesAndAssociations is a ThreadMess, and now largely incomprehensible, and whatever criticisms you had were trivially refuted. If there are things you don't understand in TypeSystemCategoriesInImperativeLanguages, please ask.
- "Trivially refuted" my ***. And I already did ask, but got evasive, vague, round-about, and/or obtuse answers.
- I don't recall many, if any, questions. Mostly, you offered quibbles about terminology.
- If the terminology is quibble-able, then perhaps the writing is not precise/clear enough. The alternative interpretations of your writing I supplied are not improbable. If they were improbable enough to otherwise ignore, then your "quibble" complaints would have some merit.
- If that criticism were coming from someone who didn't have a history of quibbling about everything, ad infinitum, I might take it seriously. As it was, the "alternative interpretations" appeared to be arbitrary kvetching rather than genuine considerations.
- You are the one with a history of quibbling.
- No, I'm not. You're known for it. Your moniker comes up repeatedly on this Wiki as sustaining argumentum ad infinitum. My name does not.
- That's because you don't sign. They may think you are 10 different anonymous annoying persons instead of just one.
- It's possible, but given that I regularly signed until a year or two ago and I haven't changed my approach in roughly a decade, it's unlikely.
- I have no way to verify that claim.
- Regardless, you appear to be the one attracting pages like TopNoiseFilter, not me. Why is that?
- What would they call it if they did? NoHandleNoiseFilter???? Your complaint makes no sense because you have no coherent identity. And somebody else did complain about your debating style (and mine) here. We perhaps are both hated.
- Perhaps, but it seems unlikely. If we were equally hated, I would expect to see complaints about the pages you/we have created, rather than just one of their participants, i.e., you.
- Sorry, I didn't understand your point. Anyhow, we don't have a good comparison base unless you also use a consistent handle for some years. Otherwise it's comparing apples to invisible apples.
See http://www.ruby-lang.org/en/documentation/ruby-from-other-languages/to-ruby-from-c-and-cpp/ -- "Objects are strongly typed (and variable names themselves have no type at all)." -- Quite explicit, here.
"Name"? Variable "names" don't have (explicit) types either in my model. (That's bad writing, regardless. Fire the bastard.)
It's bad writing, but the meaning is clear.
I disagree. You make it "clear" in your mind by seeing it only how you want to see it. You read English with heavy bias. Names "having" types is an odd concept, or odd wording. What would be an example of such? Like dollar signs in BASIC classic?
Note that the vast majority of such manuals do not define "value" and use it in kind of a cavalier way, sprinkled in here and there with minimal if any explanation.
The vast majority of manuals do not define "value" because the vast majority of manuals assume the reader is a programmer who knows what values are.
In a rough notion/feeling sense, sure.
No, in a ComputerScience sense.
No, ComputerScience does NOT formally define "value". And I thought you were addressing "commonly understood semantics", not the ivory tower sense. Make up your mind which audience you intend to satisfy with your model.
Sure it does, or it adopts the notion from mathematics. Look in any introductory ComputerScience text, which every professional programmer should have read.
- "Should" read? Perhaps, but you can't build tools based on what the user should do beforehand. Otherwise, the as-is-er's are going to bypass it or be confused by it. Dumb logic.
- If the "as-is-er's" (???) haven't read any ComputerScience texts, then perhaps they have the benefit of no preconceived notions, and can accept that variables are mutable and values are immutable without quibbling.
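The slot model in question (variables are mutable slots, values are immutable) can be illustrated in Python; the comments state the model's reading, which is an interpretive gloss rather than anything the language itself prints out:

```python
x = 5        # the slot 'x' holds the value 5
x = x + 1    # a NEW value, 6, is computed and stored in the slot;
             # the value 5 itself is not changed by this
y = 5        # 5 is still 5, wherever it appears
```

The slot changes over time; the values it holds never do.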
- You are the quibbler; you bitch about non-practical issues, inventing fake problems using bullshit-theory.
- Is your rudeness necessary? If you don't agree with my criticisms, counter them with logic and evidence. Crude language only reflects badly on you.
- You started the rudeness by labeling my complaints about your writeup "quibbling". If you don't like rudeness, then don't dish it out.
- I'm not using vulgar language. As for "quibbling", you are quibbling.
- I know you are, but what am I?
- Quibbling.
- I know you are, but what am I?
- Quibbling.
- If it's quibbling, then let's agree to LetTheReaderDecide and drop it. You seem to keep re-kindling these kinds of sub-arguments, making YOU the quibbler. I'm okay with LetTheReaderDecide. If the books clearly say X is Y then clearly quote it with clear citations and clear page numbers from a sufficient quantity of sources so that the reader can clearly see where you get your clear information from. Clear? -t
- I have provided on-line sources, so the WikiReader can follow links. However, I'm happy to LetTheReaderDecide and drop it.
- Agreed.
Sorry, I don't see it. And programming does not have to borrow everything from math. It can pick and choose what to borrow.
Whether you see it or not, it's there.
One of us is hallucinating.
If it was both of us, how would we know?
Everybody else would ignore us.
You could be hallucinating that everyone is ignoring us.
- [Right. So let me straighten out you foo(l)s. A value in computer science is a set of bits while simultaneously a notion held within the mind of the programmer. This whole 100 page rant is about irreconcilable polarity surrounding the two. But to understand that you'll have to find the material in the Pangaia Project regarding the separation between virtual reality and physical reality (also, notably and ultimately, just a notion).] -- MarkJanssen
- I would note that I am fine with dual models/explanations. It's my opponent who's a stubborn OneTruePath-er. -t
- I'm delighted to embrace innumerable models/explanations. However, I object to models/explanations that are significantly flawed, especially when the flaws are poorly justified and easily eliminated.
- "Significantly flawed" to you is synonymous with "does not fit my personal preconceived notions". No significant practical flaw has been identified.
- Unique and undefined terminology, conflation of variables and values, and a model of variables that has attributes inconsistent with the semantics of variables in dynamically-typed languages all appear to be significant flaws. Whether the flaws in a model are practical or not, is a separate issue. Why not try turning your model into an actual interpreter? Then I think you'll quickly discover the impact of those flaws for yourself.
- They are not "values". You call them "values" because that's what you personally feel like labeling them. If you stop injecting your pet/favored notions into it, then it won't have these "theoretical" problems you keep "seeing". If you spot a practical issue, please do describe it. (I agree my approach may make modeling OTHER parts of interpreters difficult or complex, but it's structurally optimized for scalar type issue analysis, not other language "parts".)
- I call them "values" because that's what they are. I know of no popular imperative programming language that claims the result of evaluating an expression, or that the return value of a function, is a variable. Do you?
- Because what you are calling a "variable" there is an internal implementation/model issue, not a universal truth. I could have called it a flaffledoodle. Different model designs and naming were considered, and the "least evil" was an arrangement called "internal variable". I know you don't like my trade-off decisions. I've heard it already.
- I don't claim that what I call a "value" is a "universal truth" (you appear to misuse "universal truth" rather freely), only the usual term. The result of evaluating an expression, or what is returned by a function, is normally called a "value". It's never called a "variable". It's definitely never called an "internal variable", especially as it's not variable in the language.
- A model can call its internal parts whatever the hell it wants. Planets don't have gears, but an orrery might. And expressions do return a value in my model. Rarely do manuals explicitly list what's not "returned", I would note. Saying X returns Y does not necessarily exclude Z from also being returned.
- Why would you model invisible components that may or may not be parts of specific implementations? They certainly don't affect the semantics of the language.
- Page_Anchor: gears-versus-spindles
- I thought you said they do affect the semantics? Make up your mind. Every model of such will probably have to create/invent parts which are not absolutely necessary in terms of matching expected external behavior. Models often have internal parts that are required for the specific model to run, but don't reflect the thing being modeled, like gears in an orrery. It's as if you use thread spindles instead of gears, and then complain that the gears in my model are not in the real thing. But neither are your thread spindles. (Yes, you probably think your "spindles" ARE part of objective reality, but that's because you mistake your personal pet notions for universal objective truths. You have a problem in that area.)
- The semantics of every popular imperative programming language all state (or strongly imply) that the result of evaluating an expression, or what is returned by a function, is a value. I know of no semantics that state (or strongly imply) that the result of evaluating an expression, or what is returned by a function, is a value along with something else. If there's no semantically significant "something else", why model it?
- As written, almost none of such statements contradict my model. As far as the design tradeoffs of the parts, I've been over and over that. I'm sorry you disagree with those choices. I've described the reasoning behind those tradeoffs as carefully and in as much detail as I know how. LetTheReaderDecide whether those explanations are sufficient. Most of our disagreement appears to be over assumptions about programmer WetWare, including "common notions" they may have. Without real studies, it's an AnecdoteImpasse.
- That's not what we're debating. Thanks for trying to "straighten out" us "foo(l)s", though. That's what the world needs; more superficial attempts to "straighten out" other people, especially with insults. Insults are a great way to get people on your side, because when they realise your insult was really, really good -- like "foo(l)s"; that's the best. I am a foo, and we're definitely a pair of foos -- they'll do everything you say.
Odd, what happened to MarkJanssen's comments? -t
Oops, he edited whilst I was editing so I pushed my edits, and I was going to put his back in but I forgot. I'll do it now. Thanks for catching that.
Computer software is all about making sense of a whole lot of states and logical gates. The rest is all sugar. -- ChaunceyGardiner
Perhaps. Another view is that a whole lot of states and logical gates are just some electro-mechanical sugar we use to speed up the execution of computer software.
CategoryMetaDiscussion
JanuaryFourteen