Continued from ValueExistenceProofTwo (was WeakProgrammersRelyOnBadDocumentation)
It's not hard to get confused over dynamic typing because of the horrid writing about it out there.
Speak for yourself.
If you think your writing is clear to most, you are truly delusional.
What does that have to do with what I wrote? You're claiming it's "not hard to get confused over dynamic typing because of the horrid writing about it". That implies that it's easy to get confused about dynamic typing. I see no evidence that this is so; it appears to be nothing more than a personal opinion. If it's nothing more than a personal opinion, then you should make it clear that it's your own opinion, and not necessarily anyone else's. It certainly isn't mine. Whilst I agree that there's a lot of bad documentation, it is no hindrance to understanding popular imperative dynamically-typed programming languages. They are trivially simple. Given the number of programmers successfully writing programs -- including children of single-digit ages successfully programming RaspberryPis and the like -- it seems bad writing is no hindrance to understanding dynamically-typed imperative programming languages.
We've been over this many times already repeatedly and redundantly. It's my personal observation about typical programmers, and apparently it differs from your personal observation about programmers. It's an AnecdoteImpasse. Your life is not the standard reference source for universal truth.
Re: "you should make it clear that it's your own opinion" -- I made it clear it was my personal observation multiple times. If you forgot that and mistook it for an official study due to your forgetfulness, it's not my goddam fault. Get some Reagan meds for it. You seem to be mistaking your lack of memory for lack of existence. On the upside, you don't have to buy new magazines because the same one can be reused every week and always seem fresh to you.
Whilst I can certainly remember the few times you've agreed that it's merely your own observation, and by no means a general condition, it's unlikely the casual WikiReader will assemble your collected unsigned writings into a cohesive statement of your views. Therefore, when you write, "[i]t's not hard to get confused over dynamic typing because of the horrid writing about it out there," it has all the appearances of a statement of truth when it is not.
- You just did the very same thing there. You claimed it's false without telling the source of that info. No OfficialCertifiedDoubleBlindPeerReviewedPublishedStudy is cited. Another one of your double standards. You make a lot of claims out of the blue without citations from an OfficialCertifiedDoubleBlindPeerReviewedPublishedStudy or anything close.
- In order to question a questionable claim, I do not require evidence; it is sufficient to note the apparent lack of evidence to support the questionable claim. In response, you either need to provide evidence to support your questionable claim, or you should withdraw it or clearly indicate that it is speculative. ExtraordinaryClaimsRequireExtraordinaryEvidence.
- "Extraordinary"? I missed that before. What exactly is my "extraordinary claim"?
- Lack of formal evidence doesn't make a statement non-truthful ("appearances of a statement of truth when it is not"). Thus, you ARE doing the same thing.
- You made a claim, using language intended to make it appear to be a statement of fact. I made the observation that the claim lacks evidence, because it does lack evidence. That has nothing to do with whether your claim is true or not, though the lack of evidence leads one to suspect it is not true.
- That's bullshit! There is no intention to deceive, Buttface! Most of your claims lack formal evidence also, but I'm not going to fuss about every instance of such lackage as if there is a conspiracy to deceive, because I'd sound like a pesky annoying fastidious Sheldon-like asshole if I did.
- I didn't say you intended to deceive, only that you intended to present your claim as a fact. I presume that's because you believe it to be a fact. It is, I am sure, an honest belief. However, that doesn't mean it's an evidenced belief.
- Projection.
- What does "projection" have to do with what I wrote? Do you disagree with any of it?
- You are projecting because your claim of "fitting common semantics" definitely "lack[s] formal evidence" (your phrase), yet you believe it to be a fact. You have NOT formally proved it, or anything even remotely close. Thus, you are doing just what you accuse me of. Anyhow, I'm tired of arguing over arguing here. Present better evidence or accept a stalemate (pending real WetWare research).
- ["Lack of formal evidence" is your phrase here. The one you are accusing of projecting doesn't appear to have used it in this argument.]]
- Okay, my apologies for misquoting, and thank you for identifying the mistake. In that case, informal evidence is nothing to fuss about. I don't know what all the hoopla is, then. It's best to ask specific questions AT specific spots of text rather than make sweeping general accusations. Sweeping general accusations rarely improve anything, and just add unhelpful tension. He could have simply asked, "Where's the evidence for the documentation being poor?", or suggested, "Mark that statement with your handle to avoid context confusion". That would have been much quicker, smoother, and more specific than the accusatory tone actually taken, and we wouldn't have wasted all this text and time complaining about complaining.
- [The hoopla is that you haven't even provided informal evidence, just the claim.]
- I did. You rejected it because you are stubbornly defending your hardware-centric turf.
- What part of your claim, "It's not hard to get confused over dynamic typing because of the horrid writing about it out there" is the evidence, informal or otherwise? By the way, what's a "hardware-centric turf"?
- We've been over that already many times. I'm not going to repeat it here. If you forgot, too fucking bad. Oh, and where is your example of dynamic type writing done well? Volcano?
- [In other words, you can't point out where you provided evidence for that claim.]
- Those are your words, not mine.
- It's true, I should mark paragraphs with my handle more frequently, ideally, but YOU don't mark either. In fact, you don't even have a fucking handle. Pot. Kettle. Black. A guy without a handle complaining that I don't sign enough; get a load of that, people! Stop being a hypocritical dick. I have a lot of reasons to complain about your writing and evidence-presentation style, but arguing about arguing gets boring after a while, at least to most of us. Thus, I suggest you not throw rocks from your glass house. Anyone who's been married knows that criticism invites counter-criticism, and it can flow fast and furious once it gets started. Often it's better to shut up about small things. -t
- I'm not complaining about your lack of handle. That's fine. I'm complaining about your assumption that the casual reader should be aware -- despite the lack of handle -- that you "made it clear it was my personal observation multiple times" (see above).
- Strange that you don't seem bothered when your OWN writing has the same issue.
- Feel free to point out where I have stated personal opinions as generally-accepted facts.
- The problem is you THINK they are facts because you mistake your personal head models for universal truths. Pointing them out wouldn't add new complaints nor new material.
- I have claimed no universal truths, only emphasised that which is generally accepted. It would appear that the actual problem stems from your mistaken belief that because multiple models may be possible, all possible models should be equally recognised and accepted.
- You didn't justify "generally accepted" (outside of interpreter writers). And I didn't claim all models should be equally accepted: they should be weighed against explicitly stated goals. I gave reasons and justification, as detailed as I can make them, for the decisions I made with regard to the tag model, and those decisions closely fit both the goals and the stated assumptions about typical developer WetWare, which I described in as much detail as I could. We just come to an AnecdoteImpasse regarding the WetWare evidence we use.
- "Generally accepted" is trivially evidenced by the wealth of computer architecture texts and language documentation that present the same model. Your goals and stated assumptions are bogus; they appear to be constructed solely to sustain an already exceedingly weak argument in favour of your model. It looks like nothing more than an attempt to rationalise your erroneous omission of a valid concept of values.
- For compiler/interpreter BUILDER documentation, perhaps they present the "same model". But that's NOT true outside of builder docs. You seem to be conflating builder docs with user docs. Otherwise, we'd have to consider stacks part of the standard semantics, because they are presented in the same or similar material. And how are the goals "bogus" exactly? It's kind of late to start complaining about my stated goals. And you tossed in the "erroneous" claim as if it were a fact, without citations or links. Didn't you just get in my ear about that kind of thing? -t
- PageAnchor elbow-grease-72
- And if non-builder docs indeed do back your model unambiguously, please quote them and show exactly how they uniquely fit your model, word for word. If you don't want to do the elbow work, then shut the hell up and concede the one-and-only-one-semantics claim like an honest, trustworthy scientist would, instead of an ego-driven stubbornite.
- See http://docs.python.org/2/reference/ The first sentence is, "This reference manual describes the syntax and 'core semantics' of the language." See http://www.rubyist.net/~slagell/ruby/variables.html "The variables ... have no type." See http://docs.python.org/dev/reference/datamodel.html#objects-values-and-types "An object’s mutability is determined by its type; for instance, numbers, strings and tuples are immutable [...]" Etc.
- Probably because the "type" is in the object, which your model also lacks as given. See near "Here's a contradiction between the Python doc" for more on this.
And I've pointed out many times that with dynamic languages it's usually easy to "spot fix" specific problems encountered without having a fuller understanding of the underlying TypeSystem of a specific language. I've done so myself with newly encountered languages. The practical differences are usually subtle enough that one can get by just fine with general and rough notions of the various dynamic typing approaches of the various dynamic languages. A rigorous understanding of a particular language's dynamic TypeSystem is thus NOT necessary to be productive. Your implication appears to be "they are productive in the language, therefore they have a rigorous understanding of its type system". That's faulty logic.
The TypeSystem implementations of popular imperative programming languages, including the DynamicallyTyped ones, are trivially simple. I can only attribute an apparent lack of understanding, in an otherwise capable programmer, to laziness.
The kind, or pool, of "rules" (techniques) they tend to use is indeed relatively simple; HOWEVER, the implementation profile is not. It's almost like saying binary numbers are simple because they only have "1" and "0". However, knowing when/what/where a "1" is used instead of a "0" can be a bear. One cannot know the behavior of each operator just by looking at the basic operator description, or by guessing based on the operator name using experience from similar languages. For example, it's not easy to know whether each operator uses type-tag-based (explicit type indicator) or parse-based dispatching/selection of type-related behavior. And then there are inconsistencies such as Php's is_number() versus is_bool(), where one uses what I call "parse-based" typing and the other uses what I call "tag-based" typing. And some languages, like ColdFusion, don't have (detectable) type tags at all for scalars.
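A rough Python sketch of that distinction (the two function names below are made up for illustration; PHP's actual predicates are spelled is_numeric() and is_bool()):

  # Hypothetical sketch: two different ways an "is it a number?" check can work.
  def is_number_by_tag(x):
      # "tag-based": looks only at the value's stored type (bool treated as non-numeric here)
      return isinstance(x, (int, float)) and not isinstance(x, bool)

  def is_number_by_parse(x):
      # "parse-based": looks only at whether the text can be read as a number
      try:
          float(str(x))
          return True
      except ValueError:
          return False

  print(is_number_by_tag("123"))    # False -- the stored type is str
  print(is_number_by_parse("123"))  # True  -- the text parses as a number
  print(is_number_by_tag(123))      # True
  print(is_number_by_parse(123))    # True  -- both agree on "plain" cases

The two checks agree on plain cases and disagree exactly on "stringy numbers", which is where the confusion tends to show up.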
That's some fine molehill mountaineering you've done there. This is all trivial stuff, it's even well-explained on a function-by-function basis in the otherwise-dire PHP manual, and also described in TypeSystemCategoriesInImperativeLanguages. There's no need for constructing new terminology like 'parse-based typing' and 'tag based typing' -- all values have a type, and some operators in some languages rely on the fact that a literal of one type can encode literals of other types. That accounts for "inconsistencies such as Php's is_number() versus is_bool()" and why what you describe as, "languages like ColdFusion don't have (detectable) type tags at all for scalars", is simply the fact that in ColdFusion all values are strings so they can encode any literal of any type.
In my personal observations and casual conversations with field programmers, the vast majority don't use any rigorous model to know/remember which technique a given operator uses. A good portion of the time it "just works", because tag-based and parse-based ACT very similarly under typical usage patterns. And if problems or confusion are encountered, one can "shore up" the related code by putting in explicit validation at the form (data source), or by wrapping variables/expressions with explicit conversion operations, such as Round() or toNumber(), depending on the language or circumstance. One can "organically deal with them" as they come up: knowledge of a clear model is NOT necessary to get work done[1]. Still, it's nice to have an explicit model and know the "tag profile" of each operator of a given language. The manuals are poor at providing this info, and you agreed that at least Php did a poor job of it. -t
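To make the two "shore up" styles concrete, here is a minimal Python sketch (the field names and values are made up; in Python the conversion is forced anyway, since "+" won't silently mix strings and numbers):

  # Pretend these arrived as strings from a web form:
  qty_a, qty_b, qty_c, qty_d = "3", "4", "5.5", "2"

  # Style 1: wrap every operand at the point of use (the "ugly pattern"):
  total = float(qty_a) + float(qty_b) + float(qty_c) + float(qty_d)
  print(total)  # 14.5

  # Style 2: convert/validate once at the data source, so later arithmetic stays plain:
  qty_a, qty_b, qty_c, qty_d = (float(v) for v in (qty_a, qty_b, qty_c, qty_d))
  print(qty_a + qty_b + qty_c + qty_d)  # 14.5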
Why do you believe your "tag model" will make language quirks any less quirky, or any less likely to be worked around in the usual fashion by typical programmers?
- I'm not sure I understand your question. It doesn't "fix" existing languages, but merely helps model them to predict their behavior. (My "fix" would be to get rid of explicit type indicators (tags) and of base-library/language overloading across diverse types, such as "+" for concatenation.) Sometimes one can avoid ugly or bloated work-arounds if they know how the language actually "works". For example, I often see this ugly pattern: "x = toNumber(a) + toNumber(b) + toNumber(c) + toNumber(d);". This is done to prevent a rogue "stringy number" from forcing the expression into concatenation instead of addition. If one has better knowledge of how types are processed in a given language, they may be able to shorten it to "x = a + b + c + d;", making the code shorter and easier to read.
- Why do you need a model to predict language behaviour when the best predictor of language behaviour is the language itself? Furthermore, how will a general model address language-specific quirks?
- You mean doing experiments? That's fine and I perfectly agree. However, it's still helpful to know "how it works" to reduce the need to test everything. The tag model provides a "how it works" framework. Keep in mind there are multiple ways to emulate languages and there may be no single right "how it works". Often it's easier to remember a model than remember the outcome of myriad tests. (The production interpreter itself may be considered the "single" reference model, but it's not realistic to dissect it, and it likely has performance optimizations that may complicate grokking even if one did dissect it.)
- The "tag model" provides an erroneous "how it works" model. As a developer of interpreters and compilers, I know your "tag model" is not how it works.
- And granted, different people remember info in different/preferred ways. I don't promise that everybody will favor the tag model. I estimate some will like it, but it's only an estimate, just like any of your preference estimations about your model. AnecdoteImpasse.
- My "model" is not subject to likes or dislikes; it's simply how it is. It describes "how it works" based on actual popular imperative programming language semantics. And by "actual ... semantics", I don't mean your usual misunderstanding of semantics as "how typical programmers (mis)interpret language behaviour." What I mean by semantics is how programming languages actually work, based on the documented execution-time interaction of language parts like statements, operators, values, variables, types and expressions.
- Re: "It describes "how it works" based on actual popular imperative programming language semantics" -- Bullshit! You've failed to measure "semantics" in a way even semi-close to being rigorous. You mistake your personal feelings and personal English interpretation for objective "semantics" it appears. And most programming language documentation does NOT contradict my model. If you claim it does, cite it and quote it.
- Semantics don't have to be "measured" in a rigorous way to be clearly understood. Given that a language consists of defined and recognisable parts such as functions, variables, types, operators, procedures, literals, values, arguments, parameters, identifiers, statements, and so on, semantics describes how they interact, assuming we can form valid sentences in the language (i.e., assuming we have valid syntax). This does not require formal rigour, only reasonable documentation and an understanding of the terminology involved. As for language documentation that contradicts your model, it looks like the Python documentation is more in accordance with my descriptions than yours. See, for example, http://www.tutorialspoint.com/python/python_variable_types.htm
- I'm not sure what you mean by "understood". There are different ways to learn about and mentally model/work-with/forecast programming languages that all more or less allow one to program well enough to get a paycheck. That, I agree with. But I DO NOT AGREE THAT THOSE ARE ALL THE SAME. There are different ways to skin the cat. And I don't see how that Python doc contradicts my model. Please quote specific passages, along with explanations of how they fit your model or don't fit mine, and please do it in a new topic, not here.
- Here's a contradiction between the Python doc and your model: "[...] numeric values [...] are immutable [...] which means that changing the value of a number data type results in a newly allocated object." It does, however, accord with my descriptions.
- Re "results in a newly allocated object" -- You called it a "value", not an "object". Nor did you describe this "allocation". I could call/alias my "value" attribute an "object" also without violating any universal rule since "object" is vague outside of the context of a given language. If everything formally "is" an object in a given language, including scalars, then BOTH our models should have an additional structure called "object". (Objects don't have to have "values" to be objects. They could hold merely pointers to other stuff. Thus, "value" or "representation" probably should not be hard-wired into "object".) I omitted such from the base model because we are modeling type issues of scalars (for starters), not OOP issues. However, model users are free to modify the model to better fit a given language if they want, at the cost of complexity. Further, that documentation may be "leaking" implementation-specific issues that otherwise are implementation centric and don't need mentioning or could be described different without being "wrong". At this, time I haven't studied that document and Python enough to really address that.
Further, the dynamic language designers themselves don't seem to give type issues much thought, creating unnecessary inconsistencies and confusion, such as overloading "+", the is_number versus is_bool inconsistency in Php, and the excessive number of screwy non-printable null-like "characters" in JavaScript. -t
I suspect dynamically typed language designers have enough programming experience to not bother giving these negligible "type issues" any thought beyond documenting them as appropriate (e.g., the "is_number versus is_bool inconsistency in Php"), or they regard them as handy shortcuts (e.g., overloading '+') for experienced programmers rather than pitfalls for weak programmers. I can't justify the Javascript type system -- it is full of quirks -- but you get used to them. I suppose they're not surprising, given that the first implementation of Javascript was hacked together in 10 days.
I don't see how overloading "+" is a "handy shortcut". A different concatenation operator can take up just as little space (such as Microsoft's and ColdFusion's "&"; Php's "." should have been reserved for object pathing, in my opinion). Using an overloaded "+" was probably just a pseudo-clever move to make one feel like a skilled OO'er, the big fad of the late 90's. One can always claim that people confused by odd features are "weak", but one must ask if the "problem feature" is really necessary, or just a quirk of happenstance. I suspect most language designers did not give much thought and vetting to their dynamic typing design, copying "in" fads instead. That's human nature.
Overloading "+" is certainly a handy shortcut when you have various numeric types and wish to make it easy to intermix them.
Sure, for numbers, but not for strings. (Dynamic languages don't really need to make a distinction between Integer, Real, Double, etc. for their typical usage. Sometimes we need to check for integer-ness for form validation, but it doesn't have to be an explicit type. In fact, I'd argue that "max decimals" is a more flexible validation function/check, and can be used to check for (interpret-able) integer-ness. In my opinion, they should use some kind of decimal-based computation as described in FloatingPointCurrency. It would probably be slower than the current approaches, though, and would not act much differently the vast majority of the time.)
I think this is wandering well OffTopic.
Footnotes:
[1] The environments where dynamic languages are used are generally not "fragile" enough that occasional type-related, confusion-caused bugs result in significant damage. Bugs are fixed as encountered. In a domain/project where lack of app bugs really matters, one should probably use a strongly- and statically-typed language. However, development productivity is often lower under such languages, and some argue they are not as flexible to change. Fastidiousness is not cost-free. The loose nature of JavaScript's typing makes it ill-suited as a SystemsSoftware language for web-GUI sub-systems, creating many of the browser GUI headaches out there. An explicitly typed language would be better suited for such use. JavaScript is sufficient as a "glue language" for browser objects/parts, but not for the guts.
JanuaryFourteen