Objective Evidence Against Gotos Discussion

A continuation of ObjectiveEvidenceAgainstGotos.


Moved from KeepCriticismNarrow:

You are confusing personally thinking X is better with demonstrating that it is objectively better [such as personally preferring blocks over goto's versus proving it overall better]. The issue was "rigor", not happenstance agreement. You cannot even produce "rigor" for simple things most people agree on (except in terms of narrow metrics). Software engineering suffers DisciplineEnvy. (I didn't create that topic, by the way.) --top

You are attempting a YouJustDontGetIt attack? You're a wily little hypocrite, aren't'cha. Please KeepCriticismNarrow. Besides, you've offered me no reasons to believe that I'm 'confusing' anything here. I certainly did not bring up 'happenstance agreement' as a substitute for rigor, and I would agree that 'rigor' has little to do with 'happenstance agreement'; you certainly won't get type safety or algorithmic performance or security by means of 'happenstance agreement'. But your claim that rigor doesn't exist other than for machine performance is still bogus: such things as algorithmic performance and safety and security are, indeed, approached quite rigorously. Also bogus is your entire 'X is better' line of argument; "X is better for what?" and "can we profit from X for something we do?" are the only useful and meaningful questions, the latter admitting to the costs of including X when determining 'profit'. Any claim that someone should prove X better for all factors in all cases is either naive or extremely dishonest. And unreasoned personal agreement is irrelevant for issues that aren't up for vote, and psychology has yet to be proven relevant... it has no known impact whatsoever on risks, speed, security, freedom to commit error during build or maintenance, costs to confirm correctness, consequences of potential error, or machine interpretation of a command. Nonetheless, one can rigorously prove by relatively simple logical analysis that gotos introduce greater risks than a simple while loop when you're using gotos to build a while loop: there are simply more discrete pieces programmers must get right, thus more opportunities to get things wrong during development or maintenance. (Admittedly a single while loop ain't bad... it's when you've got nested loops or multiple loops in a block that the complexity and inherently greater freedom to commit error starts kicking programmer and maintainer ass.)
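
For illustration, here is a minimal C sketch of that "more discrete pieces" claim (the routines more_input and process are hypothetical names, not anything from the earlier discussion): the goto version splits the loop into a label, an exit test, and a back-jump, each of which is a separate chance to get something wrong, while the block version is a single construct the compiler checks as a whole.

  extern int  more_input(void);       /* hypothetical */
  extern void process(void);          /* hypothetical */
  void loop_with_block(void) {
      while (more_input()) {          /* one construct for the compiler to check */
          process();
      }
  }
  void loop_with_gotos(void) {
  top:                                /* piece 1: the label                      */
      if (!more_input()) goto done;   /* piece 2: the exit test and forward jump */
      process();
      goto top;                       /* piece 3: the back-jump                  */
  done:
      ;
  }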

Your rigor examples are too narrow to use as a primary decision factor. Or at least, you haven't shown them to be a reliable lone deciding factor.

It's an enormous, irrational, and dishonest jump to go from that position to claiming that rigor doesn't exist in software engineering. Rigor doesn't need to be applied to any particular choice or decision for it to exist. It just needs to be applied to any one choice or decision. Such as deciding whether code might crash, or whether a particular looping construct should be written with gotos rather than the language-supported loop that does exactly what you need.

Your rigor examples are too narrow to use as a primary decision factor. Or at least, you haven't shown them to be a reliable lone deciding factor. While-loops are rarely written in isolation, so showing that a while loop has fewer parts than a goto-built while loop is almost a UselessTruth by itself. A good goto programmer may be able to combine what would be 2 while loops into a simpler goto arrangement for certain situations. (There's no evidence that goto's increase overall code volume. Thus, your "more discrete pieces" claim does not scale.) What is true in isolation may not apply to a live example. And just because psychology is difficult to measure is not a reason to ignore it. SovietShoeFactoryPrinciple again. Narrow rigor is not necessarily the same as thorough rigor or useful rigor. And I don't "expect" anybody to show all possible metrics. The more metrics and the more aspects they cover, the better. Your evidence gets a better grade. You have not even scratched the surface with your absolute claims, though. You over-magnify the worth of your few and narrow pet metrics. It's that simple. Anybody can find narrow metrics to "prove" their pet technology is "better" if they want.

I'm not 'over-magnifying' anything here. My claim is and was that if you're going to use gotos to implement a plain while loop, you're better off using the language-supported while-loop. Neat tricks that may or may not exist to combine loops with gotos wouldn't be the same situation, and thus would not be a valid counter-argument to the fact that while-loops are better than gotos if you're using the latter to implement the former, be it in isolation or otherwise. And the fact is that I only need the one factor, even if it is minor. I've no need to magnify; by nature, the gotos are accomplishing exactly the same thing as the while loops, and thus are not offering benefits over them regarding functionality. That is how the situation was declared, and it isn't an unrealistic situation, either: for, while, and until patterns were among the first I learned as a BASIC programmer in my childhood (see AreDesignPatternsMissingLanguageFeatures and PatternBacklash). You seem to be looking for me to provide a reason to abandon gotos entirely from the language, but I feel perfectly comfortable saying 'you shouldn't use gotos here because...' and 'you shouldn't use gotos there because...' and 'yes, jumping to the cleanup code here is a good use of goto where the other conditional constructs would make for more awkward code'. I do use gotos in my code, but I still see ObjectiveEvidenceAgainstGotos for gotos applied for a variety of common purposes. I.e. there is objective evidence that gotos aren't a panacea, the best cure for every problem, which still qualifies as objective evidence against gotos. "Primary decision factor" is something I consider irrelevant - it's a dream you're chasing.

But still you are only looking at a specific situation and a specific metric. Again, I don't disagree that one can supply rigor to very specific instances for very specific metrics. You merely confirmed something I already agreed with. It's the extrapolation that's the problem. You seemed to agree that a skilled goto-ist may be able to combine and/or simplify the goto-related code for more complex code patterns.

I am aware of some patterns that aren't well encapsulated by loops or if-then conditionals and such, especially regarding error handling and some wait-loops while waiting on resources, so I've never been promoting kicking the goto out of the procedural languages that lack continuations (though in many cases I would prefer to replace it with full continuations). But specific situations - especially those encountered often - are all that matter... both when identifying evidence for and evidence against use of gotos. The same is true for any tool.

And the 'specific metric' argument of yours is simply indicative of you forgetting the various other metrics already examined. I reduced it to one relevant metric for this conversation even after mentioning such things as loop-unrolling-optimizations elsewhere. Besides, one specific metric is all it takes unless and until you identify a relevant counter-metric... which I see you do below regarding the complexity of language argument.

And one can still argue that goto's keep the language simple - fewer tokens and/or fewer commands. Goto's can emulate while, for, until, case, etc. blocks all using the same building block. It's a matter of weighing this single benefit, "conservation of commands", against others. As soon as you add explicit while-loops to the language, the conservation score goes down.

Yes, one can argue that providing gotos but not loops keeps the language minimal and therefore 'simple' (and heading further in this direction are the TuringTarpit and the one-instruction-set computer). But language simplicity isn't of primary relevance; you can't even reasonably argue it reduces learning curves: simple languages simply force programmers to learn and utilize many patterns (while, for, until, case, etc.) to get useful work done. Programs and data, constructs built in the language, are what should be kept simple (within the limits of EssentialComplexity). These programs are made more complex, more expensive, more prone to breaking during development or maintenance when you must implement and maintain unrefactorable design patterns to get useful work done (i.e. DesignPatternsAreMissingLanguageFeatures). This is once again the huge difference between simple and simplistic - between paying up front and paying out the back: simple implies the former and simplistic the latter. Of course, if the language came with another orthogonal feature that allowed programmers to encapsulate and refactor 'gotos' into while loops and for loops and what-have-you patterns-of-the-project, then you could make a very reasonable argument that having language-primitive for loops and while loops and such would be of very little profit (because you could refactor those patterns into a library). RealMacros might qualify as such an orthogonal language feature.
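
For instance, here is a hedged C sketch of that pattern burden (op, a, b, ADD, and SUB are hypothetical placeholders): the 'case' pattern that a goto-only language forces every programmer to hand-build and re-recognize becomes a single refactorable construct in a language with switch.

  extern int op, a, b;                    /* hypothetical inputs          */
  enum { ADD, SUB };                      /* hypothetical operation codes */
  int dispatch_with_gotos(void) {
      int result;
      if (op == ADD) goto do_add;         /* the "case" pattern, hand-made each time */
      if (op == SUB) goto do_sub;
      goto do_default;
  do_add:     result = a + b; goto done;
  do_sub:     result = a - b; goto done;
  do_default: result = 0;
  done:
      return result;
  }
  int dispatch_with_switch(void) {
      switch (op) {                       /* the same pattern as a language construct */
      case ADD: return a + b;
      case SUB: return a - b;
      default:  return 0;
      }
  }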

That's the point, we are weighing factors against factors. And "learning curves" depend largely on psychology. You've un-argued yourself again.

Keeping the language simpler, however, is not a valid factor when considering whether the language is better for programmers, who only benefit from keeping the programs simpler. Nothing in all discussions prior to this implied that the question of whether gotos were better was about whether they are better for the language implementers. You can't just weigh factors against factors, TopMind. You need to choose valid factors FOR a particular goal. "Better for what?" is the only relevant question, and yet you keep attempting to dodge it. Anyhow, when you claim 'psychology matters' it has a certain connotation that I don't believe applies for learning curves. Learning curves, while studied in psychology, are not at all 'subjective', which is what you imply by saying they depend on psychology. If you did intend the latter, I don't believe you at all. If you meant the former, take care to avoid equivocation, and I'll note my own skepticism: what I've read of learning curves (in high school AP Psychology and college Behaviorism courses) mostly indicated dependence not on the individual so much as the subject matter.

I didn't mean for language implementation to be an issue. There is less "language" for a language learner to learn and remember if there are fewer language atoms. Old-style BASIC was popular largely because it had few atoms. True, it didn't scale well, but so far the reasons have been difficult to rigorously quantify. Whether fewer atoms makes the language "better" or not depends on how much weight we put to each of the factors/metrics.

And what I was stating above is that your argument (which I apparently anticipated quite well) doesn't hold any water when it comes to arguing about learning curves. You can't reasonably make that argument because, if you need to solve the same set of problems, a language with finer language 'atoms' simply shifts the learning burden to the construction of patterns using those atoms and use of the libraries. The learning curve for the programmer is still very much there; it doesn't diminish for smaller languages. And I doubt your statement regarding BASIC's popularity is more than sheer conjecture.

But that's an observational anecdote, not something quantified and properly weighed against others. While I tend to agree with it, such agreement does NOT make it rigorous science. Further, we don't have enough info about whether most goto's must or did follow the patterns we now call while, for, case, etc. There may be a fair number of worthy code-saving shortcuts that gotonauts used that the block-based set lacks. --top

You're attempting to leverage a bit of your own conjecture (top hypothesizes that smaller language = less to learn) against an observational anecdote (from someone who has both learned and taught the subject). You really should check out the relative positions of those on that EvidenceTotemPole you devised. Regardless, my stated claim (the one you made so much noise rejecting) was that you cannot make a reasonable argument about learning curves - which you must concede to unless you can provide this 'rigorous science' you're claiming I lack. And info is readily available on how gotos were used in the day they were the most common control mechanism, but I guess you're not enough of an EternalStudent to have read ye'old programs written in the back of those old computing magazines. In any case, code-saving techniques and various other PrematureOptimizations? remain of very questionable value to the programmer. Where's the science that determines 'worthiness' of code-saving shortcuts?

You keep incorrectly dancing back and forth between the forest and the trees to suit your argument. Most would agree that a language with fewer parts is easier to learn with all OTHER factors being equal. However, they are not equal in practice. It's like judging a basketball player only on their "assists" stats. If they are a poor shooter or poor on defense, then assists may not make up for those other factors.

The forest and the trees? If I'm dancing between these metaphorical things, whatever you presume them to be in this situation, I'd imagine it's because they BOTH support my argument and I'm just trying to hit all relevant points. All 'OTHER' factors being equal includes the programs you need to write in the language. When you have a language with fewer parts, I'll agree that it's likely easier to memorize all 'parts' of that language... but having memorized the parts does not mean you've learned the language. UnLambdaLanguage has, for example, just five 'parts'. It shouldn't take you more than a few minutes to memorize them all, and the language is TuringComplete. But there are only two proofs that you know a written language: first is your ability to express things in it, and second is your ability to comprehend such expressions. I very seriously doubt you'll be able to express a solution to the OddWordProblem or comprehend such a solution merely by virtue of knowing the five 'parts' of the UnLambdaLanguage. All OTHER factors being equal, I seriously doubt that people who actually think about language and its purpose (which apparently excludes YOU) would readily agree that a language with fewer parts is easier to learn - there is simply no evidence to indicate so, and there is plenty of reason to think otherwise.

That may be an extreme example. One can prove just about anything with an extreme example.

As a person who distinguishes statements as being 'true' or 'false', I'm inclined to believe that if a statement isn't valid at its extremes, it isn't valid PERIOD. Forall X P(X) can be disproven by finding any one X such that ~P(X) is true. Though X might be categorized as an 'extreme' example, it remains a sufficient example.

The GOTO system in BASIC was generally simple, intuitive, and easy to learn. In my personal opinion/observations, there is often a "sweet spot" where there are just enough language or tool atoms but not too many.

Most language designers, including myself, believe the same. The EinsteinPrinciple is a guiding principle of language design - AsSimpleAsPossibleButNoSimpler. But that is not an argument that smaller languages are 'easier to learn'; it is more an argument against unnecessary complexity (e.g. exceptions to orthogonal properties, unusual syntax, etc.) and against undesirable simplicity (e.g. MissingFeatureSmell (inability to refactor either entirely or without leaking implementation details) and other LanguageSmells).

In many systems, IT or otherwise, kowtowing to any one factor tends to skew or mess up the whole thing. But just because a factor can do this if cranked all the way in either direction does NOT mean it is a "bad" factor. MOST factors will probably mess up the total if cranked all the way up or down.

For example, if "cost" were the overriding factor for rockets, then reliability, safety, and environmental issues could be sacrificed. Few would welcome such dedication to cost at the expense of these others. Let me ask again just to make sure. Do you agree or disagree: "All else being equal, a language with fewer atoms is better." Your example failed the "equal" part because intuitiveness was harmed by having too few atoms. Cranking the atom knob too low appears to have hurt "intuitiveness" for that sample. (There may be simple languages that don't, but none have been presented here for testing.) --top

I'll disagree. If all else is equal (including 'intuitiveness' and 'learning curves' and 'optimizability' and anything else you care to name), having fewer atoms is NOT better. There is no intrinsic or immediate extrinsic value to programmers in having more or fewer language atoms, and so all such value must logically exist within the 'all else'. Of greater concern (under such guides as EinsteinPrinciple) is whether there are any 'unnecessary' atoms, such as atoms that could be entirely refactored into a library by use of the other atoms (see notes on LanguageIdiomClutter).

I was hoping to find something we agree on. But if you don't agree on the "fewer atoms" I won't push that issue for now. But you still have not shown sufficient rigor. Your arguments above still depend on human psychology. The computer does not "care" about human-invented idioms.

Please clarify: for which arguments, in particular, have I failed to provide sufficient rigor? and which ones, in particular, depend on human psychology?

This one, strictly as stated, was proven with sufficient rigor (and without dependence on human psychology) in the original ObjectiveEvidenceAgainstGotos page. You attempted to imply I was stating more than what was literally written, but you were in error to do so. --BlackHat

Well, I missed the "sufficient rigor". I only saw anecdotal assumptions about what trips up people and what doesn't. While I may agree with some of these observations as behavioral tendencies, I don't have "rigorous" proof for them. --top

Quote: "Assuming everyone makes errors (which is true in all observations thus far) the potential error for a 'goto' statement is the full collection of labels accessible from the goto statement's context (which, depending on the language and compiler, might be a single routine or the whole program). Thus, when dealing with common classes of 'goto' patterns (including loops), avoiding use of 'goto' can automatically avoid a variety of potential errors." The potential to be tripped up is NOT subjective and does NOT depend on psychology, and doesn't require any more rigor than it takes to point out ONE example that humans can be tripped up (one example IS sufficient rigor to prove possibility and non-zero probability). Risk is determined by potential for and probability of error. All the above conclusion requires is that gotos introduce a potential to be tripped up that does not exist with the simple while loop - i.e. a non-zero probability of error vs. a zero-probability of error. In this case, said probability exists at the point one applies the 'goto' to return to the label indicating the top of the goto-implemented while-loop: using gotos, one can potentially goto the wrong label. This qualifies as 'objective' 'evidence' against gotos... admittedly in a rather narrow scope, but evidence never needs to stand alone: evidence for any assertion is something that accumulates. --BlackHat

We've been over this already in ObjectiveEvidenceAgainstGotos. One can also mis-nest block-enders in blocks. I've done it myself multiple times. --top

You can't mis-nest block enders and remain within the programming language. Doing so introduces syntax and parse errors. This is also something "we've been over already in ObjectiveEvidenceAgainstGotos". --BlackHat

For one, you are making assumptions about the specific capability of a language. Second, one can still mis-nest stuff and have it compile if the total ender count is the same. Third, I prefer "end-while" instead of "}" because it's easier to visually match. (And loop enders can still be mixed up with it.) These, however, are personal preferences. We cannot assume they are universal without rigor. --top

For one, they are correct assumptions about the capability of any formal language with block structures. Second, if the enders are identical and the ender-count is the same, then one has not mis-nested the language: the error IS NOT one of 'nesting'. You might commit an alternative error of placing the end-of-block in the wrong location but that has an identical error for gotos: putting the end-of-loop goto in the same wrong location, and, as gotos are at least equal sinners to block control structures in that regard, arguments along that line in defense of gotos will be fruitless. Third, your whole 'third' point is irrelevant to the argument, which is independent of concern for "end-while" vs. "}" and such things as "visual matching" and "preference". --BlackHat

I'll revisit the first issue later. An explicit label may reduce the chance of mis-matching the enders of different blocks/loops. All the block-enders being the same increases the chance of mistaking one for the other when writing or inspecting code. The eye cannot tell one "}" from another "}" without studying the nesting and context, usually via indentation. (One can indent goto'd loops in most languages also, by the way, although it was not often done in practice, perhaps because the column width was limited in older languages.) The probability of one human error over the other depends on human psychology. --top

I'm not making any arguments for or against labeled block-ender structures and their possible merits towards probabilistic reduction of human-parse errors. I'm quite aware of the possibility for labeled blocks with labeled terminals, but such discussion is simply irrelevant to the above argument. I think it worth noting for your edification that something like if(C) BODY1 while(D) BODY2 end-if BODY3 end-while would be identified as a parse error for a block-structured language. The potential for this parse-error is what breaks any analogy one might attempt to construct between labeled block terminals and gotos. --BlackHat
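
A hedged C sketch of why that parse error matters (c, d, and the bodyN routines are hypothetical): the overlapping structure if(C) BODY1 while(D) BODY2 end-if BODY3 end-while cannot be written with nested blocks at all, yet its direct goto translation compiles without complaint.

  extern void body1(void), body2(void), body3(void);   /* hypothetical */
  void overlapping_flow(int c, int d) {
      if (!c) goto end_if;       /* "if (C)"    */
      body1();
  loop_top:
      if (!d) goto end_while;    /* "while (D)" */
      body2();
  end_if:                        /* "end-if" falls inside the loop body */
      body3();
      goto loop_top;
  end_while:
      ;
  }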

But there are potentially other while-loops with which to confuse oneself. Second, that's only true if we assume "end-x" style block-enders. The most common block languages use "}", such that the chance of mentally confusing an if-block-ender with a while-block-ender is much greater. If your argument only applies to end-x style blocks, then please clarify that narrowing of your claim.

The above argument does not depend upon or apply only to end-x style blocks which, as I've stated again and again and again, are completely irrelevant. The potential for going to the wrong label at the end of a goto-built 'while' loop is eliminated even if you use what you presume to be the confusing "}" as the only block terminal for a language-supported while-loop. In a more general sense, your whole line of argument is irrelevant: the issue of which block-terminals most reduces confusion among human readers is simply of no concern to the question of blocks vs. gotos. By analogy, you're quibbling about the color of the wallpaper when the discussion was about whether we should erect a wall. --BlackHat

I invite you to flesh out statements such as, (quote) "said probability exists at the point one applies the 'goto' to return to the label indicating the top of the goto-implemented while-loop: using gotos, one can potentially goto the wrong label." (end-quote) What entity exactly is performing the action of "goto the wrong label"? Is it human? Or something else? If human, how is the "wrong"-ness not about "confusion among human readers [including code typists]"? --top

Sorry for the confusion. It is the human programmer that has the potential to specify that the program shall goto the wrong label when writing the code. This is the error and risk that does not exist for otherwise equivalent block-structured code. Concerns of readability are not relevant to the argument. Concerns for greater or lesser non-zero probability of error caused by degree of confusion are also irrelevant, as the argument concerns itself only with "potential" for certain classes of errors (zero vs. non-zero probability). --BlackHat

Humans don't "go to", the computer does. What you really mean, if I interpret it correctly, is that a human being mis-matches the GOTO with the label in their MIND and thus keys it wrong in the code. The resulting code pairing does not reflect their MENTAL intention of how the pairing should have been. However, the same human may also perform the error of mismatching block enders, such as "}". I don't see how you can count one without counting the other. --top

Humans and computers are both readers of code, and both 'goto' labels while reading or understanding code. But, as noted above, the argument concerns writers of code and the potential to commit error when writing goto statements, not readers of it. For example, when writing code there is no potential for mismatching block-enders such as "}" - if you accidentally flip "}" with "}", there is no error and the code looks exactly the same. If you put "end-x" instead of "end-y", or forget a block-terminal, you've a parse error (technically outside the language) instead of a logic error. It is true that parse-errors are still caused by human mistakes influenced by human minds, but the potential for parse-errors may easily exist whether or not you use gotos, and so cannot be qualified as a point for or against gotos... and so it would be a mistake to 'count' it for this argument. --BlackHat

I am not talking about flipping the same bracket with another same bracket at the same spot. I am talking about mistaking one for another at a different *position*. I've made that mistake multiple times when working with C-style languages. It does happen, and can even compile depending on the nature of the mistake. --top

Ah. So was I. What a coincidence. I must admit to some curiosity as to how, when writing C-style code, you flip "}" at one position for "}" at another position and call it a "mistake". --BlackHat

Labels cannot be identical (at least within the same routine), so one cannot confuse one same-looking label with a different same-looking label and still pass the compiler (since you are using compiler checking as an assumption, which may be flawed, but I won't challenge it right now). Now it is possible to confuse similar-looking labels, but that is a different kind of error. One is positional/context confusion; the other is misreading of label (or block-ender) content. --top

Labels not being identical is what introduces a potential for error when writing code. If you're using block-structured code, this becomes a parse-error (because the parser can recognize the mistake - doesn't matter whether a particular parser bothers doing so). If you're using goto labels, this becomes a runtime error (because nobody - compiler, maintainer, etc. - can recognize the mistake by looking at the code alone). Whether the labels look similar or have a hamming distance of 22 is irrelevant to the potential for error because human errors exist for more reasons than just typographical mistake. --BlackHat


Example 637:

  // Version 1
  while (x) {
  ...// loop body
  // <--- forgotten loop block closer should've been here
  ...if (y) {   // note incorrect indentation
  ...if (z) {
  ......// if z body
  ...}
  } // <--- spot "E"

One gets a syntax error message regarding block nesting and then visually reviews the code. Because spot "E" looks like the proper end of the loop, we make some adjustments to the IF statements to put the loop ender at the (bogus) new location:

  // Version 2
  while (x) {
  ...// loop body (per original intent)
  ...if (y) {
  ......if (z) {
  .........// if z body
  ......}
  ...}
  }

In short, our indentation mistake at "if (y)" misled our eyes.

(Dots added to prevent TabMunging)

Please clarify how these above examples are of any relevance to a discussion involving gotos vs. block-based control structures. Forgetting to place a critical part of the loop, or mucking it up when attempting to repair it because you've forgotten what you intended the code to do by the time you attempt to parse it, are hardly unique to block structures... but only the block structures give you immediate syntax errors upon doing so. Indentation errors, too, are not a goto vs. block error.

Labels are not identical to each other, so it's easier to check what matches what without relying on indentation. If the indentation is off, curly brackets create more confusion because they are all the same.

Which, again, is NOT a block vs. goto issue. There are languages with named blocks if you wish to reduce human confusion on reading block-terminals. These languages still enforce proper nesting. They also tend to allow one to 'continue' or 'break' out of other than the immediate loop.

Humans depend on indentation to visually match them, and most rational people will agree with me on that.

Still again, NOT a block vs. goto issue. People freely indent goto-based code, too.

I'm not saying labels don't have their own problems, but block braces are not confusion-free. Thus, one is forced to compare one set of mistakes against another set of mistakes. Your Boolean interpretation is flawed.

Only half correct. One is forced to compare a set of mistakes against a superset of mistakes which, while qualifying as "another" set of mistakes, is still easily comparable.

In version 1, we'd have a label that we could check against existing IF's (such as "if(!y)goto foo") or loops. It adds an extra layer of protection that braces lack. Put another way, labels document intention better in some ways. We could draw lines (manually or in an IDE) between the labels and existing IF-GOTO's and see that the bottom label is "taken". (Yes, labels can be shared, but at least we know what is already using them.)

Labels document intention no better than putting comments in the code. The content of label names, at least in most languages, has no meaning to the code - i.e. precisely the same amount of meaning as the content of comments. As a consequence, they also add no more protection than comments.

Incorrect. Even if we only use numeric labels, they still carry info that braces lack.

Read more carefully. They might carry info that braces lack, but NOT to the code. The content of labels has no more meaning to the code than do comments. And, if you wished, you could always put comments on braces. "} // end while(characters left)". That offers all the same protections that labels do, and still eliminates classes of errors that gotos allow.
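
A small sketch of that suggestion (chars_left, consume_char, is_digit, and read_number are hypothetical): the brace comments carry the same human-readable intent a label would, while the braces themselves still give the compiler nesting to check.

  extern int  chars_left(void), is_digit(void);        /* hypothetical */
  extern void consume_char(void), read_number(void);   /* hypothetical */
  void scan(void) {
      while (chars_left()) {
          consume_char();
          if (is_digit()) {
              read_number();
          }                      // end if (is_digit)
      }                          // end while (chars_left)
  }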

Randomly adding or subtracting a brace can change the nesting. Adding or subtracting a random label will NOT change any existing GOTO destinations or labels. Thus, they are more "robust" in some ways. (Yes, the compiler may catch a bad brace, but it may not tell us what the writer's intent was.) --top

You should add or subtract braces by associated pairs, not just "a" brace. And removing a random label certainly requires removing all associated 'goto' statements as well. And the problem of missing code not indicating intent is hardly unique to brace code - missing code is, by nature, never available to compiler or maintainer to indicate intent. I'm curious as to exactly how you're determining "robust" such that the gotos are more "robust" than the brace code. --BlackHat

A random change in braces can throw off the nesting of multiple blocks. Labels are more explicit because "from" and "to" (for the most part) don't depend on context (nesting). "Corruption" of the code text may make a "from" without a "to" or vice versa, but the damage is limited in scope. Brace corruption can knock many adjacent blocks into a confused state.

You make many suspiciously unjustified claims. I expect code is at least as corrupted by missing a 'goto' as it is by forgetting to end a loop, but with the goto approach you won't receive any help in locating the error. By what do you justify your claim that "the damage is limited in scope"? And what do you consider a "confused state" for a block? --BlackHat

The help you receive in locating the error depends heavily on the language. A language such as Lisp may be more difficult for a machine to figure out where the problem is because its grammar is simpler than, say, C's, leaving fewer of the grammatical landmarks that complex syntax tends to have.

The degree to which you receive help does depend on the language, I agree. But the factor of help here is on the order of infinity between blocks and gotos.

The labels are the "help". The jump-graph points have explicit unique identifiers. It already does what a compiler can only guess at with blocks. There is no need for a compiler to guess what is already explicitly stated with labels. It would be redundant.

Sigh. You can't justify labels as "helping" in the same situation you're setting up for block-bracketed languages (random alterations). The language/compiler/interpreter/maintainer/IDE can't support you AT ALL when the labels you goto are either explicitly wrong, in the wrong location, or not there at all, which correspond to various errors you've been attempting to arrange for braces. For blocks with braces, among these cases the language won't support you only if you place the braces in the wrong locations. --BlackHat

With nesting, one blow can knock out the "alignment" of entire branches because nesting is context-based. Goto labels are for the most part not context-based, so a mutation tends to be localized in its damage. But anyhow, you are comparing apples to oranges. Unless we agree to a classification system for error "kinds", this is going nowhere.

If you're only talking about "damage" to the syntactic structure and ability to parse (as opposed to damage related to creation and promulgation of error), your statement is reasonable. However, such a narrow scope somewhat obviates the value of your statement. It has been observed (rigorously, with statistical studies available as performed for promotion of CMMI) that the cost to repair a software error increases by orders of magnitude as a project moves forward in its lifecycle from design to development to deployment and maintenance. As consequence, making "damage" obvious as early as possible is (long term) the cheapest way to program, offering the best chances to keep the project under budget. This observation drives, among other things, use of UnitTests and TypeSafety. Keeping all this in mind, the sort of obvious damage caused by failures of syntactic 'alignment' seems to be a GoodThing: 'obvious' damage can be detected very early, likely even by an IDE with syntax highlighting and brace matching while the code is being written and long before the code is passed to an interpreter or compiler.

Certain causes of "damage" resulting from corruption of gotos can be similarly detected, such as going to a label that does not exist due to a spelling error or some other cause. But there is great risk of many goto errors (going to the wrong label, forgetting to goto a label at what would be the end of a block, etc.) surviving until later. These are errors that would result in the sort of obvious "alignment" damage you describe if you attempted to create these errors with block-structured control. These goto errors cannot be detected by compilers or interpreters, though I do expect redundancy introduced by unit tests or DBC or good comments for maintainers could help ameliorate them.
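
A hedged C sketch of that distinction (work_remaining, do_step, and release_resources are hypothetical); the commented-out lines name the two classes of mistake, only the first of which a compiler can catch:

  extern int  work_remaining(void);                     /* hypothetical */
  extern void do_step(void), release_resources(void);   /* hypothetical */
  void run_with_gotos(void) {
  retry:
      if (work_remaining()) {
          do_step();
          goto retry;         /* intended back-jump                                        */
          /* goto rety;          misspelled label: rejected at compile time                */
          /* goto cleanup;       wrong but existing label: compiles, fails only at runtime */
      }
  cleanup:
      release_resources();
  }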

If this is going to turn into yet another "strong-typing saves the world" debate, then I'm done here. Your lines of evidence are growing longer (more links required), more complicated, and more anecdotal. The "rigor" is gone. (Rigor was the original goal.) Your initial "zero versus non-zero" metric is out the door. It's now an opinion-fest. And your CMMI argument appears tied to the ability of the developer to notice, find, dissect, understand, and repair the "error". We are back to psychology-land. CMMI repair costs being based on rigorous studies does not by itself tie goto's to CMMI. A rigorous argument is as strong as its weakest link. --top

None of your statements above are valid. I was not going into the merits of strong typing. Long lines of evidence don't introduce dependencies, just more evidence. The "zero versus non-zero" metric still stands, exactly as it did before, for certain classes of error - i.e. the set of errors one can make with gotos remains a superset of those one can commit without them (while remaining within the language). The CMMI data is about costs to repair errors as observed in real life, but the 'ability' to notice, find, and dissect the problem is an information-theoretic problem, not one merely of 'psychology'. Psychology remains a term you utilize for excessive equivocation, since you are not careful to discern information processing and computation from subjective opinion. At this point, I cannot consider you a reasonable debater.

Enumerate your alleged error sets, please. And it's not a superset because if random code is interspersed into a program (such as a paste-typo), labels carry more information to help put the code back the way it was. It's easier to "throw off" nesting than labels. The "from" and "to" ID's provide info that mere nesting does not. Nesting blocks depend on context, and random code damages such context more than it damages unique position ID's (labels).

I named a particular class of errors: at the end of a loop, going to the wrong location. Period. And with gotos, you are capable of all the same errors as with blocks because gotos can exactly represent every block error (due to their wondrous flexibility), but not vice versa. Your statements about labels helping one put the code back the way it was have NO basis in reality; if anything, it is impossible to tell (without comments) that the bad label or bad goto wasn't supposed to go where it says it goes. Errors that occur due to bad placement of block endings are just as easily performed with bad placement of gotos.


PageAnchor: Middle_Mutate

Suppose we split a basic IF statement into a beginning, middle, and end. Now suppose the middle was corrupted with random code (paste error). The middle excludes any parts of our IF construct. With labels, we know that either the jump out will still work, or there will be a syntax error (such as "duplicate label"). With nested blocks, we have no known guarantees. Thus, we have some guarantees versus zero guarantees for this scenario. One is greater than zero.

Sigh. Again you present an unrelated (AKA 'Straw Man') argument in an attempt to diminish the rigor of the one provided earlier. In any case, even if it were related, your argument is incorrect. With nested blocks we do have known guarantees, such as the resulting structure must be balanced lest you result in a syntax error. Second, with goto code and nested blocks each, you're not getting any guarantees about behavior... certainly not about the 'jump out' still working. It won't work, for example, if it isn't being reached. Making arguments involving 'corrupted with random code' is a futile effort, but I do commend you for your valiant attempt.

I forgot the separator because domestic chores rushed my pasting; I apologize, and have added one. Being "balanced" does not guarantee correctness, and is a separate issue anyhow. And you are wrong about the jump-out. We know that will still work. But if blocky inserts end one block and start another (Example: "}while(true){"), then it can change the jump-out point. No changes in the central part can change the jump-out of the goto version. Well, I suppose we could close out the whole function and start a new one with the central portion if labels only have to be unique within a function and block-based functions are used, but that is doable with blocks also. It's similar to your "superset" argument. --top

The problem with your assertions is that an insert of some equivalent to "here: goto here;" or "while(true){}" or "crash now" will break any code. One could even randomly insert code that will simply force the condition for breaking the loop to be false. How (and whether) any segment of code works depends in part on its preconditions. Dead code doesn't work... doesn't even try. Thus I maintain: making arguments involving the behavior of code 'corrupted with random code' is a futile effort. And I never claimed that being "balanced" guarantees correctness. As noted, guarantees of correctness (which is behavioral, not semantic) are futile when dealing with 'corrupted with random code'.

But "here: goto here;" and "crash now" won't even execute if the condition is false (jump around), so it does not matter if such crap is put in the middle. No branch-related inserted code can "bust" the "not if" of the goto. However, branch-related inserts *can* bust the "jump around" condition flow. Thus, it has a small advantage in that regard that blocks don't. Yes, its small and nitty, but so is your "wrong label" metric. I'm just fighting nitty with counter-nitty to ruin your "subset" claim. Now you are forced to compare apples to oranges. --top

  if (! foo) goto jumpAround;
    // ...
    crash_and_explode();  // inserted code NEVER EXECUTED if not foo
    // ...
  jumpAround:    // label

  if (foo) {
    // ...
  }while(true){    // inserted code, changes ending of IF
    // ...
  }                // original ending of IF, now it's the ending of While

It's interesting how you keep implicitly denying the existence of labeled blocks. If I were to deny the existence of labeled gotos, forcing you to goto line numbers, your code would also break. Apples and oranges indeed. However, I do comprehend the point for which you are attempting to present evidence.

What? Labeled-blocks are half-ass goto's. If one can invent a language to get around whatever weakness one presents, this will never end.

Statements like that, TopMind, are evidence of your willful ignorance. I've been bringing up labeled blocks (which are certainly NOT 'half-ass gotos') since the start, but you've got your blinders on. If one chooses to cherry-pick what evidence they will accept, they can (and will) prove anything they desire... it is easy to prove that all objects are red if you only allow red objects into the evidence set. 'Objective evidence against gotos' should, by nature, allow ALL possible language features for comparison... blocks and labeled blocks among them. Anything else is cherry picking.

If you introduce labels, then you are up against your very own "wrong label" metric.

Incorrect. Labeled blocks are still syntactically required to balance, but must balance precisely with the labels. If you put the "wrong label", you still get the syntax error.

The fact that there are so many different ways to build branching mechanisms is not my "fault".

The fact that you use cherry-picking and often seek to ignore evidence that is validly within the scope of discussion, however, IS your "fault".

If it complicates your "rigor" attempt, that's your own problem. "Rigor" may require a lot of work to rule out different possible logic/proof paths. Calling your opponent "willfully ignorant" repeatedly is not a substitute for real work. --top

Your fallacy should be dismissed, not given weight. It doesn't complicate my "rigor" attempt except in the minds of those confused and fooled by straw men and cherry-picking. That you persist in providing it is not a substitute for cogent argument.

But it does not apply to common languages in use. That counts against "rigor".

This seems another nonsense statement to me. Can you establish that "rigor" for an argument about language statements in general is somehow dependent upon the "common languages in use"?

The fact that a language can be made to get around a problem has little to do with the existing language situation. And it makes the language more verbose. And, it is a different animal than what I had assumed the debate was about. Labeled blocks are a hybrid between goto's and blocks, and thus perhaps shouldn't even count.

It doesn't make the language more verbose than the gotos do. And labeled blocks aren't a hybrid between gotos and blocks any more than conditional blocks themselves (including loops and function calls and if-then statements) are a hybrid between gotos and conditionals and stack management. Saying it shouldn't count without firmer reasons than its 'hybrid' nature is a form of hand-waving and cherry-picking that puts your entire line of reasoning on a slippery slope into the FallaSea.

You haven't presented rigorous info that it "should" count, only personal opinion. It would not apply to C-style languages (or any of the other common languages), which are the predominant form of blocks today. (And I disagree with you about the verbosity issue, but will save it for later.)

Everything "should" count in "objective evidence against gotos" so long as it isn't semantically and identical to the 'gotos' and somehow replaces a 'goto' and provides some objective benefit by doing so. That seems pretty logical and straightforward to me; I assumed any rational person would see it the same. That's why (gotos + stack manipulation vs. function calls) is a perfectly legit line of argument, TopMind, even though you're cherry-picking by erecting "don't go there" zones all around it.

Please clarify.

Simply put, because labeled-blocks aren't exactly gotos, and because they can replace some gotos, they count within this topic. Period. Same goes for function calls and subroutines. Arguing they don't count would be cherry-picking and therefore fallacy.

And the C-style languages issue still seems irrelevant to me: you have yet to establish that "rigor" for an argument about language statements in general is somehow dependent upon the "common languages in use".

I did not claim rigor, so it's not my burden to show that all potential block approaches can or can't be "counted".

Technically, there is no "burden" to "rigorously" show that any given bit of evidence should or shouldn't count. Requiring it is a sleazy form of demanding an infinite burden of proof by the person presenting the evidence: "have you rigorously shown that the evidence counts?" "I don't think your rigor is good enough; can you rigorously show that you've rigorously shown that the evidence should count?" It's ridiculous. It's exactly the sort of behavior I'd expect out of a sophist. Merely having 'reason' to believe the evidence isn't 'irrelevant' or 'invalid' is enough to require its consideration until such a time as someone proves it irrelevant or invalid.

But rigor does require some iron-clad reasons for what language features to include and exclude for the comparison. The fact that you have to resort to bastardized blocks to defend your "superset" claims suggests you are in a bind. Why can't you do it for plain-old C-style? Your "evidence" suggests there's an upside to labels. Imagine that!

My claim has never been about the "upside" or "downside" to labels because LABELS ARE NOT GOTOS. Your attempts to unite them are invalid for whatever point you're attempting to make.

Gotos also don't require labels; gotos can utilize line-numbers. Now, if I recall correctly, your overarching argument is that no solution can be objectively proven 'better' than any other solution. It seems to me the best way to prove this would be to take what seems to be the worst possible solution you can imagine, and prove that it isn't objectively worse than anything else. So why do you resort to gotos with labels? Your line of argument suggests there's an objective "upside" to labels. Imagine that!

"Goto's versus blocks" was an informal statement that did not exclude labels as part of "goto's". We probably should have cleared it up up-front to avoid ambiguities and definition drift. We could invent a language that required nesting rules for goto's, giving us some of the advantages of nest-checking, but remain goto's. Nesting rules are not automatically excluded from a literal "goto" also. We could probably invent all kinds of cat-dogs here if we think about it enough and this will never end.

For example, we could invent a language where the number of used labels and number of goto's is pre-limited:

  if (foo) goto 123 label_limit=3 jump_limit=1
  //...
  123: //...

The "label_limit" clause tells how many referenced labels can be between the statement and the label ("123:"). The "jump_limit" tells how many Goto's are allowed between the statement and the label. This would increase the ability of the compiler to detect errors/mutations in the code.

True, and I'd fully accept that as useful discussion if the goals of our interaction regarded ways to enable greater analysis by compilers. But I must keep in mind that in this discussion, arguments in favor of some minute subset of 'goto' implementations doesn't qualify as an argument in favor of 'gotos' in general. Vice versa, I'm free to take the most general (e.g. completely unrestricted, unscoped, unlabeled) gotos when choosing to present ObjectiveEvidenceAgainstGotos, and, because they are still 'gotos', it all counts.

You should pay attention to the sacrifices you're making to attempt to win these battles, Top. You'll completely lose your war - in this case, the notion you're promoting that no solution is (overall) objectively better or worse than any other solution. Your argument suggests you should be willing to defend that even the stupidest of solutions is objectively neither better nor worse. E.g. you should defend that unrestricted, unscoped, unlabeled (line numbers only) gotos + explicit stack manipulation to perform function calls is not objectively worse than explicit, language-supported function calls. That "any other solution" component of your overall argument is what lets me bring up or invent as many "cat-dogs" as I wish (though I've chosen to bring up only solutions that do have implementations, lest I wander too far into speculation), yet ties your hands and prevents you from usefully inventing or raising "cat-dogs". You can do so, but doing so requires you abandon defense of your overall point.

You had to hang off of labeled blocks to get around the issue above. Thus, who is "making sacrifices"? The "H" word comes to mind.

Not me, Top. I didn't "need" to hang off labeled blocks since your entire argument was invalid for gotos in general (which include unlabeled gotos). That you raise a bunch of irrelevant arguments and I point out a way around them using a perfectly valid example of labeled block structures doesn't mean there was ever a valid "issue" at all. Mostly, there's just you... waving your hands... raising straw men... cherry picking... deluding yourself into thinking you've made a valid argument, and then making noises about how 'confident' you're feeling.

Around "It's interesting how you keep implicitly denying the existence of labeled blocks", you had to introduce labeled blocks to "pass" the metric I was using. Do you deny this?

I pointed out right at the top that your "metric I was using" was a straw man. You seemed okay with that, and simply broke off another section. I still consider the metric you are using irrelevant to the point you are making: i.e. even if I said, "Yes, Top, you are absolutely right!" it wouldn't be a victory for you at all. Might as well be claiming the sky to be blue as some sort of defense of your gotos argument. I merely informed you that if you're talking about some particular property of 'labeled' gotos that you can't guarantee with 'unlabeled' gotos or 'unlabeled' blocks, then it is a property of labels... not of gotos... and that labeled blocks would also have that property. The whole side discussion is irrelevant, but you keep fighting and fighting nonetheless as though it were critical to your argument.

If you can add odd stuff to your definition of "blocks" to pass tests, then I can do the same with goto's.

Sure. But you lose if you do so. You've taken a defensive position, defending gotos. That means I get to choose where to attack, and you've got to deal with it... introducing a bunch of "what ifs" and saying "look! I've fortified this position with some speculation! attack me here!" is silly. Thinking you've "won" something by doing so is just stupid.

Seems open-and-shut here. Without a limiting/constraining definition, it seems we are both able to over-engineer blocks and goto's to help them pass each other's little tests.

The thing is, for you to win your argument, you can't "engineer" gotos at all. If I attack unrestricted, unlabeled gotos, and you refuse to defend it, that implies you believe that labeled, restricted gotos are objectively better than unlabeled, unrestricted gotos, and therefore you've lost your overall argument.

And what do you mean by "any other solution component"?

Read again the sentence starting with "You'll completely lose your war", except this time don't ListenWithYourAnswerRunning.

I am not sure why you brought up function calls. Please clarify that portion. It's as if your mind made a U-turn, but your keyboard didn't keep up.

Since you can't recall the context after a few measly months, I'll remind you: in the original ObjectiveEvidenceAgainstGotos page you threatened to refuse conversation regarding evidence against unrestricted gotos and the use of gotos + explicit stack management (i.e. hand-implementing subroutines and function calls). I bring it up as just one example, among several, of you attempting to restrict the scope of what you mean when you discuss 'gotos', and thereby diminishing the value of ObjectiveEvidenceAgainstGotos as a testbed for arguments about things being objectively better or worse in general. I.e. you're essentially arguing: "let's use gotos as a testbed for evidence against gotos in general, BUT I'm going to refuse to listen to discussion about anything that sounds to me like a bad way of using gotos." Sure. Defend your "good goto methodology", leave the other ones to flounder, and keep pretending you've successfully defended your point that no solution is objectively worse than any other.

We are both guilty of not scoping the debate before it started. But in this case I was only doing what you were doing: adding features to goto's/blocks to work around a test. You added labeled blocks despite them being rare in practice. So, I added features to get around a test also.

Thing is, it's okay for me to do it. Bringing up labeled blocks is not inherently different than bringing up blocks in the first place; either can be used to attack. However, it is not okay for you to do it. Bringing up gotos-with-features does not allow you to defend gotos-in-general. The difference in our positions - attack vs. defense - prevents you from validly utilizing certain tactics. If you attempt to use them, your whole line of argument will be invalid and irrelevant.

Please clarify "goto's in general" and your attack/defense classification. I am just trying to deflate your "superset" claim here, not prove "net better".

I'd suggest that if you reference something outside of the current topic, you give it a PageAnchor with a name. Your jump was rather abrupt.

If you want to limit this to a certain kind of goto versus a certain kind of block, then please state it now (for consideration). Otherwise, if any kind of goto is allowed and any kind of block is allowed, then we will likely both have micro-metrics that back both goto's and blocks, and thus have to compare apple metrics to orange metrics.

Should you attempt to scope the debate to labeled, scope-restricted gotos, you've already dropped the ball: you can no longer be sure that this really works as a testbed for "net better" and such in general because you're too busy "bettering" the supposed target that might otherwise have proven "net worse". It's worth noting that "any kind of goto is allowed" really means that I can attack "any kind of goto" and you have to (if you wish to defend your point at all) defend "every kind of goto". I.e. picking and choosing which gotos you'll defend (e.g. via invention of new ones) isn't a valid defense of the ones being attacked.

I am *not* obligated to show "net better". I'm just showing that there are specific metrics that do favor goto's, at least certain kinds of goto's. This is to address your "superset" claim.

Claims about "certain kinds of gotos" DO NOT apply to "gotos" because "certain kinds of gotos" are not JUST "gotos". You can't defend that sort of equivocation. And I never said you were obligated to show "net better" - you only said you were attempting to provide a "testbed" for claims of "net better", so your only obligation there (if you feel any obligation at all to do what you'll say you do) is to provide a good testbed. That's where I said your approach is failing.


Another issue with limiting the discussion to "existing languages" to make the comparison "practical": if we are committed to looking at it from the practical angle, then the frequency of language usage should or could also matter. Thus, if you really want to play the practical card (to get around a stumper), then you need to be ready for the full consequences of opening that gate. --top

How foolish your statements are. I didn't play the practical card "to get around a stumper". I played it to limit my ways around stumpers. After all, NOT limiting myself means I get to use all speculative and hand-wavy forms of blocks and other goto-replacements, whereas you would still be either defending 'gotos in general' or (more likely) continuing your current behavior of making irrelevant arguments involving specific species of gotos ('irrelevant' because saying 'this particular species of goto doesn't have problem X' doesn't make problem X go away as 'evidence against gotos'). This could ONLY make things harder on YOU, not me. Something in your internal logic is broken if you think otherwise (not that this is news to me; your current behavior of making irrelevant arguments is proof enough of that).

You are projecting. You are the hand-waving one, not me. You add arbitrary changes when you are in a bind. Admit it! The "goto's in general" argument of yours does not hold logic water. Get over it. You spend more text talking about how evil and hand-wavy I allegedly am rather than solidifying your logic. It's getting overly, overly, overly redundant. --top

You are completely unaware of how much hand-waving you've been doing, Top, because you are ignorant of your own fallacy. Your attempt at "logic water" thus far has been some rather nasty urine, and I'm really NOT willing to "hold" it.


[Because blocks trivially segment a program into identifiably delimited areas whereas GOTOs do not, a program can be automatically abstracted into blocks, procedures, and two basic control structures without concern for individual statements, thus making it easier to perform (automated) reasoning about the overall structure of a program than is possible with GOTOs. This is objective evidence against GOTOs, by way of the Boehm-Jacopini Theorem.]
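
A minimal C sketch (invented here for illustration, not taken from any party above) of the structural difference being described: the same summation written once with a block-delimited loop and once with labels and jumps. In the block version the loop's extent is exactly the brace-delimited region, which a tool can recover by matching delimiters; in the goto version the extent has to be reconstructed from the pattern of labels and jumps.

  /* structured: the loop body is exactly the braced block */
  int sum_structured(const int *a, int n) {
      int sum = 0;
      for (int i = 0; i < n; i++) {
          sum += a[i];
      }
      return sum;
  }

  /* unstructured: same behaviour, but the "loop" is implicit in the
     labels and jumps and must be rediscovered by any analysis tool */
  int sum_goto(const int *a, int n) {
      int sum = 0;
      int i = 0;
  loop_top:
      if (i >= n) goto loop_done;
      sum += a[i];
      i++;
      goto loop_top;
  loop_done:
      return sum;
  }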

How is "easier" being measured? Where's the rigor?

["Easier" in the mathematical sense, i.e., fewer operations are required, in this case because the work of converting an unstructured program to its structured equivalent prior to analysis has already been done for you.]

And according to this, the theorem may be wrong: http://ecommons.library.cornell.edu/handle/1813/9478

[Interesting. I'll have to read the paper in detail, but it would appear at first glance that the Boehm-Jacopini Theorem is not strictly correct: there is a class of unstructured programs that cannot be represented as while programs (i.e., composed of sequential statements, while loops, and if-then-else constructs), though they can be represented as nested loop programs (i.e., composed of sequential statements, unconditional loops, and if-then-else constructs) with breaks. However, in connection with my point, this does not contradict the fact that structured programs are easier (see above) to decompose for analysis than the equivalent programs written with GOTOs.]
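
A small C illustration of the flavour of that distinction (an invented example, not the paper's construction): an early exit from a nested search, first written with a goto-style escape (the job a multi-level break does in languages that have one), then rewritten using only while and if, which forces an auxiliary flag that every enclosing loop condition must consult.

  /* early exit via goto (elsewhere, a multi-level break) */
  int find_goto(const int grid[][4], int rows, int key, int *out_r, int *out_c) {
      for (int r = 0; r < rows; r++) {
          for (int c = 0; c < 4; c++) {
              if (grid[r][c] == key) {
                  *out_r = r; *out_c = c;
                  goto found;
              }
          }
      }
      return 0;
  found:
      return 1;
  }

  /* while/if-only rewrite: the escape becomes an explicit flag that
     every enclosing loop condition must consult */
  int find_structured(const int grid[][4], int rows, int key, int *out_r, int *out_c) {
      int found = 0;
      int r = 0;
      while (r < rows && !found) {
          int c = 0;
          while (c < 4 && !found) {
              if (grid[r][c] == key) {
                  *out_r = r; *out_c = c;
                  found = 1;
              }
              c++;
          }
          r++;
      }
      return found;
  }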

But the same could be said about LISP versus almost every other language. Its simpler syntax makes automated parsing and analysis easier. This is one reason why it's used in AI projects requiring machine program generation. However, translating that to clear-cut practical differences is the hard part. This is because psychology matters.

[Irrelevant. If the intent of this page is to show that there is rigorous objective evidence against GOTOs (as an example that rigorous objective evidence can be found for or against any 'X'), without reference to psychology, then I have provided it. If the intent of this page is to argue endlessly about minutiae, intending to demonstrate that no 'X' is universally objectively "better" (for some as-yet undetermined notion of "better") than 'Y', then it is a trivial conclusion. Similarly, I'm sure you can find specific cases where it is better to be crushed by a brick than have a nice meal, but that way lies infinite and pointless quibbling. The overall value of any given 'X' is determined by circumstances, but enumerating every possible circumstance is a fool's errand.]

"Decompose for analysis" is still fairly vague and not necessarily tied to reality because people don't use machines for such currently.

[It need not be more specific for my argument to hold true, and may be as simple as mechanisms to, for example, provide block highlighting or automated diagramming in an IDE as an aid to visualisation.]

Further, "less code" (less math) is similar to the "less language atoms" metric I used before, which you dismissed. I could dismiss "less code" for similar reasons.

[I don't recall having any discussion with you about "less language atoms". I believe you're confusing me with someone else. "Less code" and "fewer steps required for analysis" are not the same thing. What I have discussed with you before is your lack of formal education and/or background in computer science and apparent refusal to acquire these. It is an ongoing hindrance in these discussions. It's like trying to discuss lunar geology with someone who believes the moon is made of cheese.]

How are you measuring "fewer steps" if not some form of code/notation volume? And if you ONLY wish to converse with those familiar with academic research vocabularies, then I suggest you end this debate. I also suggest you perfect your articulation skills beyond an academic setting if you wish to sell your GoldenHammer to practitioners. The world will not always want to work with you on YOUR terms. I'm just the messenger. Your impatient, aggressive style will not achieve what you think it will. --top

[I don't have to measure the number of steps because, as I stated above, the work of converting to structured form has already been done.]

Being convertible by itself doesn't tell us much. Nor does "suitable for structural analysis" if it's not a tool actually used by practitioners but merely an academic toy. Where's the rigor for "suitable"? And goto pattern analyzers are also possible. I can give you examples if you want. You didn't seem to disagree with this, but merely said the algorithm is shorter for blocks, which brings us back to the "code size" metric issue.

[The "algorithm", as such, is certainly shorter for structured programs. Because they're already structured, it isn't needed, hence its "code size" is zero. The problem with most "goto analysis" is that the output tends to be a flowchart -- which is little different, at least conceptually, from the input. Analysis of program structure, however, can produce (for example) nice mechanisms for automatic highlighting/selection of blocks or high-level diagrams, with minimal parsing complexity. It's also used, as noted below, in compiler/interpreter optimisation. As such, this isn't an academic toy, but the foundation for useful components of real-world development environments and compilers.]

[Academic "research vocabularies" are not what I'm talking about. I am targeting only the level knowledge that would be expected of any educated practitioner in our field. By way of analogy, if I were a medical researcher it would be almost impossible to discuss medicine with an uneducated nurse or doctor; indeed, such a "practitioner" would be in violation of law in most countries for assuming such titles, let alone the fact that there would be no common ground -- except symptoms -- to discuss. As for an "impatient, aggressive" style, please do not mistake conciseness for aggression or impatience. It is neither.]

Good doctors can explain just about anything if both parties are patient enough (pun). Sure, chemistry is sometimes hard to explain, but there are often common counterparts to compare the general process to. For example, proteins tend to cling onto other proteins when there is kind of a key-to-door match between them. If the key or lock is damaged or changed, then the two may not fit so well. The wrong medication is like the wrong key: it won't fit the target proteins properly and thus not lock onto the right "door" to do its work. People in academia too long often lose or never gain the ability to communicate with non-academics and get frustrated easily.

[The problem is not with explanation, but the fact that you attempt to argue points -- some of which are trivial, and some of which rely on in-depth theoretical understanding -- from a lack of understanding and an apparent unwillingness to gain it. It puts the burden of explanation entirely on your opponents -- which is unfair to begin with -- and even then you make little attempt to grasp the material, no matter how it is presented, and even when it is presented again and again and again.]

Once a newbie office worker asked me what the difference is between "filtering" and "sorting". At first I was talking techie gobbledy-gook, but then switched to a laundry analogy. Sorting is like grouping the entire wash results while filtering is selecting only some of the clothes, such as only washing the dark colors. It clicked.

If I have difficulty communicating to non-techies, I will tend to blame myself and think about better ways to describe it rather than call the user "dumb". If I try hard enough, usually I can find a way to communicate what I need to. In explaining the difference between "fat-client" and "fat-server" apps, I've resorted to using elves. In client-centric apps, most of the elves are in your own garage and receive raw toy materials from the north pole. In server-centric apps, most of the elves and raw material stay at the north pole and they build-to-deliver what you ask for over the phone. One does not have to worry about elves with different skill-sets or tools at each different location (version variation) if they are all at the north pole.

[Furthermore, I don't know what GoldenHammer you claim I'm selling. Such a comment only serves as further evidence that you're either (a) not reading what I'm writing, or (b) understand so little of it that there really is no point to this discussion.]


I believe most people prefer blocks because they improve readability. Error prevention is secondary to them. I know you like thinking a lot about compile-time checking, but that does not mean it's the reason that others use or care about something. We may get further if you focus on that aspect rather than error prevention, which looks like a dead-end anyhow.

I don't argue readability because I have no way to measure it. As such, I can only consider any such argument I made to be utterly bogus. Unless you've the data, I'd say the same thing about any such argument you made. And the primary argument above doesn't focus on compile-time checking (syntactic prevention or probabilistic reduction of certain classes of errors != compile-time checking).

Being hard-to-measure is not necessarily the same as "not important". SovietShoeFactoryPrinciple again.

Perhaps. But you or I or anyone else would still be guilty of fraud if we used 'readability' to push a conclusion without having the data or evidence. Perhaps you could measure 'errors introduced in maintenance' or something as evidence of readability and its converse. Benefits being entirely subjective, with no objective component anyone can point at as having improved (e.g. productivity), IS the same as "not important". Psychology can't be said to matter except where it provably (objectively) impacts something objective.

I expect you could justify claiming readability as 'important' by use of reason, but you still need a few metrics to operate as 'evidence' of readability. Could you justify by use of evidence and appropriate logic that goto code is more or less 'readable' for business code? If not, you still have no place using 'readability' in any argument about the merits of gotos vs. block code. At best it will just be conjecture.

Again, you are committing the fallacy that "difficult to measure" means it's a factor that should be completely ignored when judging tools.

There is no such fallacy. There is, however, a simple computation-theoretic fact that might loosely be described as GarbageInGarbageOut. Factors you have not measured (directly or indirectly via logic) or for which you have poor measures should be completely ignored when judging tools.

I disagree. Again, most businesses would fail if they took that route. It is also, I believe, one of the reasons why FormalMethods have failed to catch on. Over-optimizing for what can be easily measured may harm what cannot be easily measured. Again (who deleted it first?), techniques for science and techniques for tool success may differ.

Your disagreement is immaterial. GarbageInGarbageOut is still a fact. And your claim that businesses would fail if they ignored garbage information is also suspect, seeing as you're just speculating on that outcome and asserting it as fact.


Another problem with your metric "can go to wrong label": how is "wrong" being measured? I think you would agree that one can "nest blocks wrong". However, nobody has found a clean way to measure how often that actually happens. It seems to be "in the head" so far, draining its potential rigor.

I agreed that it was possible to "go to the wrong label", and therefore gave you credit for the existence of such a potential error. But it is also possible to "nest blocks wrong". Thus, there is a kind of error that can exist with blocks that does not exist with gotos. So it's one apple versus one orange.
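
For concreteness, an invented C sketch of the two error classes being weighed against each other (function names and bugs are made up for illustration; both fragments compile, at most with an unused-label warning):

  /* "apple": a goto to the wrong -- but existing -- label. The intent
     was to skip spaces; the typo makes the count stop at the first one. */
  int count_nonspace_goto(const char *s) {
      int i = 0, count = 0;
  next_char:
      if (s[i] == '\0') goto done;
      if (s[i] == ' ') goto done;   /* BUG: should be "goto skip" */
      count++;
  skip:
      i++;
      goto next_char;
  done:
      return count;
  }

  /* "orange": a block closed in the wrong place. The brace ends the
     if-block one line too early, so spaces get counted after all. */
  int count_nonspace_block(const char *s) {
      int count = 0;
      for (int i = 0; s[i] != '\0'; i++) {
          if (s[i] != ' ') {
          }            /* BUG: the increment below was meant to be inside */
          count++;
      }
      return count;
  }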

Agreeing to the existence of something is not the same as agreeing it can be rigorously measured. I agree that "fog" exists, but I don't currently have a rigorous and useful way to measure its boundaries.

Blocks can be nested incorrectly, but all such errors can also be performed by use of gotos or prevented by varied flavors of blocks. Thus "superset". You can attempt to defend by focusing upon label-uniqueness-guarantees in some languages, but blocks may also be labeled in that sense if you wish it in your language. I've seen one language that did while loops as such:

  while in (label) with (condition)
    loop body
  still in (label)

or

  while doing (label) with (condition)
    loop body
  still doing (label)

E.g.
  while in bed with several(women)
    do something with each woman among women
      (do stuff)
      if (condition)
        leave bed
    still doing something
  still in bed

This particular language was constructed as something of a joke and allowed quite a variety of different prepositional phrases to make it more English-like (e.g. 'in', 'do', 'doing'), and these didn't really need to match ("while doing X (body) still in X" would be okay). Labeled blocks are still blocks, and as such don't allow you to mess up the loop-endings (e.g. "still in bed" cannot appear twice and any internal blocks must be fully balanced). They also allow one to jump out of an internal loop (as opposed to the unnamed 'break'). I know several languages with named blocks, though only EeLanguage among them is anything near mainstream (albeit lacking the flavor of the above language).
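
An invented C example of the control pattern being described (C has neither labeled blocks nor a labeled break/continue, which is exactly why the goto idiom below is common): abandoning the current iteration of an outer loop from inside an inner one. A labeled block names a balanced, delimiter-checked region as the target; the goto's target is just a point in the instruction stream, and nothing checks that the label reached is the one intended.

  /* Sum only the rows of a 4-column matrix that contain no zero entry.
     On finding a zero we abandon the current row and move to the next
     one -- the job a labeled continue does in languages that have it. */
  int sum_clean_rows(const int grid[][4], int rows) {
      int total = 0;
      for (int r = 0; r < rows; r++) {
          int row_sum = 0;
          for (int c = 0; c < 4; c++) {
              if (grid[r][c] == 0)
                  goto next_row;    /* abandon this row entirely */
              row_sum += grid[r][c];
          }
          total += row_sum;         /* only reached if no zero was seen */
      next_row:
          ;                         /* a label must attach to a statement */
      }
      return total;
  }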

If you wish to focus on certain flavors of blocks, I shall feel free to focus on certain flavors of gotos... such as goto <line number>.


7/31/2008 - Is the other debater withdrawing their rigor claim? I didn't see a response. -Top

No, the other debater has come to the realization that you, when backed into a corner, resort to ignoring all argument up to that point, pretending that explanations of an argument somehow weaken it, all the while whining and shrieking and waving your hands about what 'appears' or what you don't 'see', as though closing one's eyes and pretending is a valid form of argument. As such, the other debater believes only fools would have continued the argument with you as long as they have. In the interest of avoiding unnecessary foolishness, the debater has withdrawn from the argument. The claim, however, stands, and remains unweakened by the straw men you have presented thus far.

Me? You are in the corner, Dude. There ain't no hole-free rigor here. --top

Ah, yes. Of course... believing whatever it is you want to believe is something you do very well.

Projection.

So you believe.


I'm not clear here: Is this page (or, I suppose, its parent) a genuine attempt to promote GOTOs as superior to structured constructs in all cases, some cases, or a few cases? Or is it an attempt to make a point about rigour, software metrics, objective measurement and the like and merely using GOTOs as an arbitrary example?

Goto's are being used as a testing ground for "rigorous evidence that X is net better". It is a case we mostly agree on, which removes the element of disagreement about worth and lets us focus on evidence issues. I personally "like" blocks, but don't claim them objectively better than goto's. I believe that preference has something to do with my psychology and not some universal property of blocks outside of humans. Thus, this topic is more about studying rigor than about studying goto's, but may end up shedding light on both.

Using GOTOs as an example, there are some classic papers (search for the Boehm-Jacopini Theorem as a starting point) that usually get trotted out in first-year Computer Science (or final-year Software Engineering) courses. However, what a mathematician or computer scientist regards as "net better" (i.e., more elegant, more efficient, more abstract, simpler, etc.) may be different for someone else. Any argument like this one depends on an agreed-upon and measurable definition of "better". Until that exists, all specific points are moot.

Tying it to actual practice is indeed where the conflict seems to be. --top


Tentative Summaries (Perspectives)


CategoryBranchingAndFlow


JulyZeroEight

