Can the history of GoTo and other older ideas be used to predict the future or make decisions about current language features?
(moved from HowImportantIsLeanCode)
[...Closure example elided...]
You may want to read LessonsFromHistoryDiscussionSummary before getting sucked into this ...
What Did and Didn't Gotos Teach Us?
The lessons from gotos seem to be:
- Visual cues (to program flow) seem to be favored
- Cross-developer consistency/uniformity seems to be favored
- Some will resist changes that later prove to be popular
Unanswered questions:
- Can consistency/uniformity be measured?
- Yes, to an approximation. See KolmogorovComplexity. However, it cannot be measured absolutely (absolute KolmogorovComplexity is undecidable).
- To the best of my knowledge, nobody has applied such measures to the Goto evidence.
- Question: how much does OnceAndOnlyOnce contradict consistency/uniformity?
- Were experienced goto-ers as productive as "blockies" due to years of practice with gotos, and would it make sense to force them to change immediately? (MindOverhaulEconomics)
- Were there "goto patterns" or thinking techniques that skilled goto-ers used to be more effective? (Lost-art hypothesis)
- Definitely yes. Loop constructs, for example, are well established among the goto patterns, and I learned quite a few patterns when I was teaching Assembly to classes of students. But see also AreDesignPatternsMissingLanguageFeatures.
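[As an aside, the "loop constructs are among the goto patterns" point can be sketched concretely. This is a hypothetical illustration, not from the original discussion: the goto version is emulated with an explicit label variable, since Python has no goto.]

```python
# Structured version: the loop's extent is shown by indentation,
# giving the visual flow cues mentioned above.
def sum_structured(xs):
    total = 0
    for x in xs:
        total += x
    return total

# Goto-style version: the classic "test, branch back to label" loop
# pattern, emulated with explicit labels since Python has no goto.
def sum_goto(xs):
    total, i = 0, 0
    label = "LOOP"
    while True:
        if label == "LOOP":
            if i >= len(xs):
                label = "DONE"      # goto DONE
                continue
            total += xs[i]
            i += 1
            continue                # goto LOOP
        if label == "DONE":
            return total

print(sum_structured([1, 2, 3]), sum_goto([1, 2, 3]))  # both compute 6
```

The structured version is the goto pattern with the branch targets made implicit and the body's extent made visible, which is essentially what "loop constructs are well-established goto patterns" means.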
- Which newer ideas are analogous to the goto lessons and which are just fads or domain niches? (HolyWar fodder, as below shows)
- Hypothesis: some features meet GreenspunsTenthRuleOfProgramming, in the sense that if your language doesn't have them, lots of people are going to badly reinvent them.
- Under that hypothesis, modern language design would most benefit from: (1) easy concurrency and atomic operations (e.g. first-class processes and SoftwareTransactionalMemory - ideally composable atomic ops); (2) easy communication and serialization (for communication, distribution, and persistence); (3) data and knowledge management support (e.g. relational and knowledge systems built into the language); (4) proper security management; (5) garbage collection.
- All of these I've seen badly reinvented again and again and again. We fight with languages to make these happen, and we stumble over, around, and into tons of pitfalls while doing so: deadlock, livelock, stomping on shared memory, weak parsers for variant-typed data (often regexps at best), micro-databases in code that only ever fulfill half the functionality you desire of them, memory management for shared data, and signaling systems that, lacking proper message queues and semaphores, end up being infinitely recursive. And 'security' is done wrong every time, but more often people (even for military applications) give up on it and simply cross their fingers.
- See also: FutureOfProgrammingLanguages.
This summary is intended to help find points of interest in the following sprawling mess. It is almost deliberately wrong ...
- Why do you get to shove a summary of your side of the story at the top like this? It is rude.
- [But, as a meta-commentary on the content, it is accurate.]
- Thank you. I observe that such a summary is useless anywhere but at the top of the page.
- Bullshit! It's an opinion. The problem here is caused by those who feel they are above evidence. Prove it or move it.
- The "discussion" is one of the usual "More heat than light" interchanges that ensues when person A really doesn't get the point, person B tries to explain, person A goes into "scoring points debating" mode, and person B gets sucked into it.
- Couldn't you just have said something like, "This is a long and meandering debate"? Digs like "deliberately wrong" are uncalled for. Grow some people skills.
- [I believe "deliberately wrong" was this section's original author's self-deprecatory comment about his own summary. It does not appear to have been targeted at any participants in the debate.]
- Correct. The subject of the first sentence was "This summary". The "it" referenced the subject, not the most recent noun.
- Closures are a tool.
- Some people can't see the point of closures, and argue that they're basically useless. There appears to be no evidence offered for this.
- This is a strawman because I never claimed to have such evidence. But the flip side is that there's no evidence that they'll improve code by more than about 5% either (except in rare circumstances). The evidence just plain does not exist. Don't add crap to languages until it's PROVEN to carry its weight. That is fair, practical, and rational.
- So you say that closures amount to, or are akin to, MentalMasturbation, and now say that you have no evidence for this?
- Let me get this straight: you want me to objectively prove subjectivity?
- {I suggest we shouldn't add crap to languages in any case, but I'd note that you need to add features to languages BEFORE they can ever be PROVEN to carry their weight - a feature not part of a language will, quite frankly, never be tested. As to whether closures should be added, it depends on whether the language possesses any equivalent set of features. As to your 'improve', '5%' and 'rare circumstances', I'm quite certain you've invented the statistic, and you're no expert on what the other two mean. Closures have certainly proven useful to programmers in the languages that have adopted them, but even I, as a language designer, can't give you rankings for how much or how little they have 'improved' the programmer experience over the various alternatives (e.g. dynamic strings).}
- Others might argue that until they become wide-spread and well-understood, people won't know how to use them and their benefits won't be realised. The point is argued from the analogy of GOTOs versus block-structured programming.
- People associate whatever they hate with gotos, which doesn't tell us anything. Gotos play the role in language discussions that Hitler plays in political discussions (GodwinsLaw): compare "your political beliefs are just like Hitler's" with "your favorite language feature is just like gotos".
- [You've missed the point. As the one who introduced GOTOs into this discussion, I can assure you I don't hate them. GOTOs were merely offered as an example of a construct whose replacement with structured programming was once regarded as "borderline MentalMasturbation". Now, for the vast majority of applications, we regard structured programming as foundational. Likewise, closures may today be considered "borderline MentalMasturbation", but in a future programming world might be considered indispensable. Learn them now; you might need them later.]
- You claim to be able to predict which features will be considered indispensable in the future and which will be forgotten? I would like to predict that relational techniques and/or closer ties to set theory will expand into application languages, but I realize now that hype controls the market more than good ideas do (at least in the short term). Predicting the future based on past patterns is what this topic is about. That would be more interesting than the never-ending FP debate.
- [I claim no predictive ability beyond the norm. I do, however, constantly learn new paradigms, features, approaches, models, and standards -- and endeavour to recognise and leverage their intended value -- because I can't predict whether they'll be significant or not, but they might become significant. In the case of some features, I might even financially benefit by gaining a competitive advantage by using them before they become popular. This topic is not about predicting the future; in a nutshell it's about (a) not foolishly deprecating features now that might have future significance, given that we've seen marginal features of the past (GOTOs and high-level languages) become foundational; and (b) the ineffectiveness of chucking stones at things you don't understand merely because you don't understand them.]
- Why aren't you as interested in honing your skills at presenting objective and clear evidence as you are in playing with language features? If there were a feature that I really liked and whose reasons for superiority I felt were objective, I'd try to find ways to present evidence that would be hard to dispute instead of calling people names. Although I don't claim my favorite techniques are objectively better, I do try to describe why I like them. (See PayrollExample for an example.)
- [Why am I not as interested in honing my skills in presenting objective and clear evidence as playing with language features? Because in these discussions, the former is tedious, time-consuming, is demanded by you and you alone[1], and gains me nothing; the latter is fun and might give me a competitive advantage that makes me money. For the majority of serious, forward-thinking developers, the mere presence of a new feature is sufficient justification to devote time to exploring it without requiring a set of qualifying metrics and real-world statistics. Individual developers can decide whether a feature is of value or not based on their own and their users' criteria. It's only for you, it seems, that a scientific study must be done on a feature before you'll consider anything but deprecating it.]
- [1] [And, your past behaviour does nothing to convince me you'd even take the time to read the evidence presented, let alone understand it. Having apparently invested so much time in being anti-<x> where <x> appears to be any language feature or paradigm introduced since 1972 -- combined with your habit of quarrelling over illustrative minutiae whilst ignoring the big picture -- I have doubts that any evidence, no matter how strong, would convince you of anything.]
- Those are just excuses. You don't even try to figure out why you like X and turn it into a clear, non-psychology-dependent description. As far as making money goes, since when did employers give a flying fsck about closures etc.? It just makes hiring replacements more expensive for them. You live in a fantasy land in which your MentalMasturbation pays off; until you wake up. And you are wrong about me being the lone proof-wanter. The issue keeps popping up: http://lambda-the-ultimate.org/node/893 Just admit that your preferences are a PersonalChoiceElevatedToMoralImperative. It's the psychology, stupid! The truth will free you.
- [So... At the top of this threadlet you object to someone catching you out on your failure to provide evidence for your position that closures are "borderline MentalMasturbation," but down here you request equivalent evidence of me? And in the same paragraph that you ask me to justify my preference for <x> via a "non-psychology-dependent" description, you claim "[i]ts the psychology, stupid!" and apparently find that sufficient to justify your preferences? Why, that sounds fair and entirely un-hypocritical! By the way, when I was an employer I gave a "flying fsck" about closures and any other feature, paradigm, language or approach that might give my company an edge over competitors, and hiring a few highly-productive developers who embrace new paradigms and approaches is actually less costly than hiring more developers -- even if they're cheaper -- who use only traditional approaches. That's why we developed a custom DBMS and associated declarative language, plus C-based libraries, for developing business applications when most of our competitors were still using DbaseIII. It's why we later re-developed the DBMS in C++ and embraced OO when our competitors were still paddling about in Clipper and FoxPro. In the niche in which we worked, we ate our competition. It enabled me to essentially retire while still in my 30s and enter academia, where I can now devote full time to that which interests me most -- exploring and leveraging new paradigms and tools to competitive effect for developing business applications, and encouraging new programmers to do the same. With luck and perseverance, I expect some of them will use the same approach to eat their competition.]
- Reply at EvidenceRantsContinued.
Now read on ...
Closures are one of those things that look cool on paper, but are difficult to translate into significant practical benefits. They're borderline MentalMasturbation in my opinion.
Structured programming is one of those things that looks cool on paper, but is difficult to translate into significant practical benefits. It's borderline MentalMasturbation in my opinion.
High-level languages (like FORTRAN and COBOL) are one of those things that look cool on paper, but are difficult to translate into significant practical benefits. They're borderline MentalMasturbation in my opinion.
GOTOs and assembly language are good enough for me. Are they good enough for you?
Assembly requires more lines and volume of code per algorithm, and gotos are less consistent across programmers and lack the visual flow cues of indentation. Anyhow, rather than get into yet another general paradigm HolyWar, how about we focus on the above example: how would closures clearly make it better?
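[Since the closure example itself was elided at the top of the page, here is a generic, hypothetical sketch - not the original example - of the contrast under discussion: a closure captures state automatically, where the closure-free equivalent must package that state by hand.]

```python
# Closure version: 'n' is captured automatically from the enclosing scope
# and travels with the returned function.
def make_adder(n):
    def add(x):
        return x + n
    return add

# Closure-free equivalent: the captured state must be packaged by hand
# into an object that is carried around explicitly.
class Adder:
    def __init__(self, n):
        self.n = n
    def __call__(self, x):
        return x + self.n

add5 = make_adder(5)
print(add5(10), Adder(5)(10))  # both yield 15
```

Whether the savings over the hand-written object amounts to the disputed "5%" or something more is exactly the question the thread leaves open.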
My point was broader, and perhaps more subtle, than to suggest we should stick to GOTOs and assembly. Back in the day, GoTos and structured programming were the subject of considerable debate. From a historical perspective, we can see that the higher-level abstractions gained preference for what we now consider obvious reasons, and (at least) structured programming and high level languages now seem reasonable choices for most purposes. At the time, however, the need for higher level abstractions was much less clear than it is now -- the goals of most programming were much simpler, and a GOTO or two for the average working programmer seemed as reasonable as the structured equivalent.
- That's hogwash. I've seen many production COBOL programs, and many were far from simple. In fact, it appears the authors "mastered" GOTOs in ways that we would have a hard time relating to. Perhaps there are even "goto patterns" that escaped documentation.
- You are conflating "complex" with "large". Large CRUD-fests, which many COBOL programs are, may be lengthy but are generally not complex per se. How many historical COBOL programs need to, for example, simultaneously deal with multiple threads, complex data structures, interactive event-driven GUIs and interprocess communication?
- You are being naive. Business rules ARE often complex. The need for multiple threads is often exaggerated anyhow for biz apps. And, the "complex data structures" are generally kept in RDBMS, not RAM pointer behemoths like one does in school.
- Business rules are frequently intricate, they are rarely complex. Decomposed to their essentials, they tend to involve an intricate, but straight-forward collection of conditional operations. This is especially true of the legacy COBOL applications to which this section applies. You introduced both COBOL and business applications as non sequiturs, or an attempt to narrow the debate to your home turf; my mention of threads, complex data structures, event-driven GUIs, etc., was an attempt to bring the discussion back into a general domain. Legacy COBOL applications generally predate RDBMSes, which were not commercially available until 1978 -- by which time the structured programming and high-level language vs. assembly language debates had largely faded away. Legacy COBOL applications were more likely to use custom file-based storage mechanisms, or ISAM (and later VSAM) access methods via facilities provided by the operating system.
- If you want to demonstrate that closures are a significant help to everything but biz apps, be my guest. As far as "intricate versus complex", I would welcome a more detailed comparison. I also agree that pre-RDBMS apps were often messy at dealing with "complex data structures". But, I don't see how that reinforces your points.
As such, the choice between GOTOs and structured programming -- or between assembly language and high level languages -- was often more a matter of personal preference (or psychology, as you put it) than rationally-based decision. That may seem surprising or illogical now, but keep in mind the context of the time.
- For one, it is possible to describe to some extent why we don't like gotos: lack of cross-developer consistency (though that has escaped objective measure) and lack of visual flow cues (multiple levels of indentation). Further, there's no evidence that experienced goto programmers were significantly less productive than blockies. A "mess" to you and me may not be a mess to them. But we have no real data on that, because most goto-ers are retired or dead. You may be a premature anti-goto bigot. If a goto expert is as productive as a block expert, why the hell should anybody tell them they're "doing it wrong"? (Let's assume that like-minded people will maintain each other's code.)
- Likewise... If a "block expert" is as productive as a "goto expert", why the hell should anybody tell them they're "doing it wrong"?
- I'm not. It'd be nice if everyone could use the tools of their choice. It's just not practical, so we must force somewhat arbitrary standards on developers to obtain consistency in order to share work.
- Fair enough, if it's a programming project and you're the project lead. However, the programming discussions that constitute this Wiki aren't a programming project, and you're not the project lead. Therefore, if the above is truly your view, then carrying on your persistent, unsubstantiated deprecation of features that other developers like -- and that obviously get the job done in the projects they're working on, and which are presumably sanctioned by their project leaders -- is nothing but quarrelsome trolling on an open forum. Furthermore, I suspect most of the participants here are at a position in their careers where they get to use the tools of their choice, and probably even impose that choice on others. Please go learn about types, object orientation, closures, functional programming -- and whatever else you've splattered with reactionary opinion in the past -- and come back when you can discuss these things at a level of understanding in common with the people who use these tools on a daily basis. You'll never convince anyone of anything if you persist in arguing from a position of obvious (to us, maybe not to you) ignorance.
- I think you are ignorant with regard to evidence-presentation skills. If I am supposed to read up on your pet GoldenHammers, then why shouldn't you also read up on scientific presentation skills? If you love closures but are poor at demonstrating clearly why they improve things, then you have no reason to complain when most people ignore them. If you claim that closures work better for you personally, that's fine. It was the implication that they were an objective GoldenHammer that started all this. In other words, absolutism started this. I blame this ThreadMess on aggressive absolutism. --top
Your frequent arguments against closures, functional programming, object-oriented programming, and so forth remind me of these classical debates.
- Odd - OOP's navigational nature is a 1960s concept that was replaced by what I consider a higher abstraction: relational. I consider navigational the "goto" of data structures. Pointer-centric OOP is simply navigational structure with behavioral wrappers. Hard-wiring in relationships and groupings instead of calculating them as needed is hardly "more abstract". You simply call your pet tools "high abstraction" and what you hate "low abstraction".
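[A hypothetical sketch - not from the original discussion - of the navigational-vs-relational contrast claimed above: hard-wired references must be traversed the way they were built, while a flat table lets any grouping be calculated on demand.]

```python
# Navigational style: the dept->employee relationship is hard-wired
# as object references, and can only be reached along those pointers.
class Dept:
    def __init__(self, name):
        self.name = name
        self.employees = []

sales = Dept("Sales")
sales.employees.append("Alice")
sales.employees.append("Carol")

# Relational style: facts live in a flat table; the grouping is
# *calculated* on demand by a query rather than baked into pointers.
employee_table = [("Alice", "Sales"), ("Bob", "IT"), ("Carol", "Sales")]
in_sales = [name for name, dept in employee_table if dept == "Sales"]

print(sales.employees, in_sales)  # both yield ['Alice', 'Carol']
```

The relational version answers "who works in IT?" with another one-line query; the navigational version answers it only if someone thought to wire in a pointer for that path, which is the "hard-wiring relationships" complaint in a nutshell.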
- "Hate" does not apply. I neither hate nor love procedural programming, the relational model, GOTOs, object-oriented programming, HOFs, closures, or any other programmatic or conceptual construct. They are merely tools; I have no emotional connection to any of the tools in my toolbox. ("I love my screwdrivers but hate those $%&!!! hammers!" Now replace "hammer" and "screwdriver" with any programmatic approach or construct you like. Ludicrous, no?) I base my measure of higher-level abstraction vs. lower-level abstraction purely on the degree to which the construct directly resembles or mirrors the concrete machine. The more it's like the underlying iron, the more it is a low-level abstraction. The less it resembles the iron, the more high-level a construct is, especially if it is built from lower-level constructs.
- {I think, regarding this, you're probably a bit on the unusual side. Most people who do a lot of work with their hands (especially hobby work) DO grow emotionally attached to their tools. They're often reluctant to trade old, faithful tools for a newer or better set even if the newer set is measurably, objectively better for the tasks they're performing. And most people who do a lot of work in their heads DO get emotionally attached to their both their ideas and the mechanism they utilized to achieve them (to maintain your own health: don't dis logic in front of a scientist, and don't dis faith in front of a zealot). Programming is somewhere in between, but I am not surprised at all to see 'weenies' and 'hype' and an equal amount of irrational emotion involved in the decisions people make when programming.}
- Perhaps that is true. Hobbyists, especially, do become attached to their tools. Professionals, in my experience, not so much. I worked in a garage some years ago and continue to deal with professional mechanics on a regular basis. I've never seen good mechanics treat tools as anything but a means to make money. Good mechanics are strictly out to get repair jobs done as quickly as possible with the least likelihood of returns, not make love to their tools. Cheap, crappy tools are universally decried, and clever home-made tools are afforded a certain pride, but I've never seen anything remotely like the irrational favoritism or hatred some programmers demonstrate toward certain paradigms, approaches, and languages. While I have seen particularly effective mechanics' tools treated with high regard, when the Snap-On(tm) guy shows up to sell a newer and better wrench, if it does a better job, it gets bought and the old tool gets sold or tossed out. Of course, that doesn't preclude arguments over which tool or approach is best for a particular job. For example, should you press out E34 BMW sub-frame bushings with the factory tool, or should you cut them out with a Sawzall? The debate is ongoing, but I've rarely, if ever, seen such debates reach the incandescence that choice of programming tools and approaches seems to engender. However, I shouldn't make out mechanics to be perfect, because they're not. There are issues that sometimes draw similar irrationality. For example, ask a group of BMW specialists what are the best after-market brake pads to use, or what engine oil to choose, or which tires to buy, and you'll probably see opinion and bluster just like you see here about programming issues. Of course, that's probably because there are no clear-cut answers as to which is "best." However, among professionals I've never seen such debates reach the duration, heat, and emotional intemperance that often appear here. 
Perhaps it's because many programmers lack the level of maturity to recognise when a preference is merely that -- a preference. Or, perhaps it's because programming is as much art as it is logic. Or, perhaps it's because programmers are more likely to be overly-sensitive, opinionated, drama-queen artistes than the typical pro auto or motorcycle mechanic. I know not. However, I do know that the best programmers I've worked with have the same attitude toward programming tools that the good mechanics have toward theirs -- they choose what they feel is the most appropriate tool (i.e., paradigm, approach, language or platform) on a case-by-case basis, and the best programmers know and can effectively use a wide variety of paradigms, approaches, platforms and languages. Categorical deprecation of any usable paradigm, approach, platform or language is a mark of a poor programmer, or at least one with an unpleasantly amateur attitude. That doesn't mean debate over technical choices doesn't occur, because it obviously does, but it rarely if ever reaches the irrational, unsupported, emotional fervor that I've sometimes seen here, and it's far more likely to revolve around a checklist matrix of objective strengths/weaknesses/risks/costs vs. requirements than arbitrary opinions.
- {Part of that may be observational bias. How often do you peruse forums dedicated to mechanics?}
- I peruse those dedicated to mechanics on an as-needed basis, generally when I'm looking for something specific -- so perhaps monthly. However, I'm a participant on several forums (and I host one) dedicated to various vehicles in which I have a particular interest (I'm a bit of a car and bike nut), upon which a number of professional mechanics with similar interests are active participants. Lengthy, emotional debates are almost invariably sustained by amateurs. The pros and even the more serious hobbyists tend not to participate in these, except to toss in the occasional quip, fact, clarification, or correction. Of course, I will freely admit the possibility that both the forums, and the people I know in person, are not reflective of the norm. But I doubt it; my interests are broad enough to hopefully eliminate any bias in that respect.
- Are you saying that pros don't have opinions about what is better, or that they realize it's futile because people have personal preferences and weigh different features differently?
- {That sounds to me like a false dichotomy.}
- Pros generally recognise the profound difficulty of defining, let alone assessing or convincing anyone of, what is "better".
- Real pros either recognize that the benefits are probably subjective, OR are smart enough to present good, solid, relevant evidence. Pretenders with strong opinions think they are above evidence.
- So... You persistently express strong, unsubstantiated opinions -- e.g., lengthy diatribes against OO with no sound rationale, or quips like "They're borderline MentalMasturbation in my opinion" sans evidence. That means you are a... What?
- I've explained my rationale already: OOP is being shoved down our throats without first providing evidence of superiority, instead using oversimplified shape, animal, and device-driver examples to justify its spread. I just saw an OO "reuse" claim in an O'Reilly book the other day. It's overhyped.
- {My own observations on human nature lead me to a conclusion that mechanics are no less susceptible to emotion than are programmers: people, in general, become emotionally invested in those things regarding which they have physically or temporally invested. (That's part of the reason that it's easier to keep a scam... or a religion... going, even despite counter-evidence.) But, perhaps, it is also an issue of product vs. method. For mechanics, the product - their 'baby' - is the construct, not the mechanism by which they constructed it. For (many) programmers, however, the product - their 'baby' - IS the programmatic mechanism (and pattern) by which it operates. When someone else comes along and says it's a bad mechanism for reasons X, Y, Z, these programmers get defensive - you've attacked their baby. It would be like insulting the car constructed by the professional mechanic, or the food painstakingly produced by a professional cook.}
- That makes sense. I certainly agree that mechanics are no less susceptible to emotion, but my experience is that professional mechanics are far more likely to become emotional over the same hobby or non-professional interests that anyone has -- sports, politics, religion, gardening, woodworking, cooking or whatever -- than the technical aspects of their jobs. Where mutual recognition of equal ability exists, the pros seem more likely to respect different approaches or professional opinions -- or at least demonstrate a certain cynical detachment to these -- than certain programmers, many of whom can't seem to resist tearing a strip off the other guy for merely having a different preference. Witness, for example, the heat often generated by Linux vs Windows debates. With a few exceptions, these seem to be mainly the provenance of amateurs or poor pros. I've never met a genuinely good programmer who was zealous about one or the other in a professional capacity; most simply use what pays the bills at the time even if they personally prefer one over the other. It's certainly not worth arguing about. Discussing, maybe; arguing, no. Indeed, the best programmers I know are usually eager to explore new paradigms, approaches, or platforms, and regard them as an interesting opportunity for discovery, or at least a chance to play with a new toy that might turn out to be a useful tool. Likewise, I've never seen pro mechanics get as heated up over (say) Ford vs Chevrolet; plenty of jovial ribbing may go on, but there's little to no emotional investment, and indeed they're likely to claim both are crap despite what they may buy or even prefer for their own use.
- {Well, the Linux vs. Windows debates are based in politics, economy, and philosophical ideals. Technical capability is, perhaps, a battleground for those arguments, but is certainly not the cause of the battles. There is much to be gained or lost regarding commercial investment based on choice of platform... based not only on what YOU choose, but based also on what EVERY OTHER person chooses. Improved platform popularity means improved platform support. This is a major cause for arguments regarding game consoles, too - a popular console gets more games, which is a win for the owners of said console. Some of that may also be seen in the other arguments discussed here. For example, on his 'tabilizer' site, top does indicate that part of the reason he is so vociferous about TableOrientedProgramming is that he wants some of the $cash$, brains, time, marketing, and resources currently pipelined into advancing ObjectOriented designs and approaches be shunted off towards relational designs, which would be a win for top (who is stuck in IT). If one truly wishes to comprehend it, not everything in the technical world should be examined from a purely technical standpoint.}
- And the alternative?
- {Alternative to what? If you're asking about alternative viewpoints to aid in comprehension of the technical world, consider: political, economic, moral & philosophical, sociological, and psychological.}
- If we assume a narrow definition of "technical", then that would be merely things such as performance and hooking up wires. I've long held the belief that software development is mostly about psychology, not the nuts and bolts of computers. Thus, my viewpoint is NOT "technical" to begin with, and the above criticism makes no sense.
- {What criticism?}
- Anyhow, I usually attempt to define my weights in terms of what the customer wants (See "customer" discussion in DesignVersusResultsEvidence). It's their political and economic viewpoint that matters, not mine. In the US, this is usually "maximizing profits without [getting caught] breaking the law". --top
- {I also feel like noting that many of the highly emotional debates on language features even on this forum are ALSO sustained by primadonna amateurs... amateurs of language design and type-theory, for example. Top, for example. Designers of languages and type-systems who truly understand everything about various language features tend to look at each feature as a tool, measuring the costs and benefits of including or excluding particular features (or combinations of features) in language. (Even understanding the features, the job of the language designer isn't easy: with N features there are still 2^N possible combinations before dealing with contradictions and meta-features (like keeping features orthogonal) and such.)}
- True experts ("non-amateurs") are those who can justify their claims with clear-cut, relevant examples and scenarios with well-described metrics. Pretenders are those who say, "trust me because I vote myself smarter than you". I only trust those who can show (contradiction on purpose). Further, because ultimately "amateurs" will be the majority who use such tools, their perspective on design may have merit over those who quote ivory towers instead of working in the trenches. I'm tired of evidence-free bullshitters interfering in the industry. You spend more time working on your elite-speak than producing realistic examples to demonstrate your claims. I've provided at least 3 sample apps that you guys could not show significant improvement upon. You talkers cannot pull any of the 3 swords from the stone. If there is any one common theme I find in my detractors, it's a rejection of open science. It is comparable to the reason why the Greeks never entered the next phase: the belief that practical external demonstration of ideas was for slaves.
- {True experts can justify their claims. Non-experts, however, usually lack the experience or mathematical background to understand the justification. You're in the latter category. You don't even understand evidence when it is provided. Frankly, top, you are incapable of properly judging whom to trust. A layman cannot tell a physician from a quack. Unless you're willing to do some proper research, you'll forever be stuck with knee-jerk judgements. This is aggravated by your utter lack of proper Critical Thinking skills. You reject proper research by decrying "BookStop" or "ivory tower", and your logic is generally fallacious. Consider your trio of sample apps - examples can demonstrate where a feature is useful, but the converse isn't true: you can't logically argue that a feature isn't useful because its use doesn't provide "significant improvement" in a particular example (even ignoring that "significant" is based on one's goals). And yet you try.}
- Math by itself cannot be evidence of productivity. And a person who is not a physician CAN test their ability. Insurance companies do this all the time via statistical analysis of who is healed or surveys well. (I agree that a physician may be able to find potential problems quicker than statistics can.) You have to describe in detail how "better" is measured and what you are applying it to. Science 101: Here's what I'm measuring; here's why I chose these metrics; here's what I'm applying them to; here's why I chose these tests; here's the results. You cannot do this, and hand-wave it off by saying more or less, "smart people don't need to present evidence". Hogwash! Theory may explain results, but is NOT a substitute for them.
- {When you start controlling people well enough to create 'control' groups and 'experimental' groups well enough to establish formal metrics and collect statistics, let the world know. I can assure you that language-designers, like myself, would absolutely love to have a formal test laboratory against which to experiment with a variety of features, and where the lab-rats aren't overly biased based on possessing excessive skill and experience with particular feature-sets. The closest thing we possess is the GreatComputerLanguageShootout, and that's measuring the wrong things for many of the language design-goals we possess. Until we have such a laboratory, we're stuck with experience, examples, and anecdotal evidence plus math and formal proofs. Math can't measure productivity; it can only tell you what can or can't be done with a language, and can offer a complexity measurement for particular tasks (e.g. to do X, you need to glue together at least N language-features). Experience and anecdotal evidence and examples can't provide you any solid conclusions one way or the other (i.e. they won't let you -prove- something is better than another), but they certainly are convincing to the people experiencing them.}
- You guys are not even presenting semi-realistic examples along with descriptions for why it is allegedly better from a time/motion/thought-process analysis. "I know good stuff when I see it" does NOT cut it. Nor does, "it is good because it has closures". Change-scenario analysis (CodeChangeImpactAnalysis) is one of the better techniques in my opinion (although it should not be the sole metric). You are not even trying to describe why you think X is better other than just theory-babble. What specifically is it reducing or speeding up? What are the assumed thinking steps that it allegedly saves?
- {Now you're just blindly grouping people and ranting.}
- It's your fault for not signing. I have no other choice than to assume you are one enormous crusty ball of top-hating protoplasm.
- Furthermore, you are conflating models for managing code (OO) with models for managing data (the RelationalModel). These are not the same thing. There are valid arguments against using an OO model for data management, and indeed OO databases may hark back to hierarchical and network databases, but this is orthogonal to using OO to implement programs. That said, I think a programming language built on the RelationalModel as a fundamental principle (as opposed to merely implementing it, as do TutorialDee and TopsQueryLanguage) would be interesting, but I have no idea what it would look like. Languages like SETL might give us some hints, though.
- {They'd look a lot like Prolog, possibly with mutable functions.}
- Hmmm... Yes, perhaps.
- Re: "These are not the same thing." - I consider DataAndCodeAreTheSameThing. They are just different views on info. --top
- {Data and code tend to have different properties in practice, but I share the opinion that, ultimately, DataAndCodeAreTheSameThing. It's made most visible in LogicProgramming. E.g. consider prolog: from one viewpoint, each predicate is a function over arguments to a logic-value. From another viewpoint, each predicate is a (potentially infinite) relation over tuples of size one or more. There is no fundamental difference between mutating a function and mutating data (though there are practical variations for optimizations and such). At least ideally, DBMSes should be capable of utilizing a turing-complete language to describe relations. Further, code and data should have the same normalization rules.}
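- The Prolog point above can be sketched in Python (a hypothetical illustration; the `parent` relation and `is_parent` name are invented for this sketch, not taken from the discussion). The same predicate is simultaneously data (a relation, i.e. a set of tuples) and code (a function from arguments to a truth value), and "mutating the function" is indistinguishable from mutating the data it closes over:

```python
# A "predicate" viewed two ways: as data (a relation, i.e. a set of
# tuples) and as code (a function from arguments to a truth value).

# View 1: the relation itself -- plain data.
parent = {("tom", "bob"), ("bob", "ann")}

# View 2: a function over arguments, derived from the same data.
def is_parent(x, y):
    return (x, y) in parent

# Querying the function and querying the data give the same answers.
assert is_parent("tom", "bob")
assert not is_parent("ann", "tom")

# "Mutating the function" is just mutating the data it closes over:
# after this line, is_parent behaves as a different predicate.
parent.add(("ann", "sue"))
assert is_parent("ann", "sue")
```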
- {That said, communication and code aren't the same thing, and data and process aren't the same thing. It's an important distinction. Mere examination of data doesn't admit to side-effects, whereas a visit to a procedure almost invariably has side-effects. "[Y]ou (top) are conflating models for managing [process and communication] (OO) with models for managing data (the RelationalModel)" is probably a more accurate expression of the above speaker's intent.}
- As the speaker, I can confirm it does express my intent.
- I don't see how side-effects are a difference. A "to-do" list with "kick dog at 3:20pm" is merely a list in one sense, but also (potential) behavior that does change the outside world. I will agree that in practice data and code have different "flavors" to them, but it's mostly a matter of degree. This is because as humans we find some things more conveniently represented as textual language and others as data. For example, Boolean expressions can be represented as a data structure (AbstractSyntaxTree), but most prefer it as text. A robot or an alien from the planet Structar may want it different.
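- The AbstractSyntaxTree point can be sketched in Python (a minimal hypothetical illustration; `expr_text`, `expr_ast`, and `evaluate` are invented names). The same Boolean expression exists both as text and as a data structure, and either view can be evaluated to the same result:

```python
# The same Boolean expression as text and as a data structure (a
# tiny AST built from nested tuples). Neither form is more
# fundamental; each is a different view of the same information.

expr_text = "a and not b"

# AST form: (operator, operand, ...) as nested tuples -- pure data.
expr_ast = ("and", ("var", "a"), ("not", ("var", "b")))

def evaluate(node, env):
    """Walk the AST like any other data structure."""
    op = node[0]
    if op == "var":
        return env[node[1]]
    if op == "not":
        return not evaluate(node[1], env)
    if op == "and":
        return evaluate(node[1], env) and evaluate(node[2], env)
    raise ValueError(op)

env = {"a": True, "b": False}
# The text form is evaluated directly; the data form by walking it.
assert eval(expr_text, {}, env) == evaluate(expr_ast, env) == True
```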
- {A "to-do" list describes behavior to an entity capable of understanding it. It is not, however, actual behavior (or even 'potential' behavior). To get behavior, you need to have an actor. An actor might look at the list and decide to kick the dog, of course.}
- To the CPU chip, your EXE is just data. Your EXE is not an "actor" in a strict sense. The only real actor is the physical chip, not your program.
- {Correct. The EXE is code and the EXE is data (DataAndCodeAreTheSameThing). The CPU is the 'real actor'. Or perhaps the universe, acting upon the CPU through laws of physics, is the 'real actor'. Actors are conceptual entities, not physical entities. Since actors are conceptual entities, one can also treat processes as actors (operating systems, PiCalculus, etc.), 'actors' as actors (in ActorModel), threads as actors (Procedural), even massive multi-agent systems (e.g. whole societies) as actors.}
- So there's no real disagreement?
- {I'm not sure there was one, but you did earlier indicate a lack of understanding as to the significance of side-effects. Do you now better understand how side-effects are a relevant difference between data/code (values that ultimately just 'persist', doing nothing) and process/communication/behavior? Can you apply this understanding towards distinguishing models for managing process/communication/behavior from models for managing data (or raw code)?}
- Shove your patronizing straw-man deep and hard. I don't have to take that kind of shit from you. I've been holding back calling you names, but you have not returned such self-restraint. See KeepCriticismNarrow. Side-effects have NOT been shown to change the argument. They are a red herring until proven otherwise. I've read it 3 times and your side-effect argument is still weak. Rather than blame the reader, blame the writer for once. The existence of data by itself says nothing either way about "side effects". If the hourly rate next to my name in a payroll database affects my paycheck amount, then it has a "side effect" on reality. Although it is not directly a "payToppie" command/function, it essentially has the same effect. And an explicit "payToppie" function is merely data from the CPU's or interpreter's perspective. The "final actor" is the hardware, not software. It's as if a robot tells another robot to tell yet another robot to pull the trigger. Data may be further up the chain, but that is merely a matter of degree.
- {You're obviously horrible at judging people, top. I'm not out to get you all the time, but that venom you just spewed certainly makes me regret it. Anyhow, you mention one robot telling another robot to tell yet another robot. Since you're so insistent on turning this into an 'argument', perhaps you can explain to me, top, how the RelationalModel is designed and intended to aid in describing and managing this sort of communication?}
- The robot analogy was meant to illustrate definitional issues and not meant to be a literal development model. I would still like a clearer description of your "side effect" claim.
- {You did not put forth sufficient good-faith effort to understand what I said above before turning around and attacking me. Suffer in darkness, top.}
- I read it over 3 times. I claim it's poor writing. Anyhow, a similar discussion is near the bottom of DataAndCodeAreTheSameThing.
At present, typical programming tasks do not yet need any higher level abstraction than procedural programming. Or, at least in the domains in which you work (and perhaps the domains in which most of us work), that is probably true. Whether we use a higher level of abstraction or not is more a matter of personal preference than rationally-based decision -- just as GOTOs and structured programming were once as much personal preference (or sheer weight of familiarity) as rational decision.
Therefore, there is probably little point in deprecating someone's preferences, whether they're a choice of higher abstractions -- such as closures, HOFs, etc. -- or a preferred choice of ice cream flavour. History shows us that no amount of railing against structured programming or high level languages had any significant effect. You may find it much more effective to devote your efforts to illustrating how delicious your ice cream flavour is -- i.e., show us how powerful pure procedural programming can be -- rather than continuing your persistent and poorly-defended pokes at how yucky other ice cream flavours -- closures, functional programming, object-oriented programming -- are.
- Your ranking of various tools as low/high abstraction and old/new is a figment of your imagination and PersonalChoiceElevatedToMoralImperative. It's like political debates where the other person's political views are "like Hitler's".
- I find it curious that you did not address my peace and harmony-inducing suggestion, especially as I have essentially agreed with your frequent point that certain choices appear to be more a matter of psychology than rational decision. Perhaps that's because I've used your own argument to express a view counter to yours, or that you'd rather criticise others' preferred approaches than demonstrate the benefits of yours? Are you sure it's not you who has demonstrated PersonalChoiceElevatedToMoralImperative? But... Never mind, GodwinsLaw was invoked. Therefore, I win!
- It is not clear to me what you are referencing. May I suggest PageAnchor names.
- You may suggest anything you like, however if you look upward a paragraph or two, you'll see the suggestion to which I refer. For your convenience, I shall repeat it: "You may find it much more effective to devote your efforts to illustrating how delicious your ice cream flavour is -- i.e., show us how powerful pure procedural programming can be -- rather than continuing your persistent and poorly-defended pokes at how yucky other ice cream flavours -- closures, functional programming, object-oriented programming -- are." In short, strong advocacy is more likely to prove fruitful than weak (albeit persistent) criticism.
- I didn't start this. It was started when somebody said the procedural example was "brain-dead", implying that closures would cure it.
- In that case, a far more effective debating technique would be to produce a modern procedural example that clearly demonstrates its brain-liveliness, and pair it with an up-to-date, unbiased closure example. Add to this a rational, critical evaluation of both against a variety of reasonable, real-world criteria, and you might have something worthy of consideration. Your response to the claim that the procedural example was "brain-dead" was merely to emit the following gem: "Closures are one of those things that look cool on paper, but are difficult to translate into significant practical benefits. They're borderline MentalMasturbation in my opinion." That was better... How???
- I could not do that because I don't believe that closures are inherently broken or bad. They just are not a significant improver of the type of apps I encounter. Why should I go through all that work to demonstrate slight differences, if that's even possible? My main complaint is that people want to gum up languages with yawner features. If you claim they are better than yawner status, then show them kicking ass.
- "...Gum up languages with yawner features..." Ooookay. It's that kind of tight, focused, rational, technical assessment that makes me truly respect your opinions. In other words, you don't even care about closures one way or the other, because you don't need them in your particular domain. They make you yawn. So are you commenting on them just to be argumentative? I don't need graphics for my work on DBMSes, but you don't see me going around claiming that graphics is "borderline MentalMasturbation..."
- I already gave the details. That is merely summary language. If I repeat the details of the debate burden, I'd be violating OnceAndOnlyOnce.
- Your argument has already been quashed. Features not added to languages cannot be tested, therefore features must be added to languages to test them. And with that, I hereby depart this debate. There is nothing to learn from your contributions, and nothing to gain from attempting to counter them. You have proven only that you are an inveterate quibbler of the first water, i.e., a troll. I don't feed the trolls. I have better things upon which to spend my time. Some of those things, you will be pleased to note, are software applications that will be written in object-oriented and functional programming languages, because for some of the solutions I have to deliver, available languages based on those paradigms seem to make my life easier than using any available procedural choices. If you wish to demonstrate how existing pure procedural languages will make my life easier than existing OO and functional programming languages, then I'd be interested to see it. In other words, offer me something positive and I'll listen. Offer me complaints and negative quips in areas where I'm seeing benefits -- which may indeed be preferential or intangible, but so what? -- then there's no benefit to listening to you. I will be out making money from code while you're lobbing idle criticism and demands. If you think your approaches are better, feel free to compete with me in the open market. Then we'll see who or what wins.
- As far as the "couldn't test", such languages already exist and are thus testable. I'm not against experimental languages; I'm against unproven fads being forced on us. I couldn't show procedural being objectively better because I believe software design is mostly about psychology; not math, science, nor technology. People like what fits their head the best. I don't dispute that your favorite features may fit your head better. Just don't extrapolate that beyond your skull without solid external demonstrations. If you have objective evidence of X being significantly better, please show it. Anecdotes don't cut it around here. As far as calling me names, go [bleep] yourself, you evidence-free goon. See also HowCanSomethingBeSuperGreatWithoutProducingExternalEvidence.
I am concerned that some people are getting hung up on the idea that everything needs clear, objective, unassailable and compelling evidence before it gets made widely available.
If the telephone was invented today it would never make it.
- "You mean you intend to put a device on my desk such that at any moment, whenever someone else desires, without consulting me, and possibly when I'm deepest in thought and not expecting it, it will make a loud, unavoidable, unignorable ringing noise?"
I submit that some things need to be widely tried by non-experts and by people other than those who really understand it, to find its strengths and weaknesses, to find where it's useful and where it isn't, and to work out how to make the good bits work better, and the bad bits either irrelevant, improved, or missing.
If you don't think OOP is worth anything, if you don't think closures are worth anything, if you need hard, compelling, objective evidence for everything, then please, offer me hard, compelling and objective evidence that closures are of no real value.
So we throw crap out there and hype it as if it's a given, then see what sticks? Generally there's a ratio of roughly about 10 dropped ideas for every 1 that sticks. That means we are wasting 9/10ths of our time on lame fads. That does not seem rational. Nor does it address the psychology issue: some like OO because it fits their head and some don't. Is this the right-handed argument: lefties should suffer because they are the minority? (And OO is given more lip service than code service anyhow. People like it in pre-packaged libraries, but their custom app code is still mostly procedural.) It does not appear that you've explored the larger-scale ramifications of your "system" of fad-pumping here.
Let's Go To Goto's Again
I think the spirit of this debate has gotten lost. Let me restate it:
Almost nobody supports going back to the pre-nested block era. Although some support limited forms of goto's, most agree that using mostly nested blocks "improves" software. However, even something this simple and with a universal consensus has no non-psychology-tied evidence to support it. If we cannot provide objective evidence for simple things, then there's almost no hope of providing it for complex claims. What is the reason for this?
My standing theory for this is that software engineering is mostly about human and personal psychology. It's possible an alien species may dig goto's. So far, other than "we're all too new at such proof techniques", nobody has offered a compelling alternative to this Goto-Paradox. --top
NovemberZeroSeven
CategoryDiscussion, CategoryEvidence