Some people on this wiki seem to think that what EwDijkstra says, goes, and they make regular use of the AppealToAuthority fallacy, presenting arguments as true simply because Dijkstra said so. I say, over my *#$&%#@!@& dead body.
Dijkstra was a mortal human being, like you and me. That means he was fallible -- he succumbed to opinions, belief systems, and prejudices just as much as he advocated hard logic and symbolic reasoning. Nobody is perfect, not even Dijkstra. I look up to and respect Dijkstra. He was obviously intelligent, was fueled by a desire to see perfection in his profession, and inspired and influenced many advances in our art. Nonetheless, even Dijkstra could be wrong at times.
Okay, agreed, but this page should give some actual examples of where he was wrong instead of merely asserting that he was wrong sometimes. A section here could explain where and why he was wrong, and on what topics.
In section '5.3. Nested procedures and Dijkstra's display' of Good Ideas, Through the Looking Glass, Wirth writes, "It turned out that the good idea [Dijkstra's display] had aggravated rather than solved the problem."
Some questionable EwDijkstraQuotes:
"It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration."
- Even taken as hyperbole, this quote is hard to reconcile with the many professional programmers who learned BASIC first, and with the marketplace dominance of languages in the BASIC family. Jeff Atwood praised BASIC at http://www.codinghorror.com/blog/archives/001104.html.
- Dijkstra's comment was undoubtedly at least somewhat tongue-in-cheek.
"Elegance is not a dispensable luxury but a quality that decides between success and failure."
- It would be nice if Dijkstra had offered a way to measure this property that is supposedly so decisive for the success of our projects. Does one measure feature cohesion (degree of FeatureInteraction)? SymmetryOfLanguage? OnceAndOnlyOnce? Minimalism? Efficiency? Avoiding AccidentalComplexity? What is 'elegance'? Regardless, in practice it often seems that WorseIsBetter.
- In most cases, what people consider elegant code exhibits strong OnceAndOnlyOnce effects. Usually, this manifests in code which exploits an accidental side-effect of another API or CPU instruction. For example, the 6502 lacks a carry-less add instruction, so to add two numbers you often prefix your addition with the CLC instruction. But if you already know the carry flag is clear, as a result of some previous code, you can safely elide that instruction and save yourself two cycles. The 6502 also has numerous undocumented/unsupported opcodes (quite useful, actually) that exploit the nature of the instruction decoder and its internal state machine. Regrettably, these opcodes disappeared with the 65C02 and were replaced with other useful functions in the 65C816. Nonetheless, there is no question that software utilizing the older undocumented behavior of the 6502 proved quite elegant, even if dangerous, as implementing logic in a single instruction was far preferable to the much slower, much uglier, much harder to understand equivalent instruction sequence.
- It seems you are saying that the minimalism and efficiency of addition without the explicit CLC were more elegant than the OnceAndOnlyOnce that might have been achieved by representing the CLC+ADC as an assembly macro or subroutine.
- Yes, I am. Putting the CLC/ADC sequence into a macro (quite often done, BTW) gives the source-code reader OnceAndOnlyOnce, but it is undeniably not elegant. Instead, that is called well factored, which isn't the same thing. In fact, there is a term custom designed for this solution: brute force (and also QuickAndDirty?; see below). It demonstrates that no thought went into making the software work; it makes no attempt to exploit the side-effects of the instructions or APIs preceding its use. Indeed, elegance serves as the prime directive of any optimization: why should I bother with the CLC when I can prove (with FormalMethods) that the carry is clear already? An extra CLC wastes a program byte (important when 4KiB was all you had), wastes two CPU cycles (important on a CPU running, typically, 4MHz or slower), and plants a red herring for the reader of the assembly listing, suggesting that the carry may be set upon entry to the addition code. Inarguably, the most elegant solutions always come from (perhaps artificially) resource-constrained settings -- they solve a problem with a minimum of computation steps, a minimum of resource consumption, or both. Indeed, we could not have our plethora of data structures were it not for the eternal search for elegance. (A rough C analogue of this contrast appears below, after the next reply.)
- Additionally, don't think this kind of reasoning no longer applies today; garbage-collected language implementations are still searching for zero-cost read and write barriers, particularly with respect to generational collectors. In software that performs a lot of slot updates, the difference between a 3-instruction barrier and a 2-instruction barrier can mean the difference between rewriting a chunk of code in C++ for speed and it being fast enough as-is. And if the compiler can prove that an object is already flagged for a subsequent mark phase, it can elide the write barrier altogether. This is no different from proving that I can elide a CLC from a CLC/ADC sequence. In this case, the more elegant the resulting compiled code, the higher its performance. Both ideas are sketched below.
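To make the contrast concrete, here is a rough C analogue of the CLC/ADC discussion above. The real example is 6502 assembly, so this is only an illustrative sketch with invented function names: the first version is the well-factored "macro" style that always re-establishes its precondition, while the second elides the redundant step because the caller can prove the carry is already clear.

 #include <stdint.h>

 /* "Well factored": always clear the carry-in, whether needed or not --
    the C analogue of wrapping CLC/ADC in an assembly macro. */
 static uint16_t add16_safe(uint16_t a, uint16_t b, uint8_t *carry)
 {
     *carry = 0;                                /* the "CLC" */
     uint32_t sum = (uint32_t)a + b + *carry;   /* the "ADC" */
     *carry = (uint8_t)(sum >> 16);
     return (uint16_t)sum;
 }

 /* "Elegant" in the sense argued above: the clear is elided because the
    caller has already proven that *carry is 0; the proof lives at the
    call site instead of being re-established here. */
 static uint16_t add16_carry_known_clear(uint16_t a, uint16_t b, uint8_t *carry)
 {
     uint32_t sum = (uint32_t)a + b + *carry;   /* *carry is provably 0 */
     *carry = (uint8_t)(sum >> 16);
     return (uint16_t)sum;
 }

And a similarly hedged sketch of the write-barrier point: the general store path pays a barrier on every slot update, but when the object is provably already recorded (for example, it was just allocated or just dirtied), the barrier can be elided. The Object layout and remember_object helper below are invented for illustration, not any particular collector's API.

 #include <stdbool.h>
 #include <stddef.h>

 typedef struct Object Object;
 struct Object {
     bool    dirty;    /* already in the remembered set for the next minor GC? */
     Object *slot;     /* a single reference field, for illustration */
 };

 /* toy remembered set: objects whose slots changed since the last collection */
 static Object *remembered[64];
 static size_t  remembered_count;

 static void remember_object(Object *obj)
 {
     if (remembered_count < 64)
         remembered[remembered_count++] = obj;
 }

 /* General case: every slot update pays for the write barrier. */
 static void store_slot(Object *obj, Object *value)
 {
     if (!obj->dirty) {            /* the write barrier */
         obj->dirty = true;
         remember_object(obj);
     }
     obj->slot = value;
 }

 /* Elided case: the caller (or the compiler) has proven that obj is already
    dirty, so the barrier is dropped -- the same reasoning as eliding the CLC. */
 static void store_slot_already_dirty(Object *obj, Object *value)
 {
     obj->slot = value;
 }

In both sketches the "elegant" version is only safe because of a proof that lives outside the function, which is exactly the trade-off being argued about.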
"If 10 years from now, when you are doing something quick and dirty, you suddenly visualize that I am looking over your shoulders and say to yourself 'Dijkstra would not have liked this', well, that would be enough immortality for me."
- Quick and dirty can sometimes work. The iterative approach of AgileDevelopment can be better than the careful upfront design that Dijkstra encouraged.
- AgileDevelopment actually is very careful design -- it merely happens to be more interactive in nature, thus tricking the programmer into thinking it's all quick-and-dirty. That's precisely why it works.
- "Quick and dirty", in this case, might be a euphemism for "inelegant", which is distinct from the development methodology.
"The go to statement should be abolished from all 'higher level' programming languages (i.e. everything except -perhaps- plain machine code.)"
- goto is used frequently in large C codebases, such as the Linux kernel. (See http://kerneltrap.org/node/553/2131)
- And in not-so-large codebases, too; it's excellent for handling errors in a structured manner without resorting to the ArrowAntiPattern (see the sketch after this list).
- Isn't that what GuardClauses are for? -- AaronRobson
- Used frequently! LOL. It's used INFREQUENTLY, not frequently -- mostly to emulate exception handling.
- Should C be considered a 'higher level' language, or (almost) a generic assembly language?
- Considering the spectrum of languages available today, one should not think in terms of high versus low level languages. Instead, relative comparisons prove far more apt: yes, C is absolutely a higher-level language, particularly when compared against the other languages available when Dijkstra issued his proclamation. However, compared to Haskell, it is essentially a generic assembly language. Forth and Lisp are both metalanguages; each can sit at as low or as high a level of abstraction as you wish. Nonetheless, I doubt you'll find many Forth aficionados who would dispute that Lisp is higher level than Forth, at least out of the box.
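For reference, here is a minimal sketch of the goto-for-cleanup idiom mentioned above, in the style common in C codebases such as the Linux kernel. All of the names (device, open_device, setup_device, and so on) are invented for illustration: the guard clause rejects bad input while nothing has been acquired yet, and the goto labels unwind partially acquired resources without nesting into the ArrowAntiPattern.

 #include <stdlib.h>

 struct device {
     int   handle;
     char *buffer;
 };

 /* stand-ins for real driver calls */
 static int  open_device(struct device *dev)  { dev->handle = 1; return 0; }
 static void close_device(struct device *dev) { dev->handle = 0; }

 int setup_device(struct device *dev, size_t bufsize)
 {
     int err;

     /* guard clause: reject bad arguments while nothing has been acquired */
     if (dev == NULL || bufsize == 0)
         return -1;

     err = open_device(dev);
     if (err)
         goto out;                 /* nothing to undo yet */

     dev->buffer = malloc(bufsize);
     if (dev->buffer == NULL) {
         err = -1;
         goto out_close;           /* undo the open, then report failure */
     }

     return 0;                     /* success: the caller owns the device */

 out_close:
     close_device(dev);
 out:
     return err;
 }

Whether this counts as the structured error handling praised above or as exactly the kind of goto Dijkstra wanted abolished is, of course, the disagreement on this page.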
Possibly one of the most annoying things about Dijkstra is that he never worked on commercial projects. It is easy to sit on the sidelines, insist on perfection, and decide exactly how much scathing criticism to heap upon people who are genuinely making an effort to help, within the limits of the tools, education, and social agreements (with business, BAs, QA, and other devs) that apply in their circumstances. Many other famous computer scientists (for example RobinMilner) actually held programming jobs at some point during their lives. Also, contrary to what you might have concluded from reading Dijkstra: working and making money aren't innately bad things.