Practitioners Reject Formal Methods Discussion

Please read the original here:

Discussion follows:


We should clarify what is meant by "practitioner". It could be interpreted as a "non-academic", or as somebody who uses existing tools to build domain-specific tools/apps. Contrast this with a systems-software developer who builds compilers, chip microcode, database engines, etc.


List of Possible Reasons (without a value judgment):

(Please place objections below, at least long ones. A brief pro/con is acceptable.)


He is dissing practitioners, just as FabianPascal disses practitioners. In other words, industry marketing hype will always win over science or math. I think he is making a pessimistic (maybe even humorous) comment about the industry (the practitioners).


In many cases, hackers are professionals (though the word "professional" could be considered buzzy). Good professionals and good hackers are formal (and value security, integrity, etc.). Consider the OpenBsd project, where professional BSD hackers value security, correctness, and code audits. Being a bad hacker or an incompetent professional isn't so unpopular, unfortunately.


I hate to break it to people - but as usual, most of you are misinterpreting what an intelligent man was getting across. He is not saying that it is a good thing, or something we should praise or take part in (or try to find more points to justify). Rather, he is saying that "practitioner" is just another nice word for idiot. He's saying it in a nice way, almost as a hidden message for you to find.

Unfortunately, some of you are not getting it - which is the very reason that writing without direct, straightforward, clear wording (and even insults) isn't always so effective. It leaves a lot to be misunderstood when we aren't direct. Dijkstra is kind of indirect in his writing, in a very humorous way. And unfortunately, a lot of folks don't get his humor, either. I have noticed this quite often, actually - people who write indirectly, rather than spelling it right out, get misinterpreted. In many of his writings he also bashes certain programming languages, such as Lisp - and I've seen Lisp people completely misinterpret what he's said and post blogs about how Dijkstra thinks Lisp is wonderful, yada yada. It is sad - and we must read his writings very carefully.

Writers cannot win. They've lost. When writers spell everything out directly, the readers feel insulted. When writers use hints and write indirectly, people misinterpret and go on to make up falsities. Writers nearly give up due to this.


What is the reason that a hacker is separately distinguished from a professional? Competent hackers are considered professionals.

Why don't you look up the 1992 pre-Internet-boom meaning of the word and find out?

Here is a funny quote anyway:

Why is that funny? The hacker culture tends to be a deliberately 'oral' culture, doesn't it, valuing exactly that sort of interpersonal exchange of information rather than relying on formal third parties (who may be removed from the realities of the job and deal in abstractions that don't actually work)? Seems to me the question Dijkstra *should* have asked in this context, rather than simply mocking, is: what did the hacker *understand* the term 'spiral methodology' to mean, and was that in fact the original definition of the term? Was information lost in transit, or not? And if the 'hacker culture' sense of the term meant something different, was their term better or worse, more accurate or less accurate, more useful or less useful as a methodology, than the formal definition?

It is funny because some hackers pass around information as pseudo-science. For example, if some elite hacker like PaulGraham says that Lisp is a GoldenHammer and writes a book about how programming is actually an act of painting and art (rather than science), a lot of other hackers will immediately follow his lead - without doing further research. The spiral methodology was valid and formal - but the hacker didn't actually do any research on what it was; he just assumed the other hacker who passed on the information must have been right. Hackers sometimes pass around magic and treat it as some sort of wizardry instead of rigorous science and research. Sometimes wizardry is mixed and matched with a bit of science and research, especially in academic communities like MIT, AI labs, etc.

Now that I know the reputations of many hackers, I'm going to refrain from using the term hacker to describe a competent programmer - which I used to do. People such as ESR, Stallman, PaulGraham, and Dijkstra have simply proven to me that hacking is really not a positive term at all - it is a derogatory term for people who like to diddle around and muck with things half-scientifically. I recall a back-yard mechanic being labelled a hacker when he couldn't do the head gasket properly - someone who strips and rounds the heads off bolts. The word hacker has had a dubious reputation ever since it was invented - even its true definition (not cracker) is not a good one. I used to think hacker was cool - now I will simply disregard it and refrain from using it when referring to competent programmers. Oh, how snobbish.


The paper seems to be ignoring the fact that TheMapIsNotTheTerritory. I've built what I thought were wonderful abstractions that simplified programs, only to encounter exceptions either due to not understanding the domain or due to later changes that went off in unexpected directions. EightyTwentyRule is the Great Godzilla of practical abstractions. Compact models are not always flexible models. The author seems to focus more on making a compact model so that formal analysis can be done than on making it flexible. Perhaps we can somehow have our cake and eat it too, but I am skeptical. Crystal balls are hard to come by. Half-assed abstractions are more flexible than full-blown ones, in my experience (HelpersInsteadOfWrappers). Excess abstraction ties you to too many large-scale, wide-reaching assumptions. Partial abstractions, or webs of smaller abstractions, are more likely to be useful after unexpected changes come along. You can often keep many of the small parts when you have to rework the contraption. Tighter abstractions have bigger DiscontinuitySpikes. --top

{Do note that abstraction is just one formal method among many.}

Were you creating abstractions in the software tools domain -- like abstracting screen widgets or forms or reports or some calculation process such as a product costing system -- or were they abstractions of elements of the business domain? The former is readily abstractable, and "full abstractions" (if I understand what you mean by that) tend to be more flexible and powerful than "half-assed abstractions" (again, assuming I grok the meaning of that phrase). The latter (i.e., business rules) tend to be highly amorphous, changeable, rife with inelegant exceptions, and hence not readily abstractable using a simple model. Economists build careers on trying to formally model such systems, without much success. -- DaveVoorhis

I don't think they are always that separable. There tends to be a certain "flavor" to a given business, such that you design or encounter a certain kind of report/form or report/form interface, for example, and it becomes a kind of internal standard. This is both for consistency of UI and to better fit the biz. Some of this custom design is merely out of historical habit, and some is because of the nature and style of the domain itself. For example, a company that values paper because of frequent field visits to environments hostile to mobile equipment will tend to have paper-centric tools as opposed to screen-centric tools like HTML output. Younger employees are also less paper-centric than older ones, such that the generational composition of the company can also dictate or guide design. And those who use lots of Macs may prefer UI conventions that tend to mirror Mac conventions instead of Windows conventions. Some companies are used to lots of abbreviations ("codes") that can be direct keys, while others prefer longer descriptions tied to internal IDs that are usually not visible to the user. The second requires fancier "lookup" approaches. --top

Yes, I recognise the problem. I still find small-scale but complete abstractions effective within the context of broader, un-abstractable systems. This is especially (but not exclusively) true for software tools -- generic forms and screen widgets, for example, or well-defined systems of calculations -- even if domain-flavoured. It's a matter of looking for abstractions, and developing a habit of abstraction-oriented thinking. E.g., "what is the most generic mechanism that will easily support all the requirements of this component?", or "is this collection of rules a specific example of some provably-general case?", or "is there an algebraic way of manipulating instances of this concept?" Then the abstractions, and even the process of abstracting, become valuable.
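A hypothetical illustration of such a small-scale but complete abstraction (Python, with invented names and bracket values): a generic tiered-rate calculation - the sort of "well-defined system of calculations" that can be abstracted once and reused across tax tables, commission tiers, postage bands, and so on, even when the surrounding business rules remain un-abstractable:

 # A generic tiered-rate calculation, abstracted once and reused for any bracket scheme.
 def tiered_total(amount, brackets):
     """brackets: list of (upper_bound, rate); the last upper_bound may be None (uncapped)."""
     total, lower = 0.0, 0.0
     for upper, rate in brackets:
         if upper is None or amount < upper:
             return total + (amount - lower) * rate
         total += (upper - lower) * rate
         lower = upper
     return total

 # Usage with made-up commission tiers: 5% up to 1000, 3% up to 5000, 1% beyond.
 print(tiered_total(2500, [(1000, 0.05), (5000, 0.03), (None, 0.01)]))  # 95.0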

{One should only abstract over invariants. Abstracting before you have either verified an invariant or intentionally created one via case-analysis is often a PrematureAbstraction. Invariants are very easy to find in domains where TheMapIsTheTerritory or where TheMapBecomesTheTerritory... which happens to include the vast majority of internal-software domains, and even systems domains (including device-drivers - invariants across known classes of hardware, such as all modern monitors project a 2D array of pixels, and all modern printers are similarly limited to 2D documents). These are not domains with which TopMind has much experience; IIRC, he typically works on the narrow but very important border directly between human policy and machines, doing reports and HCI and working with business policy. For him, almost every abstraction he attempts will later be demolished by some ugly exception - a variant that ultimately results in rework to undo the abstraction. And such rework is painful, and a little humbling; nobody with an arrogant or lazy personality would abide it very often (see LazinessImpatienceHubris).}

{This results in Top's great wariness towards abstractions - one he would (likely) not possess had he a wide enough range of experiences across different niches to recognize where abstractions will work and where they will not.}

{It is, I suppose, similar to how most experienced ObjectOriented programmers eventually start shunning business-domain objects despite their horrid prevalence in the educational materials. In my observations, experts use computations, calculations, values, functors and serializers and observers and subscribers and other patterns and generic collections; these become the 'objects'. I am not a fan of ObjectOriented, mostly because it is neither named nor taught well for effective use - we should rename it 'ComputationOriented?' and burn every educational book that has 'duck' or 'cow' or 'animal' listed as an object. I'll stick with identifiers and propositional facts - computation-oriented things.}
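A minimal sketch (Python, with invented names) of that computation-oriented style: the 'object' is a generic publish/subscribe pattern, and the business domain shows up only as plain data flowing through it, rather than as a Duck-style class hierarchy:

 # The 'object' is a generic computational pattern, not a business-domain animal.
 class Observable:
     """Generic publish/subscribe - a computation-oriented 'object'."""
     def __init__(self):
         self._subscribers = []
     def subscribe(self, callback):
         self._subscribers.append(callback)
     def publish(self, value):
         for callback in self._subscribers:
             callback(value)

 # Usage: domain data passes through generic machinery.
 price_feed = Observable()
 price_feed.subscribe(lambda v: print("audit log:", v))
 price_feed.publish({"sku": "A-100", "price": 19.95})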

{That said, I'm surprised Top still abides by EntityRelationshipModeling. Even the 'Entity' is a significant 'abstraction' that often needs changing. Narrow-table 6NF solutions cast aside this abstraction in favor of finer-grained flexibility, resulting in even less abstraction than the wide-table ER models. I can only assume this is one of the few places Top has managed to obtain sufficient victory over the business and policy domains to feel comfortable making the 'entity' abstraction. -- DavidBarbour?}

He abides by it, no doubt, because there are few or no practical alternatives. I wouldn't trust the company payroll records to any current implementation of DataLog. What does that leave? Oracle, MS SQL Server, PostgreSQL, etc., for which implementing the ultimate normal forms is awkward at best, unusable on average, and EntityRelationshipModeling is the standard design process. Fortunately (?), the majority of application-development and record-keeping infrastructure is only deprecated by EntityRelationshipModeling (and all that it implies) a little bit; for almost all organisations, the company information systems are effectively (though arguably not ideally) modeled using EntityRelationshipModeling and managed with a RelationalDatabase of conventional (i.e., table-oriented or relvar-oriented) design. That doesn't mean such systems shouldn't, and won't, eventually be replaced with something that facilitates fact-recording and fact-manipulation -- in a manner that equally supports conventional information systems and novel or typically non-relational applications -- but it does mean there's considerable inertia against it. -- DV

{Woe unto us and our limited tools that both create 'convention' and bind us to it. It'd be nice if someone would fix those... someone with the time to fix them and the money to fight inertia, that is. I certainly agree that no existing implementation of a DataLog RDBMS would be a good choice. And, yes, if we did suddenly change to thin tables, it wouldn't deprecate existing applications much - those could easily use views and an SQL translation layer. Whether this is 'fortunate' depends on whether you want people to start thinking differently vs. the benefits of slipping in behind the scenes without the need for excessive transition costs or fanfare (celebrity death match: idealism vs. economics). Major advertising points would be the advanced use of meta-data, improved security, and improved applicability towards advanced data-mining. -- db}

Patience... I'm working on it as fast as I can. As for implementing thin tables today, the applications wouldn't notice *much* difference -- except throughput could decline due to a greater volume of joins, and many apps may have to be re-written to get around various DBMS product limitations in updating through views. Yes, layers can be introduced that can mediate this somewhat, but a cost-benefit analysis on performing such an alteration to the company payroll system will rarely work out in favour of doing it. -- DV
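A minimal sketch of the thin-table idea and the view-based compatibility layer described above (SQLite via Python, with invented table and column names); note that writing through such a view would need INSTEAD OF triggers or similar, which is the product limitation mentioned:

 import sqlite3

 conn = sqlite3.connect(":memory:")
 cur = conn.cursor()

 # Narrow ("thin") tables: one fact per table, keyed by the entity identifier.
 cur.execute("CREATE TABLE emp_name   (emp_id INTEGER PRIMARY KEY, name TEXT)")
 cur.execute("CREATE TABLE emp_salary (emp_id INTEGER PRIMARY KEY, salary REAL)")

 # A view reassembles the conventional wide-table shape for existing applications.
 cur.execute("""CREATE VIEW employee AS
     SELECT n.emp_id, n.name, s.salary
     FROM emp_name n JOIN emp_salary s ON n.emp_id = s.emp_id""")

 cur.execute("INSERT INTO emp_name   VALUES (1, 'Alice')")
 cur.execute("INSERT INTO emp_salary VALUES (1, 52000.0)")

 # Reads through the view look just like reads against a wide table;
 # writes through it, and the extra joins per query, are the costs noted above.
 print(cur.execute("SELECT * FROM employee").fetchall())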


RE: {Woe unto us and our limited tools that both create 'convention' and bind us to it. [...]}

Conventions are not necessarily bad. There are often multiple ways to do things, and one choice over the other may make little if any difference in benefits. Thus, it becomes an internal standard, and future things that follow that internal standard will be easier to pick up. In QwertySyndrome, the benefits of familiarity are compared to the benefits of overhauling. Some argue that newer keyboards don't provide a significant enough advantage (for non-keying-athletes) to bother dumping the standard. --top

{Conventions are good insofar as they become de-facto 'standards', and standards are often good because they reduce systems-interfacing costs. OTOH, it is useful to remember that standards by convention are often extremely premature, and they produce a nice big wall to leap, or otherwise create baggage in the form of backwards-compatibility, when we finally possess the experience to know how to do 'it' (whatever 'it' is) better. It may be, however, that they are simply not something we can avoid; fundamentally, when pioneering a new field, we need to start doing something before we can learn how to do it better.}

{Consider QwertySyndrome - if someone came up with an objectively much better keyboard (don't mind the 'how' for the moment), would it be worth forcing children to use the Qwerty keyboard just to support the 'familiarity' of a tiny percentage of all users? And by 'tiny percentage' I'm noting that children, and children's children, and so on would all be forced to use Qwerty just because, each time, that happened to be what the previous generation was 'familiar' with. Or would it be better to create a transition plan, whereby the old fogies can continue doing as they have been, but convention isn't forced to become 'tradition' by the tyranny of a minority with the power to make such decisions?}

{Logically and ethically speaking, MindOverhaulEconomics doesn't seem to be a valid reason to hand off our inefficiencies to our children then educate them to do the same. I believe we have some responsibility to fight convention and shift inertia, despite the 'immediate' inconvenience when, in those rare cases, we can see or find a better way to do things. To do anything else seems... childish and irresponsible, like a person who demands freedom but isn't willing to fight for it.}

{It's worth noting that LeanSixSigma, CMMI, and various business and manufacturing philosophies that have been successful seem to agree.}


I can't be sure, but most contributors to this page seem to be using the term "Formal Method" in a sense entirely different from that of EWD. I wonder how many of you have read much of EWD's writings to see what he means by FormalMethods, before then claiming (perhaps implicitly) to be writing about them. Perhaps I'm wrong, perhaps you have read his writings, perhaps you are using the term in the same sense, perhaps you have taken on board what he was advocating, but it really doesn't sound like it.

I would note that the topic title doesn't appear to limit the discussion to merely the article's definition. However, if there are different "kinds" and it makes a difference, perhaps we should give working names to those kinds.

{Speaking for myself, I know what FormalMethods are and I know I've entirely deviated from the article and essentially from the topic of this page. The discussion belongs on AbstractionsAreGood?, or something.} -- DV

{I haven't read many of EWD's papers, but I (via other vectors) understand FormalMethods broadly to describe any approach to design, coding, and verification thereof through formal (and preferably automated) mechanisms, especially logic and mathematics. The goal is generally to both describe and prove desirable invariants (since that's really the only thing you can prove), though such invariants may be temporal in nature (e.g. "this function/variable never decreases"). Automated use of type-systems and formal abstractions are examples of FormalMethods at the software scale, and DesignByContract or HoareTriples would also be fine examples of approaches that lend themselves well to FormalMethods. Where error and fault are to be tolerated, or graceful degradation is to occur, stochastic proofs along with the operating assumptions will be necessary. In practice, programs written with FormalMethods oft possess 'negative' specifications ('X must not occur ever' vs. 'X must occur when Y'), which makes difficult any sort of myopic DoTheSimplestThingThatCouldPossiblyWork (since you need to do the simplest thing that can be demonstrated not to violate existing constraints); this makes it somewhat anti-ExtremeProgramming and earns FormalMethods a bit of a (not entirely undeserved) reputation for being rather cumbersome. If my understanding seems far off from what you believe to be EWD's understanding, feel free to explain the apparent differences, or point me to a paper where EWD makes a definition of 'FormalMethods' that he then sticks to. -- db}
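A minimal DesignByContract-flavoured sketch (Python, with invented names) of the 'never decreases' invariant mentioned above; the runtime assertions stand in for what a FormalMethods tool would attempt to prove statically:

 # A monotonic counter with an explicit invariant: value never decreases.
 class MonotonicCounter:
     def __init__(self):
         self.value = 0
     def add(self, amount):
         # Precondition (DesignByContract style): the increment must be non-negative.
         assert amount >= 0, "precondition violated: amount must be >= 0"
         old = self.value
         self.value += amount
         # Postcondition: the temporal invariant is preserved.
         assert self.value >= old, "invariant violated: value decreased"
         return self.value

 c = MonotonicCounter()
 c.add(3)
 c.add(0)
 # c.add(-1) would fail the precondition; a static tool would reject it before it ever runs.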


But that doesn't distinguish between informal abstractions and formal ones.

{Informal abstractions are also good... when applied to informal purposes. What else do you think 'dog', 'cow', 'duck', and even 'animal' are? They are necessary for KnowLedge, which is (generally) lossy semantic compression of information - compression necessary to deal with the limited storage and processing capabilities of ALL computation systems (including our brains). Plus, they are useful for communication via a shared vocabulary with like-minded individuals, albeit at risk of confusion (since informal abstractions possess, by nature, no formal representation for purposes of communication, and are often defined ostensively, which creates much opportunity for vagueness and ambiguity).}

{Any formal abstraction will be a formal predicate, which represents a proposition with variable-slots (which may then be instantiated by filling those slots, or otherwise described with universal or existential or other quantifiers). It doesn't always correspond to a 'type', of course; in computation-theory any formal pattern is also a formal abstraction (and vice versa, 'pattern' and 'abstraction' both being words with a rather broad scope and no formal distinction). The question in practical systems becomes whether the language supports a given abstraction or pattern or class thereof, or whether the programmer will need to develop his own buggy, slow, 80% solution to support it (see AreDesignPatternsMissingLanguageFeatures). -- db}
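A tiny sketch (Python, with a hypothetical predicate and names) of a formal predicate as a proposition with variable-slots, instantiated by filling those slots:

 from collections import namedtuple

 # A predicate is a named proposition-template with variable-slots.
 Predicate = namedtuple("Predicate", ["name", "slots"])
 employs = Predicate("Employs", ("person", "company"))

 def instantiate(predicate, **bindings):
     """Fill every variable-slot to obtain a ground proposition (no free variables left)."""
     assert set(bindings) == set(predicate.slots), "every slot must be filled"
     return (predicate.name, tuple(bindings[s] for s in predicate.slots))

 print(instantiate(employs, person="alice", company="acme"))  # ('Employs', ('alice', 'acme'))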


PageAnchor: "can-opener"

Moved story to AssumeCanOpener.


See also: TheoreticalRigorCantReplaceEmpiricalRigor, AgainstMethod, CustomBusinessApplicationDefinition (biz human nature)


AprilZeroEight

CategoryFormalMethods

