Please read the original here:
The original source is here:
Discussion follows:
We should clarify what is meant by "practitioner". It could be interpreted as a "non-academic", or somebody who uses existing tools to build domain-specific tools/apps. Contrast this with a systems-software developer who builds compilers, chip microcode, database engines, etc.
List of Possible Reasons (without a value judgment):
- Formal methods often require a lot of time-consuming detail for which the payoff is small or perceived as small. It may be perceived as GoldPlating or SafetyGoldPlating. For example, an estimated 60% increase in labor may not justify a 20% reduction in bugs according to managers.
- Extreme programming methods often require a lot of time-consuming detail such as testing, integration, cards, and finding a pair programmer. So? Programming takes time. Specifications and proofs take time too. Errors that must be fixed after the software is written also cost a lot of time.
- You are misunderstanding the man. He is not praising or asking you to find ways to justify why practitioners should continue to idiotically rule the industry. He is making a statement about the poor state of the industry, similar to how FabianPascal does.
- Investment practices often focus on an approximately 18-month turn-around such that longer-term benefits are discounted (FutureDiscounting). Some argue that IT should be exempt; others say it's just as important for IT.
- It may be difficult for the practitioner to distinguish between charlatans or MentalMasturbation and legitimate ideas. This may be either because the evidence is flawed or because the practitioner does not understand the evidence.
- You are doing exactly what he didn't want you to do: misinterpret his article to mean we should now try and justify why we should avoid science, form, and math.
- This is merely to list possible reasons, valid or not.
- There is no shortage of claimants of GoldenHammers, and the practitioner would rather wait for somebody else to test new ideas rather than take the risk themselves.
- You are doing exactly what he didn't want you to do: misinterpret his article.
- Such techniques or tools may dismiss or down-play psychological factors because they are difficult to quantify and analyze (SovietShoeFactoryPrinciple). Elegant math or elegant underlying ideas may not necessarily match well with the psychology of the concept user(s). Formal-method proponents often say the "user" should change their thinking to fit the (allegedly) elegant concept, while practitioners feel it should be the other way around (MindOverhaulEconomics).
- Misinterpreting his article.
- Sub-topic is no longer about article. A split was made in the topic.
- The formalist and the practitioner may be optimizing for different goals.
- A good formalist is a practitioner. Theory and form are practiced. That's part of the clash, and part of the point being made. Form and theory are supposed to be practiced - they are not two separate political parties. For example, consider a formalist who has real-world products that he produces by practicing his form and specifications. This idea that the formalists are in a war against the other people (who are they? informalists?) is completely ludicrous - and is exactly the problem of society and programmers today.
(Please place objections below, at least long ones. A brief pro/con is acceptable.)
He is dissing practitioners, just as FabianPascal disses practitioners. In other words - the industry marketing hype will always win over science or math. I think he is making a pessimistic (maybe even humorous) comment about the industry (the practitioners).
In many cases, hackers are professionals (though the word professional could be considered buzzy). Good professionals and good hackers are formal (and value security, integrity, etc). Consider the OpenBsd project, where professional BSD hackers value security, correctness, and code audits. Being a bad hacker or an incompetent professional isn't so unpopular, unfortunately.
I hate to break it to people - but as usual, most of you are misinterpreting what an intelligent man was getting across. He is not saying that it is a good thing or something we should praise or take part in (or try and find more points to justify this). Rather he is saying that "practitioner" is just another nice word for idiot. He's saying it in a nice way, almost a hidden message for you to find.
- Indeed; Lars has a page somewhere on his z505 site which actually (amazingly!) reflects my feelings on formal methods perfectly (although I would most definitely have written it differently). If memory serves me correctly, I think it involved the author of Bittorrent saying that FM is pain, doesn't work, BDUF, blah blah, but yay, go unit testing! In my work, at least, I often have occasion to use both. Very often, I'll transcode something into fake Haskell, effect my changes by rewriting the resulting equations, attempt to derive invariants and prove some things inductively, and those invariants and proofs serve as the basis for a set of unit tests for the new code being implemented. Then, I'll transcode back to Python, C(++), or whatever. I've found it works extremely well, and doesn't take that long to do. OK, it may not be completely formal, but it's formal enough to get the job done, and I've eliminated a large number of bugs from programs just through manual type checking. I cannot speak highly enough of the system. I blogged briefly about the process on my site: http://tinyurl.com/4htzws --SamuelFalvo?
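For what it's worth, here is a minimal, invented illustration of the round trip described above: a routine modeled in "fake Haskell", an invariant derived by hand, and the invariant carried back into the shipped Python as an assertion and a unit test. All names and the example function are hypothetical, not from the original post.

 # A minimal, hypothetical round trip of the workflow described above.
 #
 # "Fake Haskell" model, used on paper to derive an invariant:
 #   clamp :: Int -> Int -> Int -> Int
 #   clamp lo hi x = max lo (min hi x)
 # Invariant (provable by simple case analysis): lo <= clamp lo hi x <= hi, given lo <= hi.

 import unittest

 def clamp(lo, hi, x):
     """Pin x into the closed interval [lo, hi]; requires lo <= hi."""
     assert lo <= hi, "precondition: lo <= hi"
     result = max(lo, min(hi, x))
     assert lo <= result <= hi   # the derived invariant doubles as a runtime check
     return result

 class ClampInvariantTest(unittest.TestCase):
     # The proved invariant becomes the unit test for the transcoded Python.
     def test_result_stays_in_range(self):
         for lo, hi, x in [(0, 10, -5), (0, 10, 5), (0, 10, 50), (-3, -1, 0)]:
             self.assertTrue(lo <= clamp(lo, hi, x) <= hi)

 if __name__ == "__main__":
     unittest.main()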
Unfortunately, some of you are not getting it - which is the very reason that writing without direct, straightforward, clear wording (and even insults) isn't always so effective. It leaves a lot to be misunderstood when we aren't direct. Dijkstra is kind of indirect in his writing, in a very humorous way, and unfortunately a lot of folks don't get his humor, either. I have noticed this quite often, actually - that people who write indirectly rather than spelling it right out will get misinterpreted. In many of his writings he also bashes certain programming languages, such as Lisp - and I've seen Lisp people completely misinterpret what he's said and post blogs about how Dijkstra thinks Lisp is wonderful, yada yada. It is sad - and we must read his writings very, very carefully.
Writers cannot win. They've lost. When writers spell everything out directly, the readers feel insulted. When writers use hints and write indirectly, people misinterpret and go on to make up falsities. Writers nearly give up because of this.
What is the reason that a hacker is separately distinguished from a professional? Competent hackers are considered professionals.
Why don't you look up the 1992 pre-Internet-boom meaning of the word and find out?
Here is a funny quote anyway:
- "I met a hacker last year who claimed that he was following a so-called "spiral methodology". I quizzed him on what he’d read of the spiral approach. Not a syllable! He’d picked up the term from another hacker, who’d picked up the term at a conference, perhaps from another hacker — who knows!" --Dijkstra
Why is that funny? The hacker culture tends to deliberately be an 'oral' culture, doesn't it, valuing exactly that sort of interpersonal exchange of information rather than relying on formal third parties? (who may be removed from the realities of the job and deal in abstractions that don't actually work) Seems to me the question Dijkstra *should* have asked in this context, rather than simply mocking, is: What did the hacker *understand* the term 'spiral methodology' to mean, and was that in fact the original definition of the term? Was information lost in transit, or not? And if the 'hacker culture' sense of the term meant something different, then was their term better or worse, more accurate or less accurate, more useful or less useful as a methodology, than the formal definition?
It is funny because some hackers pass around information as pseudo-science. For example, if some elite hacker like PaulGraham says that Lisp is a GoldenHammer and writes a book about how programming is actually an act of painting and art (rather than science), a lot of other hackers will immediately follow his lead - without doing further research. The spiral methodology was valid and formal - but the hacker didn't actually do any research on what it was; he just assumed the other hacker who passed on the information must have been right. Hackers sometimes pass around magic and treat it as some sort of wizardry instead of rigorous science and research. Sometimes wizardry is mixed and matched with a bit of science and research, especially in academic communities like MIT, AI labs, etc.
Now that I know the reputations of many hackers, I'm going to refrain from using the term hacker to describe a competent programmer - which I used to do. People such as ESR, Stallman, Paul Graham, and Dijkstra have simply proven to me that hacking is really not a positive phrase at all - it is a derogatory phrase for people who like to diddle around and muck with things half-scientifically. I recall a back-yard mechanic being labeled a hacker when he couldn't do the head gasket properly - someone who strips and rounds the heads off bolts. The word hacker has lost its reputation ever since it was invented - even its true definition (not cracker) is not a good one. I used to think hacker was cool - now I will simply disregard it and refrain from using it when referencing competent programmers. Oh, how snobbish.
- The really funny thing about that quote is it's actually from a different Dijkstra altogether, Sidney.
The paper seems to be ignoring the fact that TheMapIsNotTheTerritory. I've built what I thought were wonderful abstractions that simplified programs only to encounter exceptions either due to not understanding the domain or due to later changes that went off in unexpected directions. EightyTwentyRule is the Great Godzilla of practical abstractions. Compact models are not always flexible models. The author seems to focus more on making a compact model so that formal analysis can be done than making it flexible. Perhaps we can somehow have our cake and eat it too, but I am skeptical. Crystal balls are hard to come by. Half-assed abstractions are more flexible than full-blown in my experience (HelpersInsteadOfWrappers). Excess abstraction ties you to too many large-scale wide-reaching assumptions. Partial or webs of smaller abstractions are more likely to be useful after unexpected changes come along. You can often keep many of the small parts when you have to rework the contraption. Tighter abstractions have bigger DiscontinuitySpikes. --top
{Do note that abstraction is just one formal method among many.}
Were you creating abstractions in the software tools domain -- like abstracting screen widgets or forms or reports or some calculation process such as a product costing system -- or were they abstractions of elements of the business domain? The former is readily abstractable, and "full abstractions" (if I understand what you mean by that) tend to be more flexible and powerful than "half-assed abstractions" (again, assuming I grok the meaning of that phrase). The latter (i.e., business rules) tend to be highly amorphous, changeable, rife with inelegant exceptions, and hence not readily abstractable using a simple model. Economists build careers on trying to formally model such systems, without much success. -- DaveVoorhis
I don't think they are always that separable. There tends to be a certain "flavor" to a given business such that you design or encounter a certain kind of report/form or report/form interface, for example, and it becomes a kind of internal standard. This is both for consistency of UI and to better fit the biz. Some of this custom design is merely out of historical habit and some is because of the nature and style of the domain itself. For example, a company that values paper because of frequent field visits to environments hostile to mobile equipment will tend to have paper-centric tools as opposed to screen-centric tools like HTML output. Younger employees are also less paper-centric than older ones such that the generational composition of the company can also dictate or guide design. And those who use lots of Macs may prefer UI conventions that tend to mirror Mac conventions instead of Windows conventions. Some companies are used to lots of abbreviations ("codes") that can be direct keys, while others prefer longer descriptions tied to internal ID's that are usually not visible to the user. The second requires fancier "lookup" approaches. --top
Yes, I recognise the problem. I still find small-scale but complete abstractions effective within the context of broader, un-abstractable systems. This is especially (but not exclusively) true for software tools -- generic forms and screen widgets, for example, or well-defined systems of calculations -- even if domain-flavoured. It's a matter of looking for abstractions, and developing a habit of abstraction-oriented thinking. E.g., "what is the most generic mechanism that will easily support all the requirements of this component", or "is this collection of rules a specific example of some provably-general case", or "is there an algebraic way of manipulating instances of this concept?" Then the abstractions, and even the process of abstracting, becomes valuable:
- It promotes re-use, because abstractions may be used in multiple circumstances.
- It promotes extension; in particular, software algebras -- a collection of operators without side-effects that accept immutable values as parameters and return immutable values -- are easily extended. The RelationalAlgebra or ExtendedSetTheory, when implemented, are good examples. (A minimal sketch follows below this list.)
- It typically makes the code clearer. The exception cases, when they occur, stand out clearly in an otherwise elegant (and, hence, often simple) structure. -- DV
- You seem to be talking about "root tools" or the atoms of abstraction, not domain abstractions. Related: GenericBusinessFrameworkUnobtainable. Further, it's best in my opinion to enable power users instead of having highly-educated developers micro-manage most business logic. Something too esoteric from a power-user perspective may not fit that purpose well. See CompilingVersusMetaDataAid. -t
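Regarding the "software algebras" point in the list above, here is a tiny, purely hypothetical sketch of the idea: operators that are free of side-effects, take immutable values, and return immutable values, so they compose and extend without disturbing existing expressions. The relation representation and operator names are invented for illustration only.

 # Hypothetical sketch of a 'software algebra': side-effect-free operators over
 # immutable values (here, a relation is a heading plus a frozenset of tuples).

 from dataclasses import dataclass

 @dataclass(frozen=True)
 class Relation:
     heading: tuple     # attribute names, e.g. ("name", "dept")
     body: frozenset    # frozenset of tuples, each aligned with heading

 def restrict(r, predicate):
     """Return a new Relation containing only tuples satisfying the predicate."""
     keep = frozenset(t for t in r.body if predicate(dict(zip(r.heading, t))))
     return Relation(r.heading, keep)

 def project(r, attrs):
     """Return a new Relation with only the named attributes."""
     idx = [r.heading.index(a) for a in attrs]
     return Relation(tuple(attrs), frozenset(tuple(t[i] for i in idx) for t in r.body))

 def union(r, s):
     """Union of two relations with the same heading."""
     assert r.heading == s.heading
     return Relation(r.heading, r.body | s.body)

 # Because nothing mutates, operators compose freely and new ones (join, rename,
 # ...) can be added later without breaking any existing expression:
 emp = Relation(("name", "dept"), frozenset({("alice", "hr"), ("bob", "it")}))
 it_names = project(restrict(emp, lambda row: row["dept"] == "it"), ("name",))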
{One should only abstract over invariants. Abstracting before you have either verified an invariant or intentionally created one via case-analysis is often a PrematureAbstraction. Invariants are very easy to find in domains where TheMapIsTheTerritory or where TheMapBecomesTheTerritory... which happens to include the vast majority of internal-software domains, and even systems domains (including device-drivers - invariants across known classes of hardware, such as all modern monitors project a 2D array of pixels, and all modern printers are similarly limited to 2D documents). These are not domains with which TopMind has much experience; IIRC, he typically works on the narrow but very important border directly between human policy and machines, doing reports and HCI and working with business policy. For him, almost every abstraction he attempts will later be demolished by some ugly exception - a variant that ultimately results in rework to undo the exception. And such rework is painful, and a little humbling; nobody with an arrogant or lazy personality would abide it very often (see LazinessImpatienceHubris).}
{This results in Top's great wariness towards abstractions - one he would (likely) not possess had he a wide enough range of experiences across different niches to recognize where abstractions will work and where they will not.}
- I find such talk offensive. It is ArgumentFromAuthority. If you have/know grand business abstractions, please show them instead of claiming such in a patronizing way. And I am not against abstractions. I use plenty of abstractions. However, they tend to be lite and local (see HelpersInsteadOfWrappers). Formal, grand abstractions turn out to be the problem. The emphasis on invariants is a lost cause because the future is not predictable enough. Anything is fair game for chance. However, we can use experience-based estimates to tilt the design toward the most-likely estimated changes. --top
- Tables are not "lite and local" - they are wrappers around several lower level Cee calls and btree algorithms and such. Debating over what is a "helper" and what is a "wrapper" leads to LaynesLaw.
- They can be lite and local if the tool supports it. But I wasn't necessarily talking about tables - more about abstractions that one as a developer can create and change for a given app/shop. A "practitioner", in other words. Although I won't claim any hard border, in general helpers don't necessarily strive to hide internals. They simplify common tasks or reduce the repetition involved, but are not really about "hiding". They are mostly about OnceAndOnlyOnce factoring, not black-boxing.
- No one claimed that you were talking about tables. The very point, in fact, is that you were not talking about tables - which is why it was brought up. Stand back for a minute and look down at yourself from a third-person view. I think you are kind of missing the point. You are bending the arguments against yourself. When someone brings up the fact that tables are large abstractions, you simply claim that oh, but I wasn't talking about tables. When someone brings up the fact that scripting languages are large abstractions, you might even claim that oh, but I wasn't talking about languages. Uhm, look at your own dogfood. What do you see?
- If there is miscommunication about context, why be eager to blame somebody? Merely point out the miscommunication without attaching a value judgment (such as "author is evil"). I agree that different domains may have different abstraction patterns. We need to figure those out and document them to avoid such unstated context problems.
- {Tables are already a grand abstraction, but if you're looking for me to provide 'business' abstractions then you obviously didn't actually read what I wrote above. You use abstractions, but you're quite wary towards them, and you even indicate so above - no 'authority' is involved. And as far as you finding it offensive: that's perfectly okay with me - I fully believe in offensive truths and CriticizeBluntly, so the only way I'd recant is if I came to understand I was in error. As far as your claims: "the future is not predictable enough" and "anything is fair-game for chance" - these are not claims you can justify in the general case. You just happen to be working in a field rather capricious to human whim, politics, and policy.}
- CriticizeBluntly does not work. It is merely testosterone-induced aggressiveness, not the proper way to go about influencing others. (And I admit I slip into reptile mode also at times when frustrated.) But perhaps there is a misunderstanding here. I interpreted the above as saying that I would be able to find significantly better abstractions in my *own* niche if I had experience with more niches. But I realize it may also be interpreted as saying that I would not be down on "invariant" abstractions if I had more niches under my belt. The second is less offensive, and something I might even agree with. It could be worded a little better to avoid such confusion. As worded, it seems to imply that I hate abstraction in general. --top
- Further, if formal methods are not applicable to biz apps, that is important info. --top
- {Your second interpretation seems the one most accurate to my intent. Feel free to refactor the above to subtract all the 'testosterone-induced aggressiveness' you want. As far as biz-apps go, I believe that formalizing 'business rules' and 'business policies' themselves will (eventually) serve well. All such rules (along with all their messy exceptions and heuristics) would be 'centralized' to a database, and code would either be generated from them (DeclarativeMetaprogramming) and/or they would be used directly (LogicProgramming). Compare to the current situation where you 'express' rules and policies in a manner that is distributed throughout the application-space (one big messy if-then after another), and it is consequently difficult to locate, examine, tweak, maintain, and test these rules and policies. OperatingSystem rules and policies for security, resource-management, etc. have very similar characteristics, albeit with a greater tendency to calcify due to both their larger codebases and the fact that OperatingSystems are platforms for other people's software and breaking compatibility is a big no-no. Ideally, we could just translate the rules and policies into a decision-logic, drop them into some sort of database, and have them applied automatically to service-decisions and resource management and such (a rough sketch of this idea appears below this exchange). It would even be easier to supervise and work out the bugs: one of the big advantages is that you could trace every decision down to the exact rules and data that came together in making it. However, while this approach has much promise to serve well in your field and my own, it has not yet reached a position (in terms of optimization, prototype applications to very large systems, etc.) that I can recommend it without considerable supplementary programming. In any case, the basic point is that while it may be difficult for you to find many invariants, the basic idea of 'business rules' and 'business policies' really does seem to fit the bill for 'invariant' - we just haven't figured out how to mechanically take advantage of that probability.}
- The "testosterone" comment was directed at those having the CriticizeBluntly viewpoint, not just a particular sentence. Anyhow, regarding organizing biz rules, I doubt there is a GoldenHammer approach. LifeIsaBigMessyGraph and my past attempts to force order on things that are only half-ordered (EightyTwentyRule) have often proven problematic. Biz rule metabases were also tried in the 80's with lackluster results. There may be room for incremental improvements, but one has to be careful not to make it too foreign to developers. Related: SeparationAndGroupingAreArchaicConcepts, BusinessRulesMetabase. --top
{It is, I suppose, similar to how most experienced ObjectOriented programmers eventually start shunning business-domain objects despite their horrid prevalence in the educational materials. In my observations, experts use computations, calculations, values, functors and serializers and observers and subscribers and other patterns and generic collections; these become the 'objects'. I am not a fan of ObjectOriented, mostly because it is neither named nor taught well for effective use - we should rename it 'ComputationOriented?' and burn every educational book that has 'duck' or 'cow' or 'animal' listed as an object. I'll stick with identifiers and propositional facts - computation-oriented things.}
- There was a time when popular ComputerScience textbooks focused on such foundational elements. Alas, the vocational push of the last twenty years or so (maybe longer) in higher education has largely shoved aside such "theoretical" works in favour of "how to" texts, and what is largely computer language instruction in the classroom. Not that I'm deprecating practical skills -- they are necessary, and should be taught on equal footing with sound theory. I am deprecating the fact that students graduate seeing the world almost entirely through ObjectOriented-tinted glasses. (Plus a smattering of SQL and the RelationalModel, taught laboriously and tediously to ensure the students will always leave databases in the background.) This may make them usable coders, but it seriously limits their ability to innovate in the field. (This, of course, has a direct economic impact on any country with such an "education" system -- lest anyone think there isn't a pragmatic argument in favour of genuine education instead of vocational training.) And, yes, virtually every OO book on the market, or that has ever been on the market, is dire. I've not seen one yet that had examples that are anything like expert OO coders create, except (sometimes) for simulations. -- DV
{That said, I'm surprised Top still abides by EntityRelationshipModeling. Even the 'Entity' is a significant 'abstraction' that often needs changing. Narrow-table 6NF solutions cast aside this abstraction in favor of finer-grained flexibility, resulting in even less abstraction than the wide-table ER models. I can only assume this is one of the few places Top has managed to obtain sufficient victory over the business and policy domains to feel comfortable making the 'entity' abstraction. -- DavidBarbour?}
He abides by it, no doubt, because there are few or no practical alternatives. I wouldn't trust the company payroll records to any current implementation of DataLog. What does that leave? Oracle, MS SQL Server, PostgreSQL, etc., for which implementing the ultimate normal forms is awkward at best, unusable on average, and EntityRelationshipModeling is the standard design process. Fortunately (?), the majority of application-development and record-keeping infrastructure is only deprecated by EntityRelationshipModeling (and all that it implies) a little bit; for almost all organisations, the company information systems are effectively (though arguably not ideally) modeled using EntityRelationshipModeling and managed with a RelationalDatabase of conventional (i.e., table-oriented or relvar-oriented) design. That doesn't mean such systems shouldn't, and won't, eventually be replaced with something that facilitates fact-recording and fact-manipulation -- in a manner that equally supports conventional information systems and novel or typically non-relational applications -- but it does mean there's considerable inertia against it. -- DV
{Woe unto us and our limited tools that both create 'convention' and bind us to it. It'd be nice if someone would fix those... someone with time to fix them and the money to fight inertia, that is. I certainly agree that no existing implementation of a DataLog RDBMS would be a good choice. And, yes, if we did suddenly change to thin tables, it wouldn't deprecate existing applications much - those could easily use views and an SQL translation layer. Whether this is 'fortunate' depends on whether you want people to start thinking differently vs. the benefits of slipping in behind the scenes without need for excessive transition costs or fanfare (celebrity death match: idealism vs. economics). Major advertising points would be on the advanced use of meta-data, improved security, and improved applicability towards advanced data-mining. -- db}
Patience... I'm working on it as fast as I can. As for implementing thin tables today, the applications wouldn't notice *much* difference -- except throughput could decline due to a greater volume of joins, and many apps may have to be re-written to get around various DBMS product limitations in updating through views. Yes, layers can be introduced that can mediate this somewhat, but a cost-benefit analysis on performing such an alteration to the company payroll system will rarely work out in favour of doing it. -- DV
RE: {Woe unto us and our limited tools that both create 'convention' and bind us to it. [...]}
Conventions are not necessarily bad. There are often multiple ways to do things and one choice over the other may have little if any difference in benefits. Thus, it becomes an internal standard and future things that follow that internal standard will be easier to pick up. In QwertySyndrome the benefits of familiarity are compared to the benefits of overhauling. Some argue that newer keyboards don't provide a significant enough advantage (for non-keying-athletes) to bother dumping the standard. --top
{Conventions are good insofar as they become de-facto 'standards', and standards are often good because they reduce systems-interfacing costs. OTOH, it is useful to remember that standards by convention are often extremely premature, and they produce a nice big wall to leap, or otherwise create baggage in the form of backwards-compatibility, when we finally possess the experience to know how to do 'it' (whatever 'it' is) better. It may be, however, that they are simply not something we can avoid; fundamentally, when pioneering a new field, we need to start doing something before we can learn how it can be done any better.}
{Consider QwertySyndrome - if someone came up with an objectively much better keyboard (don't mind the 'how' for the moment), would it be worth forcing children to use the Qwerty keyboard just to support 'familiarity' of the tiny percent of all users? And by 'tiny percent' I'm noting that children and children's children and so on would all be forced to use Qwerty just because, each time, that just happened to be what the previous generation was 'familiar' with. Or would it be better to create a transition plan, whereby the old fogies can continue doing as they have been, but convention isn't forced to become 'tradition' by the tyranny of a minority with power to make such decisions?}
{Logically and ethically speaking, MindOverhaulEconomics doesn't seem to be a valid reason to hand off our inefficiencies to our children then educate them to do the same. I believe we have some responsibility to fight convention and shift inertia, despite the 'immediate' inconvenience when, in those rare cases, we can see or find a better way to do things. To do anything else seems... childish and irresponsible, like a person who demands freedom but isn't willing to fight for it.}
- Technology changes so fast that I'm not sure worrying about 2 generations down the road is really applicable. By then they may be using cybernetic implants or AI will do all the work.
- {If conventionalists have their way, we'll be using Qwerty in two generations even if cybernetics came along - the keyboard would be more 'familiar'. Technology only changes because some people are willing to rail against convention when they see a better way, despite overhaul. Much of it is, of course, targeted at the younger market and the enthusiasts - those whose minds are being built, rather than those whose minds would need to be overhauled.}
- "Conventionalist"? That's not a reasonable assessment in my opinion. A given manager using standard FutureDiscounting sees no net advantage to moving away from QWERTY. A rational financial analysis will not likely support it for any given company. And cybernetic implants are a completely different technology.
- {Are you sure you caught the hypothetical? Even I certainly see no advantage of moving from QWERTY to any existing alternatives, but there at the top of this discussion was the phrase: "if someone came up with an objectively much better keyboard (don't mind the 'how' for the moment), would it be worth [...]". To which you responded with a negative. I am reminded of today's WTF: http://thedailywtf.com/Articles/How-Can-You-Expect-This!.aspx . . .}
- [I have had ideas for a better keyboard, using the thumb. Mostly, one thumb is wasted on desktop computers. Hard-to-access buttons could be tapped by the left thumb underneath the space bar. I think I discussed it in a page (ThumbKeys?? ThumbKey?? ThumbButtons?) a few years ago when I was very new and inexperienced with this wiki. The idea is that the left-hand thumb (in my case) is rarely ever used, so making use of it for buttons that are in hard-to-access places would reduce the amount of pinky and fourth-finger strain. The problem with thumb buttons is that they may get in the way of the wrists - but laptops don't seem to bother us with the mouse buttons below the space bar, so I think it could be done. Candidates might be the shift key, because my pinky gets sore. Humans, after several years, may develop and adapt more precise pinky fingers though - and we may become better typists over the centuries (as opposed to Simpsons characters, who had one finger chopped off over time and were reduced to four fingers).]
- {Heh. Let me know when fast touch-typing becomes a matter of genetic fitness ;-}
{It's worth noting that LeanSixSigma, CMMI, and various business and manufacturing philosophies that have been successful seem to agree.}
I can't be sure, but most contributors to this page seem to be using the term "Formal Method" in a sense entirely different from that of EWD. I wonder how many of you have read much of EWD's writings to see what he means by FormalMethods, before then claiming (perhaps implicitly) to be writing about them. Perhaps I'm wrong, perhaps you have read his writings, perhaps you are using the term in the same sense, perhaps you have taken on board what he was advocating, but it really doesn't sound like it.
I would note that the topic title doesn't appear to limit the discussion to merely the article's definition. However, if there are different "kinds" and it makes a difference, perhaps we should give working names to those kinds.
{Speaking for myself, I know what FormalMethods are and I know I've entirely deviated from the article and essentially from the topic of this page. The discussion belongs on AbstractionsAreGood?, or something.} -- DV
{I haven't read many of EWD's papers, but I (via other vectors) understand FormalMethods broadly to describe any approach to design, coding, and verification thereof through formal (and preferably automated) mechanisms, including especially logic and mathematics. The goal is generally to both describe and prove desirable invariants (since that's really the only thing you can prove), though such invariants may be temporal in nature (e.g. "this function/variable never decreases"). Automated use of type-systems and formal abstractions are examples of FormalMethods at the software scale, and DesignByContract or HoareTriples would also be fine examples of approaches that lend well to FormalMethods. Where error and fault are to be tolerated or graceful degradation is to occur, stochastic proofs along with the operating assumptions will be necessary. In practice, programs written with FormalMethods oft possess 'negative' specifications ('X must not occur ever' vs. 'X must occur when Y'), which makes difficult any sort of myopic DoTheSimplestThingThatCouldPossiblyWork (since you need to do the simplest thing that can be demonstrated to not violate existing constraints), which makes it somewhat anti-ExtremeProgramming and earns FormalMethods a bit of a (not entirely undeserved) reputation as being rather cumbersome. If my understanding seems far off of what you believe to be EWD's understanding, feel free to explain to me the apparent differences, or point me to a paper where EWD makes a definition of 'FormalMethods' that he then sticks to. -- db}
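As a concrete (and entirely invented) illustration of the kind of temporal invariant mentioned above ("this variable never decreases"), here is a DesignByContract-flavored sketch. A real FormalMethods treatment would prove the property statically rather than merely assert it at runtime; the class and method names are hypothetical.

 # Hypothetical illustration of a temporal invariant of the kind mentioned above,
 # checked DesignByContract-style at runtime.

 class MonotonicCounter:
     """A counter whose value, by contract, never decreases."""

     def __init__(self):
         self._value = 0

     def advance(self, delta):
         assert delta >= 0, "precondition: delta must be non-negative"
         old = self._value
         self._value += delta
         # Postcondition: the 'never decreases' invariant holds after every operation.
         assert self._value >= old
         return self._value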
But that doesn't distinguish between informal abstractions and formal ones.
{Informal abstractions are also good... when applied to informal purpose. What else do you think 'dog', 'cow', 'duck', and even 'animal' are? They are necessary for KnowLedge, which is (generally) lossy semantic compression of information which is necessary to deal with limited storage and processing capabilities in ALL computation systems (including our brains). Plus they are useful for communications via shared vocabulary with like-minded individuals, albeit at risk of confusion (since informal abstractions possess, by nature, no formal representation for purpose of communication, and often are defined ostensively which creates much opportunity for vagueness and ambiguity).}
{Any formal abstraction will be a formal predicate, which represents a proposition with variable-slots (which may then be instantiated by filling those slots, or otherwise described with universal or existential or other quantifiers). It doesn't always correspond to a 'type', of course; in computation-theory any formal pattern is also a formal abstraction (and vice versa, 'pattern' and 'abstraction' both being words with a rather broad scope and no formal distinction). The question in practical systems becomes whether the language supports a given abstraction or pattern or class thereof, or whether the programmer will need to develop his own buggy, slow, 80% solution to support it (see AreDesignPatternsMissingLanguageFeatures). -- db}
PageAnchor: "can-opener"
Moved story to AssumeCanOpener.
See also: TheoreticalRigorCantReplaceEmpiricalRigor, AgainstMethod, CustomBusinessApplicationDefinition (biz human nature)
AprilZeroEight
CategoryFormalMethods