Theoretical Rigor Can't Replace Empirical Rigor

Theoretical rigor cannot replace empirical rigor. In other words, "the rocket must fly", not merely possess an elegant design. Both are useful (if used right), but satisfying customer requirements, needs, or desires trumps theoretical purity or thoroughness.

Another way to say it is, "elegant models don't guarantee elegant results".

The reverse is also true: "the rocket must fly" for the right reasons. If you build a massive slingshot with which to launch the 'rocket', it really isn't a rocket.

If it works, the delivery mechanism may not matter to the customer.

Perhaps not. But if they asked only for a 'delivery mechanism', then even flight isn't required. Train, boat, teleporter, magic wand, etc. would do the job - supposing they work.

However, it may fail the customer's requirement that it not damage the payload. Sometimes the customer indeed forgets to include important requirements, which makes for a fun day in court. The "fly" description above is merely a shortcut for "efficient", "safe", "accurate", "reliable", "cost effective", and so on.

(I know I've given the rocket illustration before somewhere. If and when I find it, I'll try to refactor it all.)


NaturalSelection is probably the most empirically driven process there is. The "blind watch-maker" of evolution did not give a flying bleep about elegant design. Survival and reproductive success were just about the only factors at play. Any "elegance of design" is purely a side-effect of those two factors. However, this does not imply I am necessarily for evolution-driven design. It only illustrates that working and powerful designs can emerge from processes that involve no analysis of elegance. --top

The relational model proposed by Codd was a theory on paper. Table oriented programming is heavily based on the theory of tabular organization of data. Theory and practice are not separate, as you seem to think. The relational model, a theory, was thought up both from practical experience with other models failing in business and from theoretical rigor. They are not separate. How can one even start to theorize about anything without practicing thinking? In other words, even theorizing is practicing the mind (the more rigorously, the better). How could one possibly propose a theory that has no merit in practice? If one proposes a theory that has no merit in practice, it isn't useful - therefore they are not separate entities. The very idea that one can "replace" the other is ludicrous, since theory and practice are tied together, not interchangeable. All good theories have use in practice. Hypocritically, table oriented programming is all theory - not one complete table oriented programming development environment exists, nor does a TQL environment or language really exist. We'll leave Top with a quote:

You are putting words in my mouth. I am not saying they are mutually exclusive. They may indeed be related. But being related is not the same as being a substitute. A rocket designed with bad theory is less likely to be useful than a rocket with good theory. However, that does not mean that a rocket without solid theory is not useful and it does not mean that theory ALONE is evidence that rocket A is better than rocket B. Also note that it is possible to design a rocket using genetic algorithms etc. where the rocket design "theory" is not even known. My main point is that empirical evidence should still be the final determiner of "good". If theory helps that goal, then good for it.

In relational's case, IBM created pilot projects in the early and mid 70's, and the people who used them liked them. Larry Ellison (Oracle) got word of their enjoyment, and the rest is history. If they hadn't liked it, we perhaps wouldn't be talking about relational now. A lot of pilot projects based on interesting ideas fail to produce enough enthusiasm to change history. Thus, the empirical metric of "customer satisfaction" was key to relational being established in practice. (It is not a very dissectible empirical metric, but it is one nevertheless.)

--top

A substitute means that you could replace one for the other. You make it sound like empirical rigor could be an excellent single measure of a product. But of course they are complements, not substitutes or replacements. The customer satisfaction of relational was a side effect of a good theory - although today relational is still debated by the Object Database and Tree Database people (who lack theoretical rigor!).

Example: Since tree storage does not have ample theory behind it, and since XML seems to be doing excellently today by empirical measures... would you say that XML is excellent because it has obvious empirical evidence that it is better for customers? Relational is based more on theory than on empirical results and market satisfaction. EmpiricalRigorCantReplaceTheoreticalRigor. The market is indeed very happy with tree-based XML, and if the market and customers are satisfied - hey, who is to stop XML from taking over? It's what they want, and it is what the empirical evidence suggests - so let's go for it? Another point is that table oriented programming has no empirical rigor - it is only a theory - so should we believe table oriented programming is useless until proven otherwise? Could table oriented programming even have been thought up without any theory? In other words, EmpiricalRigorCantReplaceTheoreticalRigor.

I won't make any claim that XML databases are "objectively worse". I would rather use relational DBs myself, but that may be merely a personal preference. If some people think better in XML, I cannot change that. Their code for handling XML navigation may be "messy" and "goofy" to me, but they may think it's wonderful. I learned the hard way to avoid PersonalChoiceElevatedToMoralImperative. PsychologyMatters. If Martians think better using mass GoTo's, that's the way it is. --top

With this sort of attitude, we would still be using GOTOs, and we wouldn't have pushed hard for WHILE/FOR/REPEAT loops, structures, and sane control of our programs. Would we be better off today if we had just let the GOTO psychology of all those GOTO programmers take the lead? There were even emotional articles written at the time about how GOTO was so great, and that anyone who doesn't like GOTO is a weakling (RealMen?). These psychological and emotional states of mind lack rigor, which is what the page title mentions twice! Taking on a whatever-works-best attitude doesn't imply rigor at all. This page should be called TheoreticalRigorCantReplaceEmpiricalThingsThatWorkOkay? or something less rigorous. You are not defending rigor, Top; you are defending psychology and personal preferences. You are taking on a very lax whatever-works attitude. That is not rigor.

I'm not sure what your goto comparison is meant to show. Nobody ever proved nested blocks were objectively better. People adopted them largely because they liked them more than goto's, not because math proved them better. And those who resisted may have learned how to use goto's well out of years of experience such that other goto fans could read and modify each other's code. We have no hard data either way. I will not judge them in an absolute way without hard facts. This is not "lax", this is rationality. I've seen others read and edit code well that I thought was atrocious (not necessarily from goto's). How the hell they did this, I have no fricken idea. It's amazing how differently people think. You need to make sure you are not guilty of PersonalChoiceElevatedToMoralImperative. I don't see any evidence for that in your writing so far. --top

Gotos instead of while/for/etc., where the latter will serve, are objectively worse under the microscope of PrincipleOfLeastPower. Unrestricted 'goto' also carries considerable security risk when allowed within mobile code, because one can 'goto' places one isn't authorized to enter; additionally, considering that programming errors can and will occur on just about any degree of freedom that is available (MurphysLaw), errors with 'goto' have strictly greater potential to be difficult to isolate. These are deductive, absolute, logical truths.
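
To make the PrincipleOfLeastPower point concrete, here is a minimal C sketch - not from the original discussion, and the function names are made up purely for illustration - of the same loop written with an unrestricted goto and with a structured while. (It only illustrates the control-flow restriction; the mobile-code security point above is a separate matter.)

  #include <stdio.h>

  /* Hypothetical countdown written two ways.  The goto version permits
     jumps to 'top' from anywhere later added to the function; the while
     version fixes the loop's single entry and exit point. */

  static void countdown_goto(int n) {
  top:
      if (n <= 0)
          return;
      printf("%d\n", n);
      n--;
      goto top;   /* unrestricted jump: nothing prevents a later edit
                     from jumping here from unrelated code in the function */
  }

  static void countdown_while(int n) {
      while (n > 0) {   /* structured loop: control enters at the top
                           and leaves at the bottom, nowhere else */
          printf("%d\n", n);
          n--;
      }
  }

  int main(void) {
      countdown_goto(3);
      countdown_while(3);
      return 0;
  }

The two functions print the same output; the difference lies only in how much freedom the control flow has, which is exactly the degree-of-freedom-for-errors argument above.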

These are also reasons that people "liked them more". But you do need to choose a microscope - an initial set of axioms or principles for what counts as 'better' or 'worse' - otherwise you could say that "higher potential for errors implies better", "slower implies better", and "greater security risk implies better". Every time TopMind starts waving his hands and flapping his jaw on this subject and using big bold words to claim how 'rational' he is being (as though claiming it makes it true), I consider him somewhat irrational for his failure to do the rational thing: first develop and describe a set of axioms and principles as to what can reasonably (i.e. for good reason) be considered 'better'. TopMind must have such principles: in the discussion on ProgrammingLanguageNeutralGui, TopMind stated that HTML+AJAX aren't "good enough", even though they are technically TuringComplete and capable of the necessary communications.

I am not sure what your point is. The implication is that I am a hypocrite. I may have "personal" axioms, but I don't claim they are objective. ProgrammingIsInTheMind, and every mind is different. I used to think that people processed programming thoughts similarly to me, but learned I was flat wrong. As for making claims about AJAX, it is more to exchange ideas about the topic than to provide absolute or formal comparisons. It only becomes an issue when somebody claims or implies that technique X is objectively better than technique Y. When I say "better" in the AJAX topic, it is not meant as a claim of objectivity. --top

If you're going to make claims that something isn't "good enough" without any more basis than you complain about in other people, then you are, by definition, a hypocrite. Your hypocrisy is actually very well known on this wiki; you'd be better off just admitting to some hypocrisy than attempting to deny it.

I do flat-out deny it. I do not demand that everyone provide objective proof just for using the phrase "better". Rather, I demand it of those people who claim their HobbyHorse is absolutely better. A typical dialog goes like this:

  A: X is better.
  T: May I request evidence?
  A: [evidence given]
  T: That seems to depend on subjective assumptions, such as [...]
  A: No it doesn't. It is objectively true because [...]
  T: That is an "elegant theory" argument, not an empirical argument. Elegant theory is not sufficient.
  [typical design-versus-empirical debate ensues...]

I would only be a hypocrite if I demanded objective evidence when people merely used the word "better".

You're a hypocrite because you believe you have the right to say something is better and "not (explicitly) claim it objective", and you don't afford others the same assumption.

But it seems you're attempting to defend your hypocrisy with another of your gross character flaws: your tendency to use hand-waving defenses when ultimately challenged on your apparently baseless opinions. If you are going to claim things are 'better' without proper defense, you can't demand more from others without being a hypocrite, AND your opinions have no place on this wiki - they should be deleted. And in these "typical dialogs", it seems you are often mistaken about what constitutes a "subjective assumption" (absence of universal agreement doesn't make an assumption 'subjective').

And 'ProgrammingIsInTheMind' isn't a valuable contribution here - it does nothing to aid in analysis of whether a particular solution is correct or incorrect, better or worse. Mathematics and models are also "in the mind" in the same sense as programming; it is well known that all minds aren't equally 'good' at mathematics, so to imply that all minds produce equally 'good' solutions in programming seems utterly fallacious.

With regards to the "I don't claim them objective" - principles and axioms don't need to be universally agreed upon to be 'objective'; they only need to be capable of analysis independent of your (or the observing individual's) personal state-of-mind - so if your claim has more weight than "AJAX+HTML aren't good enough because I don't believe they are good enough", your principles are probably objective whether you bother to claim them so or not. Even "better because it leads to longer compile-times which means more time for me to goof off" is objective. OTOH, I wouldn't be at all surprised if you were just waving your hands and flapping your jaw and making claims that things aren't "good enough" with zero weight behind those words (AdVerecundiam from top = zero weight) - it would neatly fit my mental model of you.

I will agree that if both parties agree on root idioms, then results derived from those roots are "objective". However, in practice such agreement is rarely the case. As for the worth of AdVerecundiam: if you don't want to hear my opinions, then simply ignore them. I'm not forcing you to like my opinion, but I have as much right to state it as any wiki participant. (Related: EvidenceTotemPole)

Objectivity is independent of 'agreement' on root idioms. And this wiki isn't supposed to be a place for faith and fallacy or unsubstantiated opinion. If you can't or aren't willing to at least attempt to support an opinion with objective and cogent reasoning, it should be deleted. That goes for you as much as any wiki participant.

If software engineering is mostly about psychology, and you wish to avoid talking about psychology due to some rule you envision for this wiki, then we'd have to delete roughly 95% of this wiki. Think about it. Type theory, relational theory, etc. all depend on base idioms, assumptions, and/or notations that are not necessarily properly tied to physical reality. They are based on simplified models of reality. --top

Objectivity doesn't rely on being "tied to reality", either. It seems you STILL don't comprehend the proper distinction between 'objective' and 'subjective'. A principle or axiom or idiom is 'subjective' only if its application depends upon the state of mind or interpretations of an otherwise omniscient observer of the axiom or principle. All proper mathematics axioms are objective even when they don't have any basis in reality, and mathematical predicates are objective even when they are not 'decidable'.

They are only "objective" within their made-up little universe.

No. "Objective" is not relative to a made-up little universe, even if you claim it to be so. I can't think of any defined properties that suddenly become 'subjective' when applied outside of the universe for which they were developed, except perhaps when talking about 'strange' or 'excited' or 'happy' or 'angry' particles and similar cases where the words have different meanings due to change in context.

It just so happens that we prefer to select objective axioms that are 'useful' to us through their being tied in some reasoned way to reality. But the objectivity of any predicate or measurement is independent of its utility and, further, independent of its applicability given limited cognizance and sensory perception. Principles and axioms for what is 'better' that are actually useful, of course, must be tied to reality via some mechanism that can infer, deduce, or (in rare cases) directly measure a property - but even here 'measurement' isn't good by itself: if I were measuring how much time something adds to the compile on the principle that longer compiles are better, that doesn't do much for someone who disagrees with that principle in the first place.

And your strange hypothetical argument doesn't strike me as very cogent. If, as you wave your hands and hypothesize, software engineering were mostly about psychology, then it wouldn't have much at all to do with the problems being solved or the constraints on how one can go about solving them - i.e. you could solve 95% of every software engineering problem without looking at the problem, the environment, etc. Software engineering, however, is not mostly about psychology; it is mostly about ways of correctly and efficiently solving problems under known or reasonably assumed constraints (such as the requirement to be provably correct, or a cost increase by an order-of-magnitude to make changes after deployment, or the need to meet a particular schedule given a certain set of human resources, or the need to scale up to one million concurrent users).

Most of the cost of software engineering is about maintenance of the code, not about "provably correct" output. At least in my domain. If provably correct were economical, then businesses might buy into your pet contraptions. Until then, your pet contraptions are a solution looking for a problem. Face reality, dude, and realize you are out of touch. --top

If you need to do much maintenance, the costs will be there - fixing something in maintenance is (according to mounds of evidence collected by the CMMI group) about 10x as expensive as fixing it during testing, which is again about 10x as expensive as fixing it in design and coding, which is a couple times again as expensive as fixing it during requirements analysis... and 40% of bugs (again, per the CMMI group) can be traced to design and requirements analysis. It's nice to know your approach (create fast, fix, fix, fix) focuses on milking the most money possible out of a business.
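
Purely to illustrate those ratios (the 'unit' is arbitrary and invented; only the roughly 2x/10x/10x multipliers come from the figures cited above), the relative cost of fixing the same defect at each stage works out to something like:

  fix during requirements analysis:   ~1 unit
  fix during design and coding:       ~2 units   (a couple of times requirements)
  fix during testing:                ~20 units   (10x design/coding)
  fix during maintenance:           ~200 units   (10x testing)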

You go on and on about CodeChangeImpactAnalysis because you spend most of your time fixing stuff that wasn't working to start with - that is your domain as you see it, and it is your approach that puts the costs into maintenance. Perhaps your hands are somewhat tied by shifting requirements, but then you can focus on making it cheap to maintain or adjust the behavior - increasing flexibility. Getting it right or flexible the first time are the cheapest ways to write software. These things rely on theory, and a great many businesses ARE buying into that truth.

I'm not sure I agree with this. Sometimes people just know they need something, but are not sure of the requirements or how to describe them. They are domain specialists, not systems analysts. And, often you find stuff during the programming step that is hard to anticipate during the design phase. When forced to define task details and then test them, you sometimes discover issues that could not easily be found merely by thinking about them. (Related: BigDesignUpFront.)

Further, deadlines are deadlines. If they don't give you enough time for thorough analysis, then that's just the environment one has to work with. Politics happen. This is another example of you ignoring the nature of the real world. If your theory assumes an ideal world, it will flop.

Further, the finance *theory* of FutureDiscounting suggests that short turnaround times are objectively better. I tend to somewhat disagree with FD for software engineering because "infrastructure" needs more thought than, say, store products, in my opinion. But I cannot justify this reasoning in a very clear way so far. It is a gut feeling. FD (finance theory) may be a case where theory bites YOU this time. How ironic. Be careful which weapon you choose, because it may be used against you. --Top
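
For reference, the present-value formula that standard FutureDiscounting is built on (ordinary textbook finance, not something specific to this discussion; the symbols are the conventional ones):

  PV = FV / (1 + r)^t

where FV is a payoff received t periods from now and r is the per-period discount rate. The same payoff is worth more today the sooner it arrives (smaller t), which is why this theory appears to favor short turnaround times.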

You make '*theory*' bold as though theorists believe theories and models merely by virtue of them being theories and models. That idea has been contradicted directly by my words and those of others, and it's such a stupid concept that even you should know better than to believe or assume it. Nice hand-waving straw-man approach to argument, though. Even the "if your theory assumes an ideal world" bit - just waving your hands, spouting hypotheticals, and setting up straw men to burn down.

If there is a theory vetting process you can recommend, please state it. Finance theory is well-accepted in the business world, I would note. One vetting process I mention in BookStop, PageAnchor Vetting, is to use empirical studies to test them. In other words, it helps if both kinds of evidence reinforce each other. --top

I have stated it often enough, but obviously your memory is as selective as your arguments are fallacious. Theory vetting process: a theory must make falsifiable predictions, a theory must pass OccamsRazor, a theory must not be falsified within the domain to which it is being applied (which automatically requires internal consistency - inconsistent theories falsify themselves even without observations), and a better-tested theory (one with more data-points that has not been falsified) is preferable to speculation. Theories don't need to make useful predictions to be good... only to be useful.

But like I already said, how to apply these to software engineering is a messy art.

Yes, you keep claiming that. Where's your proof? I see a lot of decent and straightforward application of the above in every domain I've ever worked in. I feel you're waving your hands and inventing a problem where none exists, and your only support for doing this is that you seem to think that all forms of science must involve numerical metrics - a notion most often defended with much waving of hands.

You are talking anecdotes. You cannot claim that something is objectively *net* better and then ONLY defend it with anecdotes or focus only on a single factor.

I'm not talking anecdotes.

And "useful" is also relative.

Absolutely. GoedelsIncompletenessTheorem ain't all that useful for screwing in a lightbulb or even predicting when it will burn out. One of the neat things about a theory is that utility can be determined largely independently of figuring out whether the theory is invalid (albeit, both utility and validity must exist within the domain of the theory).

For example, "provably correct" programming may be useful if it were the only criterion, but if it costs 10 times as much as the alternative, many would not consider it "useful" to them. (I know you will disagree with the "10x" figure, so just consider it hypothetical at this point to avoid re-entering that issue.)

Now you're considering something entirely different. Utility != Economy. Provable correctness is still "useful" even if it costs 10x as much. It wouldn't be unreasonable to say that provable correctness is always 'good' if one can achieve it - i.e. if it were always free, you'd always take it. Of course, it isn't free and its total value and its cost both depend on the domain of application.

The implication was that its usage (and heavy typing) was objectively better OVERALL and that those that don't use it are lazy and unprofessional.

Provable correctness is always better (OVERALL). Costs are always worse (OVERALL). Sometimes there's a tradeoff: costs for correctness. Sometimes there is not. You regularly exaggerate the costs of correctness, doing much handwaving and tossing out hypothetical numbers like '10x'. That behavior certainly is lazy and unprofessional.

Of course we can boost specific metrics with specific techniques, but good software is a symphony, not a single golden violin.

[Whoever keeps deleting my replies without asking is being rude. Please stop or I shall consider retaliation. --top]

You keep injecting comments of no value at all (like "where did I say that!" when I didn't imply you said it, or "this doesn't contradict anything I said!" when I wasn't trying to contradict anything you said - just a fool's flailing, looking for something to fight). That is quite rude, too. Please stop, or I shall continue deleting them.

Should I return the favor and delete your comments which are of "no value at all"? If you added information that is not related to something I said - which is what you claim here - then it could be considered "no value" if it is not related to the topic, either. "Milking the most money possible out of a business" is out of the blue and off-topic, and should be deleted by your own criteria. If I merely point out, for clarity's sake, that I didn't say something, then it still serves the purpose of clarification.

The way you worded it *implied* it is something I said, because you used the word "your". What other purpose could "your" serve there? Defending against a probable implication that I try to rip businesses off is a WORTHY complaint; it is NOT "flailing, looking for something to fight". That was a brash and ugly accusation, and I wanted to make clear that it was not something I said. To be on the safe side, I generally do not outright delete other people's comments. I find it highly rude and risky. I hope you return the favor. If you really feel strongly about deleting something, ASK the author, along with the reason why. --top

My 'words' only had implication as to your actual behavior - because I don't believe the behavior you profess to and the behavior you actually perform are at all identical (i.e. I consider you a first-class hypocrite). So to complain "where did I say that?!" is completely irrelevant, and, regardless of what you inferred, neither I nor my words implied you ever said it. And anyone who reads looking for 'contradictions' then gets upset enough to interject a comment when they fail to find one is most likely a fool and is almost certainly spoiling for a fight - I believe you're no exception. A wiser person would accept partial agreements for what they are and examine it in the context of the larger argument rather than break the argument into tiny little chunks as though every single piece needs a reply.

Your justification here is indirect and roundabout (like most of your "logic" - OccamsRazor, my ass). The accusative interpretation I provided is very possible given the way it was actually worded. I believe a majority of a jury would agree with me. I'd bet money on that.

You infer. I imply. You don't determine my implication via your interpretation, accusative or otherwise; you can only attempt to figure it out. I believe a jury of English experts would agree with me. I'd bet money on it.

You perhaps cannot see that because you are strange, think differently from normal people, and are offended in a non-normal way. Anyhow, I have a right to make it clear that something does not represent my opinion and to prevent possible misinterpretations. If you disagree, then we'll just have to have nasty EditWar's when you misbehave like that, and we'll never get to interesting arguments, instead ArgueAboutArguing? all the time. If you don't like the interjection style, then say so without deleting first. I'll try to use a UseNet-like style where the quotes are repeated. Personally, I find it a violation of OnceAndOnlyOnce, but am willing to compromise. --top


(moved below)

We've been over this already when we had our fractal, never-ending LaynesLaw fight over "usable objectivity". The practical issue is tying such models to reality, which is usually the sticky part. This brings us right back to the crux of THIS topic: elegance within the model versus elegance for reality. AKA the "ivory tower" fight.

Tying a model to reality isn't difficult. Models make predictions. Scientific models make predictions that have potential for falsification. If a scientific model is applied and its prediction is not falsified, it counts as a datapoint in favor of the model. Ivory tower guys win because their models are tested with every observed application. They choose elegant models because they aren't worse and are simultaneously easier to use - OccamsRazor - from which one can infer that reality itself is largely elegant (albeit admittedly less so when it comes to human policy). The only things to watch out for are models that can't be falsified, and models that have been falsified, and pure speculation - i.e. unsubstantiated opinions that haven't ever been tested. Even ivory tower guys don't like pure speculation (except when brainstorming).

I disagree that being an ivory tower model automatically makes it "better". There are good ivory tower models and bad ivory tower models. The best way to tell which is which is to empirically test them. QED. --top

Since I never implied that an 'ivory tower model' is 'automatically' better, it seems inappropriate for you to 'disagree'. Regarding your "best way to tell which is which", you're wrong (unless you think it is "better" to waste tons of money testing models). The cheapest ways to test models are with models of models (e.g. searching for inconsistencies), with principles (e.g. OccamsRazor relative to existing models), and by seeking known contradictions with past recorded observations. Empirical testing should happen only if a model passes all the cheap test criteria. If you start with empirical testing, you'll just waste a lot of money testing models that most likely won't prove useful.

Physical models are easier to subject to semi-realistic tests than programming-related stuff. This is because ProgrammingIsInTheMind, and we don't have good mind models (yet). This is where the rocket analogy tends to break down.

It is more likely "because" programs are more complicated than most physical things. As for your "we don't have good enough mind models (yet)": can you prove the existing mind models aren't 'good', or are you just blowing more hot air, waving your hands, and declaring it to be true? Or is it because you, who thumbs his nose at academia, have been actively avoiding learning mind models and the observations and logic that accompanied their construction, such that you are ignorant and incapable of judging any of them 'good'? I, personally, consider the ActorModel, the ChomskyHierarchy, the RelationalModel, many of the AI models of axioms and fact databases (e.g. predicate calculi & logics), design by contract, CapabilityMaturityModelIntegration, workflow models, SecurityModels such as CapabilitySecurityModel, etc. to be 'good' models. Can you even present what you (in your typical hand-waving, hot-air, speculative and hypothetical fashion) would consider a strong case to convince me that each of these is not a good model?

Programming is not for the mind - it is for the customer. Programming is obviously done in the mind, because it is done by humans. That is obvious. What's important is to model the end application on what the customer needs (hopefully not purely on what they want, although that helps, sadly). You can ask a console zealot to build an application for the typical customer, and he'll talk about how his mind prefers the console and that's the way he's doing the application, because he loves the console and that is what is in his mind - but if the customer doesn't want a console, then "programming in the mind" and "programmer psychology" don't help one bit. Maybe the customer wanted a GUI. If you can find a market where all your customers want exactly what you psychologically want, then you are lucky. For example, imagine a customer that requests a bunch of crud screens and nothing else.

Further, the "inconsistencies" of the domain itself often overshadow any theoretical-purity inconsistencies. In other words, the "noise" from the domain itself (or from one's understanding of the domain) may overshadow any problems caused by lack of theoretical purity. Idealists have a habit of ignoring this. It's sort of a form of the SovietShoeFactoryPrinciple: they focus so heavily on preventing ANY detectable theoretical inconsistency that they fail to consider how well the result deals with a messy or imperfect environment.

It's roughly comparable to an audiophile purchasing a $5,000 top-of-the-line car stereo system, but installing it in a Chevy Malibu such that the noise from the car overwhelms the purity gained from getting the top-of-the-line stereo. The audiophile is purchasing purity for purity's sake: for the mere feeling of "having the best". Some of the wiki zealots around here are the software design equivalent, with the same kind of snooty arrogance. The cheaper stereo is easier to install, saves money, easier to operate, and is virtually indistinguishable from the expensive one when going 55 MPH on Old Hill Road in the 1998 Malibu. A decent empiricist has a better grasp of reality while the audiophile waves around the frequency graphs from the German lab test.

The materialistic, product-oriented, physically obsessed person will buy the $5,000 stereo, while the person who has done research and theorized will understand that once the stereo is above a certain wattage, the extra watts won't have much use. Most of the time the volume is set on medium or low anyway. The non-theoretical person, obsessed with products, peer pressure, etc., will buy the 800-watt stereo and never use more than 120 watts in reality. Hood insulation and interior insulation will help your Malibu fend off engine and road noise. Additionally, and evidently, theoretical, smart, PhD-professor-grade people and intelligent programmers are not too often seen driving around in boom boxes with stereos worth $5,000. Rather, these snobs you speak of listen to lighter classical music, rock music that isn't "death metal", etc. In fact it is the physically obsessed, unintelligent teenager or "twenty year old" crowd (especially Rap fans) that purchases these five thousand dollar stereos. This humorous stereotype is coming from someone in his twenties - so don't assume it is a biased comment coming from some old man with a PhD looking down on teens and twenty-year-olds.

Perhaps a more relevant analogy for this audience is to buy triple-error-correcting-with-wide-parity RAM when running Microsoft Windows. The errors from Windows will swamp by far any problems caused by cheaper RAM, from a statistical perspective.
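
To put rough arithmetic behind the "swamping" claim (every number here is invented purely for illustration; none is a measurement of any real OS or RAM):

  OS-level failures:                  ~365 per year (about one per day)
  uncorrected errors, ordinary RAM:     ~1 per year
  total failures, ordinary RAM:       ~366 per year
  total failures, flawless RAM:       ~365 per year

The premium component buys well under a one-percent improvement in overall reliability, because the dominant error source lies elsewhere.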

--top

Perhaps analogies usually fail and can be shot down fairly easily. It's hard to equate RAM and Microsoft Windows with theoretical or empirical rigor. Besides, if the academic were using such pure technology as this RAM you speak of, he'd probably not be using Microsoft - he'd be using an academic operating system such as Minix or Oberon, or something more secure or robust like OpenBsd. Society is far from pure, though - we see all sorts of people using all sorts of impure tools throughout the day. Several doctors eat junk food and smoke - yet they are very interested in helping people's health at the hospital. There are clashes.


Nature under heavy empirical pressure often skips ivory tower models also, as in RelationalEvolutionPuzzle. -- top

I can almost see you waving your hands in the background as you make that claim. Let me know when you have evidence that nature is "skipping" models that isn't predicted by a simpler or better tested theory.

Please clarify. What "simpler or better test theory" are you talking about?

Damn this is frustrating - you should already know this stuff.

Any simpler or better tested theory will do. If you have evidence, it doesn't help you promote your new theories (like "nature skips models") unless it doesn't fit existing models - that's life as a scientist. Additionally, you must pass OccamsRazor - so any simpler theory will do, even if it doesn't exist yet. An example of a simple AND better-tested theory is the one involving local maxima preventing most evolutionary divergence - i.e., abductively, we can infer that associative memory neural networks (what we have today) are already at a local maximum, or we would see a much wider range of brain structures in different animals.

Well, that's one theory for the RelationalEvolutionPuzzle. I'll leave it at that because it's already discussed in that topic.

But you should get evidence before you worry about simpler or better-tested theories, and I very seriously doubt you have even that. I've never seen TopMind apply theoretical OR empirical rigor. He spends his time, as above, waving his hands, blowing hot air, speculating, hypothesizing,... and all the while complaining about how other people aren't being 'rigorous'.

Perhaps because I value it less than empirical or psychological factors.

I very clearly said I've never seen you "apply theoretical OR empirical rigor" which means, to my observations, you apply neither. Replying that you value it less than empirical factors makes no sense whatsoever.

Theorists are often guilty of the 3 things listed above, including SovietShoeFactoryPrinciple.

Empiricists are the only ones that are even possibly guilty of the SovietShoeFactoryPrinciple, as it involves playing to numbers when real measurements are taken (a sort of HeisenBug in measurements). Theorists are free to take it into account as part of the theory (it is common economic theory that people act based on the motivators). And I've no doubt that there are theorists who are often guilty of anything you might list, but I've equally no doubt that the same applies to the empiricists - they're all people.

You seem to want me to counter your theories with theories of my own. But I am outright complaining that theory is not of much use so far in software engineering. You are creating an artificial obligation for me.

I believe that theories have already proven themselves in software engineering. So does anyone who practices on the PortlandPatternRepository. We find evidence for the theories we support repeatedly in our own work and in observing the work of others - not strong absolute laboratory evidence, true, but mountains of evidence nonetheless. You don't need to measure statistically how many people get killed if they walk blindly across streets to 'prove' the theory that walking across streets blindly is a stupid thing to do. Theory has already proven to produce good real-world results: it was proven to me when I moved from low-level languages to high-level languages. It was proven to me again when I started dealing with security concerns and migrating code. You've got BurdenOfProof to challenge existing beliefs and existing theories. That is a matter of fact. Complaining about "artificial obligations" just proves how out of touch with reality you really are.

When theory proves that it produces significantly better real-world results (more productivity, more profits, etc.), and we find ways to distinguish good theories from bad/impractical theories, then theory will have credit. Right now it doesn't. Being mentally "elegant" is just plain insufficient. That is NOT "hand-waving", it is expecting that a technique for finding practical solutions be proven first. --top

Mental "elegance" was never proof of a theory or model. It's simply a selector for a theory. If two models exist that make the same predictions, choose the more elegant one: OccamsRazor. So stop waving your hands and pretending theorists are using 'elegance' as proof. How many more straw men will you burn in your fallacious arguments?

What is "make predictions" in terms of software engineering? Being able to reason about the outcome of some programming or query code? "Reason about" is a psychological phenomenon.

Ah, you'll burn at least three more straw men. Software engineering is about efficiently building a software product that meets a set of requirements while operating under a given set of constraints. Software engineering models and theories, therefore, are those that "make predictions" as to how certain approaches will meet requirements while working under constraints. Within software, theories and patterns "make predictions" as to the successes and problems you'll encounter tomorrow based on the decisions you're making today. You could probably measure it empirically, but you'll need to wait until you've failed or succeeded... which could be an expensive proposition!

This is incomplete. "Building" is only part of the issue. Maintenance is often more expensive than building. Second, "meets a set of requirements" is not quite the issue when the requirements are fuzzy, which is often the case in projects I work on. If the requirements were documented well up-front, many companies would just ship the job off to a low-wage country.

I suppose you can attempt to fuzzily divide 'maintenance' from 'building', but maintenance is (in every important sense) just late cycle development. And while taking software product A and making source-changes C is often an efficient mechanism of building software product B (and therefore one that is often favored by software engineering practices), there is fundamentally nothing stopping you from starting from scratch on every single software product you build (even for a slight change in requirements). Anyhow, there is little need to sputter about requirements issues - the fact that requirements may shift and aren't always entirely clear is well known in software engineering, and most software engineering models accept as axioms that requirements will shift some and require clarification. It is simply taken into account - one more normal part of building the software product. Most software engineering methodologies even have dedicated phases for requirements clarification and incremental extensions (spiral model, user stories, etc.).


Three Types of Evidence and Their Pros/Cons

Theoretical Evidence
Empirical Evidence
Personal Psychology (Comfort) Evidence
--top

You got the pros and cons backwards for both Theoretical and Empirical evidence. Theoretical evidence (by which you must be referring to properties inferred inductively, deductively, and abductively by use of models or theories) is usually easy to relate to the real world (you wouldn't bother inferring properties that you don't much care about), and is often quite close to 'customer-side' concerns. The 'con' is that such properties are difficult to test or measure directly - generally because things that can be measured directly are, very simply, measured directly rather than inferred. Empirical evidence refers to data measured correctly. The 'pro' is that the measurement isn't theoretical (though it may still be in error). The 'con' is that it is rarely close to "customer-side" metrics, because it is rarely possible and sufficiently cheap to measure the exact things the customer cares about - one generally ends up measuring what is easy rather than what is right. When it is possible to easily measure the right thing, of course, one should do so.


Bickery Stuff Continued

Re: If you want to be treated intelligently, ask what you mean and don't pick on correct English (I'll gladly fix incorrect English).

Being correct is not sufficient, any more than a BrainFsck program is acceptable merely because it produces correct output.

Welcome to the Internet. No one here likes you. (http://www.beska.net/welcome/ - especially note the part starting with "How dare you!")

I don't demand you act intelligently, only that you act honestly (take care to say and ask what you mean), do your best to be correct (you're responsible for all your own homework, all your own footwork, and all your BurdenOfProof for any claim you wish to make), and avoid acting foolishly hostile (i.e. don't pick on something if you can't prove it incorrect, don't use fallacy in arguments), if you wish to be treated with kindness. There is no need to treat kindly those of whom I'd prefer to be rid, so I won't treat kindly on this wiki those who regularly offer the impression of dishonesty, fallacy, or vociferous ignorance.

Re: You didn't ask for a clarification of the sentence, and you didn't indicate you had difficulty with it.

I reread it, and to me it's clear I did. However, I don't wish to bicker about that today, nor about natural selection any more.

Maybe you can explain with acceptable reasoning how one gets: "I'm confused, do you mean to say the evidence or nature or model is being predicted?" out of "What 'simpler or better test theory' are you talking about?". I'll even apologize if you do. OTOH, maybe you can't and you're just droning on to defend your esteem.




See Also: BookStop, IvoryTower

MayZeroEight

