Official Certified Double Blind Peer Reviewed Published Study

Slang for an intensive, carefully documented and reviewed study used to "prove" that a given technology is "better" or that a given statement is true.

This "slang" is used exclusively by TopMind, and almost always used as a derogatory reference to the standards for evidence that science has set for itself.

That's an inaccurate statement. I usually use the term in reference to situations where somebody demands super-tight evidence in one situation but accepts super-loose evidence in another, such as ArgumentByElegance. In other words, inconsistency or hypocrisy in the level of evidence requested.

It would be really nice if we had rigorous studies about contentious paradigms, techniques, and tools, BUT we usually don't. I'm just asking that the same standards and criticisms be applied to both sides. Just because I don't have an OfficialCertifiedDoubleBlindPeerReviewedPublishedStudy for my position does not excuse your side from the same burden. Example:

"Show me an OfficialCertifiedDoubleBlindPeerReviewedPublishedStudy proving your pet tool is better!"

"Fine. Show me one proving that YOUR pet tool is better."

"I don't need one; anyone with real experience just knows."

It should be obvious what's wrong with the last statement.

(Disclaimer: Some of the above was added after the replies below were written.)

-- top

[FacePalm]

Please take something for that recurring headache of yours. I have no problem with "high standards" for science. This is one reason I support RaceTheDamnedCar.

[...Whilst spouting random idle speculation like, "I'm confident a document processor built with an RDBMS would score equal or higher on most representative numeric maintenance metrics thrown at it." Can you say "hypocrisy"? Good, I knew you could. How about that TableOrientedProgramming GoldenHammer HobbyHorse of yours? Show us sound metrics in favour of it. Maybe it's time for you to RaceTheDamnedCar, eh?]

Roughly 99% of this wiki lacks OfficialCertifiedDoubleBlindPeerReviewedPublishedStudy to back statements or suggestions of tool superiority. Why limit fussing about that fact to just me? If you are going to crucify me for not having OfficialCertifiedDoubleBlindPeerReviewedPublishedStudy, then to be even-handed, you'd have to nail everybody up there. You'll probably have to make the cross out of reinforced metal.

[You're the only participant who makes such prolonged and consistent efforts to criticise others' favoured techniques, paradigms, approaches, tools, languages, and so forth -- and typically with so little evidence or constructive basis. Therefore, it seems only fair to make you rigorously defend your HobbyHorses to the same degree that you demand of others.]

Also note that I believe WetWare is a large component of software maintainability, and each individual has different WetWare. A universal OfficialCertifiedDoubleBlindPeerReviewedPublishedStudy may suggest what fits a "generic" person, but it says little about individuals. If a given shop tends to hire a certain personality type, which is common in my experience, then a general-population study could be of limited use. The tools that work best for visual thinkers may not work for linguistic thinkers, and vice versa. -t

[Really? Please provide an OfficialCertifiedDoubleBlindPeerReviewedPublishedStudy with supporting evidence.]

FacePalm

[Uh huh. Furthermore, if you "believe WetWare is a large component of software maintainability, and each individual has different WetWare", why do you criticise individuals who express the nature of their WetWare by demonstrating preferences for, say, ObjectOriented programming or LispLanguage? Doesn't that seem a tad, y'know, hypocritical?]

I usually don't. It's when they imply there is some objective metric or universal truth of superiority that the fireworks start. It usually comes in the form of, "If you were smart like me and you read all the wonderful obscure books I have, you'd just know X was better. You'd just feel it in your bones. No need for numeric metrics."

Once again, topper, you argue in bad faith. In talking about tools, languages, and techniques you use PersonalChoiceElevatedToMoralImperative. You choose to completely ignore the work of dedicated professionals whose whole careers have revolved around gathering the statistical data to support their positions. You pretend that the entire field of science involving engineering process as related to software development does not exist. Instead you continually throw this bullshit "wetware" argument at us as if that has some merit, despite having no support. And you wonder why it is that people are getting tired of you on this board. Hmm. Can't imagine why.

Sorry, but for the most part they do not use "statistical data". The contentious works are usually ArgumentByElegance. And I didn't say that the field doesn't exist, only that it's very immature. I didn't create topics such as DisciplineEnvy; others have agreed that certain kinds of studies are generally neglected, either due to cost and/or due to being against the culture of research institutions. If you have numerical metrics (GoodMetricsProduceNumbers) that cover a sufficient number of useful aspects and show that my pet tools are full of it, please present them. Stab me with numbers, not platitudes and insults. -- top

Why do we need to go through this argument again, top? As has already been endlessly pointed out before, Deming, Fagan, Crosby, the QFD folk, and lots of others have published their findings -- including statistical case studies to no end. You are simply ignoring the inconvenient truth that the data are already gathered and have been published, including the interpreted findings. Since you continue to keep your head buried deeply in the sand and speak from this ass-high position, you once again earn a TenSeven from the Wiki community. Oh, well.

If I remember correctly, those are only available for a fee. There is no use discussing those unless all readers and writers of this wiki somehow have access. Readers and I have no way to verify their alleged greatness. Also, many don't appear that closely tied to source-code issues. (Somewhere around here was a list of links, I vaguely remember.) -t

See, this is exactly what I'm talkin' about. There are white papers covering this topic all over the internets if you care to look for them, but you can't even be bothered to do that. Why should we be bothered to discuss the matter when you choose to remain ignorant? Waste of time.

{I suspect you were aware of that conclusion before you engaged TopMind. So, perhaps you should seriously ask yourself that question, rather than TopMind.}

If you argue point X, it's YOUR job to find evidence of X. "The truth is out there somewhere on the intertubes" is not a valid debating practice. I'm not going out to find your saucer-driving unicorns. If you are too impatient to gather evidence for your own damned case, that is NOT my fault and shouldn't be dumped into my lap. Be an adult. Vacuum your own house or hire a bloody maid. - t

[While I agree with your last point (and I wish you would actually follow it when you make claims), the one who is arguing point X does not need to make evidence Y available. It is sufficient to give you instructions on how to find it. Any barriers to actually doing that are your problem.]

At least link to it, give a few paragraphs of overview, describe how it's relevant to topics here, maybe extract a sample or two, etc. "Just trust me" is insufficient to both me and readers. You at least owe a decent intro. -- top

[That's one way to give instructions, but it's not the only one. I have no intention of limiting myself to sources that allow links just so you can be lazy.]

Me be lazy? Mr. "I have solid proof that you are flat wrong and it's.......out there somewhere and it's your job to go on a David Carradine trek to find it." Try that on the jury; and then be careful not to drop the soap in the slammer. I don't know what kind of alien debate/discussion rules you are following, but please send that book back to Uranus with no return address.

[Yes, you be lazy. You proposed, as a minimum requirement, that the most work you have to do to get to the source was click a link. That's just you being lazy, especially since you don't follow that for your own claims. BTW, this isn't a court of law and they have different standards of evidence, so that analogy goes by the wayside.]

I don't know what you are talking about. Your whole approach here is just bizarre. You are strange. It's not a court of law, but you don't even come semi-close to the rules of courts and formal debates, or even informal debates. You are so far off from the ideal that it is just plain silly. Generally if you make a claim about X, then you are obligated to bring the evidence for X. "Here's google, you go make my case for me" is stupid and rude. -t

This page is devolving into Yet Another incidence of topper whining that nobody will bring him all the data that has been quoted for the last couple decades in CS magazines, published papers, presentations, all over the internets, and even on this board.

A previous contributor was correct; we've hammered on him sufficiently that any further discussion would be pretty pointless, eh? So, top once again earns his bad self a TenSeven.

I believe your claim of ubiquity is highly exaggerated, like many of your claims in general. Select one such alleged study that you feel is the closest to CodeChangeImpactAnalysis, shows OOP or thin tables or FP or sets over bags (mixed bags/sets actually) being better, and follows GoodMetricsProduceNumbers, and I will spend up to $40 to purchase the publication (I'd need permission from my wife for more), examine it myself, and produce a general description, some samples, etc. -- all the things that you lazy asses SHOULD have already done before using it to back your nebulas claims and calling me every name in the book. -t

[I think you mean "nebulous". Here's one on sets vs. bags: http://portal.acm.org/citation.cfm?id=1142351.1142363&coll=ACM&dl=ACM&CFID=108084325&CFTOKEN=70445506 I'm sure your wife will be happy you've spent as little as $99 on something so vital for your personal and professional development as a year's access to the ACM digital library.]

[Note that the paper doesn't quite give the experimental-result numeric metrics you're looking for, but it doesn't need to because it goes one better than that -- it gives meta-numeric results. It proves that the time required to determine whether the result of one query (of a certain kind, called a "conjunctive query") will be a subset of the result of another (this is the "query containment problem", which is important for query optimisation) can be calculated algebraically (and determined algorithmically) for set-based systems but not for bag-based systems.]
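
(An illustrative aside, not taken from the cited paper: below is a minimal Python sketch of one concrete way set and bag semantics diverge. The relation R and both queries are invented for the example. Under set semantics, projecting an attribute out of a self-join is equivalent to projecting it from the base relation; under bag semantics the multiplicities differ, which is part of what makes bag queries harder to reason about and optimise.)

 from collections import Counter

 # Hypothetical relation R with one attribute; duplicates allowed, i.e. a bag.
 R = [1, 1, 2]

 # Q1: SELECT a FROM R
 q1 = [a for a in R]

 # Q2: SELECT r1.a FROM R r1, R r2  (self-join, keep the first copy's attribute)
 q2 = [r1 for r1 in R for r2 in R]

 # Under set semantics the two queries are equivalent...
 print(set(q1) == set(q2))          # True
 # ...but under bag semantics they are not: multiplicities are inflated by |R|.
 print(Counter(q1) == Counter(q2))  # False -- {1: 2, 2: 1} vs. {1: 6, 2: 3}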

I'm not considering machine speed/efficiency. I'm more interested in programmer productivity. I forgot to include that as a restriction above, and apologize (although have mentioned it multiple times in other related topics). I've already agreed that the field of "Computer Science" has done well in terms of machine speed/efficiency, and have puzzled over why it has struggled to carry over to people efficiency. -t

[Ah, you mean things like http://portal.acm.org/citation.cfm?id=1181775.1181777&coll=ACM&dl=ACM&CFID=108119944&CFTOKEN=46196628 By the way, the paper I cited above puts a big nail in the bag's coffin, in terms of automated query optimisation, regardless of programmer productivity.]

[ComputerScience is the field that deals with machine speed/efficiency, or more accurately computational and algorithmic speed/efficiency. People efficiency is the subject of SoftwareEngineering, HCI, and the like. Studying it is expensive and difficult, and sometimes subject to curious constraints. For example, the university where I work recently introduced a stringent approval process for studies involving people or other creatures, citing potential ethical issues as a motivation. The result is that to do any new research -- regardless of whether you're an academic or a student -- you must complete a ten-page application and get it approved by a university Ethics Committee. It's a pain to fill out, and, speaking as a member of an Ethics Committee, it's a bloody pain to approve. However, if your research involves no human subjects, it's possible to get away with filling out only the first two pages of the application, and approval is almost guaranteed. As a result, the other three academics in my office and I have refused to conduct or supervise any research involving human subjects. So, ComputerScience projects are fine, but we're not doing or supervising any SoftwareEngineering research that involves human subjects. I suspect our experience is nowhere near unique.]

I realize SE studies may be harder, but that is also likely where the better applicability lies, because the low-hanging-fruit CS studies are mostly done. But you seem to be agreeing with my assessment that human efficiency (developer productivity) is largely ignored by academics, regardless of the reason.

I'm curious, why couldn't something like the study suggested in WhyNoChangeShootout be done? It's not as if you are locking the participants in a room. That wouldn't reflect the real world anyhow because developers can draw on just about any resource they want. Just ask them to code to the spec.

--top
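
(Another aside: if a shootout like that were ever run, raw diff size is one cheap numeric change-impact metric that would at least satisfy GoodMetricsProduceNumbers. A minimal Python sketch follows, assuming unified diffs as the measure; change_impact and the two toy versions are invented for illustration.)

 import difflib

 def change_impact(before: str, after: str) -> int:
     """Count lines added or removed between two versions of a source file."""
     diff = difflib.unified_diff(before.splitlines(), after.splitlines(), lineterm="")
     return sum(1 for line in diff
                if line.startswith(("+", "-"))
                and not line.startswith(("+++", "---")))

 # Hypothetical shootout scoring: each paradigm's solution gets patched to meet
 # the same new requirement; a smaller diff suggests a cheaper change.
 v1 = "def area(w, h):\n    return w * h\n"
 v2 = "def area(w, h):\n    return round(w * h)\n"
 print(change_impact(v1, v2))  # 2 -- one line removed, one line added

(A toy, obviously; a real study would need many tasks and many participants, which is exactly where the cost and WetWare-variance problems discussed above bite.)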

[Largely ignored? Hell no. A number of my colleagues have built an entire career out of studying human productivity issues. You seem to be making a concerted effort to ignore SoftwareEngineering publications, perhaps because their existence does not fit your beliefs. I even cited one such study, above.]

[As for WhyNoChangeShootout, it contains its own criticisms. You might wish to read them. If you have a particular hankering for some research and it hasn't been done, I suggest you do it yourself.]

[Re "low-hanging-fruit CS studies are mostly done", that's pure foolishness unless you have some magical insight into what studies will (or will not) be done in future.]


CategoryRant


OctoberTen

