Clean Room Software Engineering Is Not Dead

I'm doing research on CleanroomSoftwareEngineering as part of the requirements for my Master's degree in Software Engineering.

It seems to me that - unfortunately, IMHO - Cleanroom Software Engineering is dead or forgotten (is that too much to say?). The most recent published papers and articles on the subject are dated around 1998. Beyond that, it seems that both the academic and professional communities have lost interest in Cleanroom. Indeed, the IBM CSTC is no longer active, and neither are other CleanRoom-related companies such as SET.

--WalterArielRisi? on news:comp.software-eng

Cleanroom suffers from a singular problem...the base premise is that software can be created more reliably and cost-effectively without the use of a computer than with one.

Now...why is that a problem? Reducing it to its base presumption:

A given problem that is routinely solved with the aid of a computer can be solved more reliably and cost-effectively without a computer.

So...why are they creating software at all? Wouldn't the same principle apply to the solutions they are automating? Programming tools are some of the oldest forms of software that exist. They are used daily by software professionals who are in a position to refine and improve them.

If we cannot make software usefully assist us...how dare we presume we can make software - by any method - which can reliably assist others?

In other words...those who preach CleanRoom are either liars or frauds. Either they don't believe what they are preaching...or if they do, they are doing a deliberate disservice to the world by creating more useless (by their definition) software.

FWIW - IMO --JohnElrick?

I'm trying to be as impartial as I can be on the subject of CSE...but of course I am somewhat interested in CSE and its potential, and that's why I proposed this work to my mentor...

I think that the claim that CSE says software created without a computer's help is better than software created with it should be taken with care. There are things that mainstream development environments simply can't do, such as demonstrating that a program is correct.

A JavaLanguage compiler can check the syntax of your constructs, but don't ask it for much more (actually, Java is a notable improvement in that it requires DefiniteAssignment? of local variables and some other clever things). If you have a (really) strongly typed language such as Haskell (a pure functional language), I believe the compiler does much of the work you need towards correctness. But that's generally not true for Java, C, Cpp, or other mainstream languages. Still, even the Haskell compiler cannot check everything...!
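To make this concrete, here is a small sketch in Haskell (the functions are purely illustrative, not taken from any CSE or Haskell text): the type checker rules out a whole class of errors - forgetting to handle an empty list becomes a compile-time error - but it cannot tell a correct sort from a wrong one.

 -- The Maybe type forces the caller to handle the "not found" case;
 -- forgetting it is a compile-time error, not a runtime surprise.
 safeHead :: [a] -> Maybe a
 safeHead []      = Nothing
 safeHead (x : _) = Just x

 -- This type-checks, yet it is not a correct sort: the compiler only
 -- knows the type [Int] -> [Int], not what "sorted" is supposed to mean.
 brokenSort :: [Int] -> [Int]
 brokenSort = id

 main :: IO ()
 main = do
   print (safeHead ([] :: [Int]))  -- Nothing, the empty case handled explicitly
   print (brokenSort [3, 1, 2])    -- prints [3,1,2]: wrong result, but it compiles fine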

I think that CSE tries to capitalize on what engineers can do and machines cannot. Of course, the compiler-access restriction goes too far, but that does not imply that program verification by human engineers must be sacrificed. I think that a good balance can be achieved simply by using CSE with some common sense (the same is true for other methodologies).

By the way, I would not call CSE's advocates frauds just because they prefer to rely on human skills rather than on technology. After all, XP advocates do so too, and they propose even more extreme premises! Should we go to the extreme of calling them frauds as well?

Cheers, --Walter.

I used the term "fraud", not because they prefer to rely on human skills...I termed it "fraud" because of the base presumption. My interpretation is that CSE's base presumption is:

"A human with pencil and paper can produce better results than he/she will with the aid of a computer."

If they truly believe this...the only justification they would have for producing software is to "fleece the sheep". Now, if the focus of CSE were to rectify this problem...to use the methodology to produce reliable tools for software development...I could see a greater purpose. However, I do not see this being the case. CSE is proposed as a general solution to software development.

As I said, Walter...if we do not have the confidence that we can produce tools that will assist us in building better software, how dare we presume that we can build tools that will aid an accountant?

Please understand that I don't normally dismiss ideas out of hand. However, when I see a proposal that begins with what I perceive to be a very flawed basis, it's difficult for me to accept it as valid.

--John

[Google for sample text to read the rest; it's a good thread]

Peter Amey wrote:

...Using SPARK and the SPARK Examiner we always do this analysis before there is any attempt to compile and dynamically test. The process is very effective. We half-jokingly call our use of the Examiner "pair wise programming"; it's just that the second member of the pair is a tool, not a person.

Again, in the context of SPARK, we find unit testing very ineffective (see the "IsProofMoreCostEffectiveThanTesting?" paper).

Your process is working...

...However, that paper is written in the style of a journal article reporting a scientific experiment, but it isn't one. There's no control group.

Their conclusion, "SPARK found our bugs, upstream from the unit tests," is not the same as the conclusion "SPARK is more effective than unit tests without SPARK", which in turn is not the same as the conclusion "SPARK is more effective than TestFirst".

YMMV.

--Phlip

I have indeed contacted several of the people involved in CleanRoom over the last few years (among them some people who worked with HarlanMills himself, wrote CleanRoom books, and led CleanRoom projects). Fortunately, all of them have been very cooperative in sharing their thoughts!

As [someone] mentioned in one of your posts, the general consensus is that CSE is in fact not dead! Maybe the "boxed" form of CSE is, but the practices are alive and well. I've even contacted a person from a company in Finland (see a previous post on this thread) and received very positive feedback!

As several people mentioned, most CSE techniques are common sense. The critics most often focus on the "canonical" aspects of CSE, such as the no-compiler-access rule; most modern CSE advocates accept that this is no longer a good idea. Maybe it was in the early days of CSE - the mid-70s - when there were no IDEs such as those we have today and programmers worked on dumb terminals... (There's a very interesting article by SteveMcConnell that compares programming on PCs vs. programming on mainframes. It's not CSE-related, but it's useful for understanding a bit more of the original context in which the no-compiler-access restriction was probably formulated.)

Regarding the strong criticism about the effectiveness of reviews or proofs vs. unit testing, I see that most critics say the experiments cannot prove that proofs are better than unit testing... but do those experiments prove the opposite? Limitations such as the Hawthorne effect may mean the experiments don't prove that proofs are better than unit tests, but in that case, neither do they refute it.

Anyway, it seems that the invalidity of experiments is invoked only when there is a strong intention to attack a - less fashionable? - method such as CSE. I haven't seen reports on controlled experiments using RationalUnifiedProcess or ExtremeProgramming, but they're being heavily promoted anyway! I'm not saying they don't work - I do believe that XP and RUP both have interesting aspects! - just that I haven't seen experiments proving them better than CSE, TSP, OPEN, or any other boxed methodology!

Good luck! --Walter.


I think the claim that "a human with pencil and paper can produce better results than he/she will with the aid of a computer" is a gross mischaracterization of the CleanRoomMethodology. Clean room programmers use text editors, compilers (perhaps not interactively, if they're particularly anal), linkers, loaders, operating systems, statistical analysis packages, databases, version control systems (one hopes!), and, if they're clever, a LiterateProgramming tool like NoWeb. The only tool that seems obviously forbidden is the interactive debugger -- which can be a pretty lame way to program, akin to poking it with a stick to see what makes it flinch.

One big problem with the application of software to most problems has been a rush to automate without actually understanding the problem domain or what software is best suited to it. Landauer's TheTroubleWithComputers has some great graphs on this, showing how most industries have seen little productivity gain from their software investment. Telephony is a notable exception to this trend, which Landauer attributes to careful evaluation of technology, as opposed to willy-nilly featuritis. CleanRoom might be looked at as a similar way of using software to improve the productivity of software developers.

 -- BillTrost, wannabe clean-room programmer


To say that CleanroomSoftwareEngineering is based on the assumption that "a human with pencil and paper can produce better results than he/she will with the aid of a computer" is a bold claim. And clearly untrue.

It seems that CleanroomSoftwareEngineering practice is based on the idea that running a finished program to observe its behavior is the least effective way to ensure that it is correct. And there seems to be an assumption that testing would not normally be automated.
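For contrast with "running a finished program to observe its behavior", here is a minimal sketch of automated testing using Haskell's QuickCheck library (the property shown is purely illustrative and not drawn from the Cleanroom literature): the expected behavior is written down once as a property and then checked mechanically against many generated inputs.

 import Test.QuickCheck

 -- Property: reversing a list twice gives back the original list.
 prop_reverseTwice :: [Int] -> Bool
 prop_reverseTwice xs = reverse (reverse xs) == xs

 main :: IO ()
 main = quickCheck prop_reverseTwice  -- checks the property against 100 random lists by default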

So it doesn't seem that CleanroomSoftwareEngineering practitioners are Luddites: they make good use of automated tools.

We also don't seem to have any clear comparison of CleanroomSoftwareEngineering to TestDrivenDevelopment. -- JeffGrigg


CategoryMethodology

