Could this be the root of all evil in software development?
Note: this page isn't about BugFreeDoesntSell; it is about BugFreeCostsMore. What if that weren't true?
Sticking my most cynical hat on here (and I hasten to add that we do not follow this practice in my company), a new company wishing to launch a product into the marketplace might draw up the following list of priorities:
The CriticalBug concept results in releases that aim for the point (4) that I added. -- JohnCarter
But you ain't gonna have a 'long run' if you don't have a short run first.
Your evil (maybe) marketing man tells the engineers: "I cannot put a feature in the product description if you haven't implemented it. I cannot put it in if you haven't at least tested it well enough to get past point (3). However, if the feature doesn't work particularly well, that doesn't matter to me. If this company is going to succeed, then we need whizzy features quickly, even if they haven't been fully tested."
This sort of attitude tends to murder most of what ExtremeProgramming is all about, because it is basically anti-quality. In particular, engineers will leave all testing to the end of the development cycle, because it is better to compromise on testing than to compromise on features. When testing does take place, it will inevitably be TestingByPokingAround, and CodeUnitTestFirst seems like total madness (nobody buys UnitTests). With testing left until the very end, there's no longer any point in short iterations. We're back to TestLast and the evil old ways of doing things.
I expect most companies are guilty of this sort of prioritising at some point in their lives. With a startup company I haven't really got an argument against it; you don't win market share by being ethical. However, I think BugFreeDoesntSell is an attitude virus which spreads into organizations and projects which don't need to think along these lines at all. It works on the fear that you might spend too much time testing and miss your market window, or be beaten by a competitor who competes on unethical grounds.
It's there, and we have to face up to it. My only defence at the moment is to fight it when it's there by default rather than there with justification.
This sort of thing is common in the simulation industry, too. A contract is made with the government to provide software with certain features on a certain date. If you deliver software that can be said to have all the features, even if they don't work for the most part, you will get paid. If you deliver software with fewer than the required features but each feature present works perfectly, you do not get paid.
Having worked in a venture capital startup recently, JeffGrigg would suggest...
ExtremeProgramming techniques can be a big win for venture capital startups:
Having worked in a venture capital startup recently, JeffBay would also suggest...
ExtremeProgramming is perfect for startups, even granting that bug-free software doesn't sell. Due to the RatchetEffect, low-defect software can be developed faster; so even though the marketers may not need bug-free software for the customers, if they are sophisticated enough to understand the ratchet effect, they will still want bug-free software.
To put it another way... the customer is always right. -- EricUlevik
That's only half the story. The other half is "Don't work for the wrong customers", meaning that if they want you to do crappy work and doing it offends you, don't do it. Keep your standards and your character. -- WaldenMathews
The assumption here is that software, by definition, is buggy. One of the assumptions behind CodeUnitTestFirst is that you can write bug-free software. The concept is that the software is always in a fully functional (as opposed to fully featured) state. Fixing problems after the fact is painful to all those involved, from programmer to end user.
ExtremeProgramming is about the only methodology I have seen which actually tries to prevent the creation of bugs, not merely focus on trying to remove them later. This is the true meaning of quality.
I am a strong believer in the idea that it is at least as easy to create good software as bad software. And writing good software is the only way to end the time-consuming CMM/ISO procedures documenting bug removal.
See: WilliamEdwardsDeming
-- WayneMack
The assumption here is that code takes longer to develop and test thoroughly than to develop and test minimally. Writing bug-free code does take longer (in the short term) if only because of the extra time needed to verify that it is bug-free. Eventually, of course, you pay for your short-cuts. -- RichardDevelyn
I think it depends upon what you mean by short term. In most settings, I would categorize short term as being the next release. One of the most unpredictable and time-consuming phases of most releases is the integration test phase. Relying on verification after the fact is the cause of this. The assumption of buggy software is what leads to the attempt to test thoroughly. If you can get beyond the belief in buggy software, you start to question the notion of 100% testing. It's a difficult leap, but one that software as a profession needs to make. -- WayneMack
ExtremeProgramming is not the only methodology that prevents insertion of bugs (rather than trying to find and remove them later). BigDesignUpFront is, by its very nature, an attempt to avoid making big mistakes (big bad bugs). CleanroomSoftwareEngineering, and any approach that has design or code reviews, is a serious attempt to eliminate most (if not all) bugs before the program is run the first time. -- AnonymousDonor
One of the assumptions behind CodeUnitTestFirst is that you can write bug-free software.
I have to disagree with this. I think BugFreeSoftware is an ethic that is ultimately unprovable. I think we should always try to write bug-free software, and CodeUnitTestFirst is a good way to show your allegiance to that ethic. However, it is an impossible goal; or maybe it is better to say that it is a goal whose achievement cannot be proven. Someone here brought up the idea that Knuth talked about PerfectableSoftware as an alternative to BugFreeSoftware. -- RobertDiFalco
I think most sane developers and managers don't intend to write software that is buggy. Bugginess is usually a product of:
However, most developers and managers expect to write software that is buggy. The waterfall model has BigDesignUpFront and ends with a large integration test phase. The whole concept of doing more and more checks, e.g. design reviews, code walkthroughs, UnitTests, system tests, etc., is based on the belief that the software contains a high number of problems. As the error rate falls, these steps cease to be cost-effective.
I fully agree with statement 1 above; I just believe it is an industry-wide problem. As programmers, we do not know how to write quality software. This is why I find the practice of CodeUnitTestFirst so interesting. What you are doing is a micro-level analysis of a class's inputs and outputs before you begin coding. This level of analysis, I believe, far exceeds what is expected of something like DesignByContract.
-- WayneMack
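To make the idea concrete, here is a minimal test-first sketch in Java with JUnit 4. The Stack class and its interface are hypothetical, invented purely for illustration; the point is that the tests pin down the class's inputs, outputs, and failure modes before a line of the class itself is written.

 import org.junit.Test;
 import static org.junit.Assert.*;
 
 // Written before the Stack class exists: these tests are the
 // micro-level analysis of the class's inputs and outputs.
 public class StackTest {
     @Test
     public void newStackIsEmpty() {
         Stack stack = new Stack();
         assertTrue(stack.isEmpty());
     }
 
     @Test
     public void pushThenPopReturnsTheSameValue() {
         Stack stack = new Stack();
         stack.push(42);
         assertEquals(42, stack.pop());
         assertTrue(stack.isEmpty());
     }
 
     @Test(expected = IllegalStateException.class)
     public void popOnEmptyStackFails() {
         // The failure mode is specified up front, not discovered
         // by an end user after release.
         new Stack().pop();
     }
 }

Note that something like DesignByContract would assert the pop-on-empty precondition inside the class; here the expected behavior is stated externally, before the class exists at all.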
I fully agree that many developers and managers expect buggy software, but I think it's only because they haven't known anything better. These people are often allergic to the techniques of quality work (like kids are allergic to sitting still for five minutes), but notice that they can't argue with your superior results!
When in a quality hostile environment, software developers could try to unite around a very basic principle that I call FixBugsFirst. I wonder why they don't. -- JeanHuguesRobert
Are there any real metrics for defect rates in XP? One thing I find irritating about XP, at least as preached by the noisier advocates, is the claim that it leads to bug-free software. When I examine these claims more closely, it seems that they re-define a bug to mean something that fails a test, which seems a pretty poor definition of a bug to me. I'm not sure how you'd really measure this - are there any commercial or widely deployed, public applications developed with XP whose defect rates you can check from version to version? Test-oriented development is very interesting and appealing to me, but I'm not sure that it'll result in objectively lower defect rates.
Are there any real metrics for defect rates in any methodology?
To be honest, XP does redefine the word "bug" to reduce its defect rate, but for a sane reason. The usual definition of "bug" is "behavior you don't like". XP breaks this down into "behavior that the onsite customer doesn't like" (the dev team's fault) and "behavior that the onsite customer likes but you don't" (the onsite customer's fault). Basically, XP denies all responsibility for business decisions. As for eliminating the first type of bug, TestFirstDevelopment? reduces bugs but can't possibly remove them all. TestsCantProveTheAbsenceOfBugs, after all.
Oh, please. You sound like you're trying to avoid discussing hard science in software engineering. As has already been pointed out, there exists a large body of research into the science of defect reduction and prevention. Trying to pretend such science has not already impacted our profession is like claiming the sun doesn't rise in the east.
And as for defining "bug," how about the simple definition that the late Phil Crosby came up with decades ago: non-conformance to requirements. If you can define what a product (piece of software, whatever) is supposed to do and it doesn't do it, then that's a bug. Simple.
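For what it's worth, that definition makes "bug" mechanically checkable. Below is a small sketch in Java against a hypothetical requirement ("sales tax is 8.25%, rounded to the nearest cent"); every name in it is invented for illustration. If the check fails, the product does not conform to its requirements, and by Crosby's definition that is a bug, whatever anyone thinks of the behavior.

 import java.math.BigDecimal;
 import java.math.RoundingMode;
 
 // Hypothetical requirement: "Sales tax is 8.25%, rounded to the
 // nearest cent." The check below encodes the requirement directly.
 public class TaxRequirementCheck {
     static BigDecimal salesTax(BigDecimal amount) {
         return amount.multiply(new BigDecimal("0.0825"))
                      .setScale(2, RoundingMode.HALF_UP);
     }
 
     public static void main(String[] args) {
         BigDecimal expected = new BigDecimal("0.83");
         BigDecimal actual = salesTax(new BigDecimal("10.00"));
         if (!expected.equals(actual)) {
             // Non-conformance to requirements: a bug, by definition.
             throw new AssertionError("expected " + expected + " but got " + actual);
         }
         System.out.println("Conforms to requirement.");
     }
 }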
See: FirstLawOfProgramming, TechnicalDebt, WorseIsBetter, AllRoadsLeadToBeeMinus
Contributors: JeffGrigg, JohnCarter, RichardDevelyn, JeffBay, EricUlevik, WaldenMathews, WayneMack, RobertDiFalco, GlenStampoultzis, MarcGrundfest, JeanHuguesRobert, MartySchrader