[moved from WaterFall]
Proposition: WaterFall is discredited as a literal software process that can be followed.
The waterfall (gather requirements, analyze them, design a solution, implement it, test, deploy, maintain, once only, in that order) was discredited as published. Royce gave this model as a starting point for thinking about any sort of development process at all, but he then immediately pointed out that it is almost guaranteed to fail in any real development situation.
There are still people doing WaterFall.
Does anybody have data to back up that last assertion? I question it.
I would suggest the following as examples that Waterfall is alive and well: the Software Engineering Institute (SEI) CMMI documents (http://www.sei.cmu.edu/publications/documents/02.reports/02tr029.html), the SEI Personal Software Process and Team Software Process (PSP and TSP - http://www.sei.cmu.edu/tsp/publications.html), the Software Engineering Body of Knowledge (SWEBOK - http://www.swebok.org), and the Project Management Institute (PMI) Project Management Body of Knowledge (PMBOK - http://www.pmi.org/info/default.asp).
There's a counter-shout that goes something like: "get your head out of your inbred development sector and look around." Many of us get myopic from time to time when our heads have been down for a while. What can look like some other group hiding a neat methodology might say more about how deep our particular rut is. -- DaveSmith
Dave (Smith), are you saying that nowadays a considerable percentage of new software projects with medium to large budgets are employing ED (?), XP, DSDM (?) or some reasonably incremental approach? What do you think the percentage is worldwide for projects with an overall cost of over $200k, say? -- RichardDrake
Rather the contrary. I'm saying, perhaps too obliquely, that many narrow sectors tend to do today what they did yesterday (e.g., waterfall, or maybe ED), in part because they've insulated themselves from new ideas by virtue of overspecialization. It takes longer for ideas like XP to penetrate deeper ruts, in much the same way that it used to take fashion trends a few years to work their way into the Midwest from the East and West coasts. -- DaveSmith
Being a transplanted Midwesterner, I have to admit that the flipside to that analogy should be considered. I am thankful for the insulating fashion buffers of the Appalachians and Rockies, which I believe probably resulted in many fewer childhood and high school pictures my children will laugh at (my own questionable tastes aside). Likewise, never completely discount the worldview that always waits until Version 2.0: there are a lot of mistakes to be made, and sometimes it's best to let someone else make them. Sometimes, these reservations are better overcome by questions like WhenToUseXp than WhyUseXp? -- DavidSaff
There are a few of us out there speaking and practicing the incremental dance in front of project sponsors. It is a long, slow dance and must be done face-to-face, typically several times, which means there aren't enough of us dancers out there to cover the needed territory. Writing doesn't much help, although we write, too. WaterFall is AnAcceptableWayOfFailing, so the only way they learn is by getting dragged through it and then saying, "Oh, that wasn't so bad... maybe next project or the one after, we could try it again." (P.S. telecoms are among the slowest to try, right next to insurance and just ahead of utilities.) -- AlistairCockburn
The mantra I've used in this situation is to point out the indisputable fact about WaterFall: everything is big, including the risk. -- MartinSpamer
Hmm. "Indisputable fact," quotha. There have been many projects I have worked on that needed BDUF in order to succeed at all, so your assesment must be based on your experiences. I'm sure that all of us can do a little research and find all kinds of studies that back up one assertion or another. But for any of us to make a SweepingGeneralization about this or that methodology is akin to making a religious statement, is it not? -- MartySchrader
Is the cost-of-change curve broken?
From near the top: Does anybody have data to back up that last assertion [that the cost-of-change assumption is broken]? I question it.
To an extent, the cost-of-change curve is a fundamental fact. AlistairCockburn wrote a wonderful paper dedicated to proving it [http://www.xprogramming.com/xpmag/cost_of_change.htm].
Sadly, the referenced paper contains no facts or analysis, just a couple of made up stories. Anything can be "proved" with made up stories.
However, in some cases, the cost of change has been reduced. For example, it's usually a lot easier to replace the database these days, due to increased standardization. Additionally, a good software design will identify "costly" decisions and isolate them to one corner, explicitly to reduce the cost of change. One of the arguments in XP is that refactoring will lead to exactly this sort of design. However, all of this could be done in a waterfall process as well.
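To make the "isolate the costly decisions" point concrete, here is a minimal Python sketch (purely illustrative; the class and function names are invented for this example and appear nowhere above). The choice of database is hidden behind a narrow interface, so replacing it later is a local edit rather than a project-wide one - and nothing about the trick is specific to XP or to waterfall.

 # Illustrative sketch only: the storage decision is isolated behind a small
 # interface, so replacing the database later touches one adapter class
 # instead of every caller.
 import json
 import sqlite3
 from abc import ABC, abstractmethod

 class OrderStore(ABC):
     """The one 'costly' decision - where orders live - hidden behind a seam."""
     @abstractmethod
     def save(self, order_id: str, payload: dict) -> None: ...
     @abstractmethod
     def load(self, order_id: str) -> dict: ...

 class SqliteOrderStore(OrderStore):
     def __init__(self, path=":memory:"):
         self._db = sqlite3.connect(path)
         self._db.execute(
             "CREATE TABLE IF NOT EXISTS orders (id TEXT PRIMARY KEY, body TEXT)")

     def save(self, order_id, payload):
         self._db.execute("INSERT OR REPLACE INTO orders VALUES (?, ?)",
                          (order_id, json.dumps(payload)))

     def load(self, order_id):
         row = self._db.execute(
             "SELECT body FROM orders WHERE id = ?", (order_id,)).fetchone()
         return json.loads(row[0])

 def place_order(store: OrderStore, order_id: str, items: list) -> None:
     # Business code depends only on the interface; swapping SQLite for
     # something else later is a local change, not a project-wide one.
     store.save(order_id, {"items": items})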
It seems to me that the logical conclusion of the cost-of-change curve is that it is not financially possible to have multiple releases of a product. Assume a 5-phase development sequence (RADIT - Requirements, Analysis, Development, Integration, Test) with a factor-of-10 cost increase per phase. The table below shows the cumulative effect of order-of-magnitude changes on cost; a short calculation reproducing these figures follows the table.
A $1 change in the initial requirements phase would cost $100,000 in the second requirements phase, and $10,000,000,000 in the third requirements phase. At $100 per man-hour, a $1 change must be accomplished in 36 seconds, about the time to start up a word processor and correct a typographic error. A 6 man-month project at $100 per hour is about $100,000. Therefore, it would be cost effective to rewrite the project from scratch versus going in and correcting that single typographic error in the second pass.
In reality, the cost of change is pretty flat. If the typographical error is immediately caught, the cost of change is probably close to $1. If the change is caught later, the cost is probably closer to $100 for the software and for each unique document containing the error. Still the total cost would be less than $1000 and would not continue to increase. -- WayneMack
Cost to Change Table - Assuming Order of Magnitude Cost Change per Phase:
Requirements Pass 1: $1
Analysis Pass 1: $10
Development Pass 1: $100
Integration Pass 1: $1,000
Test Pass 1: $10,000
Requirements Pass 2: $100,000
Requirements Pass 3: $10,000,000,000
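For anyone who wants to check the arithmetic, here is a small Python sketch (mine, purely illustrative) that mechanizes the order-of-magnitude-per-phase assumption and reproduces the figures in the table:

 # Assumption for illustration only: cost multiplies by 10 at every phase
 # boundary, with 5 phases per pass through the waterfall.
 PHASES = ["Requirements", "Analysis", "Development", "Integration", "Test"]

 def cost_of_change(pass_no, phase_index, base=1, factor=10, phases_per_pass=5):
     """Cost of a change addressed at the given pass and phase under the model."""
     steps = (pass_no - 1) * phases_per_pass + phase_index
     return base * factor ** steps

 for i, name in enumerate(PHASES):
     print(f"{name} Pass 1: ${cost_of_change(1, i):,}")
 print(f"Requirements Pass 2: ${cost_of_change(2, 0):,}")  # $100,000
 print(f"Requirements Pass 3: ${cost_of_change(3, 0):,}")  # $10,000,000,000

Whether that multiplicative model describes anything real is, of course, exactly what is disputed below.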
I'm with Wayne. My experience has been that the cost of change is almost unique to each project and each change. Some things are easier to predict than others, but in embedded systems you often can't tell the cost effect of a change up front until the ripple has taken place. If you are designing a new widget from scratch then you have a lot less data to work with when predicting these costs. However, the change costs are never as staggeringly high as this order of magnitude "heuristic" indicates. -- MartySchrader
Wouldn't the curve reset at Requirements? --PeteHardie
If so, it would cost less to fix a problem after deployment than before it. -- WayneMack
It does. It is cheaper to fix in release 2 than to fix in the wild, so long as the cost of living with the bug is tolerable.
> Requirements Pass 2: $100,000
> Requirements Pass 3: $10,000,000,000
So does this mean that if a piece of software ever reaches version 10, it would cost around $1,000,000,000,000,000,000,000,000,000,000,000,000 to change a typo in the requirements? I think MacOS and Oracle have reached version 10, and I'm sure there were some requirements changes between each of those versions. Yet I don't believe they make that much profit to accommodate that cost of change. Therefore I don't find this cost-of-change curve believable.
Secondly, is it possible that the cost of change is high because you develop in a high-cost-of-change manner? In other words, perhaps all those cost-of-change reports really say is that, on a project using a waterfall process, the cost of a change increases the later it is made - and therefore we should use waterfall to avoid that cost. Isn't that circular reasoning? The waterfall process is set up so that it is hard to make changes to the system; you have to go back through Requirements -> Design -> Development, and that is why a change is costly. I think that is all XP is trying to say: you know changes are going to come, so don't develop with a process that fights them, and they won't be costly. I also think that most of what makes a mistake expensive in software development is not the software itself; it's the hardware running the software. As long as the data is kept electronically, the transition from one piece of software to another, provided you have the tools, is not that costly.
As of two years ago, I worked for a major telecom company, $12B a year in revenue, that was Waterfall all the way. The Waterfall methodology was implemented not to do successful projects but to maximize IT control over the company, specifically by the IT business analyst division.
The result of the Waterfall was that projects got done to an exacting spec, so all failures were blamed on the originating business unit.
-- EdFreeman?
The later a problem is found, the greater the cost of repair. The earlier a problem is to be found, the greater the cost of analysis. Pick your poison. -- BenAveling
Not necessarily, if you write WellFactoredCode. Keeping the design clean and flexible alters the CostOfChangeCurve.
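As a toy illustration of that claim (my own sketch, nothing more): when the volatile decision lives in exactly one place, a late change stays a one-line edit.

 # Illustrative only. The rule most likely to change is kept in one well-known
 # spot, so a post-release change touches one line, not the whole codebase.
 TAX_RATE = 0.07  # the volatile decision

 def line_total(quantity, unit_price):
     return quantity * unit_price

 def invoice_total(lines):
     """lines is a list of (quantity, unit_price) pairs."""
     subtotal = sum(line_total(q, p) for q, p in lines)
     return round(subtotal * (1 + TAX_RATE), 2)

 # If the rate changes after release, only TAX_RATE moves; no caller has the
 # old value baked in, so the CostOfChangeCurve stays flat(ter).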
To play devil's advocate:
The fundamental problem with Waterfall was not the process itself but the Functional Specification that drove it and the ways it was made and used.
Functional Specs are the worst way we know of to describe software. Anything you'll find in a traditional functional spec is more easily, more completely and more accurately described with one or more of the other tools we have for describing software requirements and designs.
Using use cases, interaction storyboards, business objects and other product views instead of a functional spec doesn't guarantee successful waterfall projects, but it eliminates the primary causes of misinterpretation that once doomed them.
The other problem with waterfall was that the initial phases were usually executed by programmers and analysts whose skills and desires were oriented to writing code. This usually resulted in less than stellar requirements gathering and design.
A related problem is that design is usually (tragically) associated with development rather than requirements gathering. The external design of the product is the most important medium for communication with the client and for making sure that everyone has the same understanding of the intended product.
My point is (if I have one) that Waterfall is not in and of itself a failed approach. With modern tools for the initial phases, used by people who are not itching to "get on with it", waterfall projects have a chance of succeeding. The key is that you aren't doing it right if the product objectives and definitions change during the construction phase--you didn't put enough effort into the front end and didn't ask enough questions.
The underlying assumption of XP and agilism, etc. is that the client is too stupid to clearly articulate their need in advance. Sometimes this is the case, but not usually. The problem is analysts and programmers who lack the wit or the will to get answers to all of the relevant questions before starting construction.
-- MarcThibault
I thought the Agile assumption was that the requirements were not stable over the time span of a waterfall period (a fairly unusual situation). This could be due to the customer not knowing the domain, or a marketplace that is in flux, or any number of other reasons. Agile methods reduce that cycle time and increase the feedback up the chain in order to explore the requirements.
That said, I'm sure there are plenty of problems for which the requirements are stable, so that a waterfall method is appropriate and efficient. -- IanOsgood
No, that's not the Agile assumption. Yes, it's often (usually) true. It's also true that, as you incrementally develop, you learn more about the requirements and the design needed to meet them. Iterative always works better than waterfall, as commonly envisioned (though it should be noted that it was originally envisioned as a waterfall per iteration).
I think the main people who "discredit" the WaterFall methodology are those who stand to make money from management consulting. I think the main argument against the WaterFall is that SoftwareIsBroken? and everyone uses the waterfall and thus the waterfall is broken. I think it's a PostHoc? logical fallacy. The fact that the hundreds (!) of methodologies that have been created to replace the waterfall have not made software development a really easy process argues that the two propositions are correlative and not causative.
Waterfall has been present in some form at many of the places I've worked (finance houses). It's a management thing - and understandable to a degree, because sometimes you have to secure a budget for a project before doing any work, e.g. we're going to need 2 consultants and 5 developers for 6 months = £xxx. This sounds fair enough, but it can give rise to an unfortunate chicken-and-egg situation, because until you've got the requirements, the design and the architecture you can't estimate how many man-days of work you need to implement it (let's leave the fact that man-months are mythical out of it), or what hardware/licences you need, etc.

The most hilarious place I worked at followed this model to the point of absurdity. I saw year-long project plans with hundreds of tasks on them before the (internal) client was contacted. Changing the plan involved more bureaucracy - escalation through project committees, etc. - so that was an option of last resort. Needless to say the projects always went 'to plan', and this was seen as a great achievement. Of course every project was at least 50% late. I fondly remember a 6-week project being 6 months late, and due to compromises to get it delivered at all, the final deliverable was functionally equivalent to the system it replaced. Often the client only got involved at the UAT stage anyway, where a big political thing would kick off because the business thought IT were a bunch of useless wonkers who couldn't deliver anything relevant and did so at great expense. The business were a good bunch too - 4 levels in the reporting tree, 7 people?? Did I mention it was one of the biggest banks in the U.K.?
To some extent, it is a practical impossibility for the Waterfall method to work on large systems. The deliverable is a moving target because everything it depends upon is changing. It makes no sense to expect the Waterfall method to do anything other than waste money, roughly in proportion to a non-linear function of the size of the project and its quality in early stages. In my mind's eye, I see the wastage as proportional to the area of a triangle whose base is proportional to project size and whose height is proportional to cumulative error (the acute angle being a sort of measure of how good your aim was). A large error up front means a guarantee of failure. A large project means a guarantee of disproportionately large wastage, unless your aim is near perfect.
Even in situations where you have something that can be clearly specified (say, replace the existing function of our text based Cobol banking system on the mainframe with a GUI based system on a LAN), you still get, IMO, less bang for the buck than if you proceed in iterative fashion with the involvement of all stakeholders. The reason for this is that even though you can specify very well how you can translate CICS screens to a GUI (once upon a time, I helped write an application that used HLLAPI to help build a GUI front end to such a system), what you can't do is maximize the 'bang for the buck' that is available in the new paradigm. CICS screens on 3270 terminals backed by DB/2 databases have some fundamental differences from GUI screens on workstations backed by distributed systems. You can't, for instance, have multiple application screens open and cut and paste on a 3270 terminal. If your system is non-trivial in nature and size, WaterFall is near guaranteed to at least under-perform more realistic methods.
In the above scenario, there is no way out of the box. If you specify exactly, you will exactly specify the wrong thing. You will successfully build it, but it will be a perfect instance of an inferior product. If you attempt to design a product that takes advantage of the target platform, you begin to accumulate error in your 'aim'. 'Big Bang' methods simply do not work and I can't think of even a single instance of software that I ever use that was developed that way.
One thing that I *do* believe is that most methodologies that have any sort of chance will have some organization that looks (even to the smart guys here) like the WaterFall. There has to be some triggering event and rationale to start a project, even if it is just 'because I think it would be fun'. Some discussion, specification, gathering of requirements and identification of aspects of the project must be done prior to rolling up your sleeves and coding, even if it is just brain-storming on a napkin. [As an aside, it is fine, in my opinion, to go straight to the compiler and start coding throw-away stuff to aid you in doing the design. Just remember not to put that code into production.] The logical place to test units is at the time you make them; the logical place to test subsystems is at the time you make them (and after the time you build the units that compose them). The logical place to do a build is when you have enough subsystems to make some kind of system to build. The logical place to do system test is when you have built a system, and so on.

In TheReceivedMethodology, I simply use terms that I think most experienced developers are familiar with. Whatever you call them, those are the main stages that give rise to software and eventually result in its retirement. I have worked in a wide variety of places, and I don't think it is particularly idiosyncratic. What differs in that Methodology from WaterFall (no one who looked at it could see the difference) is that the aim is to progress rapidly to prototype in concert with stakeholders, and to constantly go back to prior stages if problems or opportunities are detected, to identify the stage where the deviation became manifest and correct it. The problem with the WaterFall is that it follows the logical sequence of steps in a series of nested loops but exits all the loops before they are done.
The Waterfall Methodology was stillborn. I have worked on a few projects where it was attempted. The only way that we could deliver was to deviate from plan and basically 'cook the books'. I think that is how the other Methodologies gain any traction: they don't do what they actually say they are going to do. Clever elves work their magic with development tools through the night and then dummy up their timesheets to show they worked to plan. Most of the projects I have seen that stuck slavishly to a Methodology failed to see full production. Similarly, though, most of the projects with no 'Methodology Skeleton' upon which to build their system also failed. -- BobTrower
Waterfall, XP and Agile are just process tools/frameworks that help us PMs and the teams get work done. It seems counter-intuitive to me that when professionals experience a project failure, the tools they used must be blamed.
I have seen waterfall work well when used in the right environment with an experienced and steady hand. I have also done Agile projects that work well under the same conditions. Bottom line: PMs need every tool they can get, and blaming the tool will tend to empty the toolbox. --EvansTravis? (et)