Waterfall Model

For a mostly cynical treatment of this model, see WaterFall. The original WaterfallModel is defined in:

[Royce, Winston W., "Managing the Development of Large Software Systems," Proceedings, IEEE WESCON, August 1970.]

NOTE: the first sentence in this article after the waterfall picture is "...the implementation described above is risky and invites failure." The next few pages are all about how feedback and iteration are necessary for success. - DanRawsthorne

The WaterfallModel abstracts the essential activities of software development and lists them in their most primitive sequence of dependency. Real development projects (software and otherwise) rarely follow such a model literally, mainly because the model can be, and is, applied to itself recursively, yielding an almost fractal fabric of actual activity.

The verbs of waterfall development are 'analyze', 'design', 'code', and 'test'. Some people insert 'specify' somewhere in there, probably between 'analyze' and 'design'.

In the abstract, here is why the WaterfallModel is always correct. Problems tend to be fractal: solving them tends to reveal sub-problems, solving those reveals sub-sub-problems, and so on. Pretend for a moment that there is such a thing as a simple, atomic problem. In that case, arguably the best sequence in which to solve it is the waterfall, because each step provides the most reasonable platform for the next one. Simple problems lend themselves well to simple Waterfall approximations.
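To make the fractal point concrete, here is a minimal sketch in Python (the problem names and structure are invented purely for illustration; nothing like this appears in Royce):

  def waterfall(problem, depth=0):
      # One pass of the primitive sequence for this problem.
      indent = "  " * depth
      for phase in ("analyze", "design", "code", "test"):
          print(indent + phase + ": " + problem["name"])
      # Solving a problem reveals sub-problems; apply the same
      # sequence to each, yielding the fractal fabric of activity.
      for sub in problem.get("subproblems", []):
          waterfall(sub, depth + 1)

  waterfall({
      "name": "billing system",
      "subproblems": [
          {"name": "invoice generation"},
          {"name": "payment handling",
           "subproblems": [{"name": "retry on failure"}]},
      ],
  })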

Although the WaterfallModel cannot (and should not) be used literally as the prescription for building software, it is highly useful for reminding developers of the essential activities of development and the dependencies among them.

In addition, although no significantly difficult development can be forced into a simple Waterfall, beginning with the Waterfall is a good way to sketch a plan for the development. It will likely be too simple, but it's a good strawman, and usually a good, durable framework for the real plan.


Discussion:

The counter-argument to the idea that even a short WaterFall is MostlyHarmless is TestDrivenDesign. Thou shalt test, code, design, and analyze; this puts things in the order that makes each easiest.

But how do you decide what to test first? The test is an executable syntactic and semantic specification of the code, so in TestDrivenDesign the developer should probably think about the specification first. This usually follows a (potentially very short) period of deciding what is needed (read: analysis). So the two sequences are not completely different.

In process terms the advantage of TestFirstDesign is that there is fast and concrete feedback between building and testing. --TomAyerst
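A minimal sketch of that loop in Python's unittest, assuming an invented round_to_nickel function (the example is hypothetical, not drawn from the XP literature): the test is written first, as the executable specification, and the code follows to satisfy it.

  import unittest

  # The test comes first: an executable syntactic and semantic
  # specification of code that does not yet exist.
  class RoundToNickelSpec(unittest.TestCase):
      def test_rounds_to_nearest_five_cents(self):
          self.assertEqual(round_to_nickel(1.12), 1.10)
          self.assertEqual(round_to_nickel(1.13), 1.15)

  # Only then the code, written to make the specification pass,
  # with fast and concrete feedback from rerunning the test.
  def round_to_nickel(amount):
      return round(amount * 20) / 20

  if __name__ == "__main__":
      unittest.main()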


Some observations on the Test First sequence. In your description above, you say Analysis is potentially very short. I wonder if this correlates to the backtracking and need for double Proof. I think the process you have traced incorporates a silent bit of prototyping, based on uncertainty of requirements, that affects the rest of the process as follows.

When you Specify (I think that means to write the tests that will be run in the Prove step), you are stating how to verify what you think the requirements may be, having underinvested in their discovery (possibly for a valid reason). When you Build it without a Design, you are saying that because the requirements are suspect, let's not make too big a commitment to this code just yet. That's prototyping. When you Prove it (the first time), you're getting requirements feedback.

Why couldn't the requirements be verified in a tighter loop that didn't involve coding? I can guess one answer: this may be the only way your users can work, making decisions in front of an operative example. (But I'm analyzing too much; the thing is that sometimes we prefer to use a prototype to help with requirements work.) The Design step that follows assumes the requirements were right, although if that were a safe assumption, the Waterfall would likely have been a better choice of model. So an explicit decision point just before Design would clarify the Test First model, because you're half expecting to abort there for another prototyping round. The final proof is a regression test, since requirements satisfaction has already been demonstrated.

The similarity of Test First to Waterfall is so obvious that I'm led to believe they are the same model, with a prototyping Waterfall inserted (appropriately) right after the initial analysis. Not visible are the analysis (requirements) and specification for the prototype itself, and that's because they are so similar to those of the end product that the two sets just blur together. Also not visible is the Design for the prototype. For sure, there's no document, but it's less certain that there is no design effort at all, as it's hard for a competent designer to write code without some eye to even its temporary structure.

What you're calling Test First is, I think, pretty commonly used and called Waterfall, not to mention that it's also very much in the spirit of the Royce paper. Winston Royce was trying to lead us away from too static an approach to development, where risks were discovered too late and big rework costs resulted. Maybe he was also reacting to what people here on Wiki refer to as WaterFall, the version of his day. Things haven't changed much, except that we're now shooting the messenger (Royce).

In the above discussion, in addition to understanding the Test First process, I'm also hoping to show how the WaterfallModel serves as an abstract essence from which other more concrete models or processes can be produced. It's a useful analysis tool. I want to clarify here that my intent is not to try to turn back the tide of feeling about rigidity and the WaterFall name, but just to reawaken the model itself as a valid tool. My fear is that the unfairly acquired negative image will close the doors to good process design and analysis. I'll try not to beat that point to death, as I am inclined to do.


I think I agree, but I suspect there might be two or three usefully distinct models to choose from.

What are they?


For further discussion, I'd like to add that the popular XP practice of "write tests first" is a requirements activity, not a test activity, even though it sounds like one. It is an excellent requirements practice, and one that is by no means new with XP, even though it sounds like it is. ;->

I would call it a specification activity (but there we go into semantics land again), and I would agree it's an excellent practice. I haven't seen it used in quite this way elsewhere, but I wouldn't be at all surprised. --TomAyerst

Maybe it's both (requirements and specification), and I don't think that's just semantics. Coding the test forces a specification, but describing the desired test outcome and its meaning forces the requirement. That second part, by the way, seems the trickier, and I wonder how much XP eXPects in that area, having never really done XP.
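A sketch of that split, using an invented overdue-account rule: the test name and comment carry the requirement (what is needed and why), while the assertions pin down the specification (exactly how the code must behave).

  import unittest

  def is_overdue(days_unpaid):
      return days_unpaid > 30

  class OverdueAccountRequirement(unittest.TestCase):
      # Requirement: accounts unpaid for more than 30 days are
      # overdue. The assertions below are the specification: they
      # fix the boundary exactly, including the edge case.
      def test_unpaid_over_30_days_is_overdue(self):
          self.assertTrue(is_overdue(days_unpaid=31))
          self.assertFalse(is_overdue(days_unpaid=30))

  if __name__ == "__main__":
      unittest.main()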


Although I agree with the development concepts you are describing, I don't believe it is appropriate to call them the WaterfallModel. The WaterfallModel has long been associated with single-pass software development procedures, and to try to redefine it as a multi-pass, iterative approach is misleading.

The single-pass methodology is deeply ingrained in software management, and a separate term is needed in order to compare and contrast single-pass and multi-pass approaches.

None of these fit with using multiple, very short development passes.

Iterative development is much different from a scaled-down WaterfallModel. To confuse the terms obscures how far-reaching those differences truly are.

--WayneMack


Wayne, your first point is well taken, and I've incorporated my acceptance of that nearer the top of this page. I would never try to characterize the WaterfallModel as an iterative approach, partly because that would be using a 'model' as an 'approach', and I think they have different meanings here. I will note that I frequently hear the term "multiple mini-waterfalls", and I do think that's exactly an iterative approach.

Your middle paragraph has a number of problems for me. Let me whack at a few of them and see if anything good comes of it. I never worked for a management that pushed waterfalls (so how did I get this way?), so you can't characterize all managements that way. If you mean software management courses, I have to again take exception; even the SEI doesn't teach single-pass life cycles anymore, if they ever did. As for Gantt charts, it's true that you can't represent a loop in one, but that's okay: you can draw the first one or two iterations, no problem, and you wouldn't want to draw beyond that horizon anyway, so why try? As for requirements document templates, I hate them! But the same advice for Gantts applies. As to separate documents, how you partition that stuff into physical documents is not a very essential concern to me. I don't see why it would be hard. Is it because they would be small?

The major difference is the atomic work unit used. In the WaterfallModel, the units are requirements definition, analysis, design, code, and test. The concept of a use case, user story, or function is subdivided and lost. In an iterative approach, the unit of work becomes the user story or function, and the separation between requirements, analysis, design, code, and test becomes lost. That is what I tried to allude to in my reference to Gantt charts. Instead of showing the requirements phase and so on, you have to show iterations. Within each iteration, you no longer have sequential completion of requirements and the rest; rather, these things are completed in parallel across sets of functionality.

This also applies to documentation. In an iterative approach, the details of a user feature are revealed all at once. This is what makes it so difficult to create separate requirements, design, and test documents: you find they all contain identical information.

My position is that all processes are composed from Waterfall components, essentially in Waterfall order, with certain steps elided for reasons that make sense within the specific context (see my analysis of Tom's Test First example above). You say that a "scaled-down Waterfall" is radically different from iterative approaches. I may have a bias that lets me see the similarities better than the differences, and I would appreciate it if you could point out those differences in some detail for me.


I think you may be reliving the hacker/cracker experience. In the methodology world WaterfallModel is now used as the label for one-pass, low-feedback processes. It didn't start that way, and it may be unfair, but it is now a useful label for one end of a spectrum. Trying to rehabilitate it may be a forlorn hope. --TomAyerst

I'm trying to downplay my crusade emotions in favor of information, so I'll accept the hacker/cracker pointer at face value. Waterfalls work for simple projects (prototypes, for example), and in some cases it's just the simplest thing that could possibly work, even with minor misapplication (you know, WorseIsBetter and all that). I come from a software culture where hacking rules the waves, and starting education simple and staying uncluttered means talking in Waterfall terms. Walden's Theorem: a Hacker trying to use a Waterfall ends up doing Iterative development, while a Hacker you try to teach Iterative development to glazes over and goes back to hacking. I've heard all kinds of excuses why "you can't do requirements" where I work, and they're all BS. In that environment, Waterfall doesn't say much, but it does say "Did you do any requirements work?", and that's something.

See Gerald M. Weinberg, Quality Software Management, Volume 4: Anticipating Change, 1997, Chapter 18, "Sustaining Projects Correctly."


Interesting stuff about model/method. A model becomes a method when it is implemented precisely?

In general form, I can't answer that question. I think you can use Waterfall essence generatively to tailor a process for developing a particular piece of software. I think of methods as being an even more detailed treatment of process steps, detailing the artifacts and possibly the tools.

Does this mean that you are doing a process which is defined by a method which is based on a model?

I dunno. Let's see what the dictionary says. 'Process' is Latin, being the act of moving forward. 'Method' is Greek, being the 'way beyond'. You might say the method is the path, and the process is the act of following it, but nobody else is going to be that precise in their interpretation. By the way, if the process is the act of following, then what do all those process lemmings mean when they say 'follow the process!'?


[Wayne, I'm continuing at the end of the document on the theory that continuing inline leads to more bifurcation of thought, whereas doing this helps focus on the main points and maybe get us to common understanding. By the way, if we sign with initials, we'd better use all three. --whm]

Use Cases, Functions and (now) User Stories are all, as far as I can tell, ways of representing requirements for a system, the first and last being the more 'scenario-based'. There is no reason why a Waterfall approach would compromise the integrity of these forms, even if that approach was inappropriately large-grained, which is usually the complaint. So, I need more info on that one.

The separation between requirements, design, code, etc., need not be lost if you are willing to observe them, although I have to agree that in some modes they are quite fleeting. In a maintenance review group I used to lead, we got into the habit of characterizing requests into the above categories, depending on where we saw the problem beginning. So I argue it's all there, but you may not care to see the separations, and that's not necessarily bad. Unless you're a methodologist, in which case you have to be able to see them.

The appearance of these activities proceeding in parallel is an illusion, I'm convinced, created by the fact that several problem-solving cycles are active at the same time. But for a single function or problem, if design and requirements are literally done in parallel, then the design was largely done without a requirement, which means you are about to discover the requirement through backfiring of the design (and code). This seems to be a preference among XP followers (help me if I'm wrong), and it appeals to minds that are more comfortable solving in concrete terms than in abstract ones. But it's not the only way. Analysis techniques can get you to the finish line faster, but, as you know, they have their own risks attached.

Requirements, design and test documents shouldn't all contain the same information, although test and requirement should be very closely related. I'm in favor of not producing lots of duplicated stuff. I never forced programmers to write separate design documents, because all the information needed could go in the source header section, where it's most useful. Similar optimizations can apply to requirements and test documents. If you can write tests, you've written requirements (and possibly specifications, too), so no need to duplicate them just for the sake of killing more virtual trees. However, we digress from our discussion of WaterfallModel, which bears on these issues not at all.

To reiterate a prior assertion, a key difference is the unit of work. In a waterfall approach, when doing functions A, B, C, and D, design for A is held hostage to developing the requirements for B, C, and D. In a truly iterative approach, A can proceed independently of B, C, and D. This also changes how scheduling is done: you lose milestones saying requirements, analysis, design, implementation, test, and what have you are to be complete, and you gain milestones for when various functions are to be complete, as the sketch below illustrates. Thirdly, this affects documentation. Detailed analysis leads simultaneously to design detail and test detail, so it makes sense to capture these on a function-by-function basis in a common document; to do otherwise, you find yourself cutting and pasting similar, overlapping sections of text in multiple documents. Try thinking of what the unit of work will be in different approaches and I believe you will start to see some fundamental differences. --WayneMack
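One way to see the transposition (a toy sketch; functions A through D are invented): the waterfall plan's milestones close out a phase across every function, while the iterative plan's milestones close out a function across every phase.

  phases = ["requirements", "design", "code", "test"]
  functions = ["A", "B", "C", "D"]

  # Waterfall: one milestone per phase, spanning all functions.
  waterfall_milestones = [
      (phase + " complete", functions) for phase in phases]

  # Iterative: one milestone per function, spanning all phases.
  iterative_milestones = [
      ("function " + f + " complete", phases) for f in functions]

  for name, scope in waterfall_milestones + iterative_milestones:
      print(name + " covers: " + ", ".join(scope))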

I agree with the gist of what you're saying above, and want to know whether within a given function, say A, you see a 'mini waterfall' as a desirable thing. We're dangerously close to the model vs. method argument again.

The waterfalls I've seen drawn as Gantt charts have big overlap among the phases, which is a very imprecise way of saying that you don't hold the design of A hostage to the requirements of B, C and D. This is a hard subject. I feel there is a time to be disciplined and stick with requirements, and there is also a time to do the opposite, and I can't make a general statement about which is right when. Perception of risk plays a part; so do team skills, management expectations, and probably a whole list of other things.

I had my "waterfall doesn't work" epiphany back in 1994, so if you're trying to get me to start to see some differences, you're going to run into a problem. I started this page because I see a timeless quality in WaterfallModel, if used thoughtfully, that has no equal in our industry.

--WaldenMathews

The overlapping you see in the Gantt chart is an indication of concurrent tasks, not dependent phases, at the level you are looking at. At lower levels, the same thing occurs; the tasks overlap. This repeats the fractal analogy at the top of the page; however, I believe the true conclusion of that view is that although several tasks or activities may be present, they are not sequential phases.

--WayneMack

Depends on the Gantt. MS Project defaults to a Gantt display, but it supports PERT-type dependencies as well. We have to get specific here or give up. Here's my thesis: given a requirement R and a body of design D that satisfies R, and assuming a goal-oriented approach to development (or call it a feedback-oriented approach, or something other than raw trial and error), it's impossible to complete the task of producing D until the task of producing R has completed, for any given version of R.

Even if you happened on the right D before you finished R, there is still the decision R that precedes the decision D, as it isn't possible to verify D without R. In essence, this is what the WaterfallModel says, and regardless of whether R and D are represented as documents, one specifies the other, not the other way around, and going bass-ackwards does lead to problems.

The big scale Gantt is showing you both concurrency (requirements developing in parallel with each other) and dependency (design completions depending on requirement completions). To see the dependency more clearly, note that the trailing edge of phase n+1 never precedes the trailing edge of phase n, although other leading-trailing relationships are fairly unconstrained. -- WaldenMathews

How do you determine the completion of requirements definition, development, or testing? Development provides verification that the requirement was completely defined and testing provides verification that development is complete. The end of a requirements or development phase is indeterminate until the end of the testing phase. --WayneMack

I'm going on the premise (one which I believe) that it is possible to define (or specify) something without building it. The durable product of that activity can be called a specification, and its value can be verified independently of the product it describes, as follows: "If we build the thing here specified, will that product be acceptable to you?" "Yes." Let me know if this violates some natural law you are aware of. I think we have to take this one step at a time, as I found it very hard to understand your last paragraph. -- WaldenMathews

Language is imprecise, people are imperfect, and written specifications are done at a much higher level of abstraction than the implementation. It is very difficult for reviewers to see what is missing from a specification, and a signed specification does not ensure that what is implemented will necessarily meet the client's expectations. I view the ongoing decision making involved in resolving those details as specifying what the end product does and needs to do (ideally, these two should converge). If you view specification as an activity that ends with a milestone of a signed specification, that is fine. I tend to view specification as an ongoing activity that is interrupted by deliveries of code and documentation. Rarely is anyone satisfied with a product that merely meets the written specification; further specification is needed to fill in the details. -- WayneMack

Understood, that's why (above) I said "for any given version of R". Let's be precise about abstraction (does that sound like a contradiction?). When I abstract (withdraw) from the solution "machine" so that I can only see the parts I will interact with and their external behaviors, then I have the subject matter for a specification of the system. There's nothing about the abstractness of that to prevent me from being as precise as I need to be. It's a fallacy that specifications must be imprecise because they are abstract. When you say "it's difficult for reviewers to see what's missing", you are giving me a pointer that specification and review skills are largely undeveloped, but you are not showing me the violation of a natural law.

As you develop something ... anything ..., what is it that guides you toward the correct result at any given moment? My thesis, again, is that that "thing" can be expressed, evaluated, improved, etc. But it depends on your ability to make it fully conscious, and then to express it. You may not wish to do either, but that's a different matter. -- WaldenMathews

I think we may be converging. The point I was originally trying to make was not that the WaterfallModel is invalid, but rather an iterative approach is a vastly different model. The idea is that there is a model where requirements, specification, analysis, development, test, what have you proceed in parallel. There is an approach where development can uncover holes in specification ("What exactly should I do when this happens?"), allowing development to validate the specification, much like test validates development. The issue is not the value of the WaterfallModel, but the existence of other models. Indeed, if there is only one model, then it is meaningless to talk about its value. I trust I am not reading too much into the grammar and syntax in the discussion above, but I believe there is a shift towards discussing the value of various models rather than the belief that other models are merely the WaterfallModel in disguise. Is this a fair analysis? -- WayneMack

I think it's valuable to do both, but in particular I think it's valuable to overcome the current cultural bias against Waterfall in order to more clearly see its base class contribution to all our thinking on software process. It's simple and quite centered, a place to return to when subtleties begin to overwhelm. It helps me spot errors in thinking, such as when you claim to test before you code. The model, as drawn, is also iterative if you follow the links. And those links were always there. -- WaldenMathews

The waterfall approach assumes that requirements are stable and frozen across the project plan. However, this is usually not true for large projects, where requirements may evolve across the development process.

There's a gag video piece up on YouTube on this subject.


See: SystemsDevelopmentLifeCycle

CategoryApplicationDevelopment

