Requirements Tracking

I have used TurnObjectionsIntoScenarios, resulting in ExtremeProgrammingChallengeNine. -- KentBeck

I think I get, from the term, the idea of what RequirementsTracking is. It sounds like you keep track of changes to the requirements, who made the change, why they made it, when, stuff like that. If that's wrong, correct me now.

If that's what RequirementsTracking is, I don't see the benefit. Please tell me a story where the moral is, "And that's why I am ever so happy that I tracked requirements changes".

-- RonJeffries, with assistance from KentBeck

Note that Ron does not knock "change when the requirements change". He knocks "write extra paperwork explaining how and why you change". Any effort put into the latter takes time away from doing the former right, including aggressive RegressionTesting. -- PhlIp, with assistance from PhlIp


You've probably seen it by another name, Ron. BlameTracking. Not that I'm saying it's useful. -- AnthonyLander

After this page pans out, I'll hope for an understanding of why BlameTracking would be useful ;->


Actually, what Ron and Kent describe above is usually considered part of ChangeTracking?. Most of the questions like: who made the change, why they made it, when, etc., can be invisibly captured as part of checkout/checkin of the software artifact in the repository (I say "artifact" because it might be something besides source-code, like requirements ;-).

What usually is meant by the term RequirementsTracking refers to tracking the requirements themselves (their status more than their changes) through every aspect of the project. It requires a record of traceable linkage from each requirement to (and through) every other software artifact that affects or is affected by it, and in the other direction as well. That means that someone is demanding that each class/method be "traceable" to its corresponding elements in the design, each of which is "traceable" to its corresponding requirement. This can be quite painful to do (and to maintain).

Common (and delightfully excruciating ;-) practice is to give a unique id (e.g., 1.2.3.4) to each section in "the requirements document." These form the rows in what is called a "traceability matrix." The columns of the matrix correspond to the other types of software artifacts it will be traced to: test plans and test cases, design & interface specs, source code, ad nauseam. Many try to manually update a spreadsheet with the names/versions of each related artifact for the corresponding requirement in each column (ouch - that hurts). There are tools to help do most of this, including ones that try to cull the requirements (and assign IDs) from natural text (often looking for the words "shall" and "should"). Some of the better RequirementsTracking tools are DOORS (from QSS, if I remember correctly; Telelogic later bought QSS and DOORS) and Rational's RequisitePro. One typically generates/maintains traceability matrices only when they are explicitly mandated from above (often by the customers or contractors, but sometimes by formal SQA departments and/or "high ceremony" shops).
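For the flavor of it, here is a minimal sketch of such a matrix as plain data, in Python. Everything in it (the requirement IDs, the column names, the artifact names) is invented for illustration; the real tools layer versioning and reporting on top of the same basic shape:

 # A minimal sketch of a traceability matrix as plain data, not a real tool.
 # The requirement IDs and artifact names below are hypothetical.
 ARTIFACT_TYPES = ["design_spec", "source", "test_case"]

 matrix = {
     "1.2.3.4": {
         "design_spec": ["brakes-design.doc#4.1"],
         "source":      ["abs_controller.c"],
         "test_case":   ["TC-017", "TC-018"],
     },
     "1.2.3.5": {
         "design_spec": ["brakes-design.doc#4.2"],
         "source":      [],          # not implemented yet
         "test_case":   [],
     },
 }

 def untraced(matrix):
     """Yield every (requirement, artifact type) cell that is still empty."""
     for req_id, row in sorted(matrix.items()):
         for kind in ARTIFACT_TYPES:
             if not row.get(kind):
                 yield req_id, kind

 for req_id, kind in untraced(matrix):
     print("Requirement %s has nothing traced in column %s" % (req_id, kind))

An empty cell is exactly what an auditor goes looking for.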

Anyway, that's what is usually meant by RequirementsTracking (I told you it was painful ;-) -- BradAppleton

I'm after WHY, not WHAT. Please, I beg you, tell me a story whose moral is "And that's why I am ever so happy that I tracked requirements".

"If that's wrong, correct me now." was your phrase. You have to get the what before you get to the why.


Because the FDA came in and saw what I did. Then they went out and ate lunch instead of me. And that's why I am ever so happy that I tracked requirements.


The "WHY" is because the customer/queen decreed it. If your customers don't, that's fine (and much less painful ;-). Lots of customers do however, particularly for large contracts, or contractor-subcontractor agreements, or when your customer is not the end-user, but is pulling your stuff into something larger (possibly including lots of hardware) before they supply it en masse to their users. It's also strictly required many times when the overall product (of which your developed software is only one part) must pass some mandatory standards or audits.

For example, suppose you are writing software that is part of a real-time embedded system for anti-lock braking (and/or air-bag activation), and the car must pass both federally and locally mandated safety tests. Some portion of the rigor and formality gets passed on to the software developers, and sometimes the contractor will make even more stringent demands of the subcontractors, making them jump through more hoops, just to be "safe".

In these business domains, you perform RequirementsTracking simply because you aren't given a choice; it's not optional. You don't win the contract-bid unless you contractually agree to perform this function, and then later you don't get final payment (or else the delivery is not accepted) until you produce the information verifying that you have done it, and that every requirement has been delivered and has been traced from concept through delivery. For such scenarios, you can't even begin to attempt to bid for the contract (which might be as large as 6-8 figures or more) unless you're willing to subject yourself to the pain of RequirementsTracking.

In other business domains, nothing in the contract forces you to do it, but some regulating body (like the FDA, in the example someone gave above) is allowed to come in and audit you, and then shut you down if you can't show them "objective evidence" of your RequirementsTracking (as well as evidence of a host of other things too ;-). -- BradAppleton


So, are they trying to be sure that the features they asked for are there? Would AcceptanceTests serve to show that? Or are they trying to be sure that something else happened?

They are not only trying to ensure the features are present, but that they were given explicit attention at each development phase/cycle and audit point between the time they give you the bid and the time you give them the product. Keep in mind some of these shops are ones that are required to be ISO-XXXX compliant (or some other kind of standard, possibly multiple standards), so it may also be a case of "passing the buck" in terms of obligatory demands on the degree of ceremony they are required to fulfill themselves. Also (and I neglected to mention this before), it's common to see traceability matrices as a required deliverable when the product is in an intermediate stage. So requirements that aren't implemented yet (and hence can't pass an acceptance test) would still need to be traced through to their current state. -- BradAppleton

Judging from my meager experience with a summer job at a company in the avionics business...

It's not so much proving that the features are present as proving that nothing but the features are present. When you're making software that controls planes, pacemakers, cars, or other safety-critical equipment, you can't afford any bugs. Thus you want to limit the possible points of failure in the system as much as possible, and ideally prove the correctness of most of it. Fewer moving parts = less chance of failure.

The traceability matrices ensure that all the code in the system is there for a reason. If there's a line of source code that can't be tracked back through design and architecture to requirements, there's a full design review to see whether the requirements need to be changed or whether the line should be removed. The system stays as lean as possible, the programmers understand what every single line is supposed to be doing, and everything can be tested or proven correct.

Obviously, it's a lot of work, but some application domains demand that much work. If I'm flying on a plane, I'd much rather know the programmers understood every bit of what their program was doing than hear that "It passed all the tests." For typical business software, YouArentGoingToNeedIt, but there's a lot of software that goes into embedded controllers for mission-critical applications. -- JonathanTang


Yes. In these situations, the quality of the product is not quite as important as demonstrating that you've been absolutely bureaucratic about how you got there. The process is the show. Due diligence. Reason doesn't even enter into this sort of thing. I don't know if GradyBooch was the one who introduced the word 'ceremony' to describe this, but it was an apt choice.

Alas, such is the world of contract disputes and government contracts and legal liability (it's not that reason doesn't enter into the equation, but it isn't aligned with the values many of us would choose). Also keep in mind that none of this is necessarily the case with ChangeTracking?, and shouldn't be lumped together with it. Tracing requirements and tracking changes are very different beasts.


If you were building a pacemaker control system, you might like to know what the thing was supposed to do before you assigned someone to sit down and write AcceptanceTests. If someone changed what it was supposed to do, you might like to know who it was, and how they came to disagree with the previous attempt to say it. Once your requirements are written down, and the people in the know agree that that is what they think it is supposed to do, then you can talk about whether your acceptance tests accurately capture what it is supposed to do. And then you can test it, etc. -- AlistairCockburn

The ExtremeProcess says that changes to scope (i.e., what is to be done) can be made only by the customer. What XP doesn't mention, though, is that the customer may perform that responsibility using as little or as much ceremony as desired/required. So long as the customer process doesn't interfere with the negotiations that generate the CommitmentSchedule, does XP care?

In XP, requirements are described by UserStories. UserStories lead to UnitTests. Usually there is not a strong link between them, because UserStories are on paper (i.e. cards) and UnitTests are in the computer, but it is easy to imagine a system in which UserStories were on a computer and there were links between the two. The final part of requirements tracking is linking tests to code. You can find out what code is required for each test case by running it and tracing the code that is executing. You can find out what test cases lead to a particular piece of code by deleting it and running the tests. So, with a little extra work, you can have complete RequirementsTracking in XP.
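For the flavor of the run-and-trace idea, here is a rough sketch using Python's built-in trace hook; the add function and its test are invented for illustration:

 import sys
 from collections import defaultdict

 # Map each test name to the set of (file, line) pairs it executed.
 executed = defaultdict(set)

 def run_traced(test_name, test_fn):
     """Run one test under a trace hook, recording every line it touches."""
     def tracer(frame, event, arg):
         if event == "line":
             executed[test_name].add((frame.f_code.co_filename, frame.f_lineno))
         return tracer
     sys.settrace(tracer)
     try:
         test_fn()
     finally:
         sys.settrace(None)

 # Hypothetical production code and test, for illustration only.
 def add(a, b):
     return a + b

 def test_addition():
     assert add(2, 3) == 5

 run_traced("test_addition", test_addition)
 print(sorted(executed["test_addition"]))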

It should be easy to write code to produce a report that listed the requirements that each method helped satisfy, and the methods that helped satisfy each requirement. It would take a bit of time to run, but would be easy to write. -- RalphJohnson
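Such a report might be built by inverting the per-test trace, as in the following sketch; the story and method names are made up:

 from collections import defaultdict

 # Assumed inputs: which tests back each story (from the story/test links),
 # and which methods each test executed (from a trace like the one above).
 tests_for_story = {
     "pay-employee": ["test_payroll_total", "test_overtime"],
 }
 methods_for_test = {
     "test_payroll_total": ["Payroll.total", "Employee.rate"],
     "test_overtime":      ["Payroll.total", "Payroll.overtime"],
 }

 # Forward report: story -> methods that help satisfy it.
 methods_for_story = {}
 for story, tests in tests_for_story.items():
     methods_for_story[story] = sorted(
         set(m for t in tests for m in methods_for_test.get(t, [])))

 # Inverted report: method -> stories it helps satisfy.
 stories_for_method = defaultdict(set)
 for story, methods in methods_for_story.items():
     for m in methods:
         stories_for_method[m].add(story)

 for story, methods in sorted(methods_for_story.items()):
     print(story, "->", methods)
 for m, stories in sorted(stories_for_method.items()):
     print(m, "->", sorted(stories))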


The purpose of requirements tracking is so that, when requirements change, you know which code can be removed. It answers the following question for each artifact (object, method, test case, data dictionary term):

"Why do we have this?"

The theory is that dependency chains can get very convoluted; sometimes a small change in the requirements can have drastic effects on what needs to be implemented. -- IanRae


In real life a working system can contain things that are no longer needed. An example is the human appendix, which was once useful but is now more of a risk factor. In the first part of this century, the British army found that whenever a gun was to be fired, there was always a soldier standing by doing nothing. No reason for this. The manual merely said what people had to do, with no statement as to what requirement was satisfied by a soldier standing still doing nothing. They worked back through earlier manuals... and found he was there to hold the officer's horse and stop it bolting.

In software terms requirements tracking lets you find out why something was there. A method may have a call elsewhere in the source code, but the call may never be executed. How do you know you can remove it?

-- DickBotting

See DeadCodeAnalysis?
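One mechanical answer in the spirit of the suggestions above: run the whole test suite under a coverage tool and flag whatever never executes. A self-contained sketch using Python's standard-library trace module (the function and its one-test "suite" are invented, and zero coverage only makes a line a candidate for removal, not proof):

 import trace

 def maybe_dead(x):
     if x > 0:
         return x
     return -x                     # never reached by the 'suite' below

 def run_all_the_tests():
     assert maybe_dead(3) == 3     # only exercises the positive branch

 # Count which lines execute; write .cover files marking unexecuted
 # lines with ">>>>>>" so they can be taken to a design review.
 tracer = trace.Trace(count=True, trace=False)
 tracer.runfunc(run_all_the_tests)
 tracer.results().write_results(show_missing=True, summary=True, coverdir=".")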


I am hearing two different kinds of things described here. One is RequirementsManagement, and the other is RequirementsTraceability?. Both have their values under certain circumstances.

RequirementsManagement is there to make sure that requirements don't change without authorization, and without a corresponding change to the CommitmentSchedule. On projects where I have used it, I was glad because it stopped customers from demanding changes without giving us more time to implement them, and from claiming that the requirements that I was working to were not those to which they had previously agreed (both behaviors I have seen when I did not use RequirementsManagement). I would call any process which had this effect the same thing. How does XP handle this? What happens if a customer thinks that you lost a UserStory card? If he borrows one and changes it without telling you? Thinks that you modified one without telling him?

RequirementsTraceability? is there to make sure that you know when the agreed-upon requirements have been completed. If you tag all requirements by release, and write AcceptanceTests for all of your scheduled requirements, you are really doing this process. If some requirements cannot be tested via AcceptanceTests, you may have to use other techniques. In the very formalized approaches, you may make a matrix of which requirements are to be verified by testing, and which by other means.

-- RussellGold


I keep thinking that this comes down to confidence and scaling.

When the customer base is large enough that different customers have differing versions of the stories, you can have various problems. The customers need to know that their concerns are being met, and that another customer's desires won't override or overwrite theirs. If there is a disagreement, all parties need to know who has to be involved in the resolution. All parties need to recognize recurring back-and-forth story conflicts. Customers need confidence that the system that is built meets their needs. Developers need confidence that they are building the right system.

Scaling comes in with the way that XP is largely envisioned for smallish teams. XP assumes that either the customers are largely self-consistent, or that they can be regularly locked into a room together to work out their differences, and that the PlanningGame is the way those differences get resolved. With larger, distributed, or particularly conflicted organizations, it becomes harder to do this successfully. Perhaps that means that XP is not appropriate, or perhaps it means some adaptations are in order.

RequirementsTracking is, I think, a non-XP response to these problems. It is, as many of the above comments say, a CYA response. In its defense, I'd say that most methods aimed at providing confidence are a kind of CYA.

I'm thinking that there's a better solution, though I'm not sure about all the details. SteveForgey? talks about ExtremeRequirements? as an approach in some of these situations. As I understand it, XR applies extreme principles to a requirements process that, for any of a variety of reasons, exceeds the scope of XP. I believe this would include tests for requirements, which get written before the requirements, and which get applied whenever requirements change. You could argue that the accumulation of such requirements tests is your RequirementsTracking, as well as an excellent way to detect conflicts.

-- RaySchnitzler


I appear to be missing the point here ... Requirements and change tracking, I thought, are about understanding change: its necessity and its impact. I cannot see any direct relationship between how quickly I can churn out code and how quickly I can respond to (or disregard needless) change. After all, in cases where you are looking at a product with a five-year-plus life span, we are talking about rapid response throughout its lifetime, not just during the initial development. For example, you want to avoid the cyclic change syndrome, which cannot be avoided without retaining historical knowledge.

So how does "Requirements and Change tracking" help understand change? What is "cyclic change syndrome" and why should it be avoided?


The essential goal of requirements tracking is to make sure that the product code realizes the requirements, no less and, in particular, no more. Acceptance tests focus on the "no less"; requirements tracking focuses on the "no more".

Acceptance tests test what you expect. Programmers' pet rabbits and abandoned courses of development that find their way into the code create behaviours you don't expect and, per AdamsLaw, come back to haunt you later when it is least convenient.

If you want to pretend to be an engineer, requirements tracking is a convincing engineering discipline. If XP can't handle this, XP needs to be fixed.

--MarcThibault


CategoryRequirements

