The issue of which design approach to choose with regard to YouArentGonnaNeedIt often strikes me as being similar to investment decision math. It deals with probabilities, decision trees, costs versus benefits of each option, and FutureDiscounting. Some examples from that discipline show that the simplest approach is not automatically the best course of action.
Note that I don't expect each decision to undergo formal calculations, but one should be familiar with the techniques so that they become second-nature when making such decisions.
For example, let's say that change X is judged 70% likely to happen based on prior experience. The (additional) cost of building in preparations for X up-front is, say, 10 units. If X happens, then fully implementing X will cost 5 units if we did the preparations. However, if we did not prepare for X and X happens, then the cost would be 25 units.
Preparation for X:

Prepare: 0.7 x 15 + 0.3 x 10 = 13.5
Don't prepare: 0.7 x 25 + 0.3 x 0 = 17.5

Thus, it makes more sense to prepare in this case.
(Assume that the numbers are already adjusted for FutureDiscounting.)
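Here is the same arithmetic as a small script, so the numbers can be fiddled with; the function and variable names are just for illustration.

 def expected_cost(p_change, prep_cost, impl_with_prep, impl_without_prep):
     # Prepare: pay prep_cost up front, plus the cheaper implementation if X happens.
     prepare = p_change * (prep_cost + impl_with_prep) + (1 - p_change) * prep_cost
     # Don't prepare: pay nothing now, but the expensive implementation if X happens.
     dont_prepare = p_change * impl_without_prep
     return prepare, dont_prepare

 # Numbers from the example above (already adjusted for FutureDiscounting).
 prepare, dont_prepare = expected_cost(0.7, 10, 5, 25)
 print(round(prepare, 1), round(dont_prepare, 1))   # 13.5 17.5 -> preparing has the lower expected cost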
--top
An interesting metaphor, but it is in danger of appearing empirical when it really is still guessing based on one's experience or prejudices. (I updated my original comment because it didn't make sense when separated from the other part below.) -- BillCaputo
It is a thinking and communication tool more than a formal technique. One can use it to compare perceived estimates between people, for example. I find it easier to communicate when assumptions are turned into some kind of concrete value or estimated frequency. Examples are given in GoodMetricsUseNumbers. An imperfect technique for analyzing thoughts openly is often better than no technique. It may bring up previously-unstated assumptions, for example, even if the final numbers are not used.
"Thus, it makes more sense to prepare in this case (assuming I calculated it right)."
This assumption is the basis for the entire analysis. If I am new to (or have never done) ExtremeProgramming's TestFirst/SimpleDesign-style application development, my informal "second nature" calculations are going to result in me inflating the FutureDiscounting savings and putting a more favorable "result" in the "do x now" category. If, however, I have experienced the cost savings of highly factored and non-speculative systems, and seen the relative ease with which they can be changed when I RefactorMercilessly, practice OnceAndOnlyOnce, and have extensive suites of XpStyleUnitTests?, then I am going to "calculate" a much higher cost for "do x now".
In short, this metaphor is going to mask what is really going on: if I haven't done XP I will say that Yagni is often inappropriate, and if I have done it, I will say that it is almost always appropriate. Thus experience and fear -- not mathematics -- are what I will be basing my decision on. -- BillCaputo
This sounds to me like, "it just works better in my experience but I cannot really say why". Not very satisfying. Besides, RefactorMercilessly can be expensive. Sure, your code may eventually get to the same point, but at a great cost of constant shuffling of code. Further, it is hard to know whether it is hard-core refactoring or YAGNI that improves things. You have multiple variables but don't know which one gave you your better experience. I could also say that the techniques I use don't result in the high failure rates that have made many places look into XP. There may be more than one road to good software. Like I say, XpIsForBadPlanners.
Never mind that this is an unfair assessment of what Bill said - he mentions a lower CostOfChange brought about through practices such as RefactorMercilessly and OnceAndOnlyOnce. Hardly sounds like "I cannot really say why" to me. Also, one of the things that makes this Wiki what it is is that we believe in TheImportanceOfFirstHandExperience - and if two people have experience that differs we don't try to dismiss what the other person says. We try to gently tease out what accounts for the difference, without accusing anybody of being clueless. -- francis
The above analysis ignores the cost of guessing a feature you'll need in the future, implementing that feature, but guessing wrong about aspects of that feature. If you wait till you actually need that feature, this probably won't happen. It's easier to know what you need when you need it. -- francis
The better you are at guessing, the more use you can make of that guess (or ability to guess).
A guess is still a guess. Better is evidence.
If one's "guessing" has a good track record, then it may be a time saver by reducing the wheels you have to reinvent along the way.
Don't forget the aspect of repeatability. Yagni and refactoring are repeatable "processes". Guessing is not. If you always guess right you should be playing blackjack professionally instead of developing software - the effort reward ratio is better.
I don't think the Blackjack analogy is applicable. It is a matter of applying experience. For example, most business applications are probably going to have CrudScreens, reports, data backups, and query screens in the longer run. The initial requirements may only specify CrudScreens, but not the others. But one should probably design it so the other three can easily be added, rather than pretending that YouArentGoingToNeedIt. This is using experience from both your past projects and from observing other existing projects.
The blackjack analogy is applicable because it's not prediction, but prediction without feedback, that YAGNI is a warning against. We can't adjust our predictions (legally) in blackjack as the situation progresses. We can in software development; however, most traditional design approaches don't advocate this (certainly not in hourly or daily increments like XP). YAGNI is a reminder to continually temper our intuition by using empirical evidence, not to ditch our intuition altogether. Furthermore, the true problem in your counter-example is that the requirements don't match your experience. XPers don't throw out YAGNI to answer that problem; we work with the customer to improve requirements and encourage vertical slices of functionality. This is why OnsiteCustomer is an XP practice, why Communication is one of the XpValues, and why we focus on delivering WorkingSystems? (not prototypes or mockups) every week or so. -- BillCaputo
When you guess the future, sometimes you are going to be wrong. One has to weigh the cost of being wrong versus the benefits of being right and the probabilities involved. This is all part of "decision math". I think it is dumb to assume we can grow everything organically from scratch and ignore past lessons because of the chance of loss from occasional bad guessing. Yagni is like the "market share is everything" kind of fad. Use math, not mantra. Perhaps we can also factor in a distribution of "guessing error".
Unfortunately, your estimates of your chances of being wrong are also just guesses (e.g. 'occasional bad guessing' sounds way too optimistic based on my experience). Programmers tend to underestimate difficulty by large degrees, and it isn't because we're bad at estimating, it's because the things we're trying to estimate are too complex to be able to get a good guess in a reasonable amount of time.
XpIsForBadPlanners. I am not a "bad planner" for the most part. If somebody is crappy at semi-planned design, THEN make them do XP.
Personally, I think it is OOP in business modeling that makes so many projects fail and "hard to plan right" these days. OO is a road to hard-coded NavigationalDatabases and the evil that comes with them. -- A RelationalWeenie
If that were true, we should see a trend of more and more projects failing as OO became more popular in the last 10-15 years. We have not seen that. We've seen a steady trend of failing projects. It's not the OO part that's failing, it's the BDUF part. BDUF in any language is inappropriate for most competitive software.
I have not seen too many consistent failure measurements, just more chatter about failures. I just know that my projects generally don't fail, and that a given set of techniques works when others use them (unless management sticks its fingers in the pie in a dumb way; maybe it is dumb managers sticking their thumbs into things that ruin them all).
As a side note, you can actually adjust your predictions in blackjack as the situation progresses. This is called counting cards. As the ratio of tens, aces, and face-cards to low cards increases, the odds for the player increase, so by noting the state of the deck and varying your bets accordingly you can turn blackjack into a game in which the odds are in your favor. This is legal. However, casinos also have the right to kick you out for any reason -- they're private places, after all -- so if the dealer or pitboss notices you doing so, they'll be sure to make you take your money to the casino down the road. -- francis
Yes, that's what I meant by 'legally', as in "if you get caught there are penalties." As an AmericanCulturalAssumption, this bit of casino policy is well known to anyone who has seen the movie "Rain Man" (http://us.imdb.com/Details?0095953) :-) -- BillCaputo
How about this analogy: it used to be that streets were designed for horses and horse carriages, not cars and trucks. However, as cities and cars grew, many existing roads were found to be too narrow. Now when new towns are designed, they create roads wider than initial projections require, or at least leave space for wider roads. If they followed YouArentGoingToNeedIt, then people would build structures to the edge of narrow roads. Later, if/when things grow, you get the bottleneck intersections and missing center turn lanes commonly seen in poorly-planned cities and towns. It is easier to leave space than to later move all the structures and houses. I believe the calculations for such actions would be similar to the above numeric example.
Except that software development is nothing like building roads. Let's assume for a second that you could build roads the way you can program software: For example, if you use OnceAndOnlyOnce as XP suggests (see OaooBalancesYagni), then you would have a simple variable for the width of standard streets, and changing them from 3 meters to 4 meters would be as simple as changing the value of the variable. Clearly this kind of grand change is not possible in physical construction, which is why ProgrammingAintManufacturing (see also TheSourceCodeIsTheDesign). PrematureGeneralizationIsEvil in software.
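A minimal sketch of that OnceAndOnlyOnce point, with made-up names, just to show why the change is cheap in code:

 # The street width is defined OnceAndOnlyOnce; everything else derives from it.
 STREET_WIDTH_METERS = 3.0

 def lane_width():
     return STREET_WIDTH_METERS / 2

 def right_of_way_width(sidewalk=1.0):
     return STREET_WIDTH_METERS + 2 * sidewalk

 # "Widening every road" later is a one-line change to STREET_WIDTH_METERS,
 # exactly the kind of cheap, global change physical construction does not allow.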
[Simultaneous edit in response to How about this analogy...]:
YouArentGoingToNeedIt doesn't suggest that we guess whether the roads can be narrow or wide; it means we find out what the intended usage is, and build accordingly. Furthermore, we aren't building roads, we are writing code. Roads are expensive to change. If code is expensive to change we don't do it. Hence many of the other principles and practices in XP (like RefactorMercilessly and OnceAndOnlyOnce) are aimed at reducing CostOfChange. Finally, even within the analogy YAGNI holds. The original road builders were right in not paying for roads that would hold cars (as much as it might be to our benefit if they had). Had they done so, they would have paid a cost in lost real estate and increased design and building cost, and yet still not saved us cost, because they would probably have guessed wrong (do you really believe the visionaries of the 17th and 18th centuries would have accurately predicted the issues faced by city arteries???). -- BillCaputo
Salt Lake City did it. (At least they claim that.) Besides, I am not talking about newbies, but people who have some experience with requirements changes.
I wonder if opinions are a range rather than discrete. Most would agree there is over-design and over-planning. Does XP refuse ANY anticipation-based design? Do they toss "A stitch in time saves nine"?
XP uses ContinuousDesign via ReFactoring and DesignPatterns.
IMO, XP calls for "A stitch in time" (when you do need it) rather than "A stitch ahead of time" (when you think you might need it). XPer please correct me if I got it wrong.
"For example, lets say that change X is judged 70% likely to happen based on prior experience. The (additional) costs of building in preparations for X up-front is say 10 units. "
IME, you need to add in the additional cost to all other units simply due to the presence of these preparations for X. By adding the preparations for X, you are adding complexity to your system, and that may (and probably will) increase the cost of all other units across the whole system. The worst part is that it is very difficult and costly to estimate how much the added complexity is going to cost you.
In the end, you get a bunch of guesswork numbers (the costs) multiplied by another guess (the probabilities), so the error margin of the resulting number is too big to be useful. Take the simple example at the top: the difference between the two results is only ~30%, while the error margins for most of the numbers used are probably >50%. I think part of the discussion above stems from this gigantic error margin, i.e., you can get whatever result you want by fiddling the inputs because the inputs are guesswork themselves; you are just obfuscating your guesswork with a formula and pretending that makes it a good reason.
What is the chain of multiplication that you are talking about?
This one: "Prepare:
0.7 x 15 + 0.3 x 10 = 13.5"The probabilities (0.7 and 0.3) are guesses, the efforts estimates (15, 10) are guess (probably more accurate ones), and the missing impact on the rest of the system will be guess. Effort estimate having error margin of 30% is already pretty good, how far off the probabilities can get? 50%-70% error margin? So the final number is more like 13 +/- 8 (the .5 precision above with no +/- margin is really quite insulting to anyone who understands how significant figures are counted).
If the "error of guessing" is more or less random, then they will tend to cancel each other out, not keep rolling up into a bigger error. I will agree that sometimes one can be way off, but that is a price worth paying if we come close most of the time.
Errors that are added tend to get cancelled out, but only if they are measuring the same thing, and only if they are random errors. This is why averages work. Errors that are multiplied get bigger and bigger, especially if they are measuring different complex things, or if they are systematically wrong (e.g. programmers almost always underestimate, not overestimate).
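A small simulation sketch of that distinction, with purely illustrative numbers: independent random errors average out, a systematic bias does not, and errors in multiplied quantities compound.

 import random
 random.seed(1)

 TRUE_COST = 10.0

 def estimate(bias=0.0, noise=0.3):
     # One estimate of TRUE_COST with a relative bias plus random noise.
     return TRUE_COST * (1 + bias + random.uniform(-noise, noise))

 # Random errors cancel: the average of many independent estimates ends up near 10.
 print(round(sum(estimate() for _ in range(1000)) / 1000, 2))

 # Systematic errors do not: a consistent 30% underestimate averages near 7.
 print(round(sum(estimate(bias=-0.3) for _ in range(1000)) / 1000, 2))

 # Multiplied errors compound: a probability and a cost that are each 50% too high
 # make the product 125% too high (1.5 x 1.5 = 2.25 times the true value).
 print(round((0.7 * 1.5) * (10 * 1.5), 2))   # 15.75 versus the "true" 7.0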
E.g. Let's say the person's guesses are +- 20%, which seems reasonable. In fact, we'll only say that the probability guess is +- 0.1. In this case, we could see a range of cost differences from 12.8 (better to prepare) to -3.6 (better not to prepare). [P == cost of preparation; I == cost of implementation with preparation; H == cost of implementation without preparation]
Scenario 1: Perfect guesses
 X = 0.7, ~X = 0.3, P = 10, I = 5, H = 25
 Prep: 13.5, ~Prep: 17.5, Diff: 4

Scenario 2: Systematic error overestimating cost of preparation
 X = 0.8, ~X = 0.2, P = 8, I = 4, H = 30
 Prep: 11.2, ~Prep: 24, Diff: 12.8

Scenario 3: Systematic error underestimating cost of preparation
 X = 0.6, ~X = 0.4, P = 12, I = 6, H = 20
 Prep: 15.6, ~Prep: 12, Diff: -3.6

My belief, based on experience and just about every book I've read on software development, is that Scenario 3 happens FAR more often than 1 or 2, or even a scenario which has random errors. Now, as you say, you personally may be able to get much more accurate estimations for your own projects. All I'm saying is that that isn't the typical case in the software development field.
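A sketch that reproduces those three scenarios with the expected-cost arithmetic from the top of the page (the function name is made up for illustration):

 def decision_diff(p, prep, impl_with_prep, impl_without_prep):
     # Expected cost of not preparing minus expected cost of preparing.
     # Positive means preparing looks better; negative means it does not.
     prepare = p * (prep + impl_with_prep) + (1 - p) * prep
     dont_prepare = p * impl_without_prep
     return dont_prepare - prepare

 print(round(decision_diff(0.7, 10, 5, 25), 1))   # Scenario 1:  4.0
 print(round(decision_diff(0.8,  8, 4, 30), 1))   # Scenario 2: 12.8
 print(round(decision_diff(0.6, 12, 6, 20), 1))   # Scenario 3: -3.6 (preparing loses)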
IMO, people under-estimate cost more for political reasons than because of bad estimating ability. I have entered into suspicious projects for political reasons many times before, I must admit. I wish it were not that way, but if you want the prizes, you have to play the game, even if it's a silly game. One often doesn't have control over the aspects that will "fix" the problem, so one must either ride a doomed train or watch somebody else ride; and sometimes it is better to be on it despite the outcome. At least they pay you until it crashes. Just have your CYA folder ready if and when the need comes.
Put another way, the decision math for your career choices may differ from the curve for a given project.
Contractors "getting their foot in the door" is another "reason" to low-ball estimates. Contracting agencies often take an up-front loss so that they can be paid for "add-on" services and extra changes. "Sales math" and honest project resource estimations may also have different curves because they are computing under different goals.
The immediate benefit of trying to put numbers to something like this is fairly small. Most of the time, the initial data will have large errors, making the supposed conclusions all but useless. If you only run the calculations a couple of times, chances are good that you'll end up doing the same thing as if you hadn't.
The real gains come from two things:
1) By putting specific numbers to something, you are making hard, testable predictions. By comparing the eventual outcome with your predictions (as far as you can) you should improve your ability to make accurate predictions.
2) By doing slightly different calculations, you can work out what the decision thresholds are, giving you a much better basis on which to make judgement calls. If it turns out that changes where you commit x units of effort up front to save y units later only need a 10% chance of being needed to break even, then it's worth leaving even fairly unlikely things open when you can do so at that level; if it turns out they need a 90% chance of being needed to break even, then in some projects it's not worth providing even for things that are some way down the official features list (a sketch of the break-even calculation is below). It's not about getting the right decision for your current project; it's about improving your understanding of how to make the right choice in the future...
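As one illustration of such a threshold, using the cost model from the example at the top of the page (the function name is my own):

 def break_even_probability(prep_cost, impl_with_prep, impl_without_prep):
     # Probability of the change happening at which preparing and not preparing
     # have the same expected cost:
     #   prep_cost + p * impl_with_prep == p * impl_without_prep
     return prep_cost / (impl_without_prep - impl_with_prep)

 # With the original numbers (prepare for 10, then implement for 5 instead of 25),
 # the change needs at least a 50% chance of happening before preparing pays off.
 print(break_even_probability(10, 5, 25))   # 0.5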
It's mostly a communications device, I agree. It puts a concrete model based on notions in people's heads such that there is something concrete to talk about and tune.
See also: SimulationOfTheFuture