From YagniIsBologna.
YouArentGonnaNeedIt (YAGNI) and DoTheSimplestThingThatCouldPossiblyWork (DTSTTCPW) are not invitations to stupidity. If you have three reports to do, you do not do each one from scratch with no concern for the preceding work. That would be silly. Instead you do one, start the next, see what's usable, refactor. See XpSimplicityRules for more on how.
But the original poster [on YagniIsBologna] suggests writing code because you "know" you're going to need it. The idea is to spend more now to save in the future. What is necessary in order for that to work?
Well, first, doing the thing in the future needs to cost more than it does to do it now. If it doesn't, then doing it in the future saves time/money now at no cost in the future. XPers suggest that it does not cost more to make the same change to well-factored code, no matter when you do it. We even suggest that sometimes we are smarter in the future than we are in the present, and things go more easily.
Second, there needs to be no better way to spend the time/money now. XPers suggest that a better way to spend the time/money now is to deliver actual business value to the customer, that is, to deliver new features. This is an especially good idea if you believe the point above, that doing the generalizing later will cost no more than generalizing in anticipation of need.
So the way you do YAGNI is this: Implement exactly what you need now, no more, and no less. Make the code as perfect as you can: refactor, removing duplication, making the code communicate. That way, when you need the next thing, whether it is a generalization or not, it will go in smoothly.
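As a minimal sketch of that sequence (using the report example from above; the names and the choice of Python are invented for illustration):

  # Step 1: implement exactly what the first report needs -- no generality.
  def sales_report(orders):
      total = sum(o["amount"] for o in orders)
      return "Sales report: %d orders, total %s" % (len(orders), total)

  # Step 2, later, when a second report is actually requested: refactor,
  # removing the duplication the second report would otherwise introduce.
  def summarize(items, label):
      total = sum(item["amount"] for item in items)
      return "%s: %d items, total %s" % (label, len(items), total)

  def sales_report_v2(orders):
      return summarize(orders, "Sales report")

  def refunds_report(refunds):
      return summarize(refunds, "Refunds report")

The generalization (summarize) appears only when the second concrete need arrives, and it goes in smoothly because the first report was kept clean.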
If you generalize first, costs look like this:
  Time1: F1 + G     (cost of feature 1 + cost of generality)
  Time2: F2         (cost of feature 2, really good deal)
  Total: G + F1 + F2

If you generalize at the time of doing feature 2, it looks like this:

  Time1: F1
  Time2: G + F2
  Total: G + F1 + F2

There's not much difference in the cost ... but the cost of G is deferred in the second case. You actually save with YAGNI!
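A minimal sketch of why deferral saves, assuming a simple discount rate on money (or programmer time); the rate and costs are invented numbers, not measurements:

  # Present value of a cost paid n periods in the future,
  # at discount rate r per period.
  def present_value(cost, periods, rate=0.10):
      return cost / (1 + rate) ** periods

  F1, F2, G = 10.0, 10.0, 5.0

  # Generalize up front: pay F1 + G now, F2 next period.
  upfront = (F1 + G) + present_value(F2, 1)

  # YAGNI: pay F1 now, G + F2 next period.
  yagni = F1 + present_value(G + F2, 1)

  print(round(upfront, 2), round(yagni, 2))  # 24.09 23.64 -- the deferred G costs less today

This is the same observation FutureDiscounting makes: identical nominal totals are not identical costs when one of them is paid later.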
But the actual cost is (reading GF1 as the cost of the general version of feature 1, and T(-F1) as the cost of transitioning away from the interim F1 -- the symbols are not defined on this page, so this is an inferred reading):

  No generalization:              Generalization:
  Time1: F1                       Time1: GF1
  Time2: GF1 + T(-F1) + F2        Time2: F2
  Total: GF1 + T(-F1) + F2        Total: GF1 + F2
XPers Over-Simplify the Equation
XPers often apply 'YagNi' to 'generality' features in all their various forms, including portability and modularity. In reality, you often can avoid the 'generality' features now (that general solution? YagNi!) but you need some sort of bridge or partial solution immediately. XPers frame the claim as: for YAGNI to lose, "do the general thing in the future" must cost more than "do the general thing now". But, supposing you actually end up needing the general solution, the costs look more like the 'actual cost' table above: you also pay for the interim solution and for tearing it out.
The reason XP is often a good decision comes down to another part of the equation: "get the general thing correct in the future" is often much cheaper than "get the general thing correct now", because you'll have a clearer notion of how the 'thing' needs to be generalized, and so you won't waste as much time on research. Of course, this isn't necessarily true... if you have already executed PlanToThrowOneAway, or you have already implemented similar projects, or you have simply learned from similar projects and their successes and failures, you might know exactly what you need.
It is worth noting that the costs of maintaining an interim solution and later undoing it (switching over to a general solution) can vary wildly based on the problem domain, issues of code ownership, number of 'customers', etc. For example, for languages, libraries, and programs that interface with other programs (rather than with humans, who adapt more readily to change), those costs can be very high, depending on the number of users and the number of code components outside your control that end up communicating with your own.
And, of course, YagNi wins when it turns out your project really doesn't need the generality. XPers can reasonably argue that the risk reduction from not gambling their time on 'generality' they might never need offsets the cost differential of doing the general thing in the future rather than now. Supposing you don't need the generality, the costs look more like:

  No generalization:        Generalization:
  Time1: F1                 Time1: GF1
  Time2: F2                 Time2: F2
  Total: F1 + F2            Total: GF1 + F2

Since GF1 is necessarily larger than F1 alone, YagNi wins outright here.
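A minimal sketch of that gamble, assuming some probability p that the generality is ever needed (all numbers invented, purely to show the shape of the trade-off):

  F1, F2 = 10.0, 10.0   # plain feature costs (invented)
  GF1 = 18.0            # general version of feature 1 (invented)
  T = 4.0               # cost of tearing out the interim F1 (invented)

  upfront = GF1 + F2    # pay for generality whether or not it's ever needed

  def expected_yagni(p):
      # With probability p the generality is needed later: pay the general
      # version plus the tear-out cost on top of the interim F1.
      return p * (F1 + GF1 + T + F2) + (1 - p) * (F1 + F2)

  for p in (0.1, 0.5, 0.9):
      print(p, expected_yagni(p), upfront)
  # 0.1: 22.2 vs 28.0 -- YAGNI wins easily
  # 0.5: 31.0 vs 28.0 -- up-front starts to win
  # 0.9: 39.8 vs 28.0 -- up-front wins clearly

The crossover point depends entirely on how confident you are that the generality will be needed, which is where the arguments about experience below come in.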
The counterargument is that you are often stupider in the future than you are now. Now, you have been working on the problem for days. All the code is in your head. You know just how to make any change. It will take minutes or hours to make the change. In a week or two you will forget it, and will have to learn it all over again. It will take days to make the change. So, it is better to do it now, when you are smarter and the job is easier. CommunalCode? strives to eliminate this.
In practice, we compromise. We batch tasks that are similar so that once we load our brain for the first, we can breeze through the others. We tell the customer that if we do this story, these others will be cheaper, and so he or she puts them together. That way we can write them one at a time and refactor, and not pay the price for getting stupider in the future. But it is a fact that we are just as likely to be stupider in the future as we are to be smarter. -- RalphJohnson
It is common to confuse stupid with having forgotten the details. I would suggest that smarter be replaced with wiser. In the future, I may not remember how I did something in the past, but I will usually have a better grasp of the approach or the process, which will alter the how. If I've been paying attention during that span of time, then I should be wiser about which things to employ at all. If I'm not wiser, it might be time to change careers. -- GarryHamilton
It seems YagNi in general does not give much weight to experience. It does not seem to give credence to the idea that understanding the domain helps one plan. In this view, experience-based predictions carry little weight in YagNi-style decision calculations.
The fallacy here is the assumption that G has the same cost in both of the above examples. In practice it plainly doesn't, in the majority of cases (no matter how many black-belt refactorers you have on your team). In my experience, some good thoughtful design means that the cost of G in F1 + G is more often than not considerably lower than the cost of G in G + F2.
Of course there are exceptions, and of course good design of F1 will go some way towards making it easier to generalise F1 at a later date, but... I'm still convinced that there are going to be many cases where generalising in advance of a known future requirement makes better sense than obstinate refusal to do so in the name of Yagni. -- GaryBarnett
But 'good thoughtful design' takes time, right? That time is part of G in either case; it's not free in the F1 + G case. Maybe you're thinking that it goes in the F1 term. I think that's what Yagni and TheSimplestThing are really challenging. Maybe you're separating good design and generalization into two things that really fall under the same design hat.
Actually, the fallacy here is the assumption that any of the variables will have the same value in both examples. In the first case, you'll most likely do G first, so F1 and F2 will be about the same. G(1), however, may be much bigger than G(2), while F1(2) will almost certainly be bigger than F1(1). F2 can be assumed to be the same in both cases, though. The correct representation would be:

  Time1(1) = G(1) + F1(1)
  Time2(1) = F2
  Total(1) = G(1) + F1(1) + F2

  Time1(2) = F1(2)
  Time2(2) = G(2) + F2
  Total(2) = F1(2) + G(2) + F2

Since F1(2) is very likely to be smaller than G(1) + F1(1), if you indeed do not need F2, you've won. If you do need it, most YagniAdvocates? argue that F1(2) + G(2) is most likely to be smaller than G(1) + F1(1).
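Cancelling the common F2 term (when F2 is in fact needed), the comparison reduces to a single inequality in the same notation:

  Total(2) - Total(1) = [F1(2) + G(2)] - [G(1) + F1(1)]

so generalizing later wins exactly when F1(2) + G(2) < G(1) + F1(1). If F2 is never needed, case 2 costs only F1(2), and the win is even clearer.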
The equation seems to miss one important factor in delivering a feature to a customer: optimization. When developing code, the usual concept is "Don't Optimize (yet)". But when you deliver a feature, you need to optimize.
If we include this then we might find:
  Time1: F1 + G
  Time2: F2 + O
  Total: G + F1 + F2 + O

If you generalize at the time of doing feature 2, releasing F1 early, it looks like this:

  Time1: F1 + O1
  Time2: G + F2 + O2
  Total: F1 + F2 + G + O1 + O2

This is probably a bigger value, not least because the cost of generalizing optimized code is usually higher than for non-optimized code.
-- DaveWhipp
"But when you deliver a feature, you need to optimize." - Not even then, and certainly not for every feature delivered. Who cares if the consistency checks in "add a new product to the catalog" take five seconds to run, if the operation is only performed once a month on average?
True, you can find extremes at each end. Some features need optimization; others don't. But the point remains that post-optimization generalization presents problems (if only for the English language). If we assume that the customer doesn't allow you to say "that feature's difficult, we'll leave it till last" then, if you obey YAGNI, you will need to optimize before you have considered the implications of all the features. -- DaveWhipp
Is there a class of programming tasks to which YAGNI is more readily applied? I think so. Although I very much agree with the basic premise of YAGNI, DTSTTCPW and the XP programming rules in general, I notice two things:
a) Webby business database kind of stuff seems easier to do in an XP fashion (based on my limited exposure to these types of apps). The counter-examples I run into every day are engineering problems, in my case machine vision applications. If I had made decisions based on what would have gotten me to a working solution quickly, I would have regretted it almost every time. I instead thought about what my experience told me my users (process engineers, technicians, machine operators) would probably ask me for but couldn't yet imagine or verbalize. I picked my algorithm based on those predictions and didn't regret it in most cases.
I feel this is because the bulk of my code is completely independent of GUI, output, and all input except the images taken by the camera. As a result changes are usually not due to changing interfacing requirements or data formats but strictly due to changes in the content of the input data itself. If I followed the conventional wisdom that the cost of future software changes is low because the changes are very localized I would have made a bad decision by going with a simple algorithm instead of the one that takes a little longer to implement but gives me a better chance to tackle what I just know will eventually become necessary. In reality the real cost of my changes gets higher as I go along because a poor choice of fundamental image processing algorithms usually results in replacing reams of highly specialized code. In my area it's almost like YAGNI et al. works across functions, classes and interfaces but not within functions themselves (which is where 95% of the interesting stuff happens).
b) You need to have experienced the ill effects of premature generalization to feel when the time for XP decisions is right. It seems extremely hard to learn these rules by positive example but quite easy by negative experience.
Andrew, do you think it would be possible to contribute a specific example of type a) that you yourself have experienced? Specific examples will help to show where we need to refine things.
OK, here's a real example (I don't think I can post actual code):
- The customer asks me to inspect an adhesive pattern for gaps. The pattern looks like a figure 8, so I could just look for a closed outside contour with two closed contours inside. I ask: do you also want the number, size and locations of the gaps? They say no. I ask: do you also want to find thin spots that are almost gaps? They say no. Now I could go and implement the simplest thing, which would be to use the blob finder of my vision library and ask it to look for one light blob and then for two dark blobs inside the light blob. This would take about a day and a half.
- I think about it and look at some similar inspections I've done and I come to the conclusion that I should probably spend 4 days on a routine that finds the thickness of the adhesive bead along the whole figure 8. Now I can find thin and thick zones in addition to gaps. I can do all this without making many changes in other parts of the vision application. But I have made an implementation decision that isn't YAGNI or DTSTTCPW and quite a bit of code is being generated based on that decision.
- Looking back I made the right decision because now we are looking for thick and thin zones on some new products we're inspecting. If I had gone down the original path I would have had to scrap almost all my original code because it was based on an entirely different approach. A blob based routine is quite different from the edge detection/profiling approach I used.
I realize, though, that the type of design decision I'm describing here is not quite what the XP rules were meant for. I went back to the YAGNI page and looked for some kind of scoping hint to apply YAGNI. In my area design decisions inside a class implementation often spawn lots and lots of private helper functions. How do I apply YAGNI here? Does YAGNI penetrate into the implementation or does it only define the interface? --AndrewQueisser
As a quick illustration of what I mean: On first reading, it sounds like the 'reams of highly specialized code' are not well-factored, and that perhaps if the code had been built up piece by piece and refactored along the way, it wouldn't require as much work. But then, maybe it is well-factored, but resistant, as you say, to YAGNI. Without a specific example, it's difficult to make any analysis.
I think what I'm getting at is that at some point it doesn't matter whether the code was well-factored or not. If you have to swap out a whole tree of code because your initial assumption was overly optimistic, you haven't won (although as a good programmer you write well-factored code anyway). So, is there some level of user-story penetration or user-story coverage necessary to prevent these situations? In my example I have one function, a camera image as input, and one of a handful of failcodes as a result. Everything in between is pretty much up to me. At this point it's hard to make those YAGNI decisions. -- AndrewQueisser
Something extra [in support of YAGNI]: given any chunk of code, the deeper the cruft and complexity, the more careful you have to be not to disturb something that works. When some part of that complexity is YAGNI code (i.e. code which is currently superfluous and unnecessary, and so (by YAGNI) should have been left out in the first place), efforts to work around the peculiarities of that code are a waste of time.
In terms of the formulas above: F2 is actually F2(x), where x = the work to date. Thus F2(F1) is cheaper and simpler than F2(F1+G). Factor in that G may not be concretely defined, being as yet un-needed, and you've got a real mess.
ReFactoring is a supporting process of Yagni. I don't think we'll be able to capture the full EconomicsOfYagni with two simple equations. I think it was intended as an illustrative example to counter one specific critique of Yagni. This page has since sprouted some slightly different critiques, and the equation is too simple to handle them all. For instance, the G part of the equation is not done in a traditional way, but through ReFactoring supported by UnitTests. This is not obvious in the equation, and so it's easy to come up with trivial counter-examples that ignore this hidden assumption. More specifically, "when some part of that complexity is YAGNI code, efforts to work around the peculiarities of that code are a waste of time" makes no sense to me, because it is the exact opposite of my experience with Yagni supported by ReFactoring. Hence the comment. Yagni makes the change easier, not harder, but only if you're using ReFactoring.
Yagni's also supported by DoTheSimplestThingThatCouldPossiblyWork. Sure, it would be a pain to replace one crufty, complicated, large piece of code with another complicated, large piece of code. But that just signals to me that the first piece was definitely not the SimplestThing.
Yagni's also supported by UnitTests. If you've already got the tests working, then you've got the basic interface working, so the goal is really to get the tests working again, but with a new algorithm. You don't have to write nearly as many tests. That reduces the cost of G compared to traditional CodeAndFix methods. It also drastically reduces the cost of F2 since we already know that F2 is similar to F1, its tests are likely to be similar as well, so reuse is going to be high.
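[To illustrate that point with a sketch -- the names are invented, echoing the adhesive-gap example above, and Python is an arbitrary choice. The test stays fixed while the algorithm behind it is swapped, so the cost of G is mostly the cost of making the bar green again:]

  import unittest

  # First, the simple implementation (F1)...
  def find_gaps(bead_widths, min_width):
      return [i for i, w in enumerate(bead_widths) if w < min_width]

  class GapTest(unittest.TestCase):
      def test_finds_gaps(self):
          self.assertEqual(find_gaps([5, 0, 4, 1], min_width=2), [1, 3])

  # ...later, a generalized implementation (G) replaces the body.
  # The test above is untouched: getting it green again is the
  # bulk of the cost of the change.
  def find_gaps(bead_widths, min_width, max_width=None):
      bad = [i for i, w in enumerate(bead_widths) if w < min_width]
      if max_width is not None:
          bad += [i for i, w in enumerate(bead_widths) if w > max_width]
      return sorted(set(bad))

  if __name__ == "__main__":
      unittest.main()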
When I wrote "when some part of that complexity is YAGNI code, efforts to work around the peculiarities of that code are a waste of time", what I meant by "YAGNI code" was code which was currently superfluous and unnecessary, and so (by YAGNI) should have been left out in the first place.
[Here's an example of where not following YAGNI hurts:] I've just finished a bunch of refactoring, changing all instances of a boolean parameter (required due to F2 but otherwise unused) to a string parameter; I broke a bunch of UnitTests in the process, got them fixed again, and all was sweet... but today I was informed that the whole F2 question is back up in the air: that string parameter (to support F2) might need to become a list instead, or it might be discarded completely. At the end of the day I would have been of more use and value to the company if I had gone down to accounts and helped them with the end-of-month filing instead. Right now, every time I need to write a new call to F1 I have to include a parameter to support F2, and I have to make a best-effort guess as to what value to pass in anticipation of the as-yet not concretely defined F2.
YAGNI would have me not have that parameter at all, or (compromise) not spend time guessing what value to pass ... and then, once F2 goes official, do one large sweep of ReFactoring to implement it everywhere correctly.
Thus, my point (in support of YAGNI) is that adding support for F2 prematurely adds to the complexity of utilising and refactoring F1, and the constant rearrangement of the deck chairs isn't delivering business value.
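[A tiny before/after sketch of the situation just described, with invented Python names:]

  # Speculative version: every new call site must guess a value for a
  # parameter that exists only to support the not-yet-defined F2.
  def process_order(order, f2_mode="???"):   # what should callers pass here?
      ...

  # YAGNI version: no speculative parameter. When F2 goes official, one
  # ReFactoring sweep (add the parameter, fix the call sites, re-run the
  # UnitTests) implements it everywhere at once.
  def process_order(order):
      ...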
Wow, that's completely different from what I thought you meant, and still I don't think I understand your point of view. Not to worry. I think my main perspective on this issue is that YAGNI could and does mean different things at different times (it's an overloaded term), and also has different consequences depending on the other things we are doing (to support or aggravate). Working out these subtleties and making them more explicit will be necessary before we can come up with a good theory of the EconomicsOfYagni.
One thing to factor into the cost equation is management and contracts.
Many contracts stipulate when new features are acceptable in a release. In this environment you can't slip a refactoring in, but often you can slip in an addition to a stable framework.
The cost can also be extraordinary when management will not let you fix/refactor something because of the resources and risk involved. One early blunder can cost forever because they will never let you fix it. -- AnonymousDonor
There's a cost to understanding. Some people understand better by coding. Some people understand better by thinking. People who are good at the latter don't understand what the former are talking about when they say frameworks (e.g.) don't work as you can't predict the future, etc. A lot of us can actually code in our head so the number of failures is quite small. In this case the cost is the same, just a different way of spending it. -- AnonymousDonor
Yes, but can you actually run UnitTests in your head?
This to me gets at the heart of the mental leap from BigDesignUpFront mentality to XP. I was a thinker, and now by XP discipline I'm a do-er (and love it). I still use all that architect cognitive ability I developed, (though the average number of GoF patterns I use per day may have gone down a bit as I'm no longer overbuilding or GoldPlating).
Doing XP means delaying gratification a bit in the thinking respect. But then you get gratification on a much more regular basis (rather than only when the prototype is done). Those who love BigDesignUpFront and are excellent architects will recognize more and higher-quality refactoring possibilities. Plus you will never have lingering doubts that what you're building may just end up not working or missing the point - I'm a green bar addict now and will never go back.
How does YAGNI fit commercial development?
What's to say that the competition won't overtake you because they did speculate? What if they invest time and money in new ideas, technology and products based on this speculation and come out on top?
Your customer then becomes very interested in these new products, because those that you are trying to sell are OK for yesterday and today, but not for tomorrow.
Software development does not only provide what is needed today. It can also define what is needed tomorrow.
By doing YAGNI you deny yourself the kind of success that open speculation and alternative knowledge sources (i.e. other than a customer's idea of their current needs) can provide. -- AnonymousDonor #1
I believe you are mistaking bad marketing for YAGNI. YAGNI says don't put in code for features your XP Customer doesn't ask you for. You seem to be talking about the value of your XP Customer actually requesting features because of speculation that they will have high business value. That's fine. The Customer decides based on business value vs. cost. When Programmers add features that weren't asked for, they are usurping what XP considers to be the Customer's decision. -- AnonymousDonor #2
It's okay for the customer to speculate about the market, but it's bad to have the programmer speculate about future customer requirements when he's coding. If you have a programmer who's particularly good at speculating about the market, then let him play the customer role, but as with any customer, make him define requirements in a prioritized, incremental fashion. When he puts his programmer hat back on, ask him to focus on the immediate requirements so that the project moves quickly to the next planning stage. -- SteveHowell
YAGNI is an AntiPattern if the CostOfChange grows exponentially, as it is purported to do in most software engineering textbooks. YAGNI is a pattern if the cost of change is sub-linear, as it is purported to be by XP (due to other supporting practices). This HiddenAssumption (the growth of the CostOfChangeCurve) is most probably the real source of argument between pro- and anti-YAGNI advocates.
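A sketch of that hidden assumption (the numbers and curves are invented, purely to show the shape of the argument):

  # Cost of making the generalizing change at time t, under two
  # assumed CostOfChange curves.
  def cost_exponential(base, t, growth=2.0):
      return base * growth ** t   # the textbook curve

  def cost_flat(base, t):
      return base                 # the curve XP claims its practices produce

  G = 5.0  # base cost of the generalization (invented)

  for t in (0, 1, 3):
      print(t, cost_exponential(G, t), cost_flat(G, t))
  # Exponential: 5, 10, 40 -- deferring G is punished, and YAGNI looks
  # like an AntiPattern. Flat: 5, 5, 5 -- deferring is free, YAGNI wins.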
Here are some YAGNI insights based on treating the decision to implement a feature with uncertain benefits as a delay option.
JohnFavaro and HakanErdogmus evaluated the YAGNI option value using standard option pricing techniques, comparing the delay option value to the value of immediate implementation. They varied the level of uncertainty and the waiting time under different types of CostOfChange functions. The findings support the hypothesis that YAGNI is an AntiPattern under exponential CostOfChange and a pattern if CostOfChange is constant. If CostOfChange is monotonically increasing, but concave with a diminishing rate of increase, then the feasibility of YAGNI depends on the level of uncertainty and the time horizon (time to the implementation decision). They assumed the uncertainty of the benefit of a feature increases with time.
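In rough real-options terms (a paraphrase of that setup, not the authors' exact model), the comparison is between implementing now and holding a delay option:

  V_now   = B0 - C0
  V_delay = discount(T) * E[ max(B_T - C_T, 0) ]

where B is the uncertain benefit of the feature and C the cost of implementing it at that time. YAGNI is favoured when V_delay exceeds V_now: an exponential CostOfChange makes C_T grow, shrinking the option's value, while greater uncertainty in B_T inflates it -- which is why the verdict depends on both the curve and the level of uncertainty.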
"depends on the level of uncertainty" in the paragraph above should be highlighted in Bold. After a few years of working for management who has become MicrosoftSlaves, I begin to appreciate that uncertainty can be utilized effectively by dominant market leaders to create more uncertainty, for purposes of inducing companies to scrap existing infrastructure investments.
Another example that supports YouArentGonnaNeedIt can be taken from the huge Y2K investments made a few years ago, which made obsolete a lot of good work done in years past. At this moment, IT-security concerns could take over as the next Y2K FUD mechanism. -- dl
Isn't YAGNI just another name for the JIT theory of manufacturing? If we look at software development as a manufacturing process, we can apply some of the APICS body of knowledge to the process. Inventory isn't an asset, it is a liability.... There is a carrying cost for safety stock, usually estimated to be 20% annually... Excess code "in case we might need it" costs us money in maintenance, keeping track of it, worrying about it, and obsolescence (probably has been said before and more eloquently) -- Brenda Metcalfe
Interesting comment -- see KanbanSystems? and LeanProgramming.
The only problem with that is that the analogy between software and manufacturing is deeply flawed, as is explored in ProgrammingAintManufacturing.
Moved from YouArentGonnaNeedIt
Isn't it important to deal with future requirements when they come up?
Help me with the economics. What would have to be true to make anticipated requirements worth implementing before real requirements? Who is funding the development of these anticipated requirements?
YouArentGonnaNeedIt warns against working on something for which you do not have an immediate requirement -- "immediate requirement" meaning something required to meet the delivery requirements for the current iteration. Perhaps in the future the user will ask for this anticipated requirement. But right now ... no.
The funding is by investors, as always. Sometimes you have to invest in your future and take a longer-term view.
There are other mechanisms in XP that help the project take the long view. The project is being managed, in WorstThingsFirst fashion, by a series of UserStories that are scheduled into iterations. The CommitmentSchedule (for the overall project) and the IterationSchedule? (for this iteration) should ensure that requirements are tackled in an order that makes sense.
YouArentGonnaNeedIt is more focused than that: It warns against the temptations we all get, as developers, to extend our code beyond the immediate need as determined by the customer.
The discussion stresses the need not to implement what's not yet necessary. But the cost of not thinking ahead tends to be much larger than the cost of coding ahead. Often there are several simple things possible, rather than just one "simplest". For example: working towards a solution, or working back from a solution. Each approach might have a huge impact on the changeability of design and code, but not on the immediate cost of implementation. Often I see DoTheSimplestThingThatCouldPossiblyWork practised as DoTheFirstThingYouCanThinkOf, without a second thought about the consequences. Maybe it's not BigDesignUpFront that we need, but where's the balance with ContinuousDesign?
YAGNI is a defense against pretty much every programmer’s (including my own) desire to find general solutions, even when we are being paid to do something specific. We often fail to notice that the cost of making something flexible enough to meet future requirements now is about the same as it would cost to add the same flexibility later. We vastly underestimate what a good programmer can do with a codebase given a free afternoon and a good refactoring-aware IDE. -- http://fishbowl.pastiche.org/2004/03/02/defending_yagni
Related: YagniAndGenerality, RefactorLowHangingFruit, DecisionMathAndYagni, FutureDiscounting