YAGNI is just one part of the XP process. The XP rules cannot be adopted willy-nilly: they interact and mutually support each other. Rules supporting YAGNI include RefactorMercilessly, UnitTests, and the use of proper tools.
Of these, it is the ability to successfully refactor the code which most directly supports YAGNI. Testing, of course, supports the ability to refactor, as does use of the proper tools. (XP does not specify what these tools might be.)
Not to put too fine a point on it, XP really does recommend the "shortsighted designs" mentioned below. We say that if you aren't set up for rapid change, learning, flexibility, and refactoring, you can't compensate for those lacks by any amount of design. What you need to do is get set up that way. Once you do so, you will find that your progress is faster when you DoTheSimplestThingThatCouldPossiblyWork, invoking YAGNI always.
Here are some concerns one might have about YAGNI, with short responses.
YagniMightLeadToPrematureOptimization.
YAGNI might lead to overspecialized or insufficiently general solutions. See the "coffee machine" problem below. XPers do specifically state that if the spec called for one product, we would implement a one-product system. Two separate XP teams at C3, unprompted, did it that way. Worked just fine.
Contributors: DaveHarris DaveSmith PeterMerel AamodSane KielHodges EricMiller RonJeffries
Refactoring in progress. Please help. Some of the original posting and commentary are left intact below. I've captured the rest - if you want it back, email me. -- RonJeffries
The thoughtful ideas below on misapplication are not XP doctrine. Basically, we never abandon YAGNI in the XP context. The other two sections of Aamod's offering are so good I'm going to steal them. -- RonJeffries
YouArentGonnaNeedIt is a good mantra, but misapplication leads to situations like Y2K (at least according to Alistair, see the pages YouArentGonnaNeedIt and YouArentGonnaNeedItDiscussion?). So I am trying to come up with more precise criteria for choosing and abandoning YouArentGonnaNeedIt. -- AamodSane
What constitutes misapplication? When do we abandon YouArentGonnaNeedIt?
I like Ron's definition: YouArentGonnaNeedIt means (and only means) that you don't implement functionality for which you do not have an explicit user requirement.
But people who believe yagni leads to y2k (I am not sure whether I am one of them, by the way) claim that you use the slogan to ignore perfectly reasonable requirements that the user did not make explicit today, but is very likely to tomorrow. In other words, would you design coffee machine software to handle precisely one recipe, regardless of what the user "wants"? I would not, because with a little forethought, at least in this case, it is no more difficult to leave in hooks for multiple recipes while supporting only one recipe. If someone used Yagni to claim otherwise, I would protest. On the other hand, I would also protest if you wanted to add a database for recipes when only one was requested.
Where lies the boundary between over-design and under-design? I would like it if slogans like YAGNI or SimplestThing can be defined precisely so that application and misapplication are easily detected. You could claim that refactoring makes the over/under distinction moot, and so on. But then you end up saying that YAGNI makes sense only in the context of all other ExtremeProgramming practices. That does not seem right either, because I feel that YAGNI is a generally useful slogan. -- AamodSane
Well, it might be a generally useful slogan. But in XP it doesn't have to be watered down, qualified, analyzed. It just works. In some other context one might have to drone on for ages about it. Of course, I just might ... -- RonJeffries
"It leads to overspecific implementations : Yagn several recipes for this coffee machine"
If Business asks for a coffee machine with one recipe we should guess that in the future there will be more, and we should act on that guess by pre-designing the ability to handle more than one.
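For concreteness, here is a minimal sketch of the kind of "hook" being debated. All names here are hypothetical, not taken from any real system mentioned on this page: the machine supports exactly one recipe, but the recipe is kept as a value rather than as constants smeared through the brewing code.

  // Hypothetical sketch: one recipe, kept as data.
  class Recipe {
      final String name;
      final int gramsOfCoffee;
      final int mlOfWater;

      Recipe(String name, int gramsOfCoffee, int mlOfWater) {
          this.name = name;
          this.gramsOfCoffee = gramsOfCoffee;
          this.mlOfWater = mlOfWater;
      }
  }

  class CoffeeMachine {
      // The single recipe the customer actually asked for.
      private final Recipe recipe = new Recipe("house blend", 7, 150);

      // The public face offers exactly one drink...
      public void brewCoffee() {
          brew(recipe);
      }

      // ...but brewing already takes a recipe. This private method is
      // the "hook": going to N recipes later is mostly a matter of
      // widening it and storing more than one Recipe.
      private void brew(Recipe r) {
          System.out.println("Grinding " + r.gramsOfCoffee + "g for " + r.name);
          System.out.println("Adding " + r.mlOfWater + "ml of water");
      }

      public static void main(String[] args) {
          new CoffeeMachine().brewCoffee();
      }
  }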
Aamod, I absolutely am saying that if the customer has asked for a single recipe coffee maker, then I should produce a single recipe coffee maker. In fact, even if the next EngineeringTaskCard? in my stack is the second recipe, I should still produce a single recipe coffee maker. Let's look at the economics of this insane statement.
Following yagni in this case will result in me getting my task done sooner, because I won't put in the extra infrastructure to manage a 1-N relationship, my tests will be easier to write, and I won't have to guess at what such a not-yet-existing requirement would look like should it appear. Fast feedback is the key to fast learning, so I learn more, faster, this way. If my brilliant suppositions about how the system wants to be structured are contradicted by the system itself, better to know that sooner than later.
Following yagni in this case will result in a better solution to my current problem. I will have more attention to pay to solving the recipe problem, because I'm not also thinking about the N recipe problem at the same time. I may even do such a good job that I will crack the N recipe problem without even considering it (hard to imagine in this case, but the analogous thing has happened to me many times with bigger problems). That is, I would have spent more to get a worse solution by thinking ahead.
Following yagni in this case will result in less project risk. The risk if you put in one extra 1-N where it isn't really needed is small, because you're a real smart guy. If I do it a few times, though, I get lost. I'm not so smart that I can do anything more than do the very best job possible on the problems I already have. Borrowing new ones kills me. I see too many projects sink under the weight of accumulated risk to ever want that to happen to me (again).
Let's assume that the absolute worst possible thing happens, and the customer asks me to add a second recipe later. I put it in. I run the tests. Everything works. It goes into production. Because I build with objects, because I write the tests, because everything that can be is dirt simple, because I pair program, because I RefactorMercilessly, the cost of the change in the future is not much higher than the cost of change to begin with. So it isn't worth the extra effort, time, and risk.
This was the lightbulb for me - changes later don't cost much more than changes now, so wait until you are sure before making decisions. This is the opposite of what I was taught in school, where a bug in requirements cost a dollar and a bug in analysis cost ten dollars and a bug in design cost one hundred dollars and... The cost curve for modern technology, reasonably applied, is damn near flat.
You really really aren't going to need it. And even if you know you will sometime, you're better off acting like you aren't until you really really do.
-- KentBeck
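To put a sketch next to Kent's "worst possible thing" scenario, here is what the cost looks like in JUnit terms. All names are hypothetical; assume a CoffeeMachine along the lines sketched earlier, extended to report what it last brewed:

  import junit.framework.TestCase;

  public class CoffeeMachineTest extends TestCase {

      // The test written for the single-recipe machine. It does not
      // change when the second recipe arrives; it just keeps passing.
      public void testBrewsHouseBlend() {
          CoffeeMachine machine = new CoffeeMachine();
          machine.brewCoffee();
          assertEquals("house blend", machine.lastBrewed()); // hypothetical accessor
      }

      // The whole cost of the second recipe, when (if!) it is asked
      // for: one new test plus the small change that makes it pass.
      public void testBrewsEspressoToo() {
          CoffeeMachine machine = new CoffeeMachine();
          machine.select("espresso"); // hypothetical: added with the 2nd recipe
          machine.brewCoffee();
          assertEquals("espresso", machine.lastBrewed());
      }
  }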
This is similar to thinking you might as well plan to throw one away; you will anyway. The act of building the first coffee maker will usually provide sufficient knowledge for building the second, more complex coffee maker, even if the first coffee maker is poorly designed for extensibility. If you have to start again from scratch, you will have a prototype under your belt, and the knowledge (or scars) that go along with that prototype. -- BrianStPierre
"Changes later don't cost much more than changes now" - yes, this is the key. YAGNI fails when it isn't true. It is true more often today than it was 30 years ago. This is partly because our tools are adding infrastructure behind the scenes - and they do it in advance, before we know we need it. -- DaveHarris
Changes later do cost more than changes now. Doing it right the first time is always cheaper than doing something again. Sure, there is little cost in performing the changes themselves, but that is disregarding the lost effort used in producing it the first time. Each time the same section of the product gets reworked, the lost effort becomes more costly. It does depend on the scale of the changes. Dumping a man-week of effort is probably not a big deal. But when a section of a product gets re-worked a dozen times or more, the cost rapidly increases to man-months of lost effort. -- EricMiller
"Doing it right the first time is always cheaper than doing something again." I have this annoying habit of not doing it right the first time. So the question for me is not how I can spend the time to do it right the first time, it is how I can deal with the fact that I don't do it right the first N times.
One response I had was to figure I was at fault, and stare at the ceiling until I was SURE I had it right. I programmed slower (like 10x or 100x slower) and didn't have much fun.
So I decided to go the other way. What if I didn't care that I got it wrong? What if I acted like doing a good job today of solving today's problems was enough? What if I coded with a partner, wrote lots of UnitTests, made things really simple, and fixed every earlier mistake I found?
I program faster and have more fun. Now I think that programming is learning. If I don't change the design after an hour, a week, a month, a year, then I've failed to learn.
And here's the crazy thing - my systems cost less to build, short term and long term, and (more importantly to my mind) are far more predictable in schedule.
-- KentBeck
Staring at the ceiling is a waste of time. But I was not talking about a future perfect implementation; rather, sometimes generalizing a problem leads to simpler solutions. There is a probability problem that goes something like: there are three doors with a lady behind one, a tiger behind another, and fire behind the third, and you are supposed to make the best guess. The easiest way to solve it is to imagine infinite doors. I wish I could remember the details. Oh well. Anyway, this happens for software designs as well. An example would be better than a flat assertion, but I can't think of one at the moment.
Also, Yagni makes sense only in the context of refactoring, fast learning, ability to change designs (including interfaces) fast, a repertoire of CodeSmells and so on. That's a lot of presuppositions.
-- AamodSane
This is basically InventorsParadox. Creating a more general solution is sometimes easier than creating a solution to solve just this specific problem. I first learned about it in my algorithms class in college - but it looks like it applies at higher levels of abstraction, too.
-- JasonBucata
Which of the items you mention are undesirable or difficult to achieve or hard to learn to do or hard to convince someone that you should do? I think that XP requires fewer skills, but certainly different skills, than the alternatives I've used or seen. If you expect refactoring, learning, etc., and create a culture that expects them, they happen. -- KentBeck
Well, obviously, not doing yagni cannot compensate. It's just that I don't want people running about shouting Yagni to justify shortsighted designs. If your programming culture is not set up for rapid change and refactoring, if interfaces get cast in stone, and so forth, you presumably need to think ahead more than with ExtremeProgramming. This does not imply that you will think ahead successfully ;-) -- AamodSane
I agree with the statement about interfaces. It seems to say very careful design and foresight is required for external interfaces (COTS?) or shared interfaces (shared with other independent groups). This seems to contribute to the programming in the large concerns about XP. -- WayneCarson
"Doing it right the first time is always cheaper than doing something again." - YAGNI does not advocate doing it wrong the first time. What you produce is always what you need, so it always needs to be right. At worst, it may become wrong later as the needs change.
"Each time the same section of the product gets re-worked, the lost effort becomes more costly." - in practice it's not that bad, because the re-working is refactoring rather then rewriting, and refactoring is much cheaper. You're just moving code around rather than producing new code.
When you code your single-recipe coffee machine, most of the truths you include about coffee and recipes are not going to change. Much of what was right once will stay right. I guess I see "object orientation" as choosing the inviolate features of the domain as the ones to structure the code around. This isn't quite the same as anticipating change; it's more emphasizing the non-change. I think this is happening behind the scenes with a lot of the refactoring patterns.
-- DaveHarris
Later... exploring this a bit more...
Would a single-recipe coffee machine have a Recipe class? If it does, then it'll be easy to turn it into a multiple-recipe machine.
In practice, people don't ask for single-recipe machines; they ask for bare coffee machines, and the notion that it might make more than one thing isn't mentioned. Would a Recipe class be discovered anyway, e.g. as part of OO analysis, or to make the UnitTests easier to write, or as part of general refactoring? Should it be discovered, or does YAGNI rule out this class?
If we don't have the Recipe class, the policy of what the machine makes will be intertwined with the mechanisms for making it. An early step in refactoring will be to tease these two aspects apart. Will lumping them together and later separating them be significantly harder than keeping them separate in the first place? I suspect it will.
On the other hand, by the time we come to this refactoring we will have a great deal of working code from the single-recipe case, which provides a supporting framework which will make it easier. We'll also have a bunch of UnitTests. So although we've created some extra work, maybe it won't be too hard to do.
-- DaveHarris
Hmm... Regarding the early one-recipe version, how much lumping together could have occurred? I'd have to say that yes, the Recipe class would've come out of even a one-recipe machine. For example, where is that recipe being stored? A whole bunch of hardcoded values scattered through the code; well, let's fix that: define a few final variables. Now we've got a list of related vars cluttering up the class definition. That ain't coffee I'm smellin'. Factor them out into a new class... seems like just good ol' refactoring to me. -- CareyUnderwood?
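Carey's sequence, as a sketch (hypothetical code, not from any real machine): the hardcoded values, then the same machine after the clump of related values has been factored out. The Recipe class appears through plain refactoring, with no thought of multiple recipes.

  // Before: the recipe is smeared through the brewing code.
  class CoffeeMachineBefore {
      void brew() {
          grind(7);            // grams - why 7? for which drink?
          heatWater(150, 92);  // ml, degrees C
      }
      void grind(int grams) { /* hardware elided */ }
      void heatWater(int ml, int degreesC) { /* hardware elided */ }
  }

  // Intermediate step (named final variables) not shown; the clump
  // of related values is itself the CodeSmell that points at a class.

  // After: the same values, gathered into the class they wanted to be.
  class Recipe {
      final int grams = 7;
      final int ml = 150;
      final int degreesC = 92;
  }

  class CoffeeMachineAfter {
      private final Recipe recipe = new Recipe();
      void brew() {
          grind(recipe.grams);
          heatWater(recipe.ml, recipe.degreesC);
      }
      void grind(int grams) { /* hardware elided */ }
      void heatWater(int ml, int degreesC) { /* hardware elided */ }
  }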
It occurred to me today that the Y2K problem is an example of applying YouArentGonnaNeedIt. -- AlistairCockburn
Not all that much. A better characterization of Y2K would be PrematureOptimization. :) -- RonJeffries
Or missing UnitTests. :( -- DaveSmith
Or poor factoring and the widespread use of languages that impose a high cost for good design. Also, one who believed that you GoFasterWithRefactoring would have found many opportunities to fix the problem painlessly long ago. -- KielHodges
In which year or at what moment in time did the Y2K question shift from standard YouArentGonnaNeedIt party line - the customers did not want to pay for it and no one expected either the hardware or software still to be running - to being appropriate for refactoring and redesign? Personally, I think this is a relevant and interesting question for this discussion, since it is a historical question we can examine, and it clearly made that shift. As a side question, how would one distinguish YouArentGonnaNeedIt and PrematureOptimization? We didn't need the extra 2 digits back then, so we optimized them out. -- AlistairCockburn
Agree the Y2K question is interesting. However, YAGNI is about NOT doing something you don't need, PO is about doing something (optimization) that you DON'T need. I don't know if programming with 2-digit dates is easier than with 4, but it doesn't seem to me that it would be. That makes me think that making them 2 was an affirmative choice and that it was in fact a violation of YAGNI to go to 2-digit dates. "You aren't gonna need to save those two bytes, don't write tricky code to do it." -- R
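A minimal sketch of where the two saved bytes bite (hypothetical; real COBOL-era code looked different, but the arithmetic fails the same way):

  public class TwoDigitYears {
      // "Saving those two bytes": 1971 stored as 71.
      static int yearsBetween(int fromYY, int toYY) {
          return toYY - fromYY;
      }

      public static void main(String[] args) {
          System.out.println(yearsBetween(71, 99)); // 28 - fine in 1999
          System.out.println(yearsBetween(71, 0));  // -71 - the Y2K bug in one line
      }
  }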
There's something a little tricky about this response, though, isn't there? Ron doesn't know if 2-digit dates are easier, but guesses they wouldn't be. That's fine, but tells us nothing about Y2K being (not) a violation of YAGNI or anything else. I'm guessing myself that somewhere, some time a conversation like this took place:
One: now that we're moving our system from 80 col. punched cards to disk, maybe we should extend our date library to handle four digits, you know, when... Two: nah, you aren't gonna need it.
It would be very interesting to speculate on whether <your favourite methodology> would have prevented Y2K, but also very hard. Is there anyone contributing to Wiki who was around when these decisions were made and could tell us WhatTheYtwokDrivers were?
Is this not an example of one possible reason WhenXpIsUnpopular: too much "if XP had been used then good things would have happened, I so assert" rhetoric, which presents an untestable hypothesis?
The 2-digit year issue is probably more about stereotyped conceptualization. Cramming the entire date/time stamp into a 16-bit integer has been around so long and is such a pervasive thing that when programmers shifted to more straightforward representations (or did conversions from an underlying integer one), it never seemed worthwhile to add code for something that seemed self-evident ("of course it is year *19*xx..."). The idea of "optimization" probably never entered into later implementations, as this was an issue only for the earlier and legacy (e.g. DOS filestamps) integer storage. Perhaps you could call the Y2K problem an effect of a LazyInheritance pattern? Only lately have people begun realizing that the *20*xx year is imminent. -- BoLeuf
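For the record, the DOS filestamps Bo mentions pack the date portion into 16 bits, with the year as a 7-bit offset from 1980. A sketch (the bit layout is the standard FAT one; the method names are hypothetical):

  public class DosDate {
      // Pack year/month/day into 16 bits: 7 bits of year-1980,
      // 4 bits of month, 5 bits of day.
      static int pack(int year, int month, int day) {
          return ((year - 1980) << 9) | (month << 5) | day;
      }

      static int year(int packed)  { return 1980 + (packed >> 9); }
      static int month(int packed) { return (packed >> 5) & 0x0F; }
      static int day(int packed)   { return packed & 0x1F; }

      public static void main(String[] args) {
          int d = pack(1999, 12, 31);
          System.out.println(year(d) + "-" + month(d) + "-" + day(d));
          // Seven bits of year run out in 2107, not 2000: a different
          // ceiling than the 2-digit-year one, but a ceiling all the same.
      }
  }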
(1/1/2000): Well, the world didn't end after all. Perhaps YouArentGonnaNeedIt was the right way to approach y2k after all. We tend to focus on the costs of remediating the comparatively few systems that did survive to the end of the millennium. But think how much money we saved by not fixing all of the systems that died of natural causes long before the rollover. -- JohnBrewer
Some people thought it prudent, and necessary to inventory, investigate, classify, and remediate the real problems which were introduced by the Year 2000 problem. They have businesses and enterprises which have survived and succeeded because they fixed problems which "needed" to be handled. To approach any problem with a flippant "You are not going to need it", is certainly counterproductive.
The financial community however came to the conclusion that the "Bubble of 2000-2001" and the unrealistic promise of undefined, unplanned, ad hoc software development leading to huge rewards did however fit the description YouArentGonnaNeedIt.
Generally, I agree with the principle of keeping things simple and focussed without extraneous clutter, and also agree with DaveSmith that the development style you advocate is a lot easier to apply in a small-team interpreted environment than a large-team compiled one (which is why I chose the former over the latter).
However, there are cases where YouArentGonnaNeedIt can be a way of storing up problems for the future - leaving the century out of the date is one rather obvious example. It is ironic, rather than a justification of YouArentGonnaNeedIt, that code written by people who thought a leap year was fully defined by being divisible by four will not have to be modified, but code written by people who knew the hundred-year rule (but not the four-hundred-year rule) will fall over (see the sketch below). You can't count on being as lucky as this every time.
In a reasonably mature environment, certain classes of algorithm tend to get reused whether they were originally designed for reuse or not - anything to do with dates, money or business rules in my experience. If they are incomplete in some way, it may be years before a condition arises that causes the algorithm to fail. I have witnessed such a failure cost stockholders and customers a good deal more than an afternoon of a programmer's salary.
So, I would add this qualification to YouArentGonnaNeedIt - don't bother adding functions that you don't have a specific need for right now, but if you do implement a function that looks from the outside like a general-purpose one, implement it completely. -- SeanOhalpin
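Sean's leap-year irony, in code (a sketch; the Gregorian rules themselves are standard):

  public class LeapYears {
      // Four-year rule only. Wrong in general, but accidentally
      // right about 2000, which really is a leap year.
      static boolean naive(int y) { return y % 4 == 0; }

      // Hundred-year rule without the four-hundred-year rule. The
      // "smarter" version - and the one that falls over in 2000.
      static boolean halfRight(int y) { return y % 4 == 0 && y % 100 != 0; }

      // The full Gregorian rule.
      static boolean correct(int y) {
          return y % 4 == 0 && (y % 100 != 0 || y % 400 == 0);
      }

      public static void main(String[] args) {
          System.out.println(naive(2000));     // true  (right, by luck)
          System.out.println(halfRight(2000)); // false (wrong!)
          System.out.println(correct(2000));   // true
      }
  }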
See RefactoringYtwok, BusinessAndDevelopment, CobolCausesBrainDamage, LazyOptimization.