This page is the second part of DoTheSimplestThingThatCouldPossiblyWork.
I am working on a systems integration project that has gotten itself into what I call a SpaghettiArchitecture? - a bunch of (generally small) programs that connect one database to another, or one app to another, and so on. This has been a result of DoTheSimplestThingThatCouldPossiblyWork. "Well, we need to get this data over there." "Oh, just write a little batch transfer program." The result rapidly becomes an unmaintainable mess - everything depends on everything else, the business logic is scattered, and there's not even a single codebase to RefactorMercilessly.
So perhaps we can say that DoTheSimplestThingThatCouldPossiblyWork is a reasonable approach within a single program or codebase (provided, of course, that we RefactorMercilessly and UnitTest to keep it from becoming FlimsyAndBarelyFunctional). But on an architectural level, it is absolutely necessary to do a BigDesignUpFront (or at least a SmallDesignUpFront?) - put the right architectural components in place, plan for future expansion, decide how we're going to let our different apps and programs be loosely coupled, and so on. -- RobertEikel
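For illustration, one of those little batch transfer programs tends to look something like this (a sketch only; all names and connection URLs are invented). Note how the business rule - skip inactive customers - is buried inside the transfer, where the next little program can't see or reuse it:

  // Hypothetical one-off: copies customers from the CRM database
  // into the billing database.
  import java.sql.*;

  public class CrmToBillingTransfer {
      public static void main(String[] args) throws SQLException {
          Connection crm = DriverManager.getConnection("jdbc:crm");         // invented URLs
          Connection billing = DriverManager.getConnection("jdbc:billing");
          ResultSet rs = crm.createStatement()
              .executeQuery("SELECT id, name, status FROM customer");
          PreparedStatement insert = billing.prepareStatement(
              "INSERT INTO cust (id, name) VALUES (?, ?)");
          while (rs.next()) {
              if (!"ACTIVE".equals(rs.getString("status"))) continue;       // business logic, scattered
              insert.setInt(1, rs.getInt("id"));
              insert.setString(2, rs.getString("name"));
              insert.executeUpdate();
          }
          crm.close();
          billing.close();
      }
  }

Multiply this by a few dozen feeds, each with its own copy of rules like that, and you have the mess described above.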
This doesn't sound like DoTheSimplestThingThatCouldPossiblyWork at all to me. It sounds a lot more like DoTheQuickestThingThatMightWorkForAWhile. Programs were simply plugged in without any particular thought to the overall picture. I've never read or heard of anything in XP that says this is a good idea.
The simplest thing definitely includes the Architecture as well as the Code. In fact, one of the first steps in heading down the XP path on a project is choosing a metaphor for the overall system. Whether the metaphor is an assembly line or a power distribution grid or a bulletin board or whatever, it gives an overall shape to the various code parcels (programs) in the system. Yes, this is architecture design. It's also up front. But it should also be DoTheSimplestThingThatCouldPossiblyWork: don't choose a system metaphor that is more complex than required. Yet this leads me to a question. Does RefactorMercilessly apply to the architecture/metaphor as well? It might need to, because over the 5, 10, or 20 years of a system's lifetime, its needs can change enough that the metaphor no longer serves well. -- DougSwartz
Regarding "Programs were simply plugged in without any particular thought to the overall picture": my reading is that that is exactly what is meant by DoTheSimplestThingThatCouldPossiblyWork, and it is the exact recommendation of XP. However... XP wraps it in two pieces, a predecessor and a successor step: first, RefactorMercilessly, until the system is in its simplest, tidiest, neatest form. Then DoTheSimplestThingThatCouldPossiblyWork; then RefactorMercilessly again, until the system is in its overall simplest, tidiest, neatest form. Doing DoTheSimplestThingThatCouldPossiblyWork without doing RefactorMercilessly gets the results Robert describes, not XP results. This is my reading of XP. -- AlistairCockburn
I think Alistair is right on the money. Individual practices, taken alone, are not where XP gains its strength. It is the support each of the practices imparts to the others that makes it viable. Robert even alluded to it in his statement, "DoTheSimplestThingThatCouldPossiblyWork is a reasonable approach ... provided, of course, that we RefactorMercilessly and UnitTest" (I took out the part about a single code base because I don't believe that's a constraint on using XP practices). So, the first time the little batch transfer program needed to be written, that one-off solution might have been the simplest thing, without much need for refactoring, etc. Maybe even the second time another one-off solution was best (maybe not). But when you start talking three or more times of doing this, you need to step back, look at the whole, and start refactoring to get to a better solution. And, of course, since you have the unit tests, you can change things without worrying about breaking the system. So at one point DoTheSimplestThingThatCouldPossiblyWork was employed correctly, but some time after, the other supporting practices were not used to make sure that the solution remained that way. -- TomKubit
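To make Tom's "third time" point concrete, here is a sketch (all names invented) of what pulling the common shape out of those one-off transfer programs might look like, so that the fourth feed becomes configuration rather than a new program:

  import java.sql.*;

  // Sketch: the shape shared by the one-off transfer programs,
  // extracted so each new feed is just two SQL statements.
  public class BatchTransfer {
      private final String sourceUrl, targetUrl, extractSql, insertSql;

      public BatchTransfer(String sourceUrl, String targetUrl,
                           String extractSql, String insertSql) {
          this.sourceUrl = sourceUrl;
          this.targetUrl = targetUrl;
          this.extractSql = extractSql;
          this.insertSql = insertSql;
      }

      public void run() throws SQLException {
          Connection source = DriverManager.getConnection(sourceUrl);
          Connection target = DriverManager.getConnection(targetUrl);
          try {
              ResultSet rs = source.createStatement().executeQuery(extractSql);
              PreparedStatement insert = target.prepareStatement(insertSql);
              int columns = rs.getMetaData().getColumnCount();
              while (rs.next()) {
                  for (int i = 1; i <= columns; i++)
                      insert.setObject(i, rs.getObject(i));   // copy the row across
                  insert.executeUpdate();
              }
          } finally {
              source.close();
              target.close();
          }
      }
  }

Business rules like "active customers only" move into the extract SQL (or into shared code), so they live in one place per feed instead of being reinvented in each little program.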
A little bit of philosophy about what I would call the MozartPrinciple. My colleague Dr. Meinhold once said that good code is like a Mozart symphony: if you add a single note or take a single note away, the harmony is destroyed. If you add a single line of code or take a single line away, good code will break.
There are many examples where good code, in the hands of unskilled, untrained, or inexperienced programmers, turns into chaos in the shortest time. Bad code never descends into such chaos so quickly. This instability of good code comes from its "higher energy" level. We can say that good code is like a ball on top of a hill and bad code is like a ball in a valley. But because of that "higher energy" level, good code can also evolve easily and rapidly ("maintenance", "extensibility", "expandability", "can grow in unpredictable directions").
Good code balances on a thin line between chaos and order, where a very small change can bring either chaos or a higher degree of order. Put simply, it is like fire: a good servant but a bad master.
-- SusaGoran?@excite.com
I've got to disagree with you on that one. Good code is robust, and will stand up to a goodly amount of abuse before failing. Bad code is brittle to start out with, and the slightest perturbation will cause it to collapse. -- JohnBrewer

That sounds like a category error: robust code will stand up to a goodly amount of abuse when it's being used; but when you go altering the code itself, that isn't necessarily the same thing at all. After all, once you've changed it, it's not the same code anymore.
Perhaps this notion is related to RedundancyIsInertia.
Never mind standing up to abuse. What about new features? "Mozart"-quality code isn't very useful if it can't be extended. -- MikeSmith
Doesn't DoTheSimplestThingThatCouldPossiblyWork result in interfaces that keep changing?
Probably, but this happens anyway, unless everyone always designs the interface correctly first (and this usually doesn't happen.) -- DavidOtoole
The most visible interfaces will grow to a reasonable state fairly quickly. Sure, you won't get it right the first time, and probably not the second time. If you haven't got it right by about the 10th time you've refactored the interface that month, you've probably got problems. (Bear in mind that an addition to an interface is a lot easier than a wholesale refactoring, but enough of them give you cruft.) The key, of course, is to be able to quickly locate the areas your refactoring breaks so that you can fix them quickly. Anyway, the point is that your major public interfaces will eventually stabilise (provided you stick to YouArentGonnaNeedIt), and you can save major changes for when you do your major refactorings. -- RobertWatkins
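For example (a hypothetical Java interface): an addition leaves existing callers untouched, while changing an existing signature breaks every caller at once, which is why the latter waits for a major refactoring:

  // Minimal stub so the sketch compiles; all names are invented.
  class Customer { }

  public interface CustomerFeed {
      Customer find(int id);            // the original method

      // An addition: existing callers compile unchanged; only the
      // implementing classes need the new method. Cheap, but enough
      // of these and the interface accumulates cruft.
      Customer findByName(String name);

      // A wholesale change would be altering find(int) itself, say
      // to take a CustomerId object - every caller breaks at once.
  }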
I keep seeming to get two contradictory messages. I see two interpretations: which is XP?
A)
I prefer B if I can get it. If not, I do A, and prepare to throw away the resulting code if it exceeds my stink threshold. -- KentBeck
One of the criteria I use to judge how "simple" something is is how simple it will be to replace it later. If I went wrong with my approach, or there are changes in requirements, and I can't yank the bad bits out and replace them with more appropriate bits, I was probably adding complexity to the overall system even if that individual chunk of code was simpler, and thus I probably didn't have the simplest approach after all.
Having no fear of rototilling interfaces is an important part of this, though. I regularly stuff dirt behind an interface the same way I'd sweep it under a rug, and maybe the interface gets changed later when the dirt is fixed properly. For me that's not a problem; we work in Java and compile all our code in one build, so "make clean jar" is an (admittedly somewhat primitive, but effective) refactoring tool that lets us find and fix these changes. -- CurtSampson
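A sketch of that dirt-behind-an-interface move (invented names): callers see only the interface, so the grotty implementation can later be replaced wholesale, and if the interface itself must change, recompiling everything points at every caller that needs fixing:

  public interface ExchangeRates {
      double rateFor(String currency);
  }

  // The dirt: a hard-coded table that "could possibly work" today,
  // swept under the interface instead of scattered through callers.
  class HardCodedRates implements ExchangeRates {
      public double rateFor(String currency) {
          if ("EUR".equals(currency)) return 1.07;     // made-up rates
          if ("JPY".equals(currency)) return 0.0066;
          throw new IllegalArgumentException("unknown currency: " + currency);
      }
  }

When the proper implementation arrives, HardCodedRates is deleted and nobody else notices.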
I have two questions regarding DTSTTCPW:
1. If the organization that is supporting the development team doesn't believe in iteration-based development, or refactoring, or - well - XP, is this really a responsible route to follow? The concern here is: once the "simplest" thing is in production, a non-change-oriented organization will clamp down on everything.
"We're sorry. That's production code. You can't change that. What if it breaks?" "This system is handling 6 BILLION dollars worth of business volume and you want to play 'Clean House'?!"
Now, I understand that the unit testing/functional testing is part of this mix. However, several of my clients would never, ever, believe our unit/functional tests.
2. I once read something Kent said or was quoted as saying that refactoring with a relational database on the backend could work, if one had a DBA with courage. (Sorry, I cannot remember where I saw this.)
Well, does anyone really think this is true? It seems to me that adding a relational database into the picture really puts a huge kibosh on refactoring your objects.
With all of this said, it seems that this part of the XP values can't really be decoupled from the rest of the value system.
Any thoughts? -- JeffPanici
I believe that systems without unit tests are horrible, hateful things. I also understand, though, that people have to eat, and most of us poor schmucks are not fortunate enough to have XP assignments. A DBA with courage sounds like an OxyMoron to me. The sad fact about a company with the kind of attitudes you describe is that the DBAs are doing the code design as well, so any reasonable software is pretty much hopeless. The best way to deal with their stuff is to hide it all behind a single facade and use serializable objects that can be refactored mercilessly (if you are using Java). I try to practice core XP values alone while I look for a better deal. One thing I do is test-first programming for all my work. I am preparing my mind for when an XP opportunity comes along and I can QuitSuddenly. -- TimBurns
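The facade might look something like this (a sketch; the table, column, and class names are all invented). The rest of the system sees only a plain serializable object, which can be refactored mercilessly because it knows nothing about the schema:

  import java.io.Serializable;
  import java.sql.*;

  // The object the rest of the code sees; schema-free, so it can be
  // refactored without the DBAs' permission.
  public class Account implements Serializable {
      public final String number;
      public final long balanceCents;
      public Account(String number, long balanceCents) {
          this.number = number;
          this.balanceCents = balanceCents;
      }
  }

  // The single facade: the only class that ever mentions the DBAs'
  // table and column names.
  class AccountStore {
      private final Connection db;
      AccountStore(Connection db) { this.db = db; }

      Account find(String number) throws SQLException {
          PreparedStatement s = db.prepareStatement(
              "SELECT ACCT_NO, BAL_AMT FROM TACCT01 WHERE ACCT_NO = ?");
          s.setString(1, number);
          ResultSet rs = s.executeQuery();
          rs.next();
          return new Account(rs.getString(1), rs.getLong(2));
      }
  }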
I think the Beck quote is from his keynote speech at Smalltalk Solutions '99. Quoted at http://www.smalltalk.org/stsolutions99.html : On GemStone: "They don't understand what's so cool about their product, that you can change the design. In a relational database you might be able to do it if you had a DBA with courage"
So it's handling six billion dollars worth of volume, it has no unit tests, and they wouldn't believe unit tests even if they existed? My question would be: how do they really know it works right now? :-)
By definition it works perfectly now, and users and other developers must go to extreme lengths to prove that it doesn't - One who is getting far too cynical.
Anyway, the only real problem I've found with massive refactoring when you're heavily dependent on a database back-end is getting the database changes rolled out into production. (You can't just do an ALTER TABLE ADD COLUMN when you've got an 80 million row table and you're a 24/7 shop.) But our defined interface on the database side is pretty minimal; we make almost everything we do go through Java classes that encapsulate the database stuff and fix up, as much as possible, the impedance mismatch. -- CurtSampson
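Roughly what that encapsulation looks like (a sketch with invented names, not the actual code): one class owns one table's quirks, so a schema change is one class to edit, and the impedance fix-ups live in exactly one place:

  import java.sql.*;

  // Sketch: the only class that knows the ORDERS table exists.
  class OrderTable {
      private final Connection db;
      OrderTable(Connection db) { this.db = db; }

      boolean isShipped(long orderId) throws SQLException {
          PreparedStatement s = db.prepareStatement(
              "SELECT SHIPPED_FLAG FROM ORDERS WHERE ORDER_ID = ?");
          s.setLong(1, orderId);
          ResultSet rs = s.executeQuery();
          rs.next();
          // Impedance fix-up: the column is CHAR(1) 'Y'/'N', but the
          // rest of the system only ever sees a boolean.
          return "Y".equals(rs.getString(1));
      }
  }

When the 80-million-row migration finally lands, OrderTable is the only code that has to change.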
I'd like to say that XP refactoring shouldn't be cleaning house. It's done all the time. "Cleaning house" is what people do in the crufty world when they can't make a change without breaking the system. -- JohnDuncan
(Applicability of Investment Math discussion moved to DecisionMathAndYagni)
DoesDoingTheSimplestThingMeanIgnoringUserStories
See also XpToolsFaq
See also RealTimeWikiDesign, AnticlimacticSimplicity, TheBestIsTheEnemyOfTheGood