One of the ExtremeProgramming practices is to RefactorMercilessly. When you find two methods that look the same, you refactor the code to combine them. When you find two objects with common functionality, you refactor so that there is just one (see ParameterizeMethod). Extreme projects do not use BigDesignUpFront, so they upgrade their designs continuously. RelentlessTesting and ContinuousIntegration (UnitTests kept in sync with changes as they are disseminated to other engineers) permit changes that would carry the risk of bugs in slower projects.
And you change all the users of the old capability to use the new. In SmallTalk, that's particularly easy to do, because you can open a browser on all the senders of the old one and edit them to the new. (There may be CodeOwnership issues, of course.)
Since we have extensive UnitTests and AcceptanceTests, we can always refactor with the confidence that we haven't broken anything.
The result of these practices is that the system does not get held hostage by some old, hide-bound, ill-conceived idea from the past. It remains fresh and new, productive and fun to work with.
Since we DoTheSimplestThingThatCouldPossiblyWork, sometimes what we do needs improvement an iteration or so down the line. We welcome these opportunities to make the system more like what it should be, and we welcome the fact that we do it in solid knowledge of what is really needed, not what we imagined in the past was needed.
The result of this is that the system is always as simple as we can make it, which means we can understand it better, which means we can change it more rapidly while keeping it reliable.
Try it, you'll like it ...
-- RonJeffries
Is RefactorMercilessly supposed to be so merciless as to refactor across the boundaries of multiple projects and shared libraries? -- HelmutLeitner
That depends on code ownership issues. You cannot refactor it if you don't own it. On the other hand, why not refactor a shared library? The functionality and API shouldn't change.
That would mean refactoring within a single project. I asked about going "across the boundaries" in a multiple-project, multiple-library situation where APIs need to be improved. CollectiveCodeOwnership for all projects and libraries is assumed. Is this attempted? What is it like to attempt this in Smalltalk? Do the refactoring tools still help?
If you have refactoring tools for this, they help, of course. The prerequisite is that the libraries and other projects are written in the same programming language (and that the language is supported by the tool).
If the library in use was created by an open source project (real open source, not Apple's "you may do nothing useful but read, and only until we cancel the agreement" license), this is where some of the power of open source comes from: developers who need something from a library can jump in and code it.
There are also drawbacks: when a large refactoring is done, a consensus often has to be reached before a "patch" is included in the library, and patches that have been forgotten because they aren't "sexy" fall out of date. Summary: open source has good potential for cross-project refactoring, but can only use it if patches are judged and integrated quickly. -- PeterSchaefer
"WaterFall" theory says only "Analysis" can identify the elements of a program most depended-on by others, and these must be designed and coded first, before their clients are coded.
RefactorMercilessly says the longer your program has been committed into real source code and demonstrated performing features, the more you know about how to improve & upgrade its internal structure.
I'm sold.
-- PhlIp
This ain't near the bottom of the page, so only 'last edited' aficionados will get to read it. Responding to Phil, above...
Where can I read about "WaterFall theory"? On these pages, WaterFall is like the McCarthyist Communism of the 1950's - an acceptable thing to fear (parody on AnAcceptableWayOfFailing).
I (PhlIp) am just relating my experiences reading JobSecurity-style code. AFAIK WaterFall theory does _not_ say you write the most-depended on classes first. But that's the way everyone in RealLife does it.
I don't know what "WaterFall theory" says, but Phil is right on target with the insight about the connection between analysis and refactoring (see comments toward the end of WhatIsAnalysis and AnalysisIsRefactoring). When you are refactoring, you are analyzing. The right time to do it cannot come from some process manual or methodology - it's when you see an opportunity to express something more powerfully and simply. The project phase and the medium are unimportant. The people, their skills, and the project context are important. And it's far more important to do it when you can than to quibble about the ideal time to do it.
I fancy I'm following my own advice here, by pointing out the commonality between code refactoring and other forms of analysis. This is the essence of systems development, no matter what name you give it.
I guess the term "analysis" has some stigma associated with it (by hanging out with "WaterFall", no doubt). I've noticed that to some people, analysis is always AnalysisParalysis. Maybe I'm just an old fashioned analyst, but I still like the word "analysis" and its semantic roots. Therefore, I propose a new base class to the RefactorMercilessly class: AnalyzeMercilessly. Do it everywhere, not just in code. -- WaldenMathews
See RefactoringWithCeePlusPlus for issues and tips for XP with C/C++ -- DouglasAuclair
On SpecializationIsForInsects, RonJeffries said the following about ChryslerComprehensiveCompensation: "It's no phone system, but it is complex (1800 classes, 25000 methods), and the ordinary folks who wrote it actually understand it!"
It would be neat to see a frequency histogram of classes with given numbers of methods. That is, number of classes with one method, number of classes with two methods... The average of your numbers is ~13.9 methods per class.
I'd just guess that you could tell a good amount from a histogram. I'd expect that in well-factored systems, it would have a single peak that would rise gradually and then fall off quicker. If there are some outliers far out to the right, you could have some GodObjects. If there are multiple large distinguished peaks, that could indicate a multitude of things: inconsistent refactoring, architectural decisions that have produced categories of classes, or a fractured team. On the other hand, there probably isn't enough spread to see more than the most grievous cases.
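A workspace-level sketch of collecting such a histogram, assuming a Squeak/Pharo-style image where Smalltalk allClasses and Bag are available (the exact reflection protocol varies by dialect):

 | histogram |
 histogram := Bag new.
 Smalltalk allClasses do: [:each |
     histogram add: each selectors size].
 "histogram occurrencesOf: 1 is the number of classes defining exactly one
  method, occurrencesOf: 2 the number defining two, and so on."
 histogram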
Fair enough. One problem that I have with RefactorMercilessly, though, is that it seems to conflate intensional with extensional. Compare the following two statements:
A: These two classes implement the same piece of code (contain a similar method, whatever), and it's an essential similarity.
B: These two classes implement the same piece of code (contain a similar method, whatever), but it's pretty much coincidence.
Now in the first case, it's clear the refactoring should take place on the classes to exploit (expose ?) the essential similarity. But in the second case ? You could claim that this indicates a need to refactor somewhere else (for example: if more than one client of a server is performing the same sequence of operations, then maybe the server should support that sequence with a single method call).
But I don't know. In general, especially when the code is evolving, I'm inclined to cut the second case a little slack. Let the functionality hang for a while and see if the code evolves along different paths as more functionality is added to the system.
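A minimal sketch of the "maybe the server should support that sequence" idea above, using invented names (AccountServer and its selectors are hypothetical; Class >> selector is the usual shorthand for a method definition):

 "Before: every client repeats the same three-step sequence."
 server beginTransaction.
 server applyCharges.
 server commitTransaction.

 "After: the server owns the sequence, and each client sends one message."
 AccountServer >> processCharges
     self beginTransaction.
     self applyCharges.
     self commitTransaction

The duplication in the clients disappears, and whether the similarity was "essential" or "coincidental" gets answered by whether the clients keep wanting the same sequence.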
Related notion: Anyone care to comment on the following quicktime movie http://www.isi.edu/isd/LOOM/loom-classifier.mov
"it's pretty much coincidence" - When I think this, generally it is because I don't understand. Often moving responsibility to a shared server fixes the problem. Or I have something to learn about what is similar and what is different. In any case, I don't stress about it. If the code is talking, I'd better be listening. Damned arrogant of me to think I know better. -- KentBeck
What's the cost of getting it wrong and combining methods that "should" be separate? Ideally, it would just mean undoing the refactoring later on when the methods diverge. This second refactoring should be cheap, and its cost offset by the benefit of having the simpler system (with one less method) in the meantime. So you win overall by merging the methods.
In a worse case you make a change to the merged method that only one use needs, and that screws up the second use. You get a side effect, and a bug in some other part of the system that you weren't working on. This risk is offset by UnitTests and the like. They detect the bug straight away because they test remote parts of the system automatically, without you thinking about it. So you can pin it down to the change you just made, and figure out that you need to split the method. So this isn't something to fear, as long as you have TheFullMonty XP. -- DaveHarris
The crucial question is whether the second refactoring is actually cheap. If the method is called in a dozen places, you have to check each one to see which version of the method it should be calling. This requires a bit of brain-power; you need to understand what those other methods are doing. One approach might be to preserve the difference even while performing the merge. For example, leave the second method there but have it forward to the first method. Or have two factory methods that both return the same class. This would be less Extreme, but better than having two copies of the code. -- DaveHarris
The forwarding solution doesn't bother me, if I have two different things to say. It was such a revelation to me the day I understood the value of
 highlight: aRectangle
     self reverse: aRectangle
OTOH, if you really can't tell the difference, squish them together and have the faith that the system will tell you different when it knows different. -- KentBeck
See Also: CostOfRefactorCarry
Anyone else consider that WilliamGrosso could have refactored to:
These two classes implement the same piece of code (contain a similar method, whatever) -
A: and it's an essential similarity.
B: but it's pretty much coincidence.
or am I just being tedious?!
I think that's OK. The coincidental refactoring comes when you now see the common "it's" in those 2 choices and attempt to refactor further.
Base = "These two classes implement the same piece of code (contain a similar method, whatever), - [name]: [join] it's [clause]." A:Base = {join="and" clause="an essential similarity"} B:Base = {join="but" clause="pretty much coincidence"}-- DaveWhipp
William: please offer one example of coincidental implementation of the same piece of code that seems to you not to suggest refactoring. Thanks. -- RonJeffries
Suppose I've got two classes, each with one instance variable, and each class has an "initialize" method that sets the variable to zero. If I renamed one of the variables to be same as the other, I could give them a common superclass and just implement "initialize" once.
In practice, humans don't have much problem with these false positives. Although I'm sure someone made that refactoring at some time or other, most developers are smart enough to know better. Automated refactoring tools are not, however. This is one of the reasons that I think that tools should always be under the direction of humans, and not decide whether or not to perform a refactoring. -- RalphJohnson
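For concreteness, a sketch of the false positive Ralph describes, with invented names (Pharo-style class definitions; Class >> selector again stands for a method definition):

 Object subclass: #Counter
     instanceVariableNames: 'count'
     classVariableNames: ''
     package: 'Example'

 Counter >> initialize
     count := 0

 Object subclass: #Temperature
     instanceVariableNames: 'degrees'
     classVariableNames: ''
     package: 'Example'

 Temperature >> initialize
     degrees := 0

 "The two initialize methods are nearly identical, yet hoisting them into a
  common superclass would assert a relationship between Counter and
  Temperature that isn't really there."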
I like Ralph's example a lot. It makes me think, a good trick in itself. I was thinking of duplications with just a bit more processing, such as maybe
 1 to: array size do: [ :index | array at: index put: zero ]
which might lead one to wonder whether collections should have a method named, e.g. #atAllPut: or #zeroize. No conclusion yet, except that I think the human should look at duplications carefully to see whether they are saying something important about the program. A good tool like [Small]lint can help by discovering candidates for consideration. -- RonJeffries
If you've got two classes, each with one instance variable, and you find out that the one and only instance variable plays exactly the same role in both of those classes and thus needs to be named the same, you might want to look at merging more than just that initialize method. -- DonWells
Okay, here's the thing I've been trying to figure out about how to RefactorMercilessly: suppose in the code I want to refactor I have a class A with a method x, and x is part of the PublicInterface of A. Or perhaps a better example would be if it's semi-public, meaning it's not going to be used by normal client programmers but it is used by other classes in *my* code. At any rate, I probably have a UnitTest that calls x and makes sure it behaves properly. Now I'm refactoring and I decide that x's behavior needs to change, or maybe the behavior stays the same but it really belongs in some other class. So I fire up my browser (or wish I had one in Java and do it the hard way) and find all the callers and fix them. But the only way I can make the test work is to change the test itself. Now it seems there's a hole in the safety net that my UnitTest was giving me - since both the code and the test changed, I could have broken the code and also broken the test, so it's not really equivalent to the old test. Voila, I've got a regression. I realize that some (maybe most) refactorings will leave the surface API the same so the tests don't change, but are there strategies for dealing with this harder case? -- PeterSeibel
Perhaps "this harder case" is largely illusory. For the problem described to occur, it would be necessary to inject a defect in the refactoring (which isn't impossible, but if it's really a refactoring, it's difficult), and then inject a corresponding but exactly opposite defect in the test. Flaws in a UnitTest generally produce false positives, indications of errors that aren't, not false negatives. There are exceptions ... but they're rare enough that I'd not worry about it. -- RonJeffries
Hmm... I'm not sure about this assertion. I guess it depends how you code your UnitTests. I'll admit up front that I don't use any of the UT frameworks discussed here, but I have found the occasional false negative due to cut-and-paste errors: a test may accidentally end up just a copy of a similar test, or have expression-bracketing issues which mean it doesn't actually execute the real code. PairProgramming would help to overcome this, but it's hard for me at home on my own. I do worry that I'm missing the point sometimes, though, as I seem to need to refactor the UnitTests every time I refactor the main code. -- FrankCarver
Refactoring is changing implementation without changing interface. It should be rare for UnitTests to need changing under refactoring. Perhaps you are changing interface, not refactoring? In that case, yes, the UnitTests will have to change, just as they do when you add or remove functionality. It's the price of knowing that your changes work. False negatives are OK, they point themselves out, you look at them and fix them. False positives are bad, but they're rare by the above argument. -- RonJeffries
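To illustrate the distinction (invented names, SUnit-style): a test written against the interface keeps passing across an implementation-only refactoring, but not across an interface change.

 "This test doesn't care whether Order>>total sums with a loop, with
  inject:into:, or by delegating to a helper object - all of those are
  refactorings, and the test is untouched."
 OrderTest >> testTotal
     self assert: (Order withPrices: #(2 3 5)) total = 10

 "Moving total onto some other class, or renaming it, is an interface
  change; then the test has to change with it."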
[ Come again? One person says "false positives, indications of errors that aren't", and another person says "False negatives are OK, they point themselves out". It looks like what one person calls "false positive", the other person calls "false negative", and vice versa. FalsePositive. FalseNegative?. I think one person thinks of "positive" as "Yes, I get some error messages here", while the other person thinks of positive as "Yippee, unit tests tell me everything is A-OK !". Or did I miss the point somewhere? -- DavidCary ]
Refactoring must surely encompass changing interface sometimes....
Looking at MartinFowler's page [ http://www2.awl.com/cseng/titles/0-201-89542-0/refactor/index.html ]
I can see a few "refactorings" that involve moving methods or changing classes and adjusting all references....
Is his book incorrectly named? :-)
-- AlanFrancis
Not necessarily - moving methods doesn't change (system) functionality. Of course it changes the functionality of one class, but not of the cluster of classes involved in the refactoring. -- RonJeffries
Yes, from what I understood from the book, refactoring is about changing the implementation without changing the functionality. RonJeffries' comment raises issues about which parts of your system are interfaces. Since SubSystem? or ComponentModelling? don't form a part of XP, there is no level of interface higher than the interface to a class.
One question I have with RefactorMercilessly is how to make sure people are refactoring. Some people I've worked with in the past can barely manage to get their systems functioning at all, never mind making them well factored. -- 193.36.79.206
Are you saying you can't trust your team to do what they have agreed to do? What does this tell you about the place where you are? -- rj
(Sorry, Ron, I have only most rarely seen people do what they agreed to do. Most people "agree" out loud because it is expected and less tiring than disagreeing. But that doesn't mean they agree inside, and even if they agree inside, it still doesn't mean they'll have the energy to follow through the next day or two weeks later. So on most projects, Glen's question is meaningful -- AlistairCockburn)
I'm saying that I've worked in a number of different environments and that the quality level varies significantly. Many people I know I would have complete confidence in, but there are also plenty of people out there who write complete spaghetti and get away with it. Luckily the people I'm working with at the moment are quite competent and conscientious, but I expect it won't be the last time I come across programmers from hell.
Let me rephrase my question: How do you encourage people to RefactorMercilessly?
It's one of the team practices. They have promised to follow the practices. If they don't, we get to kill them. Makes a good example for the others.
I've never once worked with programmers who were not conscientious, and who did not want to do the right thing. If somehow I fell in with a batch of such people, and they didn't instantly convert to giving a ****, I'd desert them. -- rj
I think you have to do some PairProgramming with them, refactor something, and let them see that it is not so difficult, doesn't break the system, and they feel better afterwards. For a group who don't RefactorMercilessly, I would expect each person would need at least three such sessions to get the feeling. I don't think ordering someone to do something they don't understand is effective. -- AlistairCockburn
People who are trying to do good are just fine and you can work with them. I refer above specifically to a group who are not conscientious and do not care whether they do the right thing. I don't think such people are thick on the ground. If I found a bunch, I wouldn't want to be there. -- rj
I suspect that Ron is just trying to make a point. I know Ron and I can not imagine him just abandoning a project. He has the tenacity of a bulldog. -- DonWells
XpIsNotaSilverBullet -- JohnBrewer
Okay, I'm very intrigued by what I've read so far. I like iterative design and I'd love to give BigDesignUpFront the boot and refactor like mad, but there's one thing standing in my way: the RelationalDatabase. I read stories about people changing their classes as they go and it sounds great but I keep thinking, "How does he get his DBA to change the schema every 10 minutes?" Most of us aren't using GemStone for persistence (or SmallTalk, but that's a different matter). Are we just out of luck with XP? -- PerrinHarkins
See RefactoringWithRelationalDatabases
See DontRefactorPublishedInterfacesMercilessly
What constitutes DuplicatedCode? Anything that looks very similar is duplicated code. Some of it you can't eliminate, as it relates to the nature of the language, but much of it can be "factored together" into something more centralized. If you can't see how to make it more centralized, then flag it with a TODO comment and maybe later insight will strike. -- Anonymous
I would suggest not to use TODO comments - see TodoCommentsConsideredHarmful. I would use a BugDatabase instead. -- GuillermoSchwarz
I would merge duplicated definitions of behavior. -- PhlIp
RefactorMercilessly means that once the bar is green, you and your partner spend as much time as is required to make the code as simple as you can. Don't move on to another story while the code remains poorly factored. -- EricHerman
But it's often beneficial to wait to refactor. If you're not sure how best to refactor, put it off: DuplicationRefactoringThreshold.
TemporaryDetailedTestingSupportingRefactoring is a technique that uses even more aggressive testing - tests whose results are not expected to remain consistent during normal development as features are added - to provide confidence during refactorings that are supposed to change just program structure, and not program output. -- AndyGlew
Do we have a case for SimplifyMercilessly? as a purpose? If so please create the page and explain, and move relevant parts from here to this new page. -- DavidLiu
Refactoring is by all means a complicated task. In-depth knowledge of, and preferably experience in implementing, the DesignPatterns is a must. XP is a set of (common sense) skills and not a methodology. XP by its nature is not simple. Maybe the Extreme in XP is that it is full of contradictions. An XP programmer teaching piano would say: "Playing the piano is simple. Just hit the keys and you hear a melody. Sit by my side and do as I do." My point is that the word simple seems to have a different meaning in XP. JimCaprioli
Where can I find case studies of large complex XP projects? AnswerMe
Regarding code duplication by "coincidence" ... "coincidence" is probably not the best word. I think what was meant was the scenario in which two or more segments of code have the same logic, but there is no inherent reason for them to remain the same for the long term, and they can be changed independently without introducing inconsistent behavior in the system.
For example, consider a bank with 3 types of checking accounts - standard, silver, and gold. They come with different features and fees, but some of those are in common. Suppose silver and gold both pay interest if you have a daily average balance of $10000. So that piece of the code is duplicated for silver and gold accounts.
This is a nice example on the surface, but it points to a deeper issue than code duplication. If your bank account behavior is based on code as described, something is broken - this should be a data-driven design. On the flip side, if it's more complex logic (like having different formulae for compounding interest on the accounts), there is no reason to have three functions CompoundInterestStandard?, CompoundInterestSilver?, CompoundInterestGold?. Either this should have been foreseen in the original design of the system and built in a data-driven way, or refactored into the system later when the need arose. I think RalphJohnson's example above covers the "pure coincidence" case nicely. The "accidental coincidence" case (where the code is pre-emptively duplicated in anticipation of a future divergence) is subject to YagNi, and, as such, shouldn't even occur in the wild (assuming an ideal world, of course).
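A sketch of the data-driven alternative (the names and rates are invented): the qualifying balance and the rate live on the account type as data, so there is one interest method instead of three near-copies.

 "One rule, parameterized by data on the account type."
 Account >> accrueInterest
     self averageDailyBalance >= self type interestThreshold ifTrue: [
         self credit: self averageDailyBalance * self type interestRate]

 "Silver and gold differ only in the data they carry:"
 silverType interestThreshold: 10000; interestRate: 0.02.
 goldType interestThreshold: 10000; interestRate: 0.03.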
There is only one problem with RefactorMercilessly - the real world. In the real world of assigned issues and non-paired programming, your performance is measured by the number of issues you resolve. Producing well-factored code not only slows down your own progress, but it aids your CowOrkers in making sense of your code when they have an issue on it that they must handle (while their code is still a cyclopean horror when you work on it). So well-factored code makes you look like the worst developer.
Only in the short term. Comparing projects that did RefactorMercilessly with more traditional, fear-based projects ("fix the issue as stated with the least code perturbation"), I've found that the time required to add features and resolve issues drops on the well-factored projects. In particular, if you're in a company with at least two projects of comparable size, and only one team does RefactorMercilessly, the difference between the two will become embarrassingly apparent around the second or third major release.
Naturally, if you're in a company where there's no long-term visibility, or one that has only one product, or one that specializes in short-term one-off projects, this won't apply. -- TimLesher
There is a time and place for a company to go fast and take on TechnicalDebt. There is also a time and place to pay down that debt. I would not say that it is an ideal to never accumulate technical debt. It sometimes makes a lot of sense to go very fast. The main thing is that if the software is intended to have a long life, it is critical to keep its TechnicalDebt low. --JoshuaKerievsky
I like to do BraindeadRefactoring sometimes. Joel talks about it here: http://www.joelonsoftware.com/articles/fog0000000348.html. The idea is just to simplify without necessarily creating pristine code at the end. Example from Joel: take every string of SQL and dump it into a general 'SQL' class, without worrying about class design. Example from my PhD: refactoring 16 similar algorithms by extracting common code, even when I couldn't see an obvious meaning in the new predicates I was creating. I take away from this that even ugly simplification can be worthwhile. --DanielLucraft? (first edit)
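A sketch of that SQL example (invented names; the only point is centralization, not good design):

 "Every SQL literal scattered through the system gets dumped into one
  class as class-side methods, so at least they are all in one place."
 SqlStatements class >> activeUsers
     ^ 'SELECT * FROM users WHERE active = 1'

 SqlStatements class >> ordersForUser
     ^ 'SELECT * FROM orders WHERE user_id = ?'

 "Call sites then read something like: connection execute: SqlStatements activeUsers"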
See WhenToStopRefactoring, RefactorLowHangingFruit, GrandRefactoringDay, RefactoringImprovingTheDesignOfExistingCode, ArchitectTheNegativeSpace, RefactoringMercilesslyDialog, DataTransferObject, RefactoringMetaphor, RefactoringInVariousLanguages, DirectedRefactoring