Fake It Until You Make It

A pithy AlcoholicsAnonymous saying. Roughly: "Act as though you know what you are doing, and understanding/enlightenment/success will follow."

This is not an exhortation to PlayHurt; it's a belief that QuittersNeverWin? is a LifePattern?.


Also a TestDrivenDevelopment pattern:

I'd like to revisit a comment that JohnRusk made on that page:

It seems to me that one danger of TestDrivenDevelopment is that developers may _not_ take that step that you take. I.e. developers may stay with overly concrete code that satisfies the tests but not the "real" requirements.

I've found that this can indeed be a real problem if you make use of KentBeck's FakeIt TestingPattern. Consider this scenario: You've had a long - yet fulfilling - day of testing and coding, and you've accomplished much. Just as you're getting ready to leave, you realize that there's some aspect of the code that isn't being covered by the existing test suite. Not only that, but you know that as soon as you write the test, the test will fail, because the existing code itself doesn't consider that aspect. So what do you do? You can:

Wait wait wait you've got it totally backwards. The whole point of FakeIt is you are *not* writing code unless there is a test for it in place already. If you're looking at your code and seeing something that isn't being tested, you've failed to FakeIt. FakeIt is not the solution to this problem, in fact failure to FakeIt is what *caused* the problem you are describing in the first place. -- CallumLerwick

Reading this, the option that struck me was: Write the test. Put running the test on tomorrow's TODO list. Go home. Come back tomorrow, and run the test. -- EuanMee

Why not do a CleanCheckIn, write the test, and then leave it failing on your development machine to pick up the next day? Basically the same thing, but with the working state already in version control. -- JonathanTang

It's the last one ("Write the test and cause it to pass by doing a FakeIt") that seems to cause the most problems for me: You've inserted what is, in effect, an unlabeled (and thus easily forgotten) ToDo item into the code. And it can be days or even weeks between the time you added the faked test and the time it comes back to haunt you, causing you to spend an inordinate amount of time trying to figure out what the hell is going on.
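
To make the hazard concrete, here is a minimal sketch in Python of what such a faked test looks like (the function and figures are invented for illustration): the test is green, but only because the implementation returns a hard-coded value, and nothing in the code marks it as unfinished.

 import unittest

 def shipping_cost(weight_kg):    # hypothetical code under test
     return 4.95                  # FakeIt: hard-coded just to satisfy the one existing test

 class ShippingCostTest(unittest.TestCase):
     def test_light_parcel(self):
         # Passes today, but only by virtue of the fake above.
         self.assertEqual(4.95, shipping_cost(weight_kg=1))

 if __name__ == "__main__":
     unittest.main()

Weeks later, nothing distinguishes this green test from any other green test, which is exactly the unlabeled ToDo item described above.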

So what's the solution? It seems that any of the "good" alternatives involve adding an item to your ToDoList. Okay, but I would assert that this is in conflict with DoTheSimplestThingThatCouldPossiblyWork. It's certainly not the simplest thing, because you've got related information in two separate places (the code base and the ToDoList). And it's very possible that it might not work, because there is no mechanism to ensure that the information in the two places stays in sync.

Therefore, I suggest that a UnitTest actually ought to have three possible outcomes, rather than two: Pass/Fail/Faked. ("Faked" could also be interpreted as "Incomplete.") "Faked" occupies that nether world between "Pass" and "Fail": It's like "Pass" in the sense that the code itself actually does pass (i.e., no assertions fail). But it's like "Fail" in the sense that it's reminding you that you've still got work to do (but perhaps it could wait until tomorrow...). Furthermore, those reminders are there, embedded in the code base and displayed by your testing framework, right where they belong.

Does this conflict with DoTheSimplestThingThatCouldPossiblyWork? In the sense that 3 > 2, sure, it's less simple. But I think it does a much better job when it comes to the "possibly work" end of things. And it also neatly does away with the contradiction between the BrokenTest and CleanCheckIn testing patterns. -- SteveSchafer
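
As a rough illustration of how close existing tools come to a third outcome, the sketch below (Python's unittest; the names and figures are made up) marks a still-faked test with expectedFailure, so the test run reports it as neither a plain pass nor a plain fail.

 import unittest

 def tax(amount):
     return 0.0    # still faked: the real calculation hasn't been written yet

 class TaxTest(unittest.TestCase):
     @unittest.expectedFailure    # reported as "expected failure", not as pass or fail
     def test_tax_on_one_hundred(self):
         self.assertEqual(20.0, tax(100.0))

 if __name__ == "__main__":
     unittest.main()

Once tax() is actually implemented, the same run reports an "unexpected success", which is the reminder to remove the marker and promote the test to an ordinary one. It isn't quite the Pass/Fail/Faked trio proposed above, but it keeps the reminder in the code base and in the test report, right where the proposal wants it.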

Instead of writing a test and letting it fail, consider writing a test that passes temporarily; that is, use a UnitTestAsTickler. --

That's a good workaround, although it does require that you decide ahead of time how far in the future you want to set the "alarm." I think it falls short of an ideal solution, however, in that it moves the "incompleteness" from the implementation to the test itself, which makes me feel a little uncomfortable - like I need to wash my hands after touching it. I wouldn't go so far as to say that it's a TestSmell?, but it does seem to make the test less pure.
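
For what it's worth, a UnitTestAsTickler along the lines suggested above might look something like this sketch (Python; the date and the test name are arbitrary): it passes until the chosen alarm date and starts failing afterwards.

 import datetime
 import unittest

 class ReminderTest(unittest.TestCase):
     def test_negative_weights_still_unhandled(self):
         # UnitTestAsTickler: green until the alarm date, red afterwards.
         alarm = datetime.date(2009, 2, 10)    # pick whatever "come back to this" date suits you
         self.assertLess(datetime.date.today(), alarm,
                         "Time to write the real test for negative weights")

 if __name__ == "__main__":
     unittest.main()

Note that the "incompleteness" really does live in the test itself here, which is the discomfort described above.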

One recommendation I've read that fits both is to check in the working code and then add the failing test as a tickler to yourself. You communicate to the team that you will work on this section tomorrow. There is also the question as to whether the code is ready for checkin if you know there are additional tests that must be fulfilled. -- John Elrick

Rather than try to recommend a solution to the problem, I question whether it is a real problem. The premise suggests that at the end of the day, but before the programmer walks out the door, he receives a sudden flash of understanding of the purpose of the program that no one else had seen nor could be expected to see. Note that we are not talking about an interesting refactoring, we are talking about a new and necessary program operating mode. Before taking any action on such a change, I would suggest discussing it with others or even a user first.

That does seem rather unrealistic - my flashes of understanding usually occur in the parking lot, after I walk out the door.

My flashes of understanding sometimes occur in the shower or sitting on the can. At some point there has to be a level of personal responsibility for remembering, be that a notepad kept in a back pocket or a message left on one's own voicemail. -- MiloHyson?

Our version control system lets us split a class into packages. We have a package called "failing tests" that includes tests that we haven't got working yet. These are methods, not classes, since the test fixtures are all working. Whenever we find a bug, we write a test for it and put it in failing tests. Thus, failing tests acts as a ToDo list for us. We do not count the failing test package when we say "never check in broken code". We don't move test methods into the failing test package, either; we just move them from failing tests to one of the packages for working tests. Once they work, of course. -- RalphJohnson
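
The same idea can be approximated in other toolchains. As a sketch only, not RalphJohnson's setup: with pytest, a custom marker (called "failing" here, an invented name) can keep not-yet-passing tests in the code base while leaving them out of the "never check in broken code" run.

 import pytest

 def parse_weight(text):    # code under test; the comma-decimal bug is not fixed yet
     return float(text)

 @pytest.mark.failing    # our "failing tests" package, expressed as a marker
 def test_accepts_comma_decimals():
     assert parse_weight("1,5") == 1.5    # fails today with a ValueError

 # Everyday runs exclude the package:  pytest -m "not failing"
 # Reviewing the ToDo list:            pytest -m failing
 # (Register the marker in pytest.ini to silence the unknown-marker warning.)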


CategoryTesting

