More Objections To Working Test First

KayPentecost asked the XpMailingList to argue with itself. We resisted with all our might, but eventually blurted out ObjectionsToWorkingTestFirst. From there it was like eating peanuts...

Legend: Pro TDD arguments, Con TestDrivenDevelopment arguments.

Ward's Wiki can testify at WikiObjectionsToWorkingTestFirst


I spoke two days ago with a senior QA engineer at a waterfall-type shop, and the conversation might be relevant here. It is not strictly about TDD, but it bears on automated testing and design for testability. Here follows some paraphrasing of our conversation:

... QA: We're doing professional QA here. Have you heard of the X company? The developers there are testers as well. They write their unit tests by themselves. (Note: AFAIK X is not even remotely an agile or TDD-practicing shop; they have just allocated developers to take care of some of the QA tasks.)

I: So who is writing the test cases at your company? QA: The QA department. We have a test plan with strategies on how to test every bit of functionality.

I: Do the developers design software with testability in mind, so that they can ease your job? QA: No. It's entirely the QA team's responsibility.

I: Do you get many defects while testing your products? QA: Yes. The bug tracking database has 500 to 800 defects listed for a regular 6 month project. A typical 20 screen installer might contain 50 to 60 defects. What would we be doing without the QA team?

I: Do you automate tests that show defects, so that they don't happen again? QA: No. It is too hard and requires too much time; we'd rather fix many more defects instead. ...

That's the most important part. I only mentioned TDD, but did not want to get into evangelizing. Apart from the heavy process, he described a good team atmosphere and sound discipline - they are meeting their deadlines most of the time :-). What struck me is the separation between developers and testers... and no willingness of the former to ease the job of the latter, or vice versa. I have the feeling this is the "We're developers - not testers. Let the QAs write those tests and catch the defects." attitude.

Maybe I should have said a good intra-team atmosphere and a not-so-good inter-team one. This is speculation, though - I don't have further evidence for it.

HristoDeshev


HristoDeshev wrote:

 > QA:  We're doing professional QualityAssurance here.

In this context, I think "professional" means "expensive".

 >  I have the feeling this is
 > the "We're developers - not testers.  Let the QA's write those tests and
 > catch the defects." attitude.

I think this attitude is usually coupled with an organization where "there's a production bug! You've got to fix it in the next 2 hours so that QA can rush it through their 3 day test cycle!"

TomCopeland


"In a Web application built with ASP and SQL Server there is no way of writing test functions that first utomatically triggers the ASP functions that fill the database with the proper data values, which in turn are needed to be able to test the ASP functions you're writing tests for (which relies on that same data)" I.e. it is "impossible" to automatically test ASP functions which rely on data created by other ASP:s.

--MattiHjelm?

But that's not even remotely valid: ASPUnit at SourceForge allows you to do just that...

--RichardQuinn?
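
For the record, the fixture pattern being pointed at here is nothing exotic: seed the database in a setup step, then exercise the code that depends on that data. Below is a minimal sketch of that shape in JUnit 4 with an in-memory H2 database - the table, values, and query are invented for illustration, and ASPUnit supports the analogous arrangement for classic ASP pages against SQL Server.

 import java.sql.*;
 import org.junit.*;
 import static org.junit.Assert.*;

 public class SeededDataTest {
     private Connection conn;

     @Before
     public void seedDatabase() throws SQLException {
         // Assumes H2 and JUnit 4 on the classpath; in the ASP scenario this
         // step would run the pages/procedures that populate SQL Server.
         conn = DriverManager.getConnection("jdbc:h2:mem:testdb");
         try (Statement s = conn.createStatement()) {
             s.execute("CREATE TABLE orders (id INT PRIMARY KEY, total INT)");
             s.execute("INSERT INTO orders VALUES (1, 100)");
         }
     }

     @Test
     public void findsTheSeededOrder() throws SQLException {
         // Stands in for "the function under test" - it relies entirely
         // on the data the setup step created.
         try (PreparedStatement p =
                  conn.prepareStatement("SELECT total FROM orders WHERE id = ?")) {
             p.setInt(1, 1);
             ResultSet rs = p.executeQuery();
             assertTrue(rs.next());
             assertEquals(100, rs.getInt("total"));
         }
     }

     @After
     public void closeConnection() throws SQLException {
         conn.close();
     }
 }

Because the setup runs before every test and the connection is closed after, each test sees freshly seeded data, which is exactly what the objection above claimed was impossible.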


In my experience over the last few years, I haven't had much difficulty getting developers/teams to see the value of having an automated unit test suite with complete coverage. Once I show them a real-life example of a codebase with complete tests and show how effective these are as a "semantic compiler", the question turns quickly from "Should we do automated unit testing?" to "OK, so how/when do we write all of these tests?"

Where I have had considerable difficulty is in convincing people to use TDD to develop the code, rather than to write the unit tests as a second step.

Standard argument: ''"I just don't think that way. I have to get the code written before I know what to test."''

Implication: "Don't force me to do things your way--everyone has a different style that works for them."

Some points I try to make.

--JeffNielsen


PaulOldfield1 wrote:

 > "You only have one interpretation of the requirement, giving potential CommonModeError?".

This is why there are acceptance tests. What alternative are the people presenting this "con" considering? Not writing tests at all, or writing them last, is surely not better?

--RonJeffries


Bayley, Alistair wrote:

 > The most ...mmm... unchallenged objection I've seen is that writing tests
 > for legacy code is (too) hard. Yes, it is hard. You have to decide if it's
 > worth it or not. But then writing tests for legacy code isn't TDD, is it?

I think this is the crux of where the resistance comes from. People who haven't done TDD equate tests with drudgery because the testing they have done (assuming they have done some) has been after the fact, in a /big bang/ fashion. And writing the tests doesn't have any of the /fun/ factor of writing the solution. What they are saying underneath is "test code doesn't involve challenging problem-solving", or "nobody ever sees the end results of the test code other than the QA department - all the time spent on tests never shows up in a production product, and that is the real thrill of developing software".

TDD takes the boredom away by:

1. Slicing it up into little tiny bits interspersed with the fun stuff (just like your Mom did with your cauliflower). In the end the testing becomes fun stuff itself, because it does have the /challenge/ aspect (especially when working in pairs) - "What's the simplest thing I can write to break what I've got so far?" (one such turn of the crank is sketched after this post). Plus, let's face it - breaking stuff is just fun.

2. Knowing when you are done. You don't have that sinking dread about those hours of tests you're going to have to write at the end of the project. You know it works because the green bar told you so, and until the needs change, it will stay that way.

On the "long time" front, it doesn't take a long time in the way that people think about it in traditional ways (i.e. in "subjective terms") -- each test takes a very small amount of time, at which point you get to write some more code. You /NEVER/ have to sit down and spend a long time /JUST TESTING/.

On the "it is hard" front, Alistair hit the nail on the head -- it is hard for already written code. But in TDD, you are always writing the /SIMPLEST/ test possible. How hard could that be? (Okay, simple != easy, but at least the problem is bounded pretty severely)

Do those hit the mark?

--PatrickOShaughnessey
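
To make point 1 above concrete, here is roughly what one turn of that crank looks like. This is a hypothetical JUnit 4 sketch - the Stack example is invented for illustration, not taken from anyone's post:

 import org.junit.Test;
 import static org.junit.Assert.*;

 public class StackTest {
     // Red: the simplest test that breaks what exists so far (nothing).
     @Test
     public void newStackIsEmpty() {
         assertTrue(new Stack().isEmpty());
     }

     // Red again: the next simplest test that breaks the code below.
     @Test
     public void stackIsNotEmptyAfterPush() {
         Stack stack = new Stack();
         stack.push(42);
         assertFalse(stack.isEmpty());
     }
 }

 // Green: the simplest code that makes the bar green. Note that push()
 // ignores its argument - no test demands more than a count yet.
 class Stack {
     private int count = 0;
     public boolean isEmpty() { return count == 0; }
     public void push(int value) { count++; }
 }

Each red/green turn takes a minute or two, which is the sense in which you never sit down and spend a long time /JUST TESTING/.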


Hey, J.B.,

 > > How could we test something like that?  Has he ever done test-first long
 > > enough to get to that eureka point of "Oh, that's how it works"?
 >
 > Well, no... that's the point: he won't try it because he's already
 > explained to himself why it doesn't make a difference which he does
 > first; therefore, the way he does things /now/ (code first) is
 > good enough.

"Don't confuse me with facts; my mind is made up." My father used to say that.

So you've countered his arguments and none of your arguments have tempted him to *try* it long enough to *prove* you wrong?

That's the problem, isn't it?

Some developers see the power of TDD right away. Others resist, and then finally try it. Most (all?) of the resist/try programmers tend to be sold...

And we classify someone as a good programmer if they are willing to try TDD....

When someone *tries* TDD, really gives it a shot, they find they write better code faster (or almost as fast), spend less time in the debugger, and (probably/maybe) get hooked on the green bar reward loop.

Seems to me that the people we can't convince that TDD is good are the people who won't *try* it.

And when we respond to their reasons why it won't work, we tell them it will work.

And yet, what they are saying is that they won't *try* it. So we need to find ways to respond that get them to *try* it... not ways that just tell them they are wrong about whether it will work.

Dale talks about using resistance as a resource. And we're countering resistance to *try* by telling them why they are "wrong" in their idea of TDD, not using their resistance to try.

So I'm thinking about how to get reluctant programmers to *try* TDD....

That's the path I'm on with this....

--KayPentecost


Hi Patrick,

 > I think this is the crux of where the resistance comes from.
 > People who haven't done TDD equate tests with drudgery
 > because the testing they have done (assuming they have done
 > some) has been after the fact, in a /big bang/ fashion. And
 > writing the tests doesn't have any of the /fun/ factor of
 > writing the solution. What they are saying underneath is "test
 > code doesn't involve challenging problem-solving"

I was thinking the same thing. Of the three objections (takes too long, boring, and hard), I suspect that "boring" is the key one.

After all, coding takes a long time, too, and it's hard. Debugging takes a long time, and it's hard. But coding and debugging are fun! They're fun because they are about solving puzzles.

Testing /after/ the fact is boring because it isn't about solving a puzzle. Writing tests before code is fun for me, because it's about solving a puzzle: What other tests would I need to write in order to specify the /entire/ problem I need this piece of code to solve?

And once I've written the test, writing the code is still fun, because now I'm solving the puzzle of how to make the code satisfy the new test plus all the old ones.

If someone is raising the objection that TDD is boring, plus several other objections, I'd tackle "boring" first. I'll bet that if we could show that TDD is fun, more people would try it.

And once they try it, they'll be in a much better place to know how hard it is and how long it takes.

And I'll bet that if they found TDD to be fun, they'd judge "too hard" and "takes too long" differently. When I see assessments like these, I wonder, "too hard compared to what?" Right now, they're probably seeing TDD as purely additional work, not as something you can do instead of, say, debugging. Oh, they may see that it would save debugging time, but that means reducing their fun, so they aren't too keen to compare TDD against debugging.

But if they saw that TDD was as fun as debugging (or nearly so), then their fear of losing fun wouldn't cloud their comparisons. They'd be trading this fun thing for that fun thing. Then they'd be free to compare TDD against debugging (or other activities) along other dimensions, such as which is harder, or which consumes more time, or which creates more frustration or pressure, or which gives you, your boss, and your customer more confidence that the program really works as advertised, or ...

--DaleEmery


Mattias, thank you for bringing a memory back to mind :).

Mattias Vannergård wrote:

> "Hard to write test" - this is true both before and after applying TDD, but if you work in pairs, then it will be somewhat easier - if you work with someone who's better on writing tests than you are. After a while you will actually be good at writing tests, too...

I've been lurking on this group for about a year. In the beginning, when I was just trying out TDD, I heard a particularly convincing idea. I believe it was PhlIp who articulated it at the time: "A test that is hard to write is a design smell". The strange part is that I did not even question it - I just started thinking about testable design. Of course it would be easier to make a design testable with a pair.

HristoDeshev
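
That slogan rewards a concrete example. Here is a minimal sketch with invented Java names: a class that grabs its own collaborator (the system clock) makes the test hard to write, and the change that makes the test easy to write is also the better design.

 import org.junit.Test;
 import static org.junit.Assert.*;

 // Hard to test: if Greeter called Calendar.getInstance() itself, a test
 // of the greeting logic would depend on the time of day you ran it.
 // Testable design: the awkward input is passed in instead.
 class Greeter {
     private final int hourOfDay;
     Greeter(int hourOfDay) { this.hourOfDay = hourOfDay; }
     String greeting() {
         return hourOfDay < 12 ? "Good morning" : "Good afternoon";
     }
 }

 public class GreeterTest {
     @Test
     public void greetsMorningBeforeNoon() {
         assertEquals("Good morning", new Greeter(9).greeting());
     }

     @Test
     public void greetsAfternoonFromNoonOn() {
         assertEquals("Good afternoon", new Greeter(14).greeting());
     }
 }

The test that was hard to write was flagging the hidden dependency; making the test easy to write removed the smell.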


From: pier AT nbnet.nb.ca

LarryBrunelle? wrote:

> With all the traffic on this list, this may be redundant; sorry if so.
>
> Has anybody offered this objection: "I want to, but management is skeptical, and I don't want to fight a battle. I just want to keep my job."? (A believer, perhaps, but not an evangelist.)

I have a possible solution path for this one. I have found that managers are more likely to "get" TDD than entry-level developers. This is especially true if the managers have a lot of hands-on programming experience. The above objection would go away if his manager asked him to use TDD.

From my experience as a manager, I haven't found that to be the case. The programmer who will automatically follow a manager's directive, especially when the manager is not looking over his shoulder, is extremely rare. One needs to persuade people to make changes, not mandate them. The GingerFactor means he should OnlySayThingsThatCanBeHeard and, if he needs to say something else, he should FirstCreateTheMailbox.


See (or add to) WikiObjectionsToWorkingTestFirst

See HowYouWentExtreme

