Tests Too Slow

When TTS is Their Fault

I have worked in LAMP for >8 years now. Every year the tests are slow; every year the community promises to do something about it, and the next year the tests are slower.

The community responds with CargoCult TDD: instead of pressing OneTestButton?, you Alt+Tab to a view showing a test runner and click on the tests you want to run.

And the tests are usually slow because of some ambitious crud in the test rig - NOT any systemic problem.


When TTS is Our Fault

AutomatedTesting means that we should have an extensive suite of UnitTests to ensure our project is working as expected. However, RapidFeedback implies that the tests must run quickly. My tests are slow. What are some techniques for balancing these two goals?

Currently I have a fairly small suite of UnitTests, yet it easily takes 10-20 minutes to run through them all (MockObjects won't help). As a result, while I develop, I can only really run the subsets of tests that I think may be affected, to get RapidFeedback. Before I commit to VersionControl, I try to execute the entire test suite, but I often get lazy (especially for seemingly trivial changes).

This feels way too hacky and cumbersome. One solution may be ContinuousIntegration of the tests on a separate BuildSystem. Any other ideas? -- JevonWright?

10-20 minutes for unit tests is a VERY long time. I'd expect 10-20 seconds at most, even when there are thousands of unit tests. Why do they take so long? Why won't MockObjects help?
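
(An aside, since MockObjects keep coming up: here is a minimal sketch, in Python's unittest, of what they normally buy you. The names ModelCompiler and runtime_result are invented for illustration, not taken from Jevon's system. The slow collaborator is replaced by a stand-in, so the test exercises only the unit's own logic and runs in milliseconds.)

 import unittest
 from unittest import mock

 class ModelCompiler:
     """Hypothetical stand-in for the slow collaborator."""
     def compile(self, model_instance):
         raise NotImplementedError("expensive in the real system")

 def runtime_result(compiler, model_instance):
     """Code under test: feeds a model through the compiler."""
     return compiler.compile(model_instance)

 class RuntimeResultTest(unittest.TestCase):
     def test_returns_compiler_output(self):
         # The MockObject replaces the expensive compile step, so this
         # test checks only the unit's own logic.
         compiler = mock.Mock(spec=ModelCompiler)
         compiler.compile.return_value = "compiled-runtime"
         self.assertEqual(runtime_result(compiler, {"name": "m"}),
                          "compiled-runtime")

 if __name__ == "__main__":
     unittest.main()

(As the answer below explains, this does NOT help when the expensive work is itself the thing under test.)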

Why are they slow? What is the processor spending most of its time on?

Answer: The application follows ModelDrivenEngineering?. Each unit test checks a particular model instance for an expected runtime result. Most of the time is spent compiling and generating the runtime code from the initial model instance (although the time spent testing the runtime model is not insignificant). Essentially, I have an ExpensiveSetUpSmell.

After reviewing the above, I suppose these particular tests are actually IntegrationTests and not UnitTests, but that doesn't change the fact that they are still too slow. I suppose MockObjects could help if they were used to cache compilation results (though even this isn't ideal; tests should be SideEffect-free).

OptimizingUnitTests also points out that 10-20 minutes is far too long. Is one answer to explode these IntegrationTests into hundreds of smaller UnitTests? Has anyone got experience with this? -- JevonWright?

No, but it sounds like a caching problem. The time spent "compiling and generating the runtime code" should only be spent when your "initial model instance" changes. That appears to be the bottleneck - and never forget that you are really doing TestDrivenDevelopment where your source is these "initial model" instances...
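
(A minimal sketch of that caching suggestion, in Python. The names compile_model and compiled_runtime_for are hypothetical, not from Jevon's environment: key a cache on a digest of the model instance, so the expensive compile-and-generate step reruns only when the model actually changes.)

 import hashlib
 import json

 def compile_model(model_instance):
     """Stand-in for the expensive compile-and-generate step."""
     return {"compiled_from": model_instance}

 _compile_cache = {}

 def compiled_runtime_for(model_instance):
     # A stable digest of the model is the cache key, so recompilation
     # happens only when the initial model instance changes.
     key = hashlib.sha256(
         json.dumps(model_instance, sort_keys=True).encode()).hexdigest()
     if key not in _compile_cache:
         _compile_cache[key] = compile_model(model_instance)
     return _compile_cache[key]

(Persisting the cache to disk would carry the savings across test runs, at the cost of the SideEffect concerns raised above.)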

{Maybe consider going dynamic.}


After applying engineering fixes to slow tests, use IncrementalTesting to spread their cost out and still integrate quickly. As a last resort (or maybe a first resort ;), apply the ultimate hack, the InVivoTestPattern.

And beware the BigAgileUpFront!


Thanks for all the suggestions. After working further with my environment, I took the plunge and started caching the results of my setUp code, improving speed but, most importantly, allowing me to break up LargeUnitTest?s into many SmallUnitTest?s. Each test class has its own cache. I moved common GeneratedCode? into a separate RuntimeLibrary?, and I am splitting up tests to focus on particular areas of the system rather than on complete IntegrationTests. MockObjects will probably help as well, but I haven't yet worked out how to integrate them into the environment. -- JevonWright?
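
(For completeness, here is roughly how the "cache per test class" idea looks in an xUnit framework - a sketch in Python's unittest, with the model and compile step invented for illustration. setUpClass runs once per class, so every test method shares the single compiled runtime.)

 import unittest

 def compile_model(model_instance):
     """Stand-in for the expensive compile-and-generate step."""
     return {"compiled_from": model_instance}

 class ModelRuntimeTest(unittest.TestCase):
     # setUpClass runs once per test class rather than once per test,
     # so all the test methods below share one compiled runtime - the
     # per-class cache described above.
     @classmethod
     def setUpClass(cls):
         cls.runtime = compile_model({"name": "example-model"})

     def test_runtime_records_its_source_model(self):
         self.assertEqual(self.runtime["compiled_from"],
                          {"name": "example-model"})

 if __name__ == "__main__":
     unittest.main()

(Splitting a LargeUnitTest? into many SmallUnitTest?s within one such class keeps the fine-grained assertions without re-paying the compile cost per test.)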

CategoryTesting


MayZeroNine

