Integration Test

Following UnitTest of a subsystem, IntegrationTest evaluates that subsystem as integrated into a larger subsystem or into the complete system, testing its interdependencies with other subsystems, including other project libraries, third-party libraries, tools, etc.

Note the "following" verbiage implies you pass the unit tests before running the FunctionalTests, then the IntegrationTests. This implies a test failure, with well-defined units, should only implicate the interface between the units.

Alternatively: smoke test the whole thing to see if it screws up now that some subsystem has changed. Just because it passed UnitTest doesn't prove it really works; maybe it steps on the rest of the system, maybe it does what it thinks it should do but not what the rest of the system thinks it should do, etc.


Are IntegrationTests usually automated? If so, how?

As a general philosophy, all testing should be automated whenever possible, to decrease the barrier to testing, to speed up testing, and to decrease the resource cost of testing.

"How" is specific to the (sub-)system being tested. For instance

A large variety of commercial and open source tools exist to assist with automated testing and associated activities: automated entry of regression test failures into a bug tracking system, saving test results into a test database, generating detailed and summary test reports, emailing notification of results (or just of failures) to project owners and developers, etc.
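
As an illustration of the kind of glue code such tools are built from, here is a minimal sketch (not taken from any particular tool; SomeIntegrationTest is a made-up stand-in for a real test class) of a JUnit 4 RunListener that collects failures during a run so they can afterwards be mailed, logged, or fed into a tracker:

 // A sketch only: collect failures during a JUnit 4 run for later reporting.
 import org.junit.runner.JUnitCore;
 import org.junit.runner.Result;
 import org.junit.runner.notification.Failure;
 import org.junit.runner.notification.RunListener;

 import java.util.ArrayList;
 import java.util.List;

 public class FailureReportListener extends RunListener {
     private final List<Failure> failures = new ArrayList<Failure>();

     @Override
     public void testFailure(Failure failure) {
         failures.add(failure);                 // remember each failure as it happens
     }

     @Override
     public void testRunFinished(Result result) {
         // Here you could mail the summary or post it to a tracker; this sketch just prints it.
         System.out.println(result.getRunCount() + " tests, " + failures.size() + " failures");
         for (Failure f : failures) {
             System.out.println("FAILED: " + f.getTestHeader());
         }
     }

     public static void main(String[] args) {
         JUnitCore core = new JUnitCore();
         core.addListener(new FailureReportListener());
         core.run(SomeIntegrationTest.class);   // SomeIntegrationTest is a stand-in test class
     }
 }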

-- DougMerritt


DougMerritt just said that test automation should always be done, wherever possible, as a general principle - then listed a bunch of times when it doesn't make sense.

What's that all about?

Here's my rejoinder: do Automated Testing when it offers a return on investment. That is to say, when the automation is cheap to write and offers a high probability of finding bugs you care about. Amazingly enough, this isn't quite as often as some programming gurus would have you believe.

Second note: just like development, testing can be an empirical process - a process where the action you take is determined by the result of the previous one. In that world, it is extremely hard to model and predict the tester's behavior up front. In that case, it might be better to automate other parts of the testing process (for example, setup or bug reporting) instead of automating the test *execution*.

--MattHeusser

---

Contrarily...

-- PhlIp


One of the biggest challenges in executing, and especially in automating, an IntegrationTest is usually how to set the system under test into a desired state before executing the test, and how to assert the expected outcome of the test. The problem is that the state of an integrated system is a lot more complex than what is required for a unit test. You have to forget about stubs and mock objects, and deal with "the real thing", or at least simulate it to the best possible extent.

So, how do you do that?

The description of alternatives below follows my own learning path through this topic:

You can start from a trivial state (e.g., an empty database) and bring it to the target state by means of the system's own APIs (automatically pushing GUI buttons or invoking EJB methods to create and modify all necessary domain objects). This often turns out to be impractical, because it is slow, and because the system under test tends to break on unrelated problems before you even get it to the target state.
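
For example, a minimal sketch of that approach might look like this (CustomerService and OrderService are invented stand-ins for the system's own API facades); the test builds everything it needs from an empty database through application calls alone:

 // A sketch only: reach the target state purely through the system's own API.
 import org.junit.Before;
 import org.junit.Test;
 import static org.junit.Assert.assertEquals;

 public class OrderDiscountIntegrationTest {
     private CustomerService customers;   // hypothetical application facade
     private OrderService orders;         // hypothetical application facade
     private long customerId;

     @Before
     public void buildStateThroughTheApi() {
         customers = new CustomerService();   // assumes an empty database underneath
         orders = new OrderService();
         // Create every domain object the test needs via the system's own API calls.
         customerId = customers.createCustomer("ACME Corp");
         orders.placeOrder(customerId, "WIDGET", 100);
         orders.placeOrder(customerId, "WIDGET", 200);
     }

     @Test
     public void repeatCustomerGetsVolumeDiscount() {
         assertEquals(0.05, orders.currentDiscountFor(customerId), 0.0001);
     }
 }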

To alleviate this problem, you can start from a non-trivial prototype state [a database dump or a script with some predefined entries] and move towards the target state from there. This adds complexity to your process [you need to manage the prototype state], and the prototype itself tends to change too often. And even when you have the prototype state, getting the system from it to the target state through the system's own API can have the same problems as described above.
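
A minimal sketch of loading such a prototype state before a test run might look like this (the JDBC URL and the prototype-state.sql script are placeholders; the naive semicolon split is good enough for simple dumps):

 // A sketch only: seed a prototype state from a SQL script before the tests run.
 import org.junit.BeforeClass;

 import java.nio.file.Files;
 import java.nio.file.Paths;
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.Statement;

 public abstract class PrototypeStateFixture {

     @BeforeClass
     public static void loadPrototypeState() throws Exception {
         String script = new String(
                 Files.readAllBytes(Paths.get("prototype-state.sql")), "UTF-8");
         try (Connection con = DriverManager.getConnection("jdbc:h2:mem:itest", "sa", "");
              Statement stmt = con.createStatement()) {
             for (String sql : script.split(";")) {   // naive split; fine for simple dumps
                 if (!sql.trim().isEmpty()) {
                     stmt.execute(sql);
                 }
             }
         }
     }
 }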

Then, you can write your own data access layers for use in the testing code. It's not as horrible as it sounds, as it will be much simpler than the persistence layer of the system under test.
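
For instance, a test-only data access helper might look like the following sketch (the table and column names are invented); it bypasses the system's persistence layer entirely and is used purely to set up and inspect state from the tests:

 // A sketch only: a test-only data access helper, much simpler than the SUT's
 // real persistence layer.
 import java.sql.Connection;
 import java.sql.PreparedStatement;
 import java.sql.ResultSet;
 import java.sql.SQLException;
 import java.sql.Statement;

 public class TestDataAccess {
     private final Connection connection;

     public TestDataAccess(Connection connection) {
         this.connection = connection;
     }

     public long insertCustomer(String name) throws SQLException {
         try (PreparedStatement ps = connection.prepareStatement(
                 "INSERT INTO customer (name) VALUES (?)", Statement.RETURN_GENERATED_KEYS)) {
             ps.setString(1, name);
             ps.executeUpdate();
             try (ResultSet keys = ps.getGeneratedKeys()) {
                 keys.next();
                 return keys.getLong(1);
             }
         }
     }

     public int countOrdersFor(long customerId) throws SQLException {
         try (PreparedStatement ps = connection.prepareStatement(
                 "SELECT COUNT(*) FROM customer_order WHERE customer_id = ?")) {
             ps.setLong(1, customerId);
             try (ResultSet rs = ps.executeQuery()) {
                 rs.next();
                 return rs.getInt(1);
             }
         }
     }
 }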

So far, we have ended up with the following recipe (which happens to be the first usable one on that learning path): (1) Make some requirements about the initial state, without specifying it explicitly. Try to keep the requirements as simple as possible. (2) Before running the test, FIND an entity that meets these requirements, and use it in the test. If no such entity is found, abort the test (see the sketch below). (3) In the setup and assert stages, use the system's own APIs as much as you can. But if that doesn't feel natural, or if the system breaks, roll your own data access layers without any hesitation.
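
Here is a minimal sketch of step (2), assuming the placeholder database from above and an invented OrderService facade; JUnit's Assume mechanism does the aborting, so a missing entity skips the test rather than failing it:

 // A sketch only: FIND an entity that meets the requirements, skip the test if none exists.
 import org.junit.Before;
 import org.junit.Test;
 import static org.junit.Assert.assertTrue;
 import static org.junit.Assume.assumeNotNull;

 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.PreparedStatement;
 import java.sql.ResultSet;

 public class DiscountForExistingCustomerTest {
     private Long customerId;   // requirement: any customer with at least two orders

     @Before
     public void findSuitableCustomer() throws Exception {
         try (Connection con = DriverManager.getConnection("jdbc:h2:mem:itest", "sa", "");
              PreparedStatement ps = con.prepareStatement(
                      "SELECT customer_id FROM customer_order"
                      + " GROUP BY customer_id HAVING COUNT(*) >= 2");
              ResultSet rs = ps.executeQuery()) {
             customerId = rs.next() ? Long.valueOf(rs.getLong(1)) : null;
         }
         assumeNotNull(customerId);   // no suitable entity -> the test is skipped, not failed
     }

     @Test
     public void suchCustomersGetSomeDiscount() {
         double discount = new OrderService().currentDiscountFor(customerId);   // hypothetical API
         assertTrue(discount > 0.0);
     }
 }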

As a sidenote, "automated entry of regression test failures into a bug tracking system" sounds like an abuse of the bug tracking system in practically any context I can think of - certainly for both desktop business apps and enterprise apps, which is what I am mostly familiar with.

-- AlexeyVerkhovsky

Why?

It can work if the test suite code is the SUT specification. That can theoretically be the case with TestDrivenDevelopment, but in the same theoretical case you don't need to track defects, because you fix them right away.

In our day-to-day practice, however, specifications are... um... informal, and integration testing is done by testing specialists, who have to interpret these specifications.

In that case a test failure is not a defect; it's a symptom. It takes human judgement to filter defects from symptoms, and defect tracking systems are notoriously prone to signal-to-noise problems, so you don't want to put unfiltered symptoms into them. It wastes time and annoys people.

Ah. I've certainly experienced that annoyance. But the thing to do is to make sure that the defect tracker has the right categories, so the potential noise is separated out yet still tracked for the convenience of whoever is responsible, and similarly to have the automatic email go only to that person. Otherwise, yes, it's nothing but an irritant at best.

Emailing the test run log to the person in charge of interpreting it achieves the same result without the overhead of integrating the test framework and the defect tracker. So, why bother?

You want both, because email is transient whereas a bug tracker holds persistent information, and you want permanent records that are available for, for example, data mining. Just because it is only one person's responsibility doesn't mean that other people should be denied access to the data, and accessibility should be automated so as to avoid putting an extra load on the person who is responsible for it day to day.

Adding a timestamp to the log file name and keeping the log in a publicly accessible location achieves that. Scenario: because of a wrong phase of the moon, all tests in our test run fail. Expected result: the person in charge sighs and ignores them. The phase of the moon is corrected and the test suite is rerun. Actual result: the person in charge spends two hours painstakingly cancelling 1000 new defect records.

Sure enough, every such scenario can be automated or otherwise dealt with. Still, such automation will probably be costly and bug-prone itself, and the benefits of having it seem negligible. Maybe I am simply prejudiced. Do you know any RealLife(tm) example that would disprove my prejudice?

Are you prejudiced because you have had bad experiences with precisely this, or because it just sounds bad? Like I said, I've been in environments where such things were done badly. I've also been in environments where it was done well. And, of course, places where it wasn't done. My experience is that it can be done well and that it needn't be costly to set up. Once it (like any automation) is running well, of course it shouldn't be costly thereafter.

As to being bug-prone, I'm uneasy about that myself, because of the components. I've never ever seen a truly excellent bug tracking system, nor a truly excellent email system. Most bug trackers suck, in fact, and are highly bug-prone. Most email systems are barely adequate and have various design-level defects. So, OK, I'm on thin ice here talking about building something larger on top of them.

But, you know, this isn't truly rocket science, and so it has been done well at times by smart people.

I don't have case studies or URLs to point to; this isn't my field. I'm JustAnotherDeveloper who believes in QA and automation.

Bottom line: all this stuff should be automated when it can be done well. Since I have to admit that often the base environment itself is not done well, I guess I can't flatly say that it should always be done. But that makes me sad.

Amen


More testing is not better testing.

This page seems to be reverting to the waterfall approach, where there are multiple levels of test without clear differentiation of responsibility. Adding more and more levels of test returns poorer results, because none of the levels is responsible for verifying the correctness of the software. Instead, keep the testing hierarchy flat.

Do ProgrammerTests (née XP-style UnitTests), do them well, and do not depend on another level of test to catch any problems.


Speaking as a maintenance programmer (at present):

Docs are garbage if you want to know what the system actually does. The only way to find out is to look at the code. Just as application code is the final, definitive statement of what a chunk of software does, unit and integration tests are the final definitive statements of what a chunk of software is supposed to do (at least, when I'm coding on my PC or terminal).

A crucial output of detailed design, then, along with the major interfaces and what they do, is a set of specifications for integration tests that exercise those interfaces, because I'm doing maintenance on this system without a deep knowledge of the business. That means that, as far as I am concerned, if JUnit runs OK, my code is right as far as I can know.
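
For instance, such an integration test can serve as an executable specification of an interface. A minimal sketch (InventoryService and OrderService are invented names; the real interfaces would come from the detailed design):

 // A sketch only: an integration test documenting the contract between two
 // subsystems - placing an order must reserve stock in the inventory.
 import org.junit.Test;
 import static org.junit.Assert.assertEquals;

 public class OrderInventoryContractTest {

     @Test
     public void placingAnOrderReservesStock() {
         InventoryService inventory = new InventoryService();   // the real component, not a mock
         OrderService orders = new OrderService(inventory);

         inventory.addStock("WIDGET", 10);
         orders.placeOrder(1L, "WIDGET", 3);

         // The contract this test documents: ordered quantities are reserved.
         assertEquals(7, inventory.availableStock("WIDGET"));
     }
 }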


See UnitTest, AcceptanceTest, SystemTest, FunctionalTest, BlackBoxTesting, RegressionTest

CategoryTesting

