RegressionTesting is testing that a program has not regressed: that is, in a commonly used sense, that the functionality that was working yesterday is still working today.
(Historical note:)
This common use of the term "regressed" has drifted from the original strict meaning that the program behavior has backslid to some earlier, "known bad" state. In the dim mists of test automation antiquity, each regression test case would have been created in response to a known (and putatively fixed) bug.
So "Did the product regress?" was a terse way of asking "Did we re-inject any of those old 'fixed' problems somehow? Did a masking effect go away, revealing old buggy behavior for a new reason?"
It is meaningful to distinguish such test cases from ConformanceTest? cases, which would be created from looking at the spec or other expectations about the product, rather than in response to found bugs.
But in common parlance and practice, the routine testing of both gets called Regression Testing. Worse, lazy or busy people come to call the act of Regression Testing "Regression." This can get very confusing. Do use the whole phrase.
(End of Historical Note.)
In ExtremeProgramming, the automated RegressionTesting concept is used to implement UnitTests and AcceptanceTests.
RegressionTesting seems overloaded with a second, more technical meaning: that we can simply compare the output of the program today and yesterday. ComparisonTesting? might be a better name. If something has changed then it needs attention: the programmer should check whether the change is a bug, a feature, or just an accidental (unintentional, value-neutral) change. DesignForTestability here means we should minimize the number of accidental changes in behaviour, and also that we should make it easy to produce exactly the same output: for example, we should be able to force datestamps in the output to a certain value (so the TestOverridesNow). This type of testing seems good for situations where it's hard to automatically check that the output is correct, but where the output can be reproduced each time. We only have to pay for a human to check the output each time it changes. DejaGnu from Cygnus does something like this.
-- MartinPool
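A minimal sketch of that comparison (golden-master) approach in Python with PyUnit. The make_report function and the golden/report.txt file are hypothetical stand-ins for the program under test and its recorded "yesterday" output; the point is that "now" is passed in rather than read from the system clock, so the test can force it to a fixed value (TestOverridesNow):

 import unittest
 from datetime import datetime
 from pathlib import Path

 def make_report(items, now):
     # Hypothetical program under test: "now" is injected rather than
     # taken from the system clock, so the output is reproducible.
     lines = ["Report generated %s" % now.isoformat()]
     lines += ["item: %s" % item for item in items]
     return "\n".join(lines)

 class ComparisonTest(unittest.TestCase):
     def test_output_matches_golden_file(self):
         fixed_now = datetime(2004, 1, 1, 12, 0, 0)   # TestOverridesNow
         actual = make_report(["alpha", "beta"], now=fixed_now)
         golden = Path("golden/report.txt").read_text()
         # Any difference is either a bug or an intentional change;
         # a human decides which, and updates the golden file if needed.
         self.assertEqual(golden, actual)

 if __name__ == "__main__":
     unittest.main()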
Hiring a person to check the output is GuruChecksOutput, which I believe is something we should try to get away from. One of the aims of TacticalTesting is to make it easy to automatically check whether the output is correct. -- JohnFarrell
To address the issue of the current date and time being different each time you run the program, (technical) regression testing tools often let you "mask" that part of the output, so that changes to it are ignored during the comparison.
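A sketch of such masking in Python, assuming timestamps in the output look like "2004-05-01 09:30:00"; anything matching the pattern is replaced with a constant token before the two outputs are compared:

 import re

 TIMESTAMP = re.compile(r"\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}")

 def mask(text):
     # Replace volatile date/time fields with a constant token so they
     # do not show up as differences.
     return TIMESTAMP.sub("<TIMESTAMP>", text)

 def outputs_match(old_output, new_output):
     return mask(old_output) == mask(new_output)

 yesterday = "Run at 2004-05-01 09:30:00: 42 records"
 today     = "Run at 2004-05-02 10:15:27: 42 records"
 assert outputs_match(yesterday, today)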
Yes, it's common for new features to make regression tests "fail." But, hopefully one adds new features more often than changing the output of existing features. (One assumes that, for the most part, the current output of a system is correct and desirable.) -- JeffGrigg
Is the testing part of the ExtremeProgramming methodology included solely for RegressionTesting purposes, or are there additional reasons? --
Within ExtremeProgramming tests are used to detect both regression and progress. The goal is to write the tests (UnitTests and AcceptanceTests) first. See also CodeUnitTestFirst.
Automated tests support RefactorMercilessly, CollectiveCodeOwnership, and SingleReleasePoint.
When constructed ExtremeProgramming style, automated tests form a very accurate and interactive form of system documentation as well. -- DonWells
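A rudimentary sketch of that flow with PyUnit. The Stack class and its methods are hypothetical; the test is written first (CodeUnitTestFirst), fails until Stack is implemented, and then stays in the suite as an automated regression test and as executable documentation of how Stack is meant to behave:

 import unittest

 class StackTest(unittest.TestCase):
     # Written before Stack exists: it detects progress (it starts to
     # pass) and later detects regression (if it ever fails again).
     def test_push_then_pop_returns_last_item(self):
         s = Stack()
         s.push(1)
         s.push(2)
         self.assertEqual(2, s.pop())
         self.assertEqual(1, s.pop())

 class Stack:
     def __init__(self):
         self.items = []
     def push(self, item):
         self.items.append(item)
     def pop(self):
         return self.items.pop()

 if __name__ == "__main__":
     unittest.main()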
Have you checked out PAMIE - Python Automation Module? It's an open-source web testing tool. When combined with PyUnit it's a pretty complete web testing framework. Easy to use. PAMIE can be data-driven from Excel, like WinRunner. Allows automation via the DOM. Check out the reviews - http://pamie.sourceforge.net/news.html -- Rob M.