A PerformanceTest is a test that determines whether a particular function or set of functions in a program completes in an acceptable amount of time. In the simplest case, JavaUnit can serve as a tool for this, since it reports the time it takes to run each test in its output. In more realistic cases (especially when writing server-side code, as for J2EE), you will find that you need some sort of LoadDriver such as Segue SilkPerformer or Mercury Interactive LoadRunner (http://www.mercury.com/us/products/performance-center/loadrunner/). These tools simulate a number of clients running through the system simultaneously. You can use them to determine the average response time for any number of clients; for instance, start with 1 client, then 5, then 10, then 20, and so on. You should see a smooth curve whose "knee" doesn't come too quickly. If you see a very steep increase instead, you might suspect an underlying synchronization problem.
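For a quick, exploratory version of the same ramp-up experiment, before reaching for a commercial LoadDriver, a few lines of Java will do. This is a minimal sketch, not a real load tool: makeRequest() is a hypothetical stand-in for a call to the system under test, and the client and request counts are arbitrary.

  import java.util.concurrent.CountDownLatch;
  import java.util.concurrent.ExecutorService;
  import java.util.concurrent.Executors;
  import java.util.concurrent.atomic.AtomicLong;

  public class LoadRamp {

      // Stand-in for one request to the system under test; replace with a real call.
      private static void makeRequest() throws InterruptedException {
          Thread.sleep(10); // simulated service time
      }

      // Runs 'clients' threads, each issuing 'requestsPerClient' requests,
      // and returns the average response time in milliseconds.
      static double averageResponseMillis(int clients, int requestsPerClient)
              throws InterruptedException {
          ExecutorService pool = Executors.newFixedThreadPool(clients);
          AtomicLong totalNanos = new AtomicLong();
          CountDownLatch done = new CountDownLatch(clients);
          for (int c = 0; c < clients; c++) {
              pool.execute(() -> {
                  try {
                      for (int r = 0; r < requestsPerClient; r++) {
                          long start = System.nanoTime();
                          makeRequest();
                          totalNanos.addAndGet(System.nanoTime() - start);
                      }
                  } catch (InterruptedException e) {
                      Thread.currentThread().interrupt();
                  } finally {
                      done.countDown();
                  }
              });
          }
          done.await();
          pool.shutdown();
          return totalNanos.get() / 1_000_000.0 / (clients * requestsPerClient);
      }

      public static void main(String[] args) throws InterruptedException {
          // Ramp up the client count and watch for a "knee" in the averages.
          for (int clients : new int[] {1, 5, 10, 20, 50}) {
              System.out.printf("%2d clients: %.2f ms avg%n",
                      clients, averageResponseMillis(clients, 100));
          }
      }
  }

Plot the printed averages against the client counts to get the curve described above; a knee that arrives at a suspiciously low client count is the cue to go looking for contention.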
See also: OptimizationUnitTest
The idea is related, but it is NOT the same thing. A PerformanceTest centers on finding out whether the system can do Y in X amount of time (perhaps under Z load). UnitTests are critical for verifying that changes made to enhance performance don't break the system.
The question I have run into is: how do you specify performance?
So, before we decide how to measure whether a unit/system is meeting a performance spec, how do we specify performance?
Can't see the problem, personally. It's not ideal (performance isn't entirely determined by the code you write), so you have to think a bit, but it's not as if you have to think hard and invent something new. -- AlexisIglauer
Some systems have "hard" performance specifications (usually driven by external hardware and interfaces), while most systems have only "soft" specifications. The best way to handle soft performance specifications is through iteration. You can use statistical sampling of a tightly specified test to show whether performance has improved (i.e., run 20 samples before the improvement and 20 samples after, plot the results on an SPC control chart, and review them). Release it to the user. If the user still complains, repeat the exercise (being sure to change the baseline test at that point). --WayneMack
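The sampling step is easy to automate. Here is a rough sketch, assuming operationUnderTest() is a hypothetical stand-in for the timed scenario; note that a proper SPC individuals chart derives its control limits from the moving range, but mean plus or minus three sample standard deviations is close enough to spot a shift between the before and after batches.

  import java.util.Arrays;

  public class TimingSamples {

      // Stand-in workload; replace with the scenario being measured.
      static void operationUnderTest() {
          long sink = 0;
          for (int i = 0; i < 1_000_000; i++) sink += i;
          if (sink == 42) System.out.println(); // keep the loop from being optimized away
      }

      public static void main(String[] args) {
          int n = 20; // one batch of samples, before or after the change
          double[] millis = new double[n];
          for (int i = 0; i < n; i++) {
              long start = System.nanoTime();
              operationUnderTest();
              millis[i] = (System.nanoTime() - start) / 1_000_000.0;
          }
          double mean = Arrays.stream(millis).average().orElse(0);
          double variance = Arrays.stream(millis)
                  .map(m -> (m - mean) * (m - mean)).average().orElse(0);
          double sigma = Math.sqrt(variance);
          System.out.printf("mean=%.2f ms, UCL=%.2f ms, LCL=%.2f ms%n",
                  mean, mean + 3 * sigma, Math.max(0, mean - 3 * sigma));
      }
  }

Run one batch against the baseline and one against the improved build; a mean that falls outside the baseline's control limits is evidence the change actually moved the needle.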
Mercury Performance Center provides the first lifecycle approach to optimizing application performance. It's the only solution that scales from project-based performance testing to a 24x7 globally accessible solution for Performance Centers of Excellence. It enables you to manage the risk of deploying mission-critical applications by telling you exactly what to expect once your application is in production. You can pinpoint bottlenecks ahead of time to help increase application performance. It also enables you to improve your infrastructure performance and ensure standardized best practices across your organization.
For more information on Mercury's products, see http://www.mercury.com/us/ and http://www.mercury.com/us/products/performance-center/
Also see JMeter (http://jakarta.apache.org/jmeter/index.html) for an open-source solution.