[1] It can also serve to investigate, measure, validate or verify other quality attributes of the system, such as scalability, reliability and resource usage.
Also important, but often overlooked, is performance degradation, i.e., ensuring that throughput and/or response times after some long period of sustained activity are as good as or better than at the beginning of the test.
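As an illustration, a minimal sketch of such a degradation check in Python, assuming a flat list of response-time samples and a 10% tolerance, neither of which is prescribed by any particular tool or standard:

```python
# A minimal degradation check: compare the first and last windows of a long
# soak test. The sample data and 10% tolerance are illustrative assumptions.
from statistics import mean

def degradation_check(response_times_ms, window=100, tolerance=0.10):
    """True if the final window is no more than `tolerance` slower than the first."""
    early = mean(response_times_ms[:window])
    late = mean(response_times_ms[-window:])
    return late <= early * (1 + tolerance)

# Hypothetical samples from a long soak test, in milliseconds.
samples = [120.0] * 5000 + [125.0] * 5000
print(degradation_check(samples))  # True: a ~4% slowdown is within the assumed 10% tolerance
```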
Spike testing is done by suddenly increasing or decreasing the load generated by a very large number of users, and observing the behavior of the system.
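A simple way to picture this is as a load profile that jumps abruptly between a baseline and a peak; the sketch below uses assumed timings and user counts purely for illustration:

```python
# Illustrative spike profile: a steady baseline, a sudden jump to a much higher
# user count, then a sudden drop back. All timings and counts are assumptions.
def spike_profile(t_seconds, baseline=50, spike=1000, spike_start=300, spike_end=360):
    """Number of concurrent virtual users at `t_seconds` into the test."""
    return spike if spike_start <= t_seconds < spike_end else baseline

print([spike_profile(t) for t in (0, 299, 300, 359, 360)])  # [50, 50, 1000, 1000, 50]
```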
The workflow of a scripted transaction may impact true concurrency, especially if the iterative part contains the log-in and log-out activity.
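The difference can be sketched as two script shapes; `login`, `logout`, and `do_transaction` below are hypothetical placeholders for the recorded steps:

```python
# Two script shapes for the same transaction. `login`, `logout` and
# `do_transaction` are hypothetical placeholders for the recorded steps.

def per_iteration_session(iterations, login, do_transaction, logout):
    # Log-in/log-out inside the loop: each virtual user spends much of its time
    # in session setup, so fewer users are concurrently inside the transaction.
    for _ in range(iterations):
        session = login()
        do_transaction(session)
        logout(session)

def single_session(iterations, login, do_transaction, logout):
    # Log in once, keeping the business transaction under sustained concurrent load.
    session = login()
    for _ in range(iterations):
        do_transaction(session)
    logout(session)
```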
If the system has no concept of end-users, then the performance goal is likely to be based on a maximum throughput or transaction rate.
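For example, such a goal can be checked directly against a test run; the 500 transactions-per-second target below is an assumed figure:

```python
# Expressing the goal as a transaction rate and checking a run against it.
# The 500 transactions-per-second target is an assumed figure.
def meets_throughput_goal(completed_transactions, duration_seconds, target_tps=500):
    achieved_tps = completed_transactions / duration_seconds
    return achieved_tps >= target_tps

print(meets_throughput_goal(160_000, 300))  # ~533 tx/s against a 500 tx/s target -> True
```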
It is sometimes difficult to identify which part of the system represents this critical path. Some test tools include (or offer add-ons that provide) instrumentation that runs on the server (agents) and reports transaction times, database access times, network overhead, and other server-side metrics, which can be analyzed together with the raw performance statistics.
If there is also a statement of the maximum allowable 95th-percentile response time, then an injector configuration can be used to test whether the proposed system meets that specification.
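A minimal sketch of such a check, using the nearest-rank percentile method and an assumed 800 ms limit:

```python
# Nearest-rank 95th percentile checked against an assumed 800 ms specification.
import math

def p95(values):
    ordered = sorted(values)
    return ordered[max(0, math.ceil(0.95 * len(ordered)) - 1)]

def meets_p95_spec(response_times_ms, limit_ms=800.0):
    return p95(response_times_ms) <= limit_ms

times = [150, 180, 200, 210, 220, 240, 260, 300, 750, 1200]  # milliseconds
print(p95(times), meets_p95_spec(times))  # 1200 False: one slow transaction breaches the limit
```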
Performance specifications should, at a minimum, spell out what is in scope and which criteria the system must meet. Testing also requires a stable build of the system that resembles the production environment as closely as possible.
To truly replicate production-like states, enterprise services or assets that share a common infrastructure or platform require coordinated performance testing, with all consumers generating production-like transaction volumes and load on the shared infrastructure or platform.
Because this activity is so complex and costly in money and time, some organizations now use tools to monitor and simulate production-like conditions (also referred to as "noise") in their performance testing environments (PTE) to understand capacity and resource requirements and to verify/validate quality attributes.
Each of the tools mentioned in the above list (which is by no means exhaustive) either employs a scripting language (C, Java, JavaScript) or some form of visual representation (drag and drop) to create and simulate end-user workflows.
To determine the exact root cause of the issue, software engineers use tools such as profilers to measure which parts of a device or piece of software contribute most to the poor performance, or to establish throughput levels (and thresholds) at which acceptable response times are maintained.
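As a sketch of the profiling step, Python's built-in cProfile can be used; `slow_query` and `render_page` are hypothetical stand-ins for the code under investigation:

```python
# Using Python's built-in cProfile to see which parts of a workload dominate
# elapsed time. `slow_query` and `render_page` are hypothetical stand-ins.
import cProfile
import pstats
import time

def slow_query():
    time.sleep(0.05)   # stands in for an inefficient database call

def render_page():
    time.sleep(0.005)  # stands in for cheap presentation work

def handle_request():
    slow_query()
    render_page()

profiler = cProfile.Profile()
profiler.enable()
for _ in range(20):
    handle_request()
profiler.disable()

# Sorting by cumulative time surfaces the dominant contributors.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)
```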
Performance testing technology employs one or more PCs or Unix servers to act as injectors, each emulating the presence of a number of users and each running an automated sequence of interactions (recorded as a script, or as a series of scripts to emulate different types of user interaction) with the host whose performance is being tested.
Usually, a separate PC acts as a test conductor, coordinating and gathering metrics from each of the injectors and collating performance data for reporting purposes.
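A minimal sketch of the injector idea, assuming an HTTP system under test at a hypothetical local URL; each thread plays one virtual user and records response times that a conductor could later collate:

```python
# Each thread plays one virtual user running a scripted interaction against an
# assumed endpoint and records its response times; the flattened timings stand
# in for the metrics a test conductor would gather from each injector.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://localhost:8080/"  # hypothetical system under test

def virtual_user(iterations=10):
    timings = []
    for _ in range(iterations):
        start = time.perf_counter()
        try:
            urllib.request.urlopen(TARGET_URL, timeout=10).read()
        except OSError:
            pass  # a real tool would record the failure as an error metric
        timings.append(time.perf_counter() - start)
    return timings

def run_injector(virtual_users=25):
    with ThreadPoolExecutor(max_workers=virtual_users) as pool:
        per_user = list(pool.map(lambda _: virtual_user(), range(virtual_users)))
    return [t for user in per_user for t in user]

if __name__ == "__main__":
    all_timings = run_injector()
    print(f"{len(all_timings)} samples, worst = {max(all_timings):.3f}s")
```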
The usual sequence is to ramp up the load: to start with a few virtual users and increase the number over time to a predetermined maximum.
The test result shows how the performance varies with the load, given as number of users vs. response time.
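A ramp-up schedule can be sketched as a simple generator; the step size, step length, and maximum below are assumed values:

```python
# A linear ramp from a handful of virtual users to a predetermined maximum.
# Step size, step length and the maximum are assumed values.
def ramp_up_schedule(start_users=5, max_users=200, step=5, step_seconds=30):
    """Yield (elapsed_seconds, concurrent_users) pairs for the ramp."""
    users, elapsed = start_users, 0
    while users <= max_users:
        yield elapsed, users
        users += step
        elapsed += step_seconds

for elapsed, users in list(ramp_up_schedule())[:4]:
    print(f"t={elapsed:>3}s  users={users}")
```

Pairing each step's user count with the response times measured during that step yields the users-versus-response-time curve described above.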
Tools in this category usually execute a suite of tests which emulate real users against the system.
Sometimes the results can reveal oddities, e.g., that while the average response time might be acceptable, a few key transactions are outliers that take considerably longer to complete, something that might be caused by inefficient database queries, large images, etc.
Analytical performance modeling allows evaluation of design options and system sizing based on actual or anticipated business use.
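For example, a simple analytical model such as an M/M/1 queue can predict response time from an anticipated transaction rate and service time; the figures below are assumed business inputs, not values from a real system:

```python
# Single-server (M/M/1) estimate: response time R = S / (1 - U), with service
# time S and utilisation U = arrival_rate * S. The arrival rate and service
# time below are assumed business figures, not measured values.
def mm1_response_time(arrival_rate_tps, service_time_s):
    utilisation = arrival_rate_tps * service_time_s
    if utilisation >= 1.0:
        raise ValueError("offered load exceeds capacity; resize the system")
    return service_time_s / (1.0 - utilisation)

# Anticipated business use: 40 transactions/second at 20 ms service time each.
print(f"{mm1_response_time(40, 0.020) * 1000:.1f} ms")  # ~100.0 ms predicted
```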