Software reliability testing

The probability of failure is estimated by testing a sample of all available input states: it is the number of failing cases divided by the total number of cases under consideration.
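A minimal sketch of this calculation, assuming each test outcome is recorded as passed or failed; the sample data below is hypothetical:

```python
def failure_probability(outcomes):
    """Number of failing cases divided by the total number of cases tested."""
    if not outcomes:
        raise ValueError("no test cases recorded")
    failing = sum(1 for passed in outcomes if not passed)
    return failing / len(outcomes)

# Hypothetical sample: 1000 sampled input states, 3 of which failed.
sample = [True] * 997 + [False] * 3
print(failure_probability(sample))  # 0.003
```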

Time constraints are handled by setting fixed dates or deadlines by which the tests must be performed.

If the focus is on calendar time (i.e. if there are predefined deadlines), then intensified stress testing is used.

[2][4] Software availability is measured in terms of mean time between failures (MTBF).

[6] Steady-state availability represents the percentage of time the software is operational.
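As an illustration of how these measures relate, the sketch below uses the commonly quoted definitions MTBF = MTTF + MTTR and steady-state availability = MTTF / MTBF; the figures are hypothetical:

```python
# Hypothetical operational figures, in hours.
mttf = 450.0   # mean time to failure
mttr = 2.5     # mean time to repair

# Common definitions: MTBF is the sum of MTTF and MTTR, and steady-state
# availability is the fraction of time the software is operational.
mtbf = mttf + mttr
availability = mttf / mtbf

print(f"MTBF = {mtbf:.1f} h, steady-state availability = {availability:.4%}")
```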

[7] There are many software reliability growth models (SRGMs), including logarithmic, polynomial, exponential, power, and S-shaped models. The main objective of reliability testing is to measure the performance of the software under given conditions, using known, fixed procedures and without applying any corrective measures, against its specifications.
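As one concrete instance of the exponential family mentioned above, the Goel-Okumoto model predicts the expected cumulative number of failures as m(t) = a(1 - e^(-bt)); the parameter values in this sketch are hypothetical, not fitted to real data:

```python
import math

def goel_okumoto(t, a, b):
    """Expected cumulative failures by time t under the exponential
    (Goel-Okumoto) software reliability growth model."""
    return a * (1.0 - math.exp(-b * t))

# Hypothetical parameters: a = total expected failures, b = failure detection rate.
a, b = 120.0, 0.05
for t in (10, 50, 100, 200):
    print(t, round(goel_okumoto(t, a, b), 1))
```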

Reliability testing also has secondary objectives, and there are some restrictions on how its objectives can be defined. The application of computer software has spread into many different fields, with software being an essential part of industrial, commercial, and military systems.

Because of its many applications in safety-critical systems, software reliability is now an important research area.

Although software engineering is among the fastest-developing technologies of the last century, there is still no complete, scientific, quantitative measure for assessing software reliability.

[11] Load testing is conducted to check the performance of the software under maximum workload.

For example, a web site can be tested to see how many simultaneous users it can support without performance degradation.
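A minimal load-test sketch along those lines, using only the Python standard library; the target URL, the number of simulated users, and the timeout are hypothetical placeholders:

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://localhost:8080/"   # hypothetical system under test
SIMULTANEOUS_USERS = 50

def one_request(_):
    # Issue a single request and record whether it succeeded and how long it took.
    start = time.monotonic()
    try:
        with urllib.request.urlopen(TARGET_URL, timeout=5) as resp:
            ok = resp.status == 200
    except Exception:
        ok = False
    return ok, time.monotonic() - start

# Fire all requests at once to approximate simultaneous users.
with ThreadPoolExecutor(max_workers=SIMULTANEOUS_USERS) as pool:
    results = list(pool.map(one_request, range(SIMULTANEOUS_USERS)))

successes = sum(ok for ok, _ in results)
worst = max(latency for _, latency in results)
print(f"{successes}/{len(results)} requests succeeded; worst latency {worst:.2f} s")
```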

Regression testing is conducted after every change or update to the software's features.

It is performed periodically, at intervals that depend on the size and features of the software.
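A minimal sketch of the idea: the same checks are re-run after every change, so any deviation from previously correct results is flagged. The function under test here is a hypothetical stand-in:

```python
import unittest

def discount(price, percent):
    # Hypothetical stand-in for a function in the system under test.
    return round(price * (1 - percent / 100), 2)

class DiscountRegressionTests(unittest.TestCase):
    def test_known_good_values(self):
        # Results produced by earlier releases; a change that alters them
        # is reported as a regression.
        self.assertEqual(discount(100.0, 10), 90.0)
        self.assertEqual(discount(19.99, 0), 19.99)

if __name__ == "__main__":
    unittest.main()
```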

Several common problems occur when designing test cases. Studies carried out during the development and design of software help to improve the reliability of the product.

Reliability testing is essentially performed to eliminate the failure modes of the software.

[12] Reliability growth testing is used to check new prototypes of the software, which are initially expected to fail frequently.

n(T) is the number of failures observed from the start of testing up to time T. Plotted against T on log-log axes, the cumulative failure rate n(T)/T forms a straight line; in the Duane growth model this corresponds to n(T)/T = K·T^(-α), where K is a constant and α is the reliability growth rate.

If the value of α in this equation is zero, reliability cannot be improved as expected for the given number of failures: the failure rate n(T)/T does not depend on the test length, so further testing does not reduce it.
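A sketch of how that straight-line relationship can be checked, assuming the Duane interpretation described above; the failure times are hypothetical. Fitting a line to log(n(T)/T) against log(T) gives the slope, and α is its negative:

```python
import math

# Hypothetical cumulative failure times (hours) recorded during growth testing.
failure_times = [5, 12, 25, 47, 80, 130, 210, 330, 520, 800]

# Points for the log-log plot: x = log T, y = log(n(T)/T).
xs = [math.log(t) for t in failure_times]
ys = [math.log((i + 1) / t) for i, t in enumerate(failure_times)]

# Ordinary least-squares slope of y on x; alpha is its negative.
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
alpha = -slope
print(f"estimated growth rate alpha = {alpha:.2f}")
# A value of alpha near zero would mean the failure rate is not improving.
```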

If new features are added to the current version of the software, test cases for those operations are written differently.

A predefined rule is used to calculate the number of new test cases needed for the software.

The main problem with reliability evaluation based on operational testing is constructing a representative operational environment.

During operation of the software, data about its failures is collected in statistical form and given as input to a reliability growth model.
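A sketch of the kind of preprocessing involved, with hypothetical failure timestamps: inter-failure times are extracted from the operational record and summarised before being passed to a growth model such as those listed earlier:

```python
# Hypothetical failure timestamps (hours of operation) collected in the field.
failure_timestamps = [14.0, 40.5, 71.0, 120.0, 190.5, 280.0]

# Inter-failure times are a common statistical input to reliability growth models.
inter_failure_times = [b - a for a, b in
                       zip([0.0] + failure_timestamps[:-1], failure_timestamps)]

observed_mtbf = sum(inter_failure_times) / len(inter_failure_times)
print("inter-failure times:", inter_failure_times)
print(f"observed mean time between failures: {observed_mtbf:.1f} h")
```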