Test strategy

The purpose of a test strategy is to provide a rational deduction from high-level organizational objectives to the actual test activities needed to meet those objectives from a quality assurance perspective.

The creation and documentation of a test strategy should be done in a systematic way to ensure that all objectives are fully covered and understood by all stakeholders.

It should also frequently be reviewed, challenged and updated as the organization and the product evolve over time.

Design documents describe the functionality of the software to be enabled in the upcoming release.

They should also be reviewed by leads for all levels of testing to make sure the coverage is complete, yet not overlapping.

The strategy should also clearly specify the necessary OS patch levels and security updates required for the test environment.

Sample risks include dependency on the completion of coding by sub-contractors, or on the capability of testing tools.

The testers should then re-test the failed test case until it passes.

Planners should take into account the extra time needed to accommodate contingencies.

One way to make this estimate is to look at the time needed to test the previous releases of the software.

If the software is new, multiplying the initial testing schedule estimate by two is a good way to start.
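As a rough illustration of these two heuristics, a short sketch follows; the function names and day counts are hypothetical and only demonstrate the arithmetic, not a standard estimation model.

    # A minimal sketch of the two estimation heuristics above; names and
    # durations are illustrative assumptions, not part of any standard.
    def estimate_from_history(previous_release_days: list[float]) -> float:
        """Baseline the schedule on the average testing time of earlier releases."""
        return sum(previous_release_days) / len(previous_release_days)

    def estimate_for_new_software(initial_guess_days: float) -> float:
        """With no release history, double the initial approximation."""
        return initial_guess_days * 2

    print(estimate_from_history([10, 12, 14]))  # 12.0 days
    print(estimate_for_new_software(15))        # 30 days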

Regression tests reduce the likelihood that one fix introduces new problems elsewhere in the program or in any other interface.

Unit, integration, and system test cases are good candidates for inclusion in the regression suite.
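For example, a previously reported defect can be pinned by a small automated check that is re-run on every build. The sketch below uses Python's built-in unittest module; the discount_price function and the defect it guards against are hypothetical.

    # A minimal sketch of a unit-level regression test; the function under
    # test and the earlier defect are illustrative assumptions.
    import unittest

    def discount_price(price: float, percent: float) -> float:
        """Apply a percentage discount; an earlier release mishandled 0%."""
        return round(price * (1 - percent / 100), 2)

    class DiscountRegressionTest(unittest.TestCase):
        def test_zero_percent_keeps_price(self):
            # Guards against the previously fixed defect reappearing.
            self.assertEqual(discount_price(100.0, 0), 100.0)

        def test_half_price(self):
            self.assertEqual(discount_price(80.0, 50), 40.0)

    if __name__ == "__main__":
        unittest.main()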

Remember also that non-functional testing (security, performance, usability) plays an important role in ensuring business continuity.
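A simple performance check can take the same automated form as a functional test. The sketch below assumes a hypothetical 200 ms response-time budget and a stand-in service call; both are illustrative, not a prescribed benchmark.

    # A minimal sketch of a response-time check; the budget and the
    # call_service stand-in are assumptions for illustration.
    import time
    import unittest

    RESPONSE_BUDGET_SECONDS = 0.2  # assumed service-level target

    def call_service() -> str:
        """Stand-in for the real service call under test."""
        time.sleep(0.05)
        return "ok"

    class ResponseTimeTest(unittest.TestCase):
        def test_call_stays_within_budget(self):
            start = time.perf_counter()
            result = call_service()
            elapsed = time.perf_counter() - start
            self.assertEqual(result, "ok")
            self.assertLess(elapsed, RESPONSE_BUDGET_SECONDS)

    if __name__ == "__main__":
        unittest.main()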

Test status data must be available to the test leader and the project manager, as well as to all team members, in a central location.

Senior management may want a test summary on a weekly or monthly basis.

The test strategy must therefore state what kind of test summary reports will be produced for senior management, and how frequently.
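As an illustration, such a periodic summary can be generated from the centrally stored results; the record format, field names, and status values below are assumptions, not a prescribed reporting schema.

    # A minimal sketch of a weekly test summary; the result records and
    # status values are hypothetical.
    from collections import Counter

    def summarize(results: list[dict]) -> str:
        """Aggregate individual test outcomes into a one-line summary."""
        counts = Counter(r["status"] for r in results)
        return (f"Total: {len(results)}, Passed: {counts['passed']}, "
                f"Failed: {counts['failed']}, Blocked: {counts['blocked']}")

    weekly_results = [
        {"id": "TC-101", "status": "passed"},
        {"id": "TC-102", "status": "failed"},
        {"id": "TC-103", "status": "blocked"},
    ]
    print(summarize(weekly_results))
    # Total: 3, Passed: 1, Failed: 1, Blocked: 1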