Thus a conclusion may sometimes be reached at a much earlier stage than would be possible with more classical hypothesis testing or estimation, and consequently at lower financial and/or human cost.
The method of sequential analysis is first attributed to Abraham Wald[1] with Jacob Wolfowitz, W. Allen Wallis, and Milton Friedman,[2][3] while at Columbia University's Statistical Research Group, as a tool for more efficient industrial quality control during World War II.[5] A similar approach was independently developed from first principles at about the same time by Alan Turing, as part of the Banburismus technique used at Bletchley Park, to test hypotheses about whether different messages coded by German Enigma machines should be connected and analysed together.
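The core of Wald's method is the sequential probability ratio test (SPRT): after each observation the cumulative log-likelihood ratio is compared against two stopping boundaries, and sampling ends as soon as either is crossed. The sketch below is a minimal Bernoulli version; the function name and example parameters are assumptions for illustration, and the boundaries use Wald's standard approximations.

```python
import math
import random

def sprt_bernoulli(samples, p0=0.5, p1=0.7, alpha=0.05, beta=0.05):
    """Wald's SPRT for H0: p = p0 versus H1: p = p1 on a stream of 0/1 data.

    alpha and beta are the desired Type I and Type II error rates; the
    stopping boundaries below use Wald's classical approximations.
    """
    upper = math.log((1 - beta) / alpha)    # cross upward   -> accept H1
    lower = math.log(beta / (1 - alpha))    # cross downward -> accept H0
    llr, n = 0.0, 0
    for n, x in enumerate(samples, start=1):
        # Add this observation's log-likelihood-ratio contribution.
        llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        if llr >= upper:
            return "accept H1", n
        if llr <= lower:
            return "accept H0", n
    return "undecided", n  # stream ended before a boundary was crossed

# Data generated under H1 typically stops far sooner than a fixed-sample test.
random.seed(0)
stream = (1 if random.random() < 0.7 else 0 for _ in range(10_000))
print(sprt_bernoulli(stream))
```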
Alternative ways to control the Type I error rate exist, such as the Haybittle–Peto bounds, and additional work on determining the boundaries for interim analyses has been done by O’Brien & Fleming[8] and Wang & Tsiatis.[12]
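For example, the Haybittle–Peto rule uses a very stringent threshold (commonly p < 0.001) at each interim look, so that almost no Type I error is spent early and the final analysis can be performed at close to the nominal significance level. The sketch below is a minimal illustration of that rule; the function name and default thresholds are assumptions for this example.

```python
def haybittle_peto_decision(interim_p_values, final_p_value,
                            interim_threshold=0.001, final_alpha=0.05):
    """Illustrative Haybittle-Peto stopping rule.

    Stop early only if an interim p-value is extremely small; otherwise run
    the final analysis at (approximately) the nominal level, since the
    stringent interim threshold spends almost no Type I error.
    """
    for look, p in enumerate(interim_p_values, start=1):
        if p < interim_threshold:
            return f"stop at interim look {look}: reject H0 (p = {p:.4g})"
    if final_p_value < final_alpha:
        return f"final analysis: reject H0 (p = {final_p_value:.4g})"
    return "final analysis: fail to reject H0"

# Example: two interim looks that do not cross the 0.001 boundary,
# followed by a conventional final test.
print(haybittle_peto_decision([0.04, 0.008], 0.03))
```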
Step detection is the process of finding abrupt changes in the mean level of a time series or signal.
When the algorithms are run online, as the data arrive, especially with the aim of producing an alert, this is an application of sequential analysis.
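One common online approach of this kind is the CUSUM detector, which accumulates deviations from a reference level and raises an alert once the cumulative sum exceeds a threshold. The following sketch is a minimal one-sided CUSUM for detecting an upward shift in the mean; the parameter names (target, slack k, threshold h) are assumptions for this illustration.

```python
def cusum_alert(stream, target, k=0.5, h=5.0):
    """One-sided CUSUM: alert when the mean of the stream shifts upward.

    target: in-control mean level
    k:      slack (allowance), typically half the shift worth detecting
    h:      decision threshold on the cumulative sum
    Returns the 1-based index of the first alert, or None.
    """
    s = 0.0
    for i, x in enumerate(stream, start=1):
        # Accumulate deviations above (target + k); never drop below zero.
        s = max(0.0, s + (x - target - k))
        if s > h:
            return i  # abrupt upward change in mean level detected here
    return None

# Example: the mean level steps from 0 to 2 at index 51.
data = [0.0] * 50 + [2.0] * 50
print(cusum_alert(data, target=0.0))  # alerts shortly after the step
```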
Trials that are terminated early because they reject the null hypothesis typically overestimate the true effect size.
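A small simulation can illustrate this selection effect: conditional on crossing an efficacy boundary at an interim look, the estimate at the stopping point is biased upward. The design below (a single interim look with a hypothetical z-boundary, unit-variance observations) is an assumption made for this sketch.

```python
import random
import statistics

def simulate_early_stopping(true_effect=0.3, n_interim=50,
                            z_boundary=2.0, n_trials=20_000):
    """Monte Carlo illustration: among trials stopped early for efficacy,
    the estimated effect at the stopping look overstates the true effect."""
    early_estimates = []
    for _ in range(n_trials):
        xs = [random.gauss(true_effect, 1.0) for _ in range(n_interim)]
        interim_mean = statistics.fmean(xs)
        z = interim_mean * n_interim ** 0.5   # z-statistic, known unit variance
        if z > z_boundary:                    # boundary crossed: stop, reject H0
            early_estimates.append(interim_mean)
    print(f"true effect: {true_effect}")
    print(f"mean estimate among early stops: "
          f"{statistics.fmean(early_estimates):.3f}")

random.seed(1)
simulate_early_stopping()  # the conditional mean exceeds the true effect
```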