In computational complexity theory, the parallel computation thesis is a hypothesis which states that the time used by a (reasonable) parallel machine is polynomially related to the space used by a sequential machine.
The parallel computation thesis was set forth by Chandra and Stockmeyer in 1976.[1] In other words, for a computational model which allows computations to branch and run in parallel without bound, a formal language that is decidable under the model using no more than T(n) steps for inputs of length n is decidable by a non-branching machine using no more than T(n)^k units of storage for some constant k. Conversely, if a machine in the non-branching model decides a language using no more than S(n) storage, a machine in the parallel model can decide the language in no more than S(n)^k steps for some constant k. The parallel computation thesis is not a rigorous formal statement, as it does not specify what counts as an acceptable parallel model.
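The polynomial relationship can be made concrete with directed graph reachability, a standard illustration (the example below is a sketch of my own, not drawn from this article). A parallel machine can decide reachability in O(log^2 n) steps by repeatedly squaring the adjacency matrix, where each round is one big parallelizable step, while a sequential machine can decide it in O(log^2 n) space by Savitch-style recursion; parallel time and sequential space differ only polynomially (here, polylogarithmically in both cases):

```python
def reach_by_squaring(adj):
    """'Parallel-time' view: after ceil(log2 n) rounds of boolean matrix
    squaring, entry [i][j] is True iff j is reachable from i.  Each round
    could be done by n^3 processors in parallel, so only O(log n) rounds
    (each of O(log n) depth) are needed."""
    n = len(adj)
    a = [[adj[i][j] or i == j for j in range(n)] for i in range(n)]
    for _ in range(max(1, (n - 1).bit_length())):
        a = [[any(a[i][k] and a[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
    return a

def reach_savitch(adj, s, t, steps=None):
    """'Sequential-space' view: can we get from s to t in at most `steps`
    moves?  Recursion depth is O(log n) and each frame stores O(log n) bits
    (the midpoint), giving O(log^2 n) space overall."""
    n = len(adj)
    if steps is None:
        steps = n
    if steps <= 1:
        return s == t or adj[s][t]
    half = (steps + 1) // 2
    return any(reach_savitch(adj, s, m, half) and
               reach_savitch(adj, m, t, steps - half)
               for m in range(n))

# A directed path 0 -> 1 -> 2 -> 3, plus an isolated vertex 4.
adj = [[False] * 5 for _ in range(5)]
for u, v in [(0, 1), (1, 2), (2, 3)]:
    adj[u][v] = True

closure = reach_by_squaring(adj)
assert closure[0][3] and not closure[0][4]
assert reach_savitch(adj, 0, 3) and not reach_savitch(adj, 0, 4)
```

Both routines decide the same language; only the resource they economize differs, which is the pattern the thesis generalizes.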
N. Blum (1983) introduced a model for which the thesis does not hold.[4] Chandra and Stockmeyer originally formalized and proved results related to the thesis for deterministic and alternating Turing machines, which is where the thesis originated.
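For context, the alternating-machine form of this relationship is usually stated via the following containments (this is the standard Chandra–Kozen–Stockmeyer formulation for well-behaved bounds f(n) ≥ n, summarized here rather than quoted from the source):

\[
\mathrm{ATIME}\bigl(f(n)\bigr) \;\subseteq\; \mathrm{DSPACE}\bigl(f(n)\bigr) \;\subseteq\; \mathrm{ATIME}\bigl(f(n)^{2}\bigr),
\]

so alternating time and deterministic space are polynomially related, with AP = PSPACE as a well-known corollary.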