In quantum computing, the Gottesman–Knill theorem is a theoretical result by Daniel Gottesman and Emanuel Knill which states that stabilizer circuits, i.e. circuits that consist only of gates from the normalizer of the qubit Pauli group (also called the Clifford group), can be perfectly simulated in polynomial time on a probabilistic classical computer.
The reason for the speed-up of quantum computers over classical ones is not yet fully understood[citation needed].
The Gottesman–Knill theorem shows that quantum algorithms whose speed-up relies only on entanglement that can be generated with CNOT and Hadamard gates offer no computational advantage relative to classical computers, because such algorithms (and the particular types of entangled states they produce) can be simulated classically.
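As a concrete illustration of why such circuits are classically tractable, the following Python sketch tracks the stabilizer generators of a state and updates them by conjugation under Hadamard, phase, and CNOT gates. It is a toy version of the stabilizer-tableau idea, not any published implementation; the class name StabilizerState and its methods are illustrative choices.

```python
# Toy stabilizer simulation: an n-qubit stabilizer state is stored as n Pauli
# generators (X/Z bits plus a sign), and each Clifford gate updates every
# generator in constant time by conjugation.

class StabilizerState:
    def __init__(self, n):
        # Start in |0...0>, stabilized by Z_1, ..., Z_n.
        self.n = n
        self.x = [[False] * n for _ in range(n)]                  # X part of each generator
        self.z = [[i == j for j in range(n)] for i in range(n)]   # Z part of each generator
        self.sign = [False] * n                                   # True means a -1 phase

    def h(self, q):
        # Hadamard maps X -> Z, Z -> X, Y -> -Y on qubit q.
        for g in range(self.n):
            if self.x[g][q] and self.z[g][q]:
                self.sign[g] ^= True
            self.x[g][q], self.z[g][q] = self.z[g][q], self.x[g][q]

    def s(self, q):
        # Phase gate maps X -> Y, Y -> -X, Z -> Z on qubit q.
        for g in range(self.n):
            if self.x[g][q] and self.z[g][q]:
                self.sign[g] ^= True
            self.z[g][q] ^= self.x[g][q]

    def cnot(self, c, t):
        # CNOT maps X_c -> X_c X_t and Z_t -> Z_c Z_t, with a sign flip
        # exactly when x_c = z_t = 1 and x_t = z_c.
        for g in range(self.n):
            if self.x[g][c] and self.z[g][t] and self.x[g][t] == self.z[g][c]:
                self.sign[g] ^= True
            self.x[g][t] ^= self.x[g][c]
            self.z[g][c] ^= self.z[g][t]

    def generators(self):
        label = {(False, False): "I", (True, False): "X",
                 (False, True): "Z", (True, True): "Y"}
        out = []
        for g in range(self.n):
            paulis = "".join(label[(self.x[g][q], self.z[g][q])] for q in range(self.n))
            out.append(("-" if self.sign[g] else "+") + paulis)
        return out

# Preparing a Bell state with Hadamard + CNOT stays easy to track classically:
state = StabilizerState(2)
state.h(0)
state.cnot(0, 1)
print(state.generators())   # ['+XX', '+ZZ'] -- the Bell-state stabilizers
```

Even though the circuit produces a maximally entangled state, the classical description (two signed Pauli strings) grows only linearly with the number of qubits, which is the essence of the theorem.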
Since the theorem's initial statement, more efficient constructions for simulating such stabilizer (Clifford) circuits have been identified, together with an implementation[1].
From a practical point of view, stabilizer circuits on n qubits can be simulated in O(n log n) time using the graph state formalism.
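The sketch below illustrates the simplest part of the graph-state idea under stated assumptions: a graph state on n qubits is stabilized by the generators K_v = X_v ∏_{w∈N(v)} Z_w, and a CZ gate between two qubits just toggles the corresponding edge, a constant-time update. A full graph-state simulator additionally tracks a local Clifford operation per qubit and uses local complementation to handle the remaining gates and measurements; that bookkeeping is omitted here, and the name GraphState is a hypothetical choice.

```python
# Minimal graph-state representation: the state is a graph, and CZ gates are
# edge toggles. Only this restricted update rule is shown.

class GraphState:
    def __init__(self, n):
        # |+>^n is the graph state of the empty graph on n vertices.
        self.n = n
        self.neighbors = [set() for _ in range(n)]

    def cz(self, u, v):
        # CZ toggles the edge {u, v}.
        if v in self.neighbors[u]:
            self.neighbors[u].discard(v)
            self.neighbors[v].discard(u)
        else:
            self.neighbors[u].add(v)
            self.neighbors[v].add(u)

    def stabilizer(self, v):
        # The generator K_v = X_v * prod_{w in N(v)} Z_w as a readable string.
        return "".join(
            "X" if q == v else ("Z" if q in self.neighbors[v] else "I")
            for q in range(self.n)
        )

g = GraphState(3)
g.cz(0, 1)
g.cz(1, 2)
print([g.stabilizer(v) for v in range(3)])   # ['XZI', 'ZXZ', 'IZX']
```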