Andrew M. Gleason first proved the theorem in 1957,[1] answering a question posed by George W. Mackey, an accomplishment that was historically significant for the role it played in showing that wide classes of hidden-variable theories are inconsistent with quantum physics.
In the approach codified by John von Neumann, a measurement upon a physical system is represented by a self-adjoint operator on the Hilbert space associated with that system, sometimes termed an "observable".
[a] (Gleason's argument is inapplicable if, for example, one tries to construct an analogue of quantum mechanics using p-adic numbers.)
In 1932, John von Neumann had also derived the Born rule in his textbook Mathematical Foundations of Quantum Mechanics.
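For reference, the rule in question can be stated compactly:

```latex
% Born rule for a projective measurement: the observable A has spectral
% decomposition A = \sum_i \lambda_i \Pi_i, and the state is a density
% operator \rho (positive semidefinite, unit trace).
\[
  \Pr(\lambda_i) = \operatorname{Tr}(\rho\,\Pi_i),
  \qquad
  \Pr(\lambda_i) = \langle \psi \,|\, \Pi_i \,|\, \psi \rangle
  \ \text{for a pure state } |\psi\rangle .
\]
```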
However, the assumptions on which von Neumann built his no-hidden-variables proof were rather strong and were eventually regarded as not well motivated.
[14] Specifically, von Neumann assumed that the probability function must be linear on all observables, commuting or non-commuting.
[15][16] Gleason, on the other hand, did not assume linearity, but merely additivity for commuting projectors together with noncontextuality, assumptions seen as better motivated and more physically meaningful.[16][17]
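The contrast between the two sets of assumptions can be made explicit; schematically, with μ denoting the probability assignment:

```latex
% Von Neumann's assumption: expectation values add for all pairs of
% observables, whether or not A and B commute.
\[
  \langle A + B \rangle = \langle A \rangle + \langle B \rangle
  \quad \text{for all self-adjoint } A,\, B .
\]
% Gleason's weaker assumption: probabilities add only across mutually
% orthogonal (hence commuting) projections resolving the identity.
\[
  \sum_i \Pi_i = I, \quad \Pi_i \Pi_j = 0 \ (i \neq j)
  \ \Longrightarrow\ \sum_i \mu(\Pi_i) = 1 .
\]
```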
By the late 1940s, George Mackey had grown interested in the mathematical foundations of quantum physics, wondering in particular whether the Born rule was the only possible rule for calculating probabilities in a theory that represented measurements as orthonormal bases of a Hilbert space.
[18][19] Mackey discussed this problem with Irving Segal at the University of Chicago, who in turn raised it with Richard Kadison, then a graduate student.
Kadison showed that for 2-dimensional Hilbert spaces there exists a probability measure that does not arise from any quantum state via the Born rule.
[b] Any such measure that can be written in the standard way, that is, by applying the Born rule to a quantum state, is termed a regular frame function.
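Here "frame function" is Gleason's term for such a measure; stated formally, following the definitions in Gleason's paper:

```latex
% A frame function of weight W on a Hilbert space H assigns a real
% number f(v) to each unit vector v, subject to
\[
  \sum_i f(e_i) = W
  \quad \text{for every orthonormal basis } \{e_i\} \text{ of } H .
\]
% The frame function is regular when it arises from a self-adjoint
% operator T via f(v) = \langle v, T v \rangle; for W = 1 and
% nonnegative f, T is a density operator and the formula is the Born rule.
```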
Gleason derives a sequence of lemmas concerning when a frame function is necessarily regular, culminating in the final theorem.[1]: fn 3
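For the two-dimensional case, a concrete illustration can be given. The following Python sketch is constructed here for illustration, not taken from the cited sources; the state-generation scheme and the lexicographic tie-break are arbitrary choices. It builds a 0/1-valued frame function on a qubit, checks numerically that it is additive over orthonormal bases, and shows that no density operator reproduces it through the Born rule:

```python
import numpy as np

rng = np.random.default_rng(0)

def bloch(psi):
    """Bloch vector (n_x, n_y, n_z) of a normalized qubit state psi = (a, b)."""
    a, b = psi
    return np.array([2 * (a.conjugate() * b).real,
                     2 * (a.conjugate() * b).imag,
                     abs(a) ** 2 - abs(b) ** 2])

def f(psi):
    """A 0/1-valued frame function on the qubit: it depends only on which
    half of the Bloch sphere the state lies in, with a lexicographic
    tie-break (an arbitrary choice) so that f(n) + f(-n) = 1 always holds."""
    n = bloch(psi)
    for c in n[::-1]:           # compare (n_z, n_y, n_x) lexicographically
        if abs(c) > 1e-12:
            return 1.0 if c > 0 else 0.0
    raise ValueError("zero Bloch vector is impossible for a unit state")

def random_state():
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    return v / np.linalg.norm(v)

# Additivity: every orthonormal basis of C^2 has the form {psi, psi_perp},
# whose Bloch vectors are antipodal, so f must sum to 1 over each basis.
for _ in range(10_000):
    psi = random_state()
    a, b = psi
    psi_perp = np.array([-b.conjugate(), a.conjugate()])
    assert f(psi) + f(psi_perp) == 1.0

# Non-regularity: if f(psi) = <psi|rho|psi> = (1 + m.n)/2 held for some
# density operator rho = (I + m.sigma)/2, then m_k = f(+k axis) - f(-k axis).
plus = {"x": np.array([1.0, 1.0]) / np.sqrt(2),
        "y": np.array([1.0, 1.0j]) / np.sqrt(2),
        "z": np.array([1.0, 0.0])}
minus = {"x": np.array([1.0, -1.0]) / np.sqrt(2),
         "y": np.array([1.0, -1.0j]) / np.sqrt(2),
         "z": np.array([0.0, 1.0])}
m = np.array([f(plus[k]) - f(minus[k]) for k in "xyz"])
print("reconstructed |m| =", np.linalg.norm(m))   # sqrt(3) > 1
```

The reconstructed Bloch vector has norm √3 > 1, so no density operator can produce this function via the Born rule: it is a frame function but not a regular one, which is exactly the freedom Kadison's two-dimensional counterexample exploits and which Gleason's theorem rules out in dimension three and higher.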
Robin Lyth Hudson described Gleason's theorem as "celebrated and notoriously difficult".
[23] Cooke, Keane and Moran later produced a proof that is longer than Gleason's but requires fewer prerequisites.
As Fuchs argues, the theorem "is an extremely powerful result", because "it indicates the extent to which the Born probability rule and even the state-space structure of density operators are dependent upon the theory's other postulates".
[24]: 94–95 Various approaches to rederiving the quantum formalism from alternative axioms have, accordingly, employed Gleason's theorem as a key step, bridging the gap between the structure of Hilbert space and the Born rule.
[c] Moreover, the theorem is historically significant for the role it played in ruling out the possibility of certain classes of hidden variables in quantum mechanics.
In a deterministic hidden-variable theory, there exists an underlying physical property that fixes the result found in the measurement.[26]: §1.3 A probability measure describing such a theory would assign each outcome only the values 0 and 1, whereas any measure of the Born form guaranteed by Gleason's theorem is a continuous function on the unit sphere and therefore cannot take only those two values. Gleason's theorem therefore suggests that quantum theory represents a deep and fundamental departure from the classical intuition that uncertainty is due to ignorance about hidden degrees of freedom.
The Kochen–Specker theorem refines this statement by constructing a specific finite subset of rays on which no such probability measure can be defined.
[27][30] The fact that such a finite subset of rays must exist follows from Gleason's theorem by way of a logical compactness argument, but this method does not construct the desired set explicitly.
[27][31][32] Pitowsky uses Gleason's theorem to argue that quantum mechanics represents a new theory of probability, one in which the structure of the space of possible events is modified from its classical, Boolean counterpart.
[4][5] The Gleason and Kochen–Specker theorems have been cited in support of various philosophies, including perspectivism, constructive empiricism and agential realism.
[33][34][35] Gleason's theorem finds application in quantum logic, which makes heavy use of lattice theory.
The propositions of quantum logic are organized into a lattice in which the distributive law, valid in classical logic, is weakened, reflecting the fact that in quantum physics not all pairs of quantities can be measured simultaneously.
Assuming that the mapping from lattice elements to probabilities is noncontextual, Gleason's theorem establishes that it must be expressible with the Born rule.
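In this lattice-theoretic form, the statement reads as follows (for separable Hilbert spaces of dimension at least 3):

```latex
% A probability measure on the lattice of projections of H assigns
% \mu(\Pi) \in [0, 1] to each projection, with \mu(I) = 1 and
\[
  \mu\Bigl(\sum_i \Pi_i\Bigr) = \sum_i \mu(\Pi_i)
  \quad \text{whenever } \Pi_i \Pi_j = 0 \text{ for } i \neq j .
\]
% For \dim H \geq 3, Gleason's theorem yields a unique density
% operator \rho reproducing the measure through the Born rule:
\[
  \mu(\Pi) = \operatorname{Tr}(\rho\,\Pi) .
\]
```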
Gleason's theorem can also be generalized by replacing projective measurements with positive-operator-valued measures (POVMs); unlike the original theorem, this generalized version applies even to the case of a single qubit.[40]: §3.D
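In a commonly cited formulation of this generalization, usually credited to Paul Busch and, independently, to Caves, Fuchs, Manne and Renes, projections are replaced by effects:

```latex
% An effect is an operator E with 0 \leq E \leq I; a POVM is a family
% of effects \{E_i\} with \sum_i E_i = I. A generalized probability
% measure \mu assigns a value in [0, 1] to each effect such that
\[
  \sum_i \mu(E_i) = 1 \quad \text{for every POVM } \{E_i\} .
\]
% Conclusion: there is a density operator \rho with
% \mu(E) = \operatorname{Tr}(\rho\,E), now valid in every dimension,
% including \dim H = 2.
```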
The original proof by Gleason was not constructive: one of the ideas on which it depends is the fact that every continuous function defined on a compact space attains its minimum.
[20][44] Gleason's theorem can be extended to some cases where the observables of the theory form a von Neumann algebra.
In essence, the only barrier to proving the theorem is the fact that Gleason's original result does not hold when the Hilbert space is that of a qubit.
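A paraphrase of the generalized statement, as established in work by Christensen, Yeadon and others, under the standard assumption that the algebra has no type I_2 direct summand (the von Neumann-algebra analogue of excluding the qubit):

```latex
% Let M be a von Neumann algebra with no direct summand of type I_2,
% and let \mu be a probability measure on its projection lattice:
\[
  \mu(P + Q) = \mu(P) + \mu(Q) \quad \text{whenever } PQ = 0 ,
  \qquad \mu(I) = 1 .
\]
% Then \mu extends to a state \omega on M: a positive linear
% functional with \omega(I) = 1 and \omega(P) = \mu(P) for every
% projection P in M.
```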