The superposition principle,[1] also known as the superposition property, states that, for all linear systems, the net response caused by two or more stimuli is the sum of the responses that would have been caused by each stimulus individually.
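Stated symbolically, the principle amounts to two simpler properties of the system's response map, additivity and homogeneity; the symbols F, x1, x2, and a below are introduced here for illustration rather than taken from the text above:

```latex
F(x_1 + x_2) = F(x_1) + F(x_2) \quad \text{(additivity)}, \qquad
F(a\,x) = a\,F(x) \quad \text{(homogeneity, for a scalar } a\text{)}.
```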
By writing a very general stimulus (in a linear system) as the superposition of stimuli of a specific and simple form, the response often becomes easier to compute.
For example, in Fourier analysis, the stimulus is written as the superposition of infinitely many sinusoids.
Due to the superposition principle, each of these sinusoids can be analyzed separately, and its individual response can be computed.
(The response is itself a sinusoid, with the same frequency as the stimulus, but generally a different amplitude and phase.)
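As a concrete illustration (written for this text, not drawn from the cited sources), the sketch below drives a simple linear time-invariant system, here a moving-average filter implemented with np.convolve, with two sinusoids; the names x1, x2, h, and respond are invented for the example. Because the system is linear, the response to the superposed stimulus equals the superposition of the individual responses:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1000)
x1 = np.sin(2 * np.pi * 5 * t)          # 5 Hz sinusoidal stimulus
x2 = 0.5 * np.sin(2 * np.pi * 20 * t)   # 20 Hz sinusoidal stimulus

h = np.ones(25) / 25.0                  # impulse response of a moving-average filter

def respond(stimulus):
    """Response of the linear, time-invariant system to a stimulus."""
    return np.convolve(stimulus, h, mode="same")

# The response to the sum of the sinusoids equals the sum of the responses.
print(np.allclose(respond(x1 + x2), respond(x1) + respond(x2)))  # True
```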
As another common example, in Green's function analysis, the stimulus is written as the superposition of infinitely many impulse functions, and the response is then a superposition of impulse responses.
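In discrete form this is simply the definition of convolution. The short sketch below (an illustration written for this text, not from the cited sources) decomposes a stimulus into one unit impulse per sample and checks that summing the shifted, scaled impulse responses reproduces the system's full response:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(50)            # an arbitrary stimulus
h = np.array([0.5, 0.3, 0.2])          # impulse response of the system

full_response = np.convolve(x, h)      # response to the whole stimulus at once

# Superpose shifted, scaled copies of the impulse response, one per sample of x.
superposed = np.zeros(len(x) + len(h) - 1)
for k, xk in enumerate(x):
    superposed[k:k + len(h)] += xk * h

print(np.allclose(full_response, superposed))  # True
```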
As long as the superposition principle holds (which is often the case, but not always; see nonlinear optics), the behavior of any light wave can be understood as a superposition of the behavior of simpler plane waves.
For example, two waves traveling towards each other will pass right through each other without any distortion on the other side.
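A small numerical illustration (not from the article, using d'Alembert's form u(x, t) = f(x - ct) + g(x + ct) of the solution of the linear wave equation): two Gaussian pulses launched toward each other overlap and then separate again with their shapes unchanged.

```python
import numpy as np

def pulse(s):
    return np.exp(-s**2)                   # Gaussian pulse shape

c = 1.0                                    # wave speed
x = np.linspace(-20.0, 20.0, 2001)

def field(t):
    # Right-moving pulse launched from x = -10 plus left-moving pulse from x = +10.
    return pulse(x + 10.0 - c * t) + pulse(x - 10.0 + c * t)

before = field(0.0)     # pulses well separated
during = field(10.0)    # pulses overlapping near the origin
after = field(20.0)     # pulses separated again, having passed through each other

# By symmetry the final field matches the initial one: each pulse emerges undistorted.
print(np.allclose(after, before))  # True
```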
With regard to wave superposition, Richard Feynman wrote:[2]

No-one has ever been able to define the difference between interference and diffraction satisfactorily. It is just a question of usage, and there is no specific, important physical difference between them. The best we can do, roughly speaking, is to say that when there are only a few sources, say two, interfering, then the result is usually called interference, but if there is a large number of them, it seems that the word diffraction is more often used.

Other authors elaborate:[3]

The difference is one of convenience and convention. If the waves to be superposed originate from a few coherent sources, say, two, the effect is called interference. On the other hand, if the waves to be superposed originate by subdividing a wavefront into infinitesimal coherent wavelets (sources), the effect is called diffraction. That is the difference between the two phenomena is [a matter] of degree only, and basically, they are two limiting cases of superposition effects.

Yet another source concurs:[4]

In as much as the interference fringes observed by Young were the diffraction pattern of the double slit, this chapter [Fraunhofer diffraction] is, therefore, a continuation of Chapter 8 [Interference]. On the other hand, few opticians would regard the Michelson interferometer as an example of diffraction.
When two or more waves traverse the same space, the net amplitude at each point is the sum of the amplitudes of the individual waves. In some cases the summed variation has a smaller amplitude than the component variations; this is called destructive interference. In other cases, such as in a line array, the summed variation has a bigger amplitude than any of the components individually; this is called constructive interference.
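The following toy snippet (written for illustration, not drawn from the article) superposes two equal-amplitude sinusoids once in phase and once half a cycle out of phase, showing constructive and destructive interference respectively:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1000)
w = 2 * np.pi * 3.0                                   # angular frequency

constructive = np.sin(w * t) + np.sin(w * t)          # in phase: amplitude doubles
destructive = np.sin(w * t) + np.sin(w * t + np.pi)   # out of phase: cancellation

print(np.max(np.abs(constructive)))   # ~2.0
print(np.max(np.abs(destructive)))    # ~0.0 (floating-point noise)
```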
In most realistic physical situations, the equation governing the wave is only approximately linear.
As a rule, the accuracy of the approximation tends to improve as the amplitude of the wave gets smaller.
In quantum mechanics, a principal task is to compute how a certain type of wave propagates and behaves. The wave is described by a wave function, and the equation governing its behavior is called the Schrödinger equation.
A primary approach to computing the behavior of a wave function is to write it as a superposition (called "quantum superposition") of (possibly infinitely many) other wave functions of a certain type—stationary states whose behavior is particularly simple.
Since the Schrödinger equation is linear, the behavior of the original wave function can be computed through the superposition principle this way.
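As a minimal sketch (assuming an infinite square well of width 1 with ħ = m = 1; the well, the coefficients, and the helper names are choices made for this example rather than anything specified in the article), a wave function built from two stationary states is evolved by attaching each state's own phase factor and superposing the results:

```python
import numpy as np

L = 1.0
x = np.linspace(0.0, L, 2000)

def phi(n):
    """n-th stationary state of the infinite square well."""
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

def energy(n):
    return 0.5 * (n * np.pi / L) ** 2      # E_n with hbar = m = 1

c1 = c2 = 1.0 / np.sqrt(2.0)               # superposition coefficients
t = 0.7

# Evolve each stationary state with its own phase, then superpose the results.
psi_t = (c1 * np.exp(-1j * energy(1) * t) * phi(1) +
         c2 * np.exp(-1j * energy(2) * t) * phi(2))

# The total probability stays 1 (simple Riemann-sum check).
print(np.sum(np.abs(psi_t) ** 2) * (x[1] - x[0]))   # ≈ 1.0
```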
According to Dirac: "if the ket vector corresponding to a state is multiplied by any complex number, not zero, the resulting ket vector will correspond to the same state [italics in original]."[8] Though Dirac's reasoning includes the atomicity of observation, which is valid, as for phase it actually reflects phase translation symmetry derived from time translation symmetry, which is also applicable to classical states, as shown above with classical polarization states. For example, the Bloch sphere, which represents the pure states of a two-level quantum mechanical system (qubit), is also known as the Poincaré sphere, which represents different types of classical pure polarization states.
A common type of boundary value problem is (to put it abstractly) finding a function y that satisfies some equation F(y) = 0 with some boundary specification G(y) = z.
For example, in Laplace's equation with Dirichlet boundary conditions, F would be the Laplacian operator in a region R, G would be an operator that restricts y to the boundary of R, and z would be the function that y is required to equal on the boundary of R. In the case that F and G are both linear operators, the superposition principle says that a superposition of solutions to the first equation is another solution to the first equation,

F(y1) = F(y2) = ⋯ = 0 implies F(y1 + y2 + ⋯) = 0,

while the boundary values superpose: G(y1) + G(y2) = z1 + z2.
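To make the abstract statement concrete, here is a small numerical sketch (an invented illustration, assuming a one-dimensional discretization of Laplace's equation y'' = 0 on [0, 1] with Dirichlet boundary values; the names are made up for the example). Two solutions with different boundary data are computed separately, and their sum is checked to solve the problem whose boundary data is the sum:

```python
import numpy as np

n = 51                          # number of interior grid points
h = 1.0 / (n + 1)               # grid spacing on [0, 1]

# Discrete second-derivative (Laplacian) operator on the interior points.
A = (np.diag(-2.0 * np.ones(n)) +
     np.diag(np.ones(n - 1), 1) +
     np.diag(np.ones(n - 1), -1)) / h**2

def solve_laplace(left, right):
    """Solve y'' = 0 with y(0) = left and y(1) = right (boundary terms move to the RHS)."""
    b = np.zeros(n)
    b[0] -= left / h**2
    b[-1] -= right / h**2
    return np.linalg.solve(A, b)

y1 = solve_laplace(1.0, 0.0)    # solution for boundary data z1
y2 = solve_laplace(0.0, 2.0)    # solution for boundary data z2
y12 = solve_laplace(1.0, 2.0)   # solution for boundary data z1 + z2

# Superposition: y1 + y2 solves the problem with the superposed boundary values.
print(np.allclose(y1 + y2, y12))  # True
```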
However, the additive state decomposition can be applied to both linear and nonlinear systems.
The principle, first stated for vibrating systems by Daniel Bernoulli, was rejected by Leonhard Euler and then by Joseph Lagrange.
Bernoulli argued that any sonorous body could vibrate in a series of simple modes with a well-defined frequency of oscillation.
In his reaction to Bernoulli's memoirs, Euler praised his colleague for having best developed the physical part of the problem of vibrating strings, but denied the generality and superiority of the multi-modes solution.