As the scale varies, it is as if one is decreasing (as RG is a semi-group and doesn't have a well-defined inverse operation) the magnifying power of a notional microscope viewing the system.
[1] They became popular again at the end of the 19th century, perhaps the first example being Osborne Reynolds's idea of enhanced viscosity as a way to explain turbulence.
An early article[2] by Ernst Stueckelberg and André Petermann in 1953 anticipates the idea in quantum field theory.
Murray Gell-Mann and Francis E. Low restricted the idea to scale transformations in QED in 1954,[3] which are the most physically significant, and focused on asymptotic forms of the photon propagator at high energies.
They determined the variation of the electromagnetic coupling in QED, by appreciating the simplicity of the scaling structure of that theory.
On the basis of this (finite) group equation and its scaling property, Gell-Mann and Low could then focus on infinitesimal transformations, and invented a computational method based on a mathematical flow function ψ(g) = G/(∂G/∂g) of the coupling parameter g, which they introduced.
The renormalization group prediction (cf. the Stueckelberg–Petermann and Gell-Mann–Low works) was confirmed 40 years later at the LEP accelerator experiments: the fine-structure "constant" of QED was measured[6] to be about 1⁄127 at energies close to 200 GeV, as opposed to the standard low-energy physics value of 1⁄137.
[c] This problem of systematically handling the infinities of quantum field theory to obtain finite physical quantities was solved for QED by Richard Feynman, Julian Schwinger and Shin'ichirō Tomonaga, who received the 1965 Nobel prize for these contributions.
They effectively devised the theory of mass and charge renormalization, in which the infinity in the momentum scale is cut off by an ultra-large regulator, Λ.
[d] The dependence of physical quantities, such as the electric charge or electron mass, on the scale Λ is hidden, effectively swapped for the longer-distance scales at which the physical quantities are measured; as a result, all observable quantities end up being finite instead, even for an infinite Λ.

Gell-Mann and Low thus realized in these results that, infinitesimally, while a tiny change in g is provided by the above RG equation given ψ(g), the self-similarity is expressed by the fact that ψ(g) depends explicitly only upon the parameter(s) of the theory, and not upon the scale μ. Consequently, the above renormalization group equation may be solved for (G and thus) g(μ).
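Solving the flow equation for g(μ) can be illustrated with a minimal numerical sketch. The following Python snippet integrates the one-loop QED flow dα/d ln μ = 2α²/(3π) upward in scale; it is a rough approximation that keeps only the electron loop (the measured value near 1⁄127 at LEP also receives contributions from the other charged fermions), so the numbers are indicative only.

```python
import math

def run_alpha(alpha0, mu0, mu, steps=10000):
    """Euler-integrate the one-loop, electron-only QED flow
    d(alpha)/d(ln mu) = 2 * alpha**2 / (3 * pi) from scale mu0 up to mu."""
    dt = math.log(mu / mu0) / steps
    alpha = alpha0
    for _ in range(steps):
        alpha += dt * 2.0 * alpha**2 / (3.0 * math.pi)
    return alpha

alpha_me = 1 / 137.036                              # value at the electron mass scale
alpha_lep = run_alpha(alpha_me, 0.000511, 200.0)    # scales in GeV: m_e -> 200 GeV
print(1 / alpha_lep)   # smaller than 137: the effective coupling grows with energy
```

The electron-only flow already shows the qualitative trend of the LEP result: 1/α decreases as the energy scale increases.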
A deeper understanding of the physical meaning and generalization of the renormalization process, which goes beyond the dilation group of conventional renormalizable theories, considers methods where widely different scales of lengths appear simultaneously.
It came from condensed matter physics: Leo P. Kadanoff's paper in 1966 proposed the "block-spin" renormalization group.
This approach covered the conceptual point and was given full computational substance in the extensive important work of Kenneth Wilson.
[e] Applications of the RG to particle physics exploded in number in the 1970s with the establishment of the Standard Model.
In 1973,[15][16] it was discovered that a theory of interacting colored quarks, called quantum chromodynamics, had a negative beta function.
Conversely, the coupling becomes weak at very high energies (asymptotic freedom), and the quarks become observable as point-like particles, in deep inelastic scattering, as anticipated by Feynman–Bjorken scaling.
[18] The top quark Yukawa coupling lies slightly below the infrared fixed point of the Standard Model suggesting the possibility of additional new physics, such as sequential heavy Higgs bosons.
This determines the space-time dimensionality of the string theory and enforces Einstein's equations of general relativity on the geometry.
[8] Consider a 2D solid, a set of atoms in a perfect square array, as depicted in the figure.
Further assume that, by some lucky coincidence, the physics of block variables is described by a formula of the same kind, but with different values for T and J: H(T′, J′).
Since the number of atoms in any real sample of material is very large, this is more or less equivalent to finding the long-range behavior of the RG transformation which took (T, J) → (T′, J′) and (T′, J′) → (T″, J″).
To be more concrete, consider a magnetic system (e.g., the Ising model), in which the coupling J denotes the tendency of neighboring spins to align.
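For the one-dimensional Ising model the block-spin idea can be carried out exactly: summing out every second spin (decimation with block size b = 2) maps the dimensionless nearest-neighbor coupling K = J/k_BT to K′ with tanh K′ = (tanh K)². A minimal Python sketch of the resulting flow:

```python
import math

def decimate(K):
    """Exact 1D Ising decimation step: tanh K' = (tanh K)**2."""
    return math.atanh(math.tanh(K) ** 2)

K = 1.0                      # initial dimensionless coupling J / (k_B * T)
for step in range(8):
    K = decimate(K)
print(K)   # the coupling flows toward the K = 0 (high-temperature) fixed point
```

For any finite starting coupling the iteration drives K to zero, reflecting the absence of a finite-temperature phase transition in one dimension.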
Also, most theories in condensed matter physics are approximately renormalizable, from superconductivity to fluid turbulence.
This fact is important as quantum triviality can be used to bound or even predict parameters such as the Higgs boson mass in asymptotic safety scenarios.
This coincidence of critical exponents for ostensibly quite different physical systems, called universality, is easily explained using the renormalization group, by demonstrating that the differences in phenomena among the individual fine-scale components are determined by irrelevant observables, while the relevant observables are shared in common.
Although virtual particles annihilate very quickly, during their short lives the electron will be attracted by the charge, and the positron will be repelled.
Jacques Distler claimed without proof that this ERGE is not correct nonperturbatively.
Its large value for small momenta leads to a suppression of their contribution to the partition function, which is effectively the same as neglecting large-scale fluctuations.
In other words, the relation φ = δW_k/δJ can be inverted to give J_k[φ], and we define the effective average action Γ_k as the modified Legendre transform

Γ_k[φ] = ∫ J_k[φ] φ − W_k[J_k[φ]] − ½ ∫ φ R_k φ.

Hence

∂_k Γ_k[φ] = ½ Tr[(Γ_k⁽²⁾[φ] + R_k)⁻¹ ∂_k R_k],

which is the ERGE also known as the Wetterich equation.
Using the above ansatz, it is possible to solve the renormalization group equation perturbatively and find the effective potential up to the desired order.
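As an illustration, such a truncation can also be integrated numerically. The sketch below assumes the local potential approximation with the Litim regulator in d = 3, where the potential flows as ∂_k U_k = k⁴/(6π²(k² + U_k″)); inserting the ansatz U_k = ½m_k²φ² + (λ_k/4!)φ⁴ and matching powers of φ gives the two coupled flow equations coded here. The initial values at the UV scale are arbitrary choices for the example.

```python
import math

def flow(m2, lam, Lambda=10.0, k_min=0.01, steps=20000):
    """Integrate the truncated Wetterich flow (LPA, Litim regulator, d = 3,
    ansatz U_k = m2/2 * phi**2 + lam/24 * phi**4) from k = Lambda down to k_min:
        dm2/dk  = -k**4 * lam    / (6 * pi**2 * (k**2 + m2)**2)
        dlam/dk =  k**4 * lam**2 / (    pi**2 * (k**2 + m2)**3)
    """
    dk = (Lambda - k_min) / steps
    k = Lambda
    for _ in range(steps):
        B = k * k + m2
        dm2 = -k**4 * lam / (6 * math.pi**2 * B**2)
        dlam = k**4 * lam**2 / (math.pi**2 * B**3)
        m2 -= dk * dm2      # Euler step downward in k
        lam -= dk * dlam
        k -= dk
    return m2, lam

m2_ir, lam_ir = flow(m2=1.0, lam=5.0)   # arbitrary UV initial conditions
print(m2_ir, lam_ir)                    # couplings at the infrared scale k_min
```

Lowering k drives the mass term up and the quartic coupling down, the expected qualitative behavior of fluctuations being integrated out.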