Regularization (physics)

In physics, especially quantum field theory, regularization is a method of modifying observables which have singularities in order to make them finite by the introduction of a suitable parameter called the regulator.

It is distinct from renormalization, another technique for controlling infinities, which does so without assuming new physics by adjusting for self-interaction feedback.

Regularization was for many decades controversial even amongst its inventors, as it combines physical and epistemological claims into the same equations.

Regularization procedures deal with infinite, divergent, and nonsensical expressions by introducing an auxiliary concept of a regulator (for example, a minimal distance ε in space, useful when the divergences arise from short-distance physical effects). The correct physical result is obtained in the limit in which the regulator is removed (here, ε → 0), but the virtue of the regulator is that for any finite value it yields a finite result.

Regularization is the first step towards obtaining a completely finite and meaningful result; in quantum field theory it must usually be followed by a related but independent technique called renormalization.
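As a schematic illustration (a standard textbook example, not drawn from the sources cited below), consider a logarithmically divergent one-loop integral regulated by a momentum cutoff Λ, which plays the same role as the inverse of a minimal distance, Λ ~ 1/ε:

\int^{\Lambda} \frac{d^4 k_E}{(2\pi)^4}\,\frac{1}{(k_E^2+m^2)^2} \;=\; \frac{1}{16\pi^2}\left[\ln\frac{\Lambda^2+m^2}{m^2}-\frac{\Lambda^2}{\Lambda^2+m^2}\right].

For any finite Λ the result is finite; the divergence reappears only as ln Λ² when the regulator is removed, and renormalization absorbs precisely this piece into a redefinition of the couplings, so that physical quantities have a finite, regulator-independent limit as Λ → ∞ (equivalently, ε → 0).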

The existence of a limit as ε goes to zero and the independence of the final result from the regulator are nontrivial facts.

The underlying reason for them lies in universality, as shown by Kenneth Wilson and Leo Kadanoff, and in the existence of a second-order phase transition.

Sometimes, however, the regulator cannot be removed consistently, and the theory is said to possess an anomaly. Anomalous theories have been studied in great detail and are often founded on the celebrated Atiyah–Singer index theorem or variations thereof (see, for example, the chiral anomaly).

The problem of infinities first arose in the classical electrodynamics of point particles in the 19th and early 20th century.
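As a minimal worked example (standard electrostatics, included only for illustration), the energy stored in the Coulomb field of a charge e distributed over a sphere of radius r_0 is

E_{\text{self}} = \frac{e^2}{8\pi\varepsilon_0 r_0},

which diverges as r_0 → 0: the finite radius r_0 acts as a regulator, and the idealized point particle is recovered only at the price of an infinite self-energy.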

Classical physics thus breaks down at small scales, as seen in the difference between a physical electron and an idealized point particle.

This is precisely the motivation behind string theory and other multi-dimensional models including multiple time dimensions.

Specific types of regularization procedures include dimensional regularization, Pauli–Villars regularization, lattice regularization, and zeta-function regularization.

Perturbative predictions by quantum field theory about quantum scattering of elementary particles, implied by a corresponding Lagrangian density, are computed using the Feynman rules, a regularization method to circumvent ultraviolet divergences so as to obtain finite results for Feynman diagrams containing loops, and a renormalization scheme.

These are independent of the particular regularization method used, and enable one to model perturbatively the measurable physical processes (cross sections, probability amplitudes, decay widths and lifetimes of excited states).

However, so far no known regularized n-point Green's functions can be regarded as being based on a physically realistic theory of quantum scattering, since the derivation of each disregards some of the basic tenets of conventional physics (e.g., by not being Lorentz-invariant, by introducing unphysical particles with a negative metric or with the wrong statistics, by using a discrete space-time, by lowering the dimensionality of space-time, or by some combination thereof).

So the available regularization methods are understood as formalistic technical devices, devoid of any direct physical meaning.
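Dimensional regularization is a typical example of such a formal device (the result below is standard and quoted only for illustration): the loop integral is continued analytically to d space-time dimensions, where it is finite,

\int \frac{d^d k_E}{(2\pi)^d}\,\frac{1}{(k_E^2+m^2)^2} \;=\; \frac{\Gamma\!\left(2-\tfrac{d}{2}\right)}{(4\pi)^{d/2}}\,(m^2)^{\tfrac{d}{2}-2},

and the ultraviolet divergence reappears as the pole of the Gamma function, Γ(2 − d/2) ≈ 2/(4 − d) − γ_E, as d → 4; a space-time of non-integer dimension is not meant to be taken as physically real.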

For a history and comments on this more than half-a-century-old open conceptual problem, see e.g.[3][4][5]

As it seems that the vertices of non-regularized Feynman series adequately describe interactions in quantum scattering, it is taken that their ultraviolet divergences are due to the asymptotic, high-energy behavior of the Feynman propagators.

This is the reasoning behind the formal Pauli–Villars covariant regularization, which modifies the Feynman propagators through auxiliary unphysical particles.
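Schematically (a textbook form of the construction, not a quotation from the sources cited here), each propagator is replaced by the difference

\frac{1}{k^2-m^2+i\epsilon}\;\longrightarrow\;\frac{1}{k^2-m^2+i\epsilon}-\frac{1}{k^2-M^2+i\epsilon}\;=\;\frac{m^2-M^2}{(k^2-m^2+i\epsilon)(k^2-M^2+i\epsilon)},

where M is the mass of an auxiliary, unphysical (negative-metric) particle. The modified propagator falls off as 1/k^4 at large momenta, so loop integrals become finite, and the limit M → ∞ is taken at the end of the calculation.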

In 1949 Pauli conjectured there is a realistic regularization, which is implied by a theory that respects all the established principles of contemporary physics.

By contrast, any present regularization method introduces formal coefficients that must eventually be disposed of by renormalization.

Dirac remarked, "I am inclined to suspect that the renormalization theory is something that will not survive in the future,…"[8] He further observed that "One can distinguish between two main procedures for a theoretical physicist."

"These long years of frustration have left in the subject a curious affection for the infinities and a passionate belief that they are an inevitable part of nature; so much so that even the suggestion of a hope that they may after all be circumvented - and finite values for the renormalization constants computed - is considered irrational.

"[10][11] However, in Gerard ’t Hooft’s opinion, "History tells us that if we hit upon some obstacle, even if it looks like a pure formality or just a technical complication, it should be carefully scrutinized.

"[8] According to Dirac, "Quantum electrodynamics is the domain of physics that we know most about, and presumably it will have to be put in order before we can hope to make any fundamental progress with other field theories, although these will continue to develop on the experimental basis.

[8][9] The path-integral formulation provides the most direct way from the Lagrangian density to the corresponding Feynman series in its Lorentz-invariant form.
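In its standard textbook form (stated here only as background, not as a claim of the cited authors), the series follows from the generating functional

Z[J] \;=\; \int \mathcal{D}\varphi\,\exp\!\left(\frac{i}{\hbar}\int d^4x\,\big[\mathcal{L}(\varphi,\partial_\mu\varphi)+J(x)\varphi(x)\big]\right),

whose expansion in powers of the coupling, via functional derivatives with respect to J, reproduces the Feynman rules in a manifestly Lorentz-invariant form.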

According to Bjorken and Drell, it would make physical sense to sidestep ultraviolet divergences by using a more detailed description than can be provided by differential field equations.

And Feynman noted about the use of differential equations: "... for neutron diffusion it is only an approximation that is good when the distance over which we are looking is large compared with the mean free path."

Feynman's preceding remark provides a possible physical reason for its existence; either that or it is just another way of saying the same thing (there is a fundamental unit of distance) but having no new information.
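A fundamental unit of distance can be made concrete through lattice regularization (mentioned here only as an illustration of the idea): fields live on a lattice of spacing a, and derivatives are replaced by finite differences,

\partial_\mu\varphi(x)\;\longrightarrow\;\frac{\varphi(x+a\hat{\mu})-\varphi(x)}{a},

so that momenta are bounded by roughly π/a and loop integrals are finite; removing the regulator corresponds to the continuum limit a → 0.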

Infinities of the non-gravitational forces in QFT can be controlled via renormalization alone, but additional regularization, and hence new physics, is required uniquely for gravity.

A. Zee (Quantum Field Theory in a Nutshell, 2003) considers this to be a benefit of the regularization framework—theories can work well in their intended domains but also contain information about their own limitations and point clearly to where new physics is needed.