Orthogonal polynomials

The field of orthogonal polynomials developed in the late 19th century from a study of continued fractions by P. L. Chebyshev and was pursued by A. A. Markov and T. J. Stieltjes.

They appear in a wide variety of fields: numerical analysis (quadrature rules), probability theory, representation theory (of Lie groups, quantum groups, and related objects), enumerative combinatorics, algebraic combinatorics, mathematical physics (the theory of random matrices, integrable systems, etc.), and number theory.

Some of the mathematicians who have worked on orthogonal polynomials include Gábor Szegő, Sergei Bernstein, Naum Akhiezer, Arthur Erdélyi, Yakov Geronimus, Wolfgang Hahn, Theodore Seio Chihara, Mourad Ismail, Waleed Al-Salam, Richard Askey, and Rehuel Lobatto.

Given any non-decreasing function α on the real numbers, we can define the Lebesgue–Stieltjes integral $\int f(x)\,d\alpha(x)$ of a function f. If this integral is finite for all polynomials f, we can define an inner product on pairs of polynomials f and g by $\langle f, g \rangle = \int f(x)\,g(x)\,d\alpha(x)$.

This operation is a positive semidefinite inner product on the vector space of all polynomials, and is positive definite if the function α has an infinite number of points of growth.

Then the sequence $(P_n)_{n=0}^{\infty}$ of orthogonal polynomials is defined by the relations $\deg P_n = n$ and $\langle P_m, P_n \rangle = 0$ for $m \neq n$.

Usually the sequence is required to be orthonormal, namely, $\langle P_n, P_n \rangle = 1$, although other normalizations are sometimes used.

Sometimes the measure has the form $d\alpha(x) = W(x)\,dx$, where W is a non-negative function with support on some interval [x1, x2] in the real line (where x1 = −∞ and x2 = ∞ are allowed). Such a W is called a weight function.
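As a concrete illustration, assuming for the example the weight W(x) = 1 on [−1, 1] (the Legendre case), the inner product of two polynomials can be computed exactly from the integrals of the monomials, and the Legendre polynomials P2 and P3 indeed come out orthogonal:

```python
import numpy as np

def inner(f, g):
    """Inner product <f, g> = integral of f(x) g(x) dx over [-1, 1],
    i.e. weight W(x) = 1.

    f, g are coefficient arrays in increasing degree (numpy convention).
    The integral of x^k over [-1, 1] is 2/(k+1) for even k and 0 for
    odd k, so the result is exact, with no numerical quadrature needed.
    """
    prod = np.polynomial.polynomial.polymul(f, g)
    return sum(c * 2.0 / (k + 1) for k, c in enumerate(prod) if k % 2 == 0)

# Legendre polynomials P2 = (3x^2 - 1)/2 and P3 = (5x^3 - 3x)/2
P2 = [-0.5, 0.0, 1.5]
P3 = [0.0, -1.5, 0.0, 2.5]

print(inner(P2, P3))   # 0.0 -- orthogonal
print(inner(P2, P2))   # 0.4 -- squared norm, equal to 2/(2n+1) for n = 2
```

With this normalization the squared norms are 2/(2n + 1) rather than 1, i.e. the classical Legendre polynomials are orthogonal but not orthonormal.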

However, there are many examples of orthogonal polynomials where the measure dα(x) has points with non-zero measure where the function α is discontinuous, and so it cannot be given by a weight function W as above.

Sometimes the measure has finite support, in which case the family of orthogonal polynomials is finite, rather than an infinite sequence.

Meixner classified all the orthogonal Sheffer sequences: there are only the Hermite, Laguerre, Charlier, Meixner, and Meixner–Pollaczek polynomials. In some sense the Krawtchouk polynomials should be on this list too, but they form a finite sequence.

These six families correspond to the natural exponential families with quadratic variance functions (NEF-QVFs) and are martingale polynomials for certain Lévy processes.

One can also consider orthogonal polynomials for some curve in the complex plane.

The most important case (other than real intervals) is when the curve is the unit circle, giving orthogonal polynomials on the unit circle, such as the Rogers–Szegő polynomials.

There are also families of polynomials orthogonal over plane regions; these can sometimes be written in terms of Jacobi polynomials.

For example, Zernike polynomials are orthogonal on the unit disk.

The orthogonality between different orders of Hermite polynomials is exploited in the generalized frequency division multiplexing (GFDM) structure.

More than one symbol can then be carried in each cell of the time–frequency lattice.

Orthogonal polynomials of one variable defined by a non-negative measure on the real line have the following properties.[2]

The orthogonal polynomials Pn can be expressed in terms of the moments $m_k = \int x^k \, d\alpha(x)$ as follows:

$$P_n(x) = c_n \det \begin{bmatrix} m_0 & m_1 & \cdots & m_n \\ m_1 & m_2 & \cdots & m_{n+1} \\ \vdots & \vdots & \ddots & \vdots \\ m_{n-1} & m_n & \cdots & m_{2n-1} \\ 1 & x & \cdots & x^n \end{bmatrix},$$

where the constants cn are arbitrary (they depend on the normalization of Pn).
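A sketch of this moment-determinant construction, assuming for illustration the measure dα = dx on [−1, 1] (so the moments are 2/(k+1) for even k and 0 for odd k); expanding the determinant along its last row, whose entries are the powers of x, gives the coefficients of Pn up to the scale factor cn:

```python
import numpy as np

def moment(k):
    """Moments m_k = integral of x^k dx over [-1, 1] (an illustrative choice)."""
    return 2.0 / (k + 1) if k % 2 == 0 else 0.0

def det_poly(n):
    """Coefficients of P_n (up to the scale c_n) from the moment determinant.

    The last row of the matrix is [1, x, ..., x^n], so expanding along it
    by cofactors makes the coefficient of x^j equal to (-1)^(n+j) times the
    determinant of the moment block with column j deleted.
    """
    top = np.array([[moment(i + j) for j in range(n + 1)] for i in range(n)])
    coeffs = []
    for j in range(n + 1):
        minor = np.delete(top, j, axis=1)           # drop column j
        coeffs.append((-1) ** (n + j) * np.linalg.det(minor))
    return np.array(coeffs)                          # increasing powers of x

p2 = det_poly(2)
print(p2 / p2[-1])   # normalized to monic: x^2 - 1/3, i.e. monic Legendre P_2
```

Dividing by the leading coefficient fixes the normalization to monic; any other choice of cn rescales the same polynomial.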

This comes directly from applying the Gram–Schmidt process to the monomials, requiring each polynomial to be orthogonal to all the previous ones.

For example, orthogonality with $P_0$ prescribes that $P_1$ must have the form $P_1(x) = c_1 \left( x - \frac{m_1}{m_0} \right)$, which can be seen to be consistent with the previously given determinant expression.
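The Gram–Schmidt construction can be sketched as follows, assuming for illustration the measure dα = dx on [−1, 1]; each monomial has its projections onto the earlier polynomials subtracted off, and the resulting monic polynomials are the monic Legendre polynomials:

```python
import numpy as np
from numpy.polynomial import polynomial as P

def inner(f, g):
    """<f, g> = integral of f g dx over [-1, 1], exact via monomial integrals."""
    prod = P.polymul(f, g)
    return sum(c * 2.0 / (k + 1) for k, c in enumerate(prod) if k % 2 == 0)

def gram_schmidt(n):
    """Monic orthogonal polynomials P_0..P_n for d(alpha) = dx on [-1, 1]."""
    basis = []
    for k in range(n + 1):
        p = np.zeros(k + 1)
        p[k] = 1.0                                   # start from the monomial x^k
        for q in basis:                              # subtract projections onto
            p = P.polysub(p, inner(p, q) / inner(q, q) * q)   # earlier P_j
        basis.append(p)
    return basis

polys = gram_schmidt(3)
print(polys[2])   # x^2 - 1/3   (monic Legendre P_2)
print(polys[3])   # x^3 - 3x/5  (monic Legendre P_3)
```

Because the leading monomial is never touched by the subtractions, each output is automatically monic; rescaling by the norms would give the orthonormal sequence instead.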

The polynomials Pn satisfy a recurrence relation of the form

$$P_n(x) = (A_n x + B_n) P_{n-1}(x) + C_n P_{n-2}(x),$$

where $A_n$ is not 0.
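For instance, the Chebyshev polynomials of the first kind (orthogonal for the weight $1/\sqrt{1-x^2}$ on [−1, 1]) satisfy this recurrence with $A_n = 2$, $B_n = 0$, and $C_n = -1$; a minimal sketch:

```python
import numpy as np
from numpy.polynomial import polynomial as P

def chebyshev(n):
    """Chebyshev polynomials T_0..T_n via T_k(x) = 2x T_{k-1}(x) - T_{k-2}(x).

    Coefficient arrays are in increasing degree (numpy convention).
    """
    T = [np.array([1.0]), np.array([0.0, 1.0])]      # T_0 = 1, T_1 = x
    for _ in range(2, n + 1):
        T.append(P.polysub(P.polymul([0.0, 2.0], T[-1]), T[-2]))
    return T[: n + 1]

print(chebyshev(4)[4])   # T_4 = 8x^4 - 8x^2 + 1 -> [1., 0., -8., 0., 8.]
```

The same three-term pattern, with different An, Bn, Cn, generates every family of orthogonal polynomials on the real line, which is what makes the recurrence so useful for numerical evaluation.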

If the measure dα is supported on an interval [a, b], all the zeros of Pn lie in [a, b].
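This can be checked numerically; for the Legendre polynomials (orthogonal with respect to dα = dx on [−1, 1]) all zeros of the degree-5 polynomial fall inside the interval:

```python
import numpy as np

# Zeros of the degree-5 Legendre polynomial, whose orthogonality measure
# is supported on [-1, 1]; all five zeros must therefore lie in [-1, 1].
zeros = np.sort(np.polynomial.legendre.Legendre.basis(5).roots())
print(zeros)   # five real zeros, symmetric about 0, all inside [-1, 1]
```

These zeros are exactly the nodes of 5-point Gauss–Legendre quadrature, which is one reason the location of the zeros matters in numerical analysis.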

From the 1980s, with the work of X. G. Viennot, J. Labelle, Y.-N. Yeh, D. Foata, and others, combinatorial interpretations were found for all the classical orthogonal polynomials.

The Macdonald polynomials are orthogonal polynomials in several variables, depending on the choice of an affine root system.[3]

Multiple orthogonal polynomials are polynomials in one variable that are orthogonal with respect to a finite family of measures.

Including derivatives in the inner product has significant consequences for the polynomials: in general, they no longer share some of the nice features of the classical orthogonal polynomials.