Kantorovich theorem

The Kantorovich theorem, or Newton–Kantorovich theorem, is a mathematical statement on the semi-local convergence of Newton's method.

It was first stated by Leonid Kantorovich in 1948.[1][2] It is similar in form to the Banach fixed-point theorem, although it states existence and uniqueness of a zero rather than a fixed point.[3]

Newton's method constructs a sequence of points that under certain conditions will converge to a solution $x$ of an equation $f(x) = 0$ or a vector solution of a system of equations $F(x) = 0$. The Kantorovich theorem gives conditions on the initial point of this sequence. If those conditions are satisfied, then a solution exists close to the initial point and the sequence converges to that point.
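To make the iteration concrete, here is a minimal sketch of Newton's method for a system $F(x) = 0$ in Python. The example system, the starting point, and the use of NumPy are assumptions for illustration only; the sketch simply implements the recurrence $x_{k+1} = x_k - F'(x_k)^{-1} F(x_k)$ via a linear solve.

```python
import numpy as np

def newton(F, J, x0, tol=1e-12, max_iter=50):
    """Newton iteration x_{k+1} = x_k - F'(x_k)^{-1} F(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        # Solve F'(x) h = -F(x) instead of forming the inverse explicitly.
        h = np.linalg.solve(J(x), -F(x))
        x = x + h
        if np.linalg.norm(h) < tol:
            break
    return x

# Hypothetical example: intersection of the unit circle with the line x = y.
F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])

print(newton(F, J, x0=[0.8, 0.6]))  # converges to (1/sqrt(2), 1/sqrt(2))
```

The same hypothetical system is reused in the sketches below when the quantities appearing in the theorem are evaluated.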

Let $X \subset \mathbb{R}^n$ be an open subset and $F : X \to \mathbb{R}^n$ a differentiable function with a Jacobian $F'(x)$ that is locally Lipschitz continuous (for instance if $F$ is twice differentiable). That is, it is assumed that for any $x \in X$ there is an open subset $U \subset X$ with $x \in U$ and a constant $L > 0$ such that for any $\mathbf{x}, \mathbf{y} \in U$

$$\|F'(\mathbf{x}) - F'(\mathbf{y})\| \le L \, \|\mathbf{x} - \mathbf{y}\|$$

holds. The norm on the left is the operator norm. In other words, for any vector $\mathbf{v} \in \mathbb{R}^n$ the inequality

$$\|F'(\mathbf{x})(\mathbf{v}) - F'(\mathbf{y})(\mathbf{v})\| \le L \, \|\mathbf{x} - \mathbf{y}\| \, \|\mathbf{v}\|$$

must hold.
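As a concrete (hypothetical) illustration of such a constant, take the example system from the sketch above. For any two points $\mathbf{u} = (u_1, u_2)$ and $\mathbf{v} = (v_1, v_2)$,

$$F(x, y) = \begin{pmatrix} x^2 + y^2 - 1 \\ x - y \end{pmatrix}, \qquad F'(x, y) = \begin{pmatrix} 2x & 2y \\ 1 & -1 \end{pmatrix},$$

$$\|F'(\mathbf{u}) - F'(\mathbf{v})\|_2 = \left\| \begin{pmatrix} 2(u_1 - v_1) & 2(u_2 - v_2) \\ 0 & 0 \end{pmatrix} \right\|_2 = 2 \, \|\mathbf{u} - \mathbf{v}\|_2,$$

so in the Euclidean operator norm this Jacobian is globally Lipschitz with $L = 2$.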

Now choose any initial point $x_0 \in X$. Assume that $F'(x_0)$ is invertible and construct the Newton step

$$h_0 = -F'(x_0)^{-1} F(x_0).$$

The next assumption is that not only the next point $x_1 = x_0 + h_0$ but the entire ball $B(x_1, \|h_0\|)$ is contained inside the set $X$. Let $M \le L$ be the Lipschitz constant for the Jacobian over this ball (assuming it exists).
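In floating-point practice the Newton step is obtained from a linear solve rather than an explicit inverse. A minimal sketch, continuing the hypothetical example system above with NumPy:

```python
import numpy as np

F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])

x0 = np.array([0.8, 0.6])             # initial point, assumed to lie in X
h0 = np.linalg.solve(J(x0), -F(x0))   # Newton step h0 = -F'(x0)^{-1} F(x0)
x1 = x0 + h0                          # next iterate

# The ball B(x1, ||h0||) around x1 with radius ||h0|| must lie inside X
# (trivially true here, since X = R^2 in this example).
print("x1 =", x1, " radius =", np.linalg.norm(h0))
```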

As a last preparation, construct recursively, as long as it is possible, the sequences $(x_k)_k$, $(h_k)_k$, $(\alpha_k)_k$ according to

$$\begin{aligned} h_k &= -F'(x_k)^{-1} F(x_k), \\ \alpha_k &= M \, \|F'(x_k)^{-1}\| \, \|h_k\|, \\ x_{k+1} &= x_k + h_k. \end{aligned}$$

Now if $\alpha_0 \le \tfrac{1}{2}$, then a solution $x^*$ of $F(x) = 0$ exists inside the closed ball $\bar B(x_1, \|h_0\|)$, and the Newton iteration starting in $x_0$ converges to $x^*$.

A statement that is more precise but slightly more difficult to prove uses the roots $t^\ast \le t^{\ast\ast}$ of the quadratic polynomial

$$p(t) = \tfrac{1}{2} M \|F'(x_0)^{-1}\| \, t^2 - t + \|h_0\|,$$

that is,

$$t^{\ast/\ast\ast} = \frac{1 \mp \sqrt{1 - 2\alpha_0}}{\alpha_0} \, \|h_0\| = \frac{2 \|h_0\|}{1 \pm \sqrt{1 - 2\alpha_0}},$$

and their ratio

$$\theta = \frac{t^\ast}{t^{\ast\ast}} = \frac{1 - \sqrt{1 - 2\alpha_0}}{1 + \sqrt{1 - 2\alpha_0}}.$$

Then a solution $x^*$ exists inside the closed ball $\bar B(x_1, \theta \|h_0\|) \subset \bar B(x_0, t^\ast)$, it is unique inside the bigger ball $B(x_0, t^{\ast\ast})$, and the convergence of the Newton iterates to $x^*$ is dominated by the Newton iteration applied to the polynomial $p$: if $t_0 = 0$ and $t_{k+1} = t_k - p(t_k)/p'(t_k)$, then $\|x_{k+1} - x_k\| \le t_{k+1} - t_k$, and $(t_k)$ converges quadratically to the smallest root $t^\ast$.
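As a rough numerical check of the hypothesis (not a proof-grade verification), the quantities of the theorem can be evaluated for the hypothetical example system above, using the Euclidean operator norm, the constant $L = 2$ derived earlier, and the crude choice $M = L$:

```python
import numpy as np

F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])
M = 2.0                                        # Lipschitz constant of the Jacobian (taking M = L)

x0 = np.array([0.8, 0.6])
h0 = np.linalg.solve(J(x0), -F(x0))            # Newton step
B = np.linalg.norm(np.linalg.inv(J(x0)), 2)    # ||F'(x0)^{-1}|| in the operator 2-norm
alpha0 = M * B * np.linalg.norm(h0)            # alpha_0 = M ||F'(x0)^{-1}|| ||h0||

if alpha0 <= 0.5:
    s = np.sqrt(1.0 - 2.0 * alpha0)
    t_small = (1.0 - s) / alpha0 * np.linalg.norm(h0)   # smaller root t*
    t_large = (1.0 + s) / alpha0 * np.linalg.norm(h0)   # larger root t**
    theta = t_small / t_large                           # ratio theta = t*/t**
    print(f"alpha0 = {alpha0:.3f} <= 1/2:"
          f" existence in the closed ball of radius {t_small:.3f} around x0,"
          f" uniqueness in radius {t_large:.3f}, theta = {theta:.3f}")
else:
    print(f"alpha0 = {alpha0:.3f} > 1/2: the Kantorovich condition fails at this x0")
```

For this particular starting point the condition $\alpha_0 \le \tfrac{1}{2}$ holds, so the theorem guarantees a zero of $F$ near $x_0$ and convergence of the Newton iterates to it.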

In 1986, Yamamoto proved that the error evaluations of the Newton method such as Doring (1969), Ostrowski (1971, 1973),[6][7] Gragg-Tapia (1974), Potra-Ptak (1980),[8] Miel (1981),[9] and Potra (1984)[10] can be derived from the Kantorovich theorem.[11]

There is a q-analog for the Kantorovich theorem.[12][13] For other generalizations/variations, see Ortega & Rheinboldt (1970).[14]

Oishi and Tanabe claimed that the Kantorovich theorem can be applied to obtain reliable solutions of linear programming.