Jenkins–Traub algorithm

Jenkins and Traub gave two variants: one for general polynomials with complex coefficients, commonly known as the "CPOLY" algorithm, and a more complicated variant for the special case of polynomials with real coefficients, commonly known as the "RPOLY" algorithm.

The Jenkins–Traub algorithm has stimulated considerable research on theory and software for methods of this type.

The algorithm starts by checking the polynomial for the occurrence of very large or very small roots.

In the algorithm, the roots are found one by one, generally in order of increasing magnitude.

After each root is found, the polynomial is deflated by dividing off the corresponding linear factor.
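
Dividing off a linear factor amounts to one pass of synthetic division. A minimal sketch in Python (the name `deflate` is illustrative, not the published routine):

```python
def deflate(coeffs, root):
    """Divide a polynomial (coefficients highest degree first) by the
    linear factor (x - root) via synthetic division.  The remainder is
    dropped, which is justified when `root` is an (approximate) root."""
    quotient = [coeffs[0]]
    for c in coeffs[1:-1]:
        quotient.append(c + root * quotient[-1])
    return quotient

# p(x) = x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3); remove the root 1
print(deflate([1, -6, 11, -6], 1.0))  # -> [1, -5.0, 6.0], i.e. x^2 - 5x + 6
```

Because each deflation uses an already-computed root, errors in that root propagate into the deflated coefficients, which is why the order in which roots are removed matters (see the remarks on Wilkinson below).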

The root-finding procedure has three stages that correspond to different variants of the inverse power iteration.[4]

Starting with the current polynomial P(X) of degree n, the aim is to compute the smallest root of P(X).

The main idea of the Jenkins–Traub method is to incrementally improve an approximation to the second factor, that is, the quotient of P(X) by the linear factor of its smallest root.
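
One fixed-shift step of this improvement can be sketched as follows, assuming P is monic and H approximates the degree n − 1 cofactor (names are illustrative; a production implementation also rescales H at each step to avoid coefficient growth, which this sketch omits):

```python
def poly_eval(coeffs, s):
    """Horner evaluation; coefficients highest degree first."""
    acc = 0.0
    for c in coeffs:
        acc = acc * s + c
    return acc

def deflate(coeffs, root):
    """Exact synthetic division by (z - root), remainder discarded."""
    q = [coeffs[0]]
    for c in coeffs[1:-1]:
        q.append(c + root * q[-1])
    return q

def jt_step(P, H, s):
    """One fixed-shift update of the H polynomial,
        H_next(z) = (H(z) - (H(s) / P(s)) * P(z)) / (z - s),
    for monic P with deg H = deg P - 1 (coefficients highest first).
    The numerator vanishes at z = s, so the division is exact."""
    c = poly_eval(H, s) / poly_eval(P, s)
    G = [-c * P[0]] + [h - c * p for h, p in zip(H, P[1:])]
    return deflate(G, s)

# P(z) = (z - 1)(z - 2); start from H = P', a common initial choice
P = [1.0, -3.0, 2.0]
H = [2.0, -3.0]
for _ in range(20):
    H = jt_step(P, H, 0.5)      # fixed shift s = 0.5, near the smallest root 1
print(H[1] / H[0])              # ratio tends to -2: H aligns with the cofactor z - 2
```

As the normalized H converges to the cofactor of the smallest root, that root can be read off from the remaining linear factor.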

Algorithmically, one would use long division by the linear factor, as in the Horner scheme or Ruffini rule, to evaluate the polynomials at the current shift point and to obtain the quotient polynomials at the same time.
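
A single Horner pass yields the value and the quotient of the long division simultaneously; a minimal sketch (the name `horner_divide` is illustrative):

```python
def horner_divide(coeffs, s):
    """One Horner pass over p (coefficients highest degree first):
    returns (q, r) with p(X) = (X - s) * q(X) + r, so r = p(s) and q is
    the quotient -- value and quotient come from the same long division."""
    acc = 0.0
    out = []
    for c in coeffs:
        acc = acc * s + c
        out.append(acc)
    return out[:-1], out[-1]

q, r = horner_divide([1.0, -6.0, 11.0, -6.0], 2.0)
print(q, r)  # -> [1.0, -4.0, 3.0] 0.0  (2 is a root, so the remainder vanishes)
```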

The shift for this stage is determined as some point close to the smallest root of the polynomial.

It is quasi-randomly located on the circle with the inner root radius, which in turn is estimated as the positive solution R of the equation |a_n| R^n + |a_(n−1)| R^(n−1) + … + |a_1| R = |a_0|, where a_0, …, a_n are the coefficients of P.

Since the left side is a convex function and increases monotonically from zero to infinity, this equation is easy to solve, for instance by Newton's method.
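
Such a Newton iteration for the radius might look like the following sketch (assuming P has degree at least 1; the starting value is the Cauchy upper bound on the root moduli, from which Newton's method on this convex increasing function descends monotonically):

```python
def inner_root_radius(coeffs, tol=1e-12, max_iter=100):
    """Estimate the inner root radius: the unique positive solution R of
        |a_n| R^n + |a_(n-1)| R^(n-1) + ... + |a_1| R = |a_0|,
    a lower bound on the modulus of every root of P.
    coeffs: highest degree first, degree >= 1."""
    a = [abs(c) for c in coeffs]
    b = a[:-1] + [0.0]                  # left-hand side |a_n| R^n + ... + |a_1| R
    R = 1.0 + max(a[1:]) / a[0]         # Cauchy upper bound on all root moduli
    for _ in range(max_iter):
        g, dg = 0.0, 0.0
        for c in b:                     # Horner pass for value and derivative
            dg = dg * R + g
            g = g * R + c
        g -= a[-1]                      # g(R) = left side - |a_0|
        step = g / dg
        R -= step                       # Newton update
        if abs(step) <= tol * abs(R):
            break
    return R

# P(x) = (x - 1)(x - 2)(x - 3): the estimate lies below the smallest root 1
print(inner_root_radius([1.0, -6.0, 11.0, -6.0]))
```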

This creates an asymmetry relative to the previous stage which increases the chance that the H polynomial moves towards the cofactor of a single root.

This limits the relative step size of the iteration, ensuring that the approximation sequence stays in the range of the smaller roots.

Typically, nine iterations are used for polynomials of moderate degree, with a doubling strategy in the case of multiple failures.

If all roots are different, then the Lagrange factors form a basis of the space of polynomials of degree at most n − 1.

By analysis of the recursion procedure one finds that the H polynomials have the coordinate representation

one gets the asymptotic estimates for the three stages.

All stages of the Jenkins–Traub complex algorithm may be represented as the linear algebra problem of determining the eigenvalues of a special matrix.

This matrix is the coordinate representation of a linear map in the n-dimensional space of polynomials of degree n − 1 or less.

One may interpret the computation of the remaining factor of degree n − 1 as the eigenvector equation for the multiplication with the variable X, followed by remainder computation with divisor P(X).
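
This map can be written down concretely. A small pure-Python sketch (the name `multiply_by_x_mod_p` is illustrative) builds its matrix in the monomial basis, which is the companion matrix of P:

```python
def multiply_by_x_mod_p(coeffs):
    """Coordinate matrix (list of rows) of the linear map
    H(X) -> X * H(X) mod P(X) on the n-dimensional space of polynomials
    of degree at most n - 1, in the monomial basis (1, X, ..., X^(n-1)).
    coeffs: monic P, highest degree first.  The result is the companion
    matrix of P, whose eigenvalues are exactly the roots of P."""
    n = len(coeffs) - 1
    M = [[0.0] * n for _ in range(n)]
    for k in range(n - 1):
        M[k + 1][k] = 1.0               # X * X^k = X^(k+1) while k < n - 1
    low = coeffs[::-1]                  # a_0, a_1, ..., a_n = 1
    for i in range(n):
        M[i][n - 1] = -low[i]           # X^n mod P = -(a_0 + ... + a_(n-1) X^(n-1))
    return M

# P(X) = X^3 - 6X^2 + 11X - 6 = (X - 1)(X - 2)(X - 3)
M = multiply_by_x_mod_p([1.0, -6.0, 11.0, -6.0])
h = [6.0, -5.0, 1.0]                    # cofactor P(X)/(X - 1) = 6 - 5X + X^2
print([sum(M[i][j] * h[j] for j in range(3)) for i in range(3)])
# -> [6.0, -5.0, 1.0]: an eigenvector with eigenvalue 1, the corresponding root
```

The eigenvectors of this matrix are the cofactors P(X)/(X − root), which is why the convergence of the H polynomials toward a cofactor can be read as an inverse power iteration on this matrix.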

The Jenkins–Traub algorithm described earlier works for polynomials with complex coefficients.

The same authors also created a three-stage algorithm for polynomials with real coefficients.

See Jenkins and Traub, "A Three-Stage Algorithm for Real Polynomials Using Quadratic Iteration."[5]

The algorithm finds either a linear or a quadratic factor, working completely in real arithmetic.
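
Staying in real arithmetic means deflating a complex-conjugate pair of roots by its real quadratic factor. A minimal sketch of such a division (illustrative, not the published RPOLY code):

```python
def divide_by_quadratic(coeffs, u, v):
    """Synthetic division of a real polynomial (coefficients highest
    degree first, degree >= 2) by x^2 + u*x + v, entirely in real
    arithmetic.  Returns (quotient, (r1, r0)) with remainder r1*x + r0.
    A conjugate pair of complex roots is removed by one such division,
    so deflation never has to leave the real numbers."""
    q0, q1 = 0.0, 0.0          # the two most recent quotient coefficients
    quotient = []
    for a in coeffs[:-2]:
        q0, q1 = a - u * q0 - v * q1, q0
        quotient.append(q0)
    r1 = coeffs[-2] - u * q0 - v * q1
    r0 = coeffs[-1] - v * q0
    return quotient, (r1, r0)

# p(x) = (x^2 + 1)(x - 2) has the conjugate pair +/- i; divide by x^2 + 1
print(divide_by_quadratic([1.0, -2.0, 1.0, -2.0], 0.0, 1.0))
# -> ([1.0, -2.0], (0.0, 0.0)): quotient x - 2, zero remainder
```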

There is a surprising connection with the shifted QR algorithm for computing matrix eigenvalues.[6]

Again, the shifts may be viewed as Newton–Raphson iteration on a sequence of rational functions converging to a first-degree polynomial.

As predicted, they enjoy faster-than-quadratic convergence for all distributions of zeros.

Wilkinson recommends computing the smaller zeros first, as this is desirable for stable deflation.

The second-stage shifts are chosen so that the zeros on the smaller half circle are found first.

After deflation the polynomial with the zeros on the half circle is known to be ill-conditioned if the degree is large; see Wilkinson,[10] p. 64.