The first complete real-root isolation algorithm results from Sturm's theorem (1829).
Since the beginning of the 20th century there has been much research activity aimed at improving the algorithms derived from Descartes' rule of signs, obtaining very efficient implementations, and determining their computational complexities.
It follows that, except maybe for very low degrees, a root-isolation procedure cannot give reliable results without using exact arithmetic.
The second reason for considering only square-free polynomials is that the fastest root-isolation algorithms do not work in the case of multiple roots.
Otherwise, it is kept in the working list for further divisions, and the process may continue until all roots are eventually isolated. The first complete root-isolation procedure results from Sturm's theorem (1829), which expresses the number of real roots in an interval in terms of the number of sign variations of the values of a sequence of polynomials, called Sturm's sequence, at the ends of the interval.
When root-isolation algorithms were implemented on computers, it appeared that root isolation with Sturm's theorem is less efficient than the other methods that are described below.
[3] Consequently, Sturm's theorem is rarely used for effective computations, although it remains useful for theoretical purposes.
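Sturm's approach can be sketched as follows. This is a minimal illustration using exact rational arithmetic; the function names and the example polynomial x^3 − 7x + 7 are my own choices, not taken from the sources cited above.

```python
from fractions import Fraction

def eval_poly(coeffs, x):
    """Evaluate a polynomial given by descending coefficients (Horner's rule)."""
    v = Fraction(0)
    for c in coeffs:
        v = v * x + c
    return v

def poly_rem(a, b):
    """Remainder of the Euclidean division of a by b (descending coefficients)."""
    a = [Fraction(c) for c in a]
    while len(a) >= len(b):
        f = a[0] / Fraction(b[0])
        for i, bc in enumerate(b):
            a[i] -= f * bc
        a.pop(0)                 # the leading coefficient is now zero
    while a and a[0] == 0:       # strip any remaining leading zeros
        a.pop(0)
    return a

def sturm_sequence(p):
    """Sturm's sequence p, p', -rem(p, p'), -rem(p', -rem(p, p')), ..."""
    deg = len(p) - 1
    seq = [[Fraction(c) for c in p],
           [Fraction(c * (deg - i)) for i, c in enumerate(p[:-1])]]
    while True:
        r = poly_rem(seq[-2], seq[-1])
        if not r:                # zero remainder: the sequence is complete
            return seq
        seq.append([-c for c in r])

def sturm_count(p, a, b):
    """Number of distinct real roots of p in the half-open interval (a, b]."""
    seq = sturm_sequence(p)
    def variations(x):
        signs = [v > 0 for v in (eval_poly(q, x) for q in seq) if v != 0]
        return sum(1 for s, t in zip(signs, signs[1:]) if s != t)
    return variations(a) - variations(b)
```

For instance, x^3 − 7x + 7 (roots near −3.05, 1.36, and 1.69) gives `sturm_count([1, 0, -7, 7], 0, 2) == 2`, and bisecting the interval (0, 2) would then separate the two roots it contains.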
This has been generalized by Budan's theorem (1807) into a similar result for the real roots in a half-open interval (a, b]: if f(x) is a polynomial, and v is the difference between the numbers of sign variations of the sequences of coefficients of f(x + a) and f(x + b), then v minus the number of real roots in the interval, counted with their multiplicities, is a nonnegative even integer.
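Budan's bound can be checked directly by shifting the polynomial and counting sign variations. In this sketch the shift is done by repeated synthetic division (a standard Taylor-shift technique); the example polynomial x^3 − 7x + 7, which has exactly two roots in (0, 2], is my own.

```python
def sign_variations(coeffs):
    """Sign changes in a coefficient sequence, zero coefficients ignored."""
    signs = [c > 0 for c in coeffs if c != 0]
    return sum(1 for s, t in zip(signs, signs[1:]) if s != t)

def taylor_shift(coeffs, a):
    """Descending coefficients of f(x + a), by repeated synthetic division:
    the successive remainders are the Taylor coefficients of f at a."""
    c = list(coeffs)
    out = []
    for _ in range(len(coeffs)):
        for i in range(1, len(c)):
            c[i] += a * c[i - 1]
        out.append(c.pop())
    return out[::-1]

# Budan's bound for f(x) = x^3 - 7x + 7 on the half-open interval (0, 2]:
f = [1, 0, -7, 7]
v = sign_variations(taylor_shift(f, 0)) - sign_variations(taylor_shift(f, 2))
# v = 2, and f has exactly two real roots in (0, 2], so the defect is 0,
# a nonnegative even integer, as Budan's theorem asserts.
```

Here f(x + 2) = x^3 + 6x^2 + 5x + 1 has no sign variation, while f itself has two, so v = 2.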
Vincent's theorem (1834)[4] provides a method for real-root isolation, which is at the basis of the most efficient real-root-isolation algorithms.
Vincent's theorem — Let c_1, c_2, … be a sequence of positive real numbers, and let a_k/b_k be the kth convergent of the continued fraction c_1 + 1/(c_2 + 1/(c_3 + ⋯)). If p(x) is a square-free polynomial of degree n, there exists an integer k such that the transformed polynomial (b_k x + b_{k−1})^n p((a_k x + a_{k−1})/(b_k x + b_{k−1})) has either zero or one sign variation in the sequence of its coefficients; in the first case the interval with endpoints a_{k−1}/b_{k−1} and a_k/b_k contains no real root of p, and in the second case it contains exactly one.
The "small enough" condition has been quantified independently by Nikola Obreshkov[5] and Alexander Ostrowski:[6] Theorem (Obreschkoff–Ostrowski) — The conclusion of Vincent's auxiliary result holds if the polynomial p(x) has at most one root α + iβ close to the considered interval; in particular, the conclusion holds if the length of the interval is small enough with respect to sep(p), the minimal distance between two roots of p. For polynomials with integer coefficients, the minimum distance sep(p) may be bounded below in terms of the degree of the polynomial and the maximal absolute value of its coefficients; see Properties of polynomial roots § Root separation.
However, the Obreschkoff–Ostrowski theorem shows that the number of iterations of these algorithms depends on the distances between roots in the neighborhood of the working interval; therefore, the number of iterations may vary dramatically for different roots of the same polynomial.
James V. Uspensky gave a bound on the length of the continued fraction (the integer k in Vincent's theorem) that suffices for getting zero or one sign variation:[1][7] Theorem (Uspensky) — Let p(x) be a polynomial of degree n, and sep(p) be the minimal distance between two roots of p. Let Δ = sep(p)/2. Then the integer k, whose existence is asserted in Vincent's theorem, is not greater than the smallest integer h such that F_{h−1} F_h Δ > 1 + 1/ε_n, where F_i denotes the ith Fibonacci number and ε_n = (1 + 1/n)^{1/(n−1)} − 1.
The use of continued fractions for real-root isolation has been introduced by Vincent, although he credited Joseph-Louis Lagrange for this idea, without providing a reference.
To run this algorithm, one must maintain a list of intervals, each represented by a specific data structure. For isolating the real roots of a polynomial p(x) of degree n, each interval is represented by a pair (A(x), M(x)), where A(x) is a polynomial and M(x) = (ax + b)/(cx + d) is a Möbius transformation with integer coefficients; one has A(x) = (cx + d)^n p(M(x)), and the interval under consideration is the image of (0, +∞) under M. The algorithm starts with the two pairs (p(x), x) and (p(−x), −x), corresponding to the partition of the reals into the positive and the negative ones (one may suppose that zero is not a root, as, if it were, it suffices to apply the algorithm to p(x)/x).
In pseudocode, this gives the following, where var(A) denotes the number of sign variations of the coefficients of the polynomial A.
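This continued-fraction scheme can also be sketched in Python. The sketch below is a simplified version with my own function names and example polynomial; it omits refinements such as the lower-bound acceleration, and its termination relies on p being square-free, as guaranteed by Vincent's theorem.

```python
from fractions import Fraction

def sign_variations(coeffs):
    """Sign changes in a coefficient sequence, zero coefficients ignored."""
    signs = [c > 0 for c in coeffs if c != 0]
    return sum(1 for s, t in zip(signs, signs[1:]) if s != t)

def taylor_shift(coeffs, a):
    """Descending coefficients of A(x + a), by repeated synthetic division."""
    c = list(coeffs)
    out = []
    for _ in range(len(coeffs)):
        for i in range(1, len(c)):
            c[i] += a * c[i - 1]
        out.append(c.pop())
    return out[::-1]

def isolate_positive_roots(p):
    """Isolating intervals for the positive roots of a square-free integer
    polynomial p (descending coefficients, p(0) != 0).  Exact rational roots
    are returned as degenerate intervals (r, r); a right endpoint at
    +infinity is returned as None."""
    intervals, exact = [], set()
    # A work item (A, (a, b, c, d)) stands for M(x) = (a*x + b)/(c*x + d);
    # M maps the positive roots of A onto the roots of p in the interval
    # with endpoints M(0) = b/d and M(oo) = a/c.
    work = [(list(p), (1, 0, 0, 1))]
    while work:
        A, (a, b, c, d) = work.pop()
        if A[-1] == 0:                      # constant term 0: M(0) is a root
            exact.add(Fraction(b, d))
            A = A[:-1]                      # divide A by x
        v = sign_variations(A)
        if v == 0:                          # no root in this interval
            continue
        if v == 1:                          # exactly one root: interval found
            left = Fraction(b, d)
            right = Fraction(a, c) if c else None
            if right is not None and right < left:
                left, right = right, left
            intervals.append((left, right))
            continue
        # x -> x + 1 keeps the roots of A in (1, oo)
        work.append((taylor_shift(A, 1), (a, a + b, c, c + d)))
        # x -> 1/(x + 1) maps the roots of A in (0, 1) onto (0, oo)
        work.append((taylor_shift(A[::-1], 1), (b, a + b, d, c + d)))
    return sorted(intervals + [(r, r) for r in exact], key=lambda iv: iv[0])
```

For p(x) = x^3 − 7x + 7 this returns the isolating intervals (1, 3/2) and (3/2, 2); the negative roots of p are obtained by applying the same function to p(−x).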
A way of improving the efficiency of the algorithm is to take for b a lower bound of the positive real roots, computed from the coefficients of the polynomial (see Properties of polynomial roots for such bounds).
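One classical such bound is obtained by applying Cauchy's root bound to the reversed polynomial, since the reciprocals of the nonzero roots of p are the roots of x^n p(1/x). The function name below is mine, and sharper bounds exist (see Properties of polynomial roots); this is only a sketch of the idea.

```python
from fractions import Fraction

def positive_root_lower_bound(p):
    """A simple lower bound on the positive real roots of p (descending
    integer coefficients, p(0) != 0): Cauchy's bound 1 + max|c_i / c_0|
    bounds the absolute values of the roots of the reversed polynomial,
    which are the reciprocals of the nonzero roots of p."""
    rev = p[::-1]                                  # coefficients of x^n p(1/x)
    cauchy = 1 + max(abs(Fraction(c, rev[0])) for c in rev[1:])
    return 1 / cauchy
```

For instance, for p(x) = x^2 − 3x + 2, whose roots are 1 and 2, the bound is 2/5.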
[8][1] The bisection method consists roughly of starting from an interval containing all real roots of a polynomial and dividing it recursively into two parts, until eventually getting intervals that contain either zero or one root.
For technical reasons (simpler changes of variable, simpler complexity analysis, possibility of taking advantage of the binary analysis of computers), the algorithms are generally presented as starting with the interval [0, 1].
There is no loss of generality, as the changes of variable x = By and x = −By, where B is an upper bound on the absolute values of the roots, map respectively the positive and the negative roots into the interval [0, 1].
Given a polynomial q(x) whose roots in (0, 1) are sought, Descartes' rule of signs applied to the polynomial r(x) = (x + 1)^n q(1/(x + 1)), whose positive roots correspond to the roots of q in (0, 1), gives indications on the number of real roots of q in the interval [0, 1], and thus on the number of roots of the initial polynomial in the interval that has been mapped onto [0, 1].
If there is no sign variation in the sequence of the coefficients of r, then there is no real root in the considered intervals.
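The whole bisection scheme on (0, 1) can be sketched as follows. This is a simplified illustration with my own names; p is assumed square-free, and the example polynomial 64x^3 − 96x^2 + 44x − 6 = (4x − 1)(4x − 2)(4x − 3), with roots 1/4, 1/2, and 3/4, is my own choice.

```python
from fractions import Fraction

def sign_variations(coeffs):
    """Sign changes in a coefficient sequence, zero coefficients ignored."""
    signs = [c > 0 for c in coeffs if c != 0]
    return sum(1 for s, t in zip(signs, signs[1:]) if s != t)

def taylor_shift(coeffs, a):
    """Descending coefficients of q(x + a), by repeated synthetic division."""
    c = list(coeffs)
    out = []
    for _ in range(len(coeffs)):
        for i in range(1, len(c)):
            c[i] += a * c[i - 1]
        out.append(c.pop())
    return out[::-1]

def isolate_in_01(p):
    """Isolating sub-intervals of (0, 1) for the roots there of a square-free
    integer polynomial p (descending coefficients).  Exact midpoint roots are
    returned as degenerate intervals (m, m)."""
    res = []
    work = [(list(p), Fraction(0), Fraction(1))]
    while work:
        q, lo, hi = work.pop()
        # r(x) = (x + 1)^n q(1/(x + 1)) maps the roots of q in (0, 1)
        # to the positive roots of r, so Descartes' rule applies to r
        r = taylor_shift(q[::-1], 1)
        v = sign_variations(r)
        if v == 0:                    # no root in (lo, hi)
            continue
        if v == 1:                    # exactly one root in (lo, hi)
            res.append((lo, hi))
            continue
        mid = (lo + hi) / 2
        q1 = [c * 2 ** i for i, c in enumerate(q)]   # 2^n q(x/2): left half
        q2 = taylor_shift(q1, 1)                     # 2^n q((x+1)/2): right half
        if q2[-1] == 0:               # q(1/2) = 0: the midpoint is a root
            res.append((mid, mid))
        work.append((q1, lo, mid))
        work.append((q2, mid, hi))
    return sorted(res)
```

On the example polynomial this returns (0, 1/2), the exact root 1/2, and (1/2, 1). Scaling q1 by 2^n keeps all coefficients integers, which is one reason the interval [0, 1] and powers of 2 are convenient in practice.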
As most of the computing time is devoted to changes of variable, the method of mapping every interval onto [0, 1] is fundamental for ensuring good efficiency.
[3] The running time depends mainly on the number of intervals that have to be considered, and on the changes of variables.
There are ways for improving the efficiency, which have been an active subject of research since the publication of the algorithm, and mainly since the beginning of the 21st century.
They include a method for avoiding the storage of a long list of polynomials without losing the simplicity of the changes of variables,[9] the use of approximate arithmetic (floating point and interval arithmetic) when it allows getting the right value for the number of sign variations,[9] the use of Newton's method when possible,[9] the use of fast polynomial arithmetic,[10] shortcuts for long chains of bisections in case of clusters of close roots,[10] and bisections in unequal parts for limiting instability problems in polynomial evaluation.
[10] All these improvements lead to an algorithm for isolating all real roots of a polynomial with integer coefficients whose bit complexity (using soft O notation, Õ, for omitting logarithmic factors) is polynomial in n, k, and t, where n is the degree of the polynomial, k is the number of nonzero terms, and t is the maximum number of digits of the coefficients.
[10] The implementation of this algorithm appears to be more efficient than any other implemented method for computing the real roots of a polynomial, even in the case of polynomials having very close roots (the case which was previously the most difficult for the bisection method).