Root-finding algorithm

Solving an equation f(x) = g(x) is the same as finding the roots of the function h(x) = f(x) − g(x); thus root-finding algorithms can be used to solve any equation of continuous functions.

They require one or more initial guesses of the root as starting values, then each iteration of the algorithm produces a successively more accurate approximation to the root.

Since the iteration must be stopped at some point, these methods produce an approximation to the root, not an exact solution.

The behavior of general root-finding algorithms is studied in numerical analysis.

However, for polynomials specifically, the study of root-finding algorithms belongs to computer algebra, since algebraic properties of polynomials are fundamental for the most efficient algorithms.

The efficiency and applicability of an algorithm may depend sensitively on the characteristics of the given functions.

Bracketing methods, which determine successively smaller intervals that contain a root, generally use the intermediate value theorem, which asserts that if a continuous function has values of opposite signs at the end points of an interval, then the function has at least one root in the interval.

However, in the case of polynomials there are other methods such as Descartes' rule of signs, Budan's theorem and Sturm's theorem for bounding or determining the number of roots in an interval.

They lead to efficient algorithms for real-root isolation of polynomials, which find all real roots with a guaranteed accuracy.
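
For illustration, a computer algebra system can carry out such real-root isolation directly; a minimal sketch using SymPy's real_roots, with an arbitrary example polynomial:

```python
# A minimal sketch of real-root isolation with a computer algebra
# system; the cubic below is an arbitrary example, not from the text.
from sympy import Symbol, real_roots

x = Symbol('x')
for r in real_roots(x**3 - 2*x - 5):   # isolates every real root exactly
    print(r.evalf(20))                 # refine to 20 significant digits
```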

Let f be a continuous function for which one knows an interval [a, b] such that f(a) and f(b) have opposite signs (a bracket). The bisection method evaluates f at the midpoint c = (a + b)/2 and replaces the endpoint at which f has the same sign as f(c), so that each step halves the bracket while preserving the sign change.

Although the bisection method is robust, it gains one and only one bit of accuracy with each iteration.

Therefore, the number of function evaluations required for finding an ε-approximate root is log2((b − a)/ε).
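
As a concrete sketch (the tolerance handling and the example function are illustrative choices, not taken from the text), the bisection method can be written in a few lines of Python:

```python
def bisect(f, a, b, eps):
    """Bisection: halve a bracket [a, b] with f(a)*f(b) < 0 until its
    width is below eps; gains one bit of accuracy per iteration."""
    fa = f(a)
    if fa * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while b - a > eps:
        c = (a + b) / 2
        fc = f(c)
        if fa * fc <= 0:       # sign change persists on [a, c]
            b = c
        else:                  # sign change persists on [c, b]
            a, fa = c, fc
    return (a + b) / 2

# Example: the positive root of x^2 - 2, i.e. sqrt(2).
print(bisect(lambda x: x * x - 2, 1.0, 2.0, 1e-12))
```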

The false position method, also called the regula falsi method, is similar to the bisection method, but instead of the midpoint of the interval it uses the x-intercept of the line that connects the function values at the endpoints of the interval, that is, c = (a f(b) − b f(a))/(f(b) − f(a)). False position is also similar to the secant method, except that, instead of retaining the last two points, it makes sure to keep one point on either side of the root.

However, it may fail to converge in some naive implementations due to roundoff errors that may lead to a wrong sign for f(c).

Typically, this may occur if the derivative of f is large in the neighborhood of the root.
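
A sketch of false position under the same bracketing assumptions; the stopping test on |f(c)| is an illustrative choice, and it is exactly where a naive implementation can be misled by roundoff in f(c):

```python
def false_position(f, a, b, tol, max_iter=200):
    """Regula falsi: replace the midpoint of bisection with the
    x-intercept of the chord through (a, f(a)) and (b, f(b)),
    always keeping one endpoint on each side of the root."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    c = a
    for _ in range(max_iter):
        c = (a * fb - b * fa) / (fb - fa)   # chord's x-intercept
        fc = f(c)
        if abs(fc) < tol:                   # naive stop; roundoff in fc
            break                           # can yield the wrong sign
        if fa * fc < 0:
            b, fb = c, fc
        else:
            a, fa = c, fc
    return c

print(false_position(lambda x: x**3 - 2*x - 5, 2.0, 3.0, 1e-12))
```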

Many root-finding methods work by interpolation: the last computed approximations of the root are used to fit a polynomial of low degree, whose root is taken as the next approximation.

Interpolating two values yields a line, a polynomial of degree one; this is the basis of the secant method.

Three values define a parabolic curve, a quadratic function; this is the basis of Muller's method.

Iterative methods of this kind recast f(x) = 0 as a fixed-point problem g(x) = x for an auxiliary function g. The iteration stops when a fixed point of the auxiliary function is reached to the desired precision, i.e., when a newly computed value is sufficiently close to the preceding ones.
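
A minimal sketch of such a fixed-point iteration; the auxiliary function g below, which rewrites x^2 = 2 as x = (x + 2/x)/2, is an illustrative choice:

```python
def fixed_point(g, x0, tol, max_iter=100):
    """Iterate x <- g(x) until two successive values agree to tol."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:   # a fixed point has been reached
            return x_new           # to the desired precision
        x = x_new
    return x

# Solve x^2 = 2 via the auxiliary function g(x) = (x + 2/x)/2,
# whose fixed point is sqrt(2).
print(fixed_point(lambda x: (x + 2.0 / x) / 2.0, 1.0, 1e-12))
```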

Newton's method, which iterates x ← x − f(x)/f′(x), assumes the function f to have a continuous derivative; when it converges, its order of convergence is usually quadratic.

Newton's method is also important because it readily generalizes to higher-dimensional problems.
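
A minimal sketch of Newton's method with the derivative supplied explicitly; the function, derivative, and starting point are illustrative choices:

```python
def newton(f, fprime, x0, tol, max_iter=50):
    """Newton's method: x <- x - f(x)/f'(x); usually converges
    quadratically when started close enough to a simple root."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            return x
    return x

# Example: root of x^3 - 2x - 5 near x = 2.
print(newton(lambda x: x**3 - 2*x - 5,
             lambda x: 3*x**2 - 2,
             2.0, 1e-12))
```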

The secant method, obtained by replacing the derivative in Newton's method with a finite difference through the last two iterates, does not require the computation (nor the existence) of a derivative, but the price is slower convergence (the order of convergence is the golden ratio, approximately 1.62[2]).
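
A sketch of the secant method under the same assumptions; the starting points are illustrative:

```python
def secant(f, x0, x1, tol, max_iter=100):
    """Secant method: Newton's method with the derivative replaced
    by (f(x1) - f(x0)) / (x1 - x0); order of convergence ~1.62."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # secant line's root
        if abs(x2 - x1) < tol:
            return x2
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
    return x1

print(secant(lambda x: x**3 - 2*x - 5, 2.0, 3.0, 1e-12))
```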

If we use a polynomial fit to remove the quadratic part of the finite difference used in the secant method, so that it better approximates the derivative, we obtain Steffensen's method, which has quadratic convergence, and whose behavior (both good and bad) is essentially the same as Newton's method but does not require a derivative.
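
A sketch of Steffensen's method, using the common formulation in which f(x) itself serves as the step of the finite difference; the example function is an illustrative choice:

```python
def steffensen(f, x0, tol, max_iter=50):
    """Steffensen's method: Newton-like iteration where f'(x) is
    approximated by (f(x + f(x)) - f(x)) / f(x); converges
    quadratically without requiring a derivative."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0:
            return x
        g = (f(x + fx) - fx) / fx       # derivative-free slope estimate
        step = fx / g
        x -= step
        if abs(step) < tol:
            return x
    return x

print(steffensen(lambda x: x**3 - 2*x - 5, 2.0, 1e-12))
```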

Interpolating the inverse of f by a quadratic polynomial instead yields the inverse quadratic interpolation method. Again, convergence is asymptotically faster than with the secant method, but inverse quadratic interpolation often behaves poorly when the iterates are not close to the root.
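
A sketch of inverse quadratic interpolation: fit x as a quadratic polynomial in y through the last three iterates and evaluate it at y = 0. The starting points are illustrative, and the method's fragility shows up as a possible division by zero when two function values coincide:

```python
def inverse_quadratic(f, x0, x1, x2, tol, max_iter=50):
    """Interpolate the inverse of f through three points and take
    its value at y = 0 as the next iterate; asymptotically faster
    than the secant method, but fragile far from the root."""
    f0, f1, f2 = f(x0), f(x1), f(x2)
    for _ in range(max_iter):
        # Lagrange interpolation of x as a function of y, at y = 0.
        x3 = (x0 * f1 * f2 / ((f0 - f1) * (f0 - f2))
              + x1 * f0 * f2 / ((f1 - f0) * (f1 - f2))
              + x2 * f0 * f1 / ((f2 - f0) * (f2 - f1)))
        if abs(x3 - x2) < tol:
            return x3
        x0, f0, x1, f1, x2, f2 = x1, f1, x2, f2, x3, f(x3)
    return x2

print(inverse_quadratic(lambda x: x**3 - 2*x - 5, 1.8, 2.2, 3.0, 1e-12))
```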

Brent's method combines the bisection method, the secant method, and inverse quadratic interpolation, at each iteration choosing whichever step is likely to do best. This gives a robust and fast method, which therefore enjoys considerable popularity.
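
Brent's method is widely available in numerical libraries; for example, SciPy exposes it as scipy.optimize.brentq (a usage sketch with an illustrative function and bracket):

```python
from scipy.optimize import brentq

# Brent's method on the bracket [2, 3] for x^3 - 2x - 5.
root = brentq(lambda x: x**3 - 2*x - 5, 2.0, 3.0, xtol=1e-12)
print(root)
```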

The bisection method has been generalized to higher-dimensional functions; in such generalized bisection methods, at each iteration the domain is partitioned into two parts, and the algorithm decides, based on a small number of function evaluations, which of these two parts must contain a root.[3][4]

In one dimension, the criterion for this decision is that the function has opposite signs at the endpoints of the sub-interval.

The main challenge in extending the method to multiple dimensions is to find a criterion that can be computed easily and guarantees the existence of a root.

The Poincaré–Miranda theorem gives a criterion for the existence of a root in a rectangle, but it is hard to verify because it requires evaluating the function on the entire boundary of the rectangle.

Another criterion is based on the topological degree.[5][page needed] It says that, if the topological degree of a function f on a rectangle is non-zero, then the rectangle must contain at least one root of f. This criterion is the basis for several root-finding methods, such as those of Stenger[6] and Kearfott.

Yet another method uses an intermediate value theorem on simplices.