Steffensen's method achieves quadratic convergence without using derivatives: Newton's method also converges quadratically but requires the derivative, while the secant method needs no derivatives but converges more slowly than quadratically.[a]
Steffensen's method can be derived as an adaptation of Aitken's delta-squared process applied to fixed-point iteration.
Viewed in this way, Steffensen's method naturally generalizes to efficient fixed-point calculation in general Banach spaces, whenever fixed points are guaranteed to exist and fixed-point iteration is guaranteed to converge, although possibly slowly, by the Banach fixed-point theorem.
The simplest form of the formula for Steffensen's method occurs when it is used to find a zero of a real function f; that is, to find the real value x* that satisfies f(x*) = 0.
Near the solution x*, the derivative f' is supposed to approximately satisfy -1 < f'(x*) < 0; this condition makes f an adequate correction for getting closer to its own solution.
For some functions, Steffensen's method can work even if this condition is not met, but in such a case the starting value x0 must be very close to the actual solution x*, and convergence to the solution may be slow.
Adjustment of the size of the method's intermediate step (the step h introduced below) can improve convergence in some of these cases.
Given an adequate starting value x0, a sequence of values x0, x1, x2, x3, ... can be generated using the formula below; when it works, each value in the sequence is much closer to the solution x* than the prior value.
The value x_n from the current step generates the value x_{n+1} for the next step via

x_{n+1} = x_n - f(x_n) / g(x_n)    for n = 0, 1, 2, 3, ...,

where the slope function g is given by

g(x_n) = [f(x_n + h) - f(x_n)] / h    with step h = f(x_n).

Technically, g is the first-order divided difference of f between those two points, x_n and the auxiliary point x_n + h (it is either a forward-type or backward-type divided difference, depending on the sign of h).
Because g approximates the slope f' near the solution, its value can optionally be checked to see whether it meets the condition -1 < g < 0 required of f' for convergence.
Although slight non-conformance may not necessarily be dire, any large departure from the condition warns that Steffensen's method is liable to fail, and temporary use of some fallback algorithm is warranted (e.g. the more robust Illinois algorithm, or plain regula falsi).
It is only for the purpose of finding the step h to this auxiliary point that the value of the function f must be an adequate correction to get closer to its own solution, and for that reason fulfill the requirement that -1 < f'(x*) < 0.
For all other parts of the calculation, Steffensen's method only requires the function f to be continuous and to actually have a nearby solution.
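To make the formula concrete, the following is a minimal MATLAB sketch of the root-finding form of the iteration; the function name steffensen_root, the argument names, and the example call are illustrative choices, not part of the original text.

function x = steffensen_root(f, x0, tol, maxit)
% Minimal sketch of the basic (root-finding) form of Steffensen's method:
% x_{n+1} = x_n - f(x_n)/g(x_n), with g the divided difference over step h = f(x_n).
    x = x0;
    for k = 1:maxit
        fx = f(x);
        if abs(fx) < tol
            return                        % |f(x)| small enough; accept x as the root
        end
        h = fx;                           % intermediate step h = f(x_n)
        g = (f(x + h) - fx) / h;          % divided-difference slope g(x_n)
        x = x - fx / g;                   % Steffensen update
    end
end

For example, steffensen_root(@(x) (cos(x) - x)/2, 0.5, 1e-12, 50) converges to the root near 0.739, where f'(x*) is roughly -0.84 and therefore lies inside the interval (-1, 0) discussed above.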
The chief advantage of Steffensen's method is that, like Newton's method, it finds the root quickly: for both methods, the number of correct digits in the answer doubles with each step.
But the formula for Newton's method requires evaluation of the function's derivative f' as well as the function f itself, while Steffensen's method requires only f.
The price for the quick convergence is the double function evaluation: both f(x_n) and f(x_n + h) must be calculated, which might be time-consuming if f is a complicated function.
For comparison, the secant method needs only one function evaluation per step.
Similar to most other iterative root-finding algorithms, the crucial weakness in Steffensen's method is the choice of an "adequate" starting value x0: if x0 is not close enough to the actual solution x*, the method may fail, with the sequence of iterates either flip-flopping erratically or diverging.
The version of Steffensen's method implemented in the MATLAB code shown below can be found using Aitken's delta-squared process for convergence acceleration.
Assuming that the sequence p0, p1, p2, ... produced by plain fixed-point iteration converges linearly to its limit p, so that near the limit

(p_{n+1} - p) / (p_n - p) ≈ (p_{n+2} - p) / (p_{n+1} - p),

solving this approximation for p gives:

p ≈ p_n - (p_{n+1} - p_n)^2 / (p_{n+2} - 2 p_{n+1} + p_n),

which results in the more rapidly convergent sequence:

q_n = p_n - (p_{n+1} - p_n)^2 / (p_{n+2} - 2 p_{n+1} + p_n).

Here is the source for an implementation of Steffensen's method in MATLAB.
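The original listing is not reproduced here; the following is a minimal sketch of such an implementation, assuming a fixed-point formulation p = F(p) for some map F (the function name, argument names, and stopping rule are illustrative).

function p = steffensen_fixed_point(F, p0, tol, maxit)
% Sketch of Steffensen's method as Aitken's delta-squared acceleration
% applied to the fixed-point iteration p_{n+1} = F(p_n).
% F     - handle of the fixed-point map, with the desired solution p = F(p)
% p0    - starting value
% tol   - tolerance on the change between successive accelerated values
% maxit - maximum number of accelerated steps
    for k = 1:maxit
        p1 = F(p0);                       % first plain fixed-point step
        p2 = F(p1);                       % second plain fixed-point step
        d  = p2 - 2*p1 + p0;              % Aitken denominator
        if d == 0
            p = p2;                       % acceleration impossible; return the plain iterate
            return
        end
        p = p0 - (p1 - p0)^2 / d;         % accelerated (Steffensen) value
        if abs(p - p0) < tol
            return
        end
        p0 = p;                           % recycle the accelerated value as the new start
    end
    warning('Tolerance not met after %d accelerated steps.', maxit);
end

For example, steffensen_fixed_point(@cos, 1.0, 1e-12, 100) converges to the fixed point of the cosine function near 0.739 in a handful of steps.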
Steffensen's method can also be used to find a fixed point x* = F(x*) of a function F; plain fixed-point iteration, which simply feeds each output back in as the next input, may converge to x* only slowly, or not at all.
Steffensen's method accelerates this convergence, to make it quadratic.
For orientation, the root function f and the fixed-point function F are related simply by f(x) = x - F(x), so a root of f is a fixed point of F (all issues of a more general Banach space vs. basic real numbers are momentarily ignored for the sake of the comparison).
The generalized method assumes that a family of bounded linear operators L(u, v), associated with pairs of points u and v in the Banach space, can be devised that (locally) satisfies the condition[2]

F(u) - F(v) = L(u, v) [u - v].

If division is possible in the Banach space, then the linear operator L can be obtained from

L(u, v) = [F(u) - F(v)] [u - v]^(-1),

which may provide some insight: expressed in this way, the linear operator L is more easily seen to be an elaborate version of the divided difference g discussed in the first section, above.
The quotient form is shown here for orientation only; it is not required per se.
Note also that division within the Banach space is not necessary for the generalized Steffensen's method to be viable; the only requirement is that the operator L satisfy the displayed condition above.
For the basic real-valued function f, given in the first section, the function simply takes in and puts out real numbers, and the divided difference g is a single number.
In the generalized form, the operator L is the analogue of that divided difference: it is roughly equivalent to a matrix whose entries are all functions of the vector arguments u and v.
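As an illustration of such an operator, offered only as a sketch and not drawn from the original text, the finite-dimensional case (F mapping R^n to R^n) admits a component-wise divided-difference matrix, a standard secant-type construction, that satisfies the displayed condition exactly; the helper name divided_difference and the example map are assumptions.

function L = divided_difference(F, u, v)
% Component-wise divided-difference matrix for a vector map F: R^n -> R^n,
% built so that F(u) - F(v) = L*(u - v) holds exactly by telescoping.
% (A component with u(j) == v(j) would need a partial derivative instead;
% that degenerate case is not handled in this sketch.)
    n = numel(u);
    L = zeros(n);
    for j = 1:n
        a = [v(1:j-1); u(j:n)];    % agrees with v before slot j, with u from slot j on
        b = [v(1:j);   u(j+1:n)];  % same, but slot j switched from u(j) to v(j)
        L(:, j) = (F(a) - F(b)) / (u(j) - v(j));
    end
end

For instance, with F = @(x) [0.5*cos(x(2)); 0.4*sin(x(1)) + 0.1], u = [0.3; 0.4] and v = [0.1; 0.2], the residual norm(F(u) - F(v) - divided_difference(F, u, v)*(u - v)) vanishes up to rounding error.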
In the case that division is possible in the Banach space, the generalized iteration formula is given by

x_{n+1} = x_n + [I - L(F(x_n), x_n)]^(-1) [F(x_n) - x_n]    for n = 1, 2, 3, ...,

where I denotes the identity operator.
In the more general case in which division may not be possible, the iteration formula instead requires finding a solution x_{n+1} to the equivalent but somewhat reduced form

[I - L(F(x_n), x_n)] x_{n+1} = [F(x_n) - L(F(x_n), x_n) x_n]    for n = 1, 2, 3, ...,

with all the values inside square brackets being independent of x_{n+1}: the bracketed terms depend only on x_n.
Note, however, that the second form may not be as numerically stable as the first: because the first form involves finding a value for a (hopefully) small difference, it is numerically more likely to avoid excessively large or erratic changes to the iterated value x_n.
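To tie the pieces together, the following is an illustrative sketch, not taken from the original text, of the first form of the generalized iteration for a small map F from R^2 to R^2, using a linear solve in place of an explicit inverse; the example map, the tolerances, and the helper repeated from the previous sketch are all assumptions.

function x = generalized_steffensen_demo()
% Sketch of the generalized Steffensen iteration
%   x_{n+1} = x_n + [I - L(F(x_n), x_n)]^(-1) [F(x_n) - x_n]
% for an example contraction F: R^2 -> R^2 (assumed for illustration).
    F = @(x) [0.5*cos(x(2)); 0.4*sin(x(1)) + 0.1];
    x = [0; 0];                                   % starting point
    I = eye(2);
    for n = 1:20
        Fx = F(x);
        if norm(Fx - x) < 1e-12
            break                                 % fixed point reached to working precision
        end
        L = divided_difference(F, Fx, x);         % operator L(F(x_n), x_n)
        x = x + (I - L) \ (Fx - x);               % solve the linear system instead of inverting
    end
end

function L = divided_difference(F, u, v)
% Same component-wise construction as in the previous sketch, repeated here
% so that this example is self-contained: F(u) - F(v) = L*(u - v).
    n = numel(u);
    L = zeros(n);
    for j = 1:n
        a = [v(1:j-1); u(j:n)];
        b = [v(1:j);   u(j+1:n)];
        L(:, j) = (F(a) - F(b)) / (u(j) - v(j));
    end
end

Running generalized_steffensen_demo() drives norm(F(x) - x) down to the level of rounding error within a few iterations for this example map.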