Linear multistep methods are used for the numerical solution of ordinary differential equations. Conceptually, a numerical method starts from an initial point and then takes a short step forward in time to find the next solution point. The process continues with subsequent steps to map out the solution.
Methods such as Runge–Kutta take some intermediate steps (for example, a half-step) to obtain a higher order method, but then discard all previous information before taking a second step.
Multistep methods attempt to gain efficiency by keeping and using the information from previous steps rather than discarding it.
Consequently, multistep methods refer to several previous points and derivative values. In the case of linear multistep methods, a linear combination of the previous points and derivative values is used to determine the current value.
Numerical methods for ordinary differential equations approximate solutions to initial value problems of the form

y' = f(t, y),   y(t_0) = y_0.

The result is an approximation of the solution at discrete times t_i = t_0 + ih, where h is the time step: y_i ≈ y(t_i). An s-step linear multistep method computes the next value as a linear combination of previous values and derivative values,

a_s y_{n+s} + a_{s-1} y_{n+s-1} + ⋯ + a_0 y_n = h ( b_s f(t_{n+s}, y_{n+s}) + b_{s-1} f(t_{n+s-1}, y_{n+s-1}) + ⋯ + b_0 f(t_n, y_n) ),

where the coefficients a_0, …, a_s and b_0, …, b_s determine the method, and one normalizes a_s = 1. If b_s = 0, then the method is called "explicit", since the formula can directly compute y_{n+s}. If b_s ≠ 0, the method is called "implicit", since the unknown y_{n+s} appears on both sides of the equation and must be solved for.
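As a concrete illustration, here is a minimal sketch of an explicit two-step method, the second-order Adams–Bashforth method. The test problem y' = −y and the forward-Euler bootstrap for the second starting value are choices made for this example, not part of the general definition:

```python
import math

def f(t, y):
    # Right-hand side of the assumed test problem y' = -y.
    return -y

def ab2(f, t0, y0, h, n):
    """Two-step Adams-Bashforth: y_{n+2} = y_{n+1} + h*(3/2 f_{n+1} - 1/2 f_n).

    The method is explicit: each new value is computed directly from stored
    past values. The second starting value is bootstrapped with one Euler step.
    """
    y1 = y0 + h * f(t0, y0)          # forward-Euler bootstrap
    fs = [f(t0, y0), f(t0 + h, y1)]  # stored derivative values
    t, y = t0 + h, y1
    for _ in range(n - 1):
        y_new = y + h * (1.5 * fs[-1] - 0.5 * fs[-2])
        t += h
        fs.append(f(t, y_new))
        y = y_new
    return y

# Approximate y(1) for y' = -y, y(0) = 1; the exact answer is exp(-1).
approx = ab2(f, 0.0, 1.0, 1e-3, 1000)
```

Note that only the two most recent derivative values are ever read; a production implementation would keep a fixed-size buffer rather than the full history.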
The polynomial p is locally a good approximation of the right-hand side of the differential equation y' = f(t, y) that is to be solved, so consider the equation y' = p(t) instead. This equation can be solved exactly; the solution is simply the integral of p. This suggests taking

y_{n+s} = y_{n+s-1} + ∫_{t_{n+s-1}}^{t_{n+s}} p(t) dt.
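This construction can be carried out mechanically: integrating the Lagrange interpolant of the stored derivative values over the last step yields the Adams–Bashforth coefficients. A sketch in exact rational arithmetic (placing the interpolation nodes at x = 0, …, s−1, with time measured in units of h, is a presentational convention):

```python
from fractions import Fraction

def ab_coefficients(s):
    """Coefficients b_0, ..., b_{s-1} of the s-step Adams-Bashforth method,
    obtained by integrating each Lagrange basis polynomial over the last
    step [s-1, s] (time in units of h, nodes at x = 0, ..., s-1)."""
    bs = []
    for j in range(s):
        # Build the Lagrange basis polynomial l_j as a coefficient list
        # (index k holds the coefficient of x**k).
        poly = [Fraction(1)]
        for m in range(s):
            if m == j:
                continue
            # Multiply poly by (x - m) / (j - m).
            denom = Fraction(j - m)
            new = [Fraction(0)] * (len(poly) + 1)
            for k, c in enumerate(poly):
                new[k + 1] += c / denom   # contribution of x * c
                new[k] += -m * c / denom  # contribution of -m * c
            poly = new
        # Integrate l_j over [s-1, s] using the antiderivative term-by-term.
        integral = sum(c * (Fraction(s)**(k + 1) - Fraction(s - 1)**(k + 1)) / (k + 1)
                       for k, c in enumerate(poly))
        bs.append(integral)
    return bs
```

For s = 1 this reproduces Euler's method (b_0 = 1), and for s = 3 the familiar coefficients 5/12, −16/12, 23/12.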
by its interpolant p incurs an error of order h^s, and it follows that the s-step Adams–Bashforth method has indeed order s (Iserles 1996, §2.1). The Adams–Bashforth methods were designed by John Couch Adams to solve a differential equation modelling capillary action due to Francis Bashforth.
Bashforth (1883) published his theory and Adams' numerical method (Goldstine 1977).
The Adams–Moulton methods are similar to the Adams–Bashforth methods, except that they are implicit; again the b coefficients are chosen to obtain the highest order possible.
Adams used Newton's method to solve the implicit equation (Hairer, Nørsett & Wanner 1993, §III.1).
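In practice the implicit equation is often solved by simpler means than Newton's method. The sketch below uses fixed-point iteration on the one-step Adams–Moulton method (the trapezoidal rule); the test problem y' = −y and the Euler predictor used as the initial guess are assumptions of this example:

```python
import math

def trapezoidal_step(f, t, y, h, iterations=50):
    """One step of the trapezoidal rule (the one-step Adams-Moulton method):
        y_new = y + h/2 * (f(t, y) + f(t + h, y_new)).
    The unknown y_new appears on both sides, so the step is implicit; here
    it is solved by fixed-point iteration, with an explicit Euler step as
    the initial guess (a predictor-corrector arrangement).
    """
    fy = f(t, y)
    y_new = y + h * fy  # predictor: forward Euler
    for _ in range(iterations):
        y_new = y + 0.5 * h * (fy + f(t + h, y_new))
    return y_new

# Integrate y' = -y from y(0) = 1 to t = 1 with ten steps of h = 0.1.
y, t, h = 1.0, 0.0, 0.1
for _ in range(10):
    y = trapezoidal_step(lambda t, y: -y, t, y, h)
    t += h
```

Fixed-point iteration converges here because h/2 times the Lipschitz constant of f is small (0.05); for stiff problems that contraction condition fails, which is why Newton-type iterations are used instead.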
The backward differentiation formulas (BDF) are implicit methods with b_{s-1} = ⋯ = b_0 = 0 and the other coefficients chosen such that the method attains order s (the maximum possible).
More precisely, a multistep method is consistent if the local truncation error goes to zero faster than the step size h as h goes to zero, where the local truncation error is defined to be the difference between the result y_{n+s} of the method, assuming that all the previous values y_{n+s-1}, …, y_n are exact, and the exact solution of the equation at time t_{n+s}, divided by h.
A computation using Taylor series shows that a linear multistep method is consistent if and only if its coefficients satisfy

a_0 + a_1 + ⋯ + a_s = 0   and   b_0 + b_1 + ⋯ + b_s = a_1 + 2a_2 + ⋯ + s a_s.
All the methods mentioned above are consistent (Hairer, Nørsett & Wanner 1993, §III.2).
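For a method written as a_0 y_n + ⋯ + a_s y_{n+s} = h (b_0 f_n + ⋯ + b_s f_{n+s}), the consistency conditions amount to Σ_j a_j = 0 and Σ_j j·a_j = Σ_j b_j. A quick mechanical check, using the two-step Adams–Bashforth method as the worked example:

```python
from fractions import Fraction

def is_consistent(a, b):
    """Check the consistency conditions for the linear multistep method
    sum_j a[j]*y_{n+j} = h * sum_j b[j]*f_{n+j}:
    sum_j a[j] must vanish, and sum_j j*a[j] must equal sum_j b[j]."""
    return sum(a) == 0 and sum(j * aj for j, aj in enumerate(a)) == sum(b)

# Two-step Adams-Bashforth: y_{n+2} - y_{n+1} = h*(3/2 f_{n+1} - 1/2 f_n).
a = [Fraction(0), Fraction(-1), Fraction(1)]
b = [Fraction(-1, 2), Fraction(3, 2), Fraction(0)]
consistent = is_consistent(a, b)
```

Doubling the b coefficients, for instance, breaks the second condition, and `is_consistent` then returns False.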
The numerical solution of a one-step method depends only on the initial condition y_0, but the numerical solution of an s-step method depends on all the s starting values, y_0, y_1, …, y_{s-1}.
It is thus of interest whether the numerical solution is stable with respect to perturbations in the starting values.
A linear multistep method is zero-stable for a certain differential equation on a given time interval if a perturbation in the starting values of size ε causes the numerical solution over that time interval to change by no more than Kε for some value of K which does not depend on the step size h. This is called "zero-stability" because it is enough to check the condition for the differential equation y' = 0.
A linear multistep method is zero-stable if and only if it satisfies the root condition: all roots of its characteristic polynomial ρ(z) = a_s z^s + ⋯ + a_1 z + a_0 must have modulus at most one, and any root of modulus one must be simple (Süli & Mayers 2003, p. 335).
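The root condition (every root of ρ inside the closed unit disc, roots on the unit circle simple) is straightforward to verify numerically. A sketch for two-step methods, where the roots come from the quadratic formula:

```python
import cmath

def satisfies_root_condition(roots, tol=1e-12):
    """Check the root condition on a list of roots of the characteristic
    polynomial rho: every root must have modulus <= 1, and a root of
    modulus 1 must not be repeated."""
    for i, r in enumerate(roots):
        if abs(r) > 1 + tol:
            return False
        if abs(abs(r) - 1) <= tol:
            # A root on the unit circle must be simple.
            if any(j != i and abs(r - other) <= tol
                   for j, other in enumerate(roots)):
                return False
    return True

def quadratic_roots(c2, c1, c0):
    """Roots of c2*z**2 + c1*z + c0 via the quadratic formula."""
    d = cmath.sqrt(c1 * c1 - 4 * c2 * c0)
    return [(-c1 + d) / (2 * c2), (-c1 - d) / (2 * c2)]

# Two-step Adams-Bashforth: rho(z) = z**2 - z, roots 0 and 1 -> zero-stable.
ok = satisfies_root_condition(quadratic_roots(1, -1, 0))
# rho(z) = z**2 - 2z + 1 = (z - 1)**2 has a double root on the unit circle
# and violates the condition.
bad = satisfies_root_condition(quadratic_roots(1, -2, 1))
```

All Adams methods have ρ(z) = z^s − z^{s-1}, whose roots are 0 and a simple root at 1, so they are all zero-stable.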
Now suppose that a consistent linear multistep method is applied to a sufficiently smooth differential equation and that the starting values y_1, …, y_{s-1} all converge to the initial value y_0 as h → 0. Then, the numerical solution converges to the exact solution as h → 0 if and only if the method is zero-stable; this result is known as the Dahlquist equivalence theorem.
The concept of absolute stability refers to the linear test equation y' = λy. A multistep method applied to this differential equation with step size h yields a linear recurrence relation with characteristic polynomial π(z; hλ) = ρ(z) − hλσ(z), where ρ(z) = a_s z^s + ⋯ + a_0 and σ(z) = b_s z^s + ⋯ + b_0; this is called the stability polynomial of the method. If all roots of π have modulus less than one, the numerical solution converges to zero and the method is said to be absolutely stable for that value of hλ.
The method is said to be A-stable if it is absolutely stable for all hλ with negative real part.
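A-stability can be probed numerically. For the trapezoidal rule on the test equation y' = λy, one step multiplies the solution by the amplification factor R(hλ) = (1 + hλ/2)/(1 − hλ/2); the sample points below are arbitrary choices in the left half-plane:

```python
def trapezoidal_amplification(z):
    """Amplification factor of the trapezoidal rule on y' = lambda*y:
    one step maps y_n to R(h*lambda) * y_n with R(z) = (1 + z/2)/(1 - z/2)."""
    return (1 + z / 2) / (1 - z / 2)

# Arbitrary sample points with negative real part: |R| < 1 at each of them,
# consistent with the trapezoidal rule being A-stable.
samples = [complex(-0.1, 0.0), complex(-1.0, 5.0),
           complex(-100.0, 3.0), complex(-0.01, 50.0)]
all_damped = all(abs(trapezoidal_amplification(z)) < 1 for z in samples)
```

On the imaginary axis |R| equals exactly one, so the boundary of the stability region is the boundary of the left half-plane itself.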
These two results were proved by Germund Dahlquist and represent an important bound for the order of convergence and for the A-stability of a linear multistep method. The first Dahlquist barrier states that a zero-stable q-step linear multistep method cannot attain an order of convergence greater than q + 1 if q is odd and greater than q + 2 if q is even. If the method is also explicit, then it cannot attain an order greater than q (Hairer, Nørsett & Wanner 1993, Thm III.3.5).
The second Dahlquist barrier states that no explicit linear multistep methods are A-stable.
Further, the maximal order of an (implicit) A-stable linear multistep method is 2.
Among the A-stable linear multistep methods of order 2, the trapezoidal rule has the smallest error constant (Dahlquist 1963, Thm 2.1 and 2.2).
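The practical content of the second barrier shows up on stiff problems. In the sketch below (the decay rate λ = −50 and step size h = 0.1 are assumed values chosen so that hλ lies far outside the explicit method's stability region), the explicit two-step Adams–Bashforth method blows up while the A-stable trapezoidal rule stays damped at the same step size:

```python
lam = -50.0  # assumed stiff decay rate for the test equation y' = lam*y

def ab2_solve(h, n):
    """n steps of two-step Adams-Bashforth on y' = lam*y, y(0) = 1,
    bootstrapped with one forward Euler step."""
    y_prev, y = 1.0, 1.0 + h * lam
    for _ in range(n - 1):
        y_prev, y = y, y + h * (1.5 * lam * y - 0.5 * lam * y_prev)
    return y

def trap_solve(h, n):
    """n steps of the trapezoidal rule on y' = lam*y, y(0) = 1; for this
    linear equation each step multiplies y by a fixed factor."""
    r = (1 + h * lam / 2) / (1 - h * lam / 2)
    return r ** n

# With h = 0.1 we have h*lam = -5, outside AB2's region of absolute
# stability, so the explicit solution grows without bound:
exploding = abs(ab2_solve(0.1, 100))
# The trapezoidal rule remains damped for the same step size:
damped = abs(trap_solve(0.1, 100))
```

The exact solution at t = 10 is e^{-500}, essentially zero; only the A-stable method reflects that without shrinking the step size.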