Numerical methods for ordinary differential equations are methods used to find numerical approximations to the solutions of ordinary differential equations (ODEs).
Many differential equations cannot be solved exactly using symbolic computation. For practical purposes, however – such as in engineering – a numeric approximation to the solution is often sufficient.
An alternative method is to use techniques from calculus to obtain a series expansion of the solution.
Ordinary differential equations occur in many scientific disciplines, including physics, chemistry, biology, and economics.
A first-order differential equation is an initial value problem (IVP) of the form[2]

$$y'(t) = f(t, y(t)), \qquad y(t_0) = y_0,$$

where $f$ is a function $f : [t_0, \infty) \times \mathbb{R}^d \to \mathbb{R}^d$ and the initial condition $y_0 \in \mathbb{R}^d$ is a given vector.
Without loss of generality to higher-order systems, we restrict ourselves to first-order differential equations, because a higher-order ODE can be converted into a larger system of first-order equations by introducing extra variables.
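For example, the second-order equation $y''(t) = -y(t)$ becomes a system of two first-order equations under the substitution $u = y$, $v = y'$:

$$u' = v, \qquad v' = -u.$$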
In this section, we describe numerical methods for IVPs, and remark that boundary value problems (BVPs) require a different set of tools.
Methods can be further divided into those that are explicit and those that are implicit.
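In notation used here only for illustration, an explicit method computes the new value directly from known quantities, $y_{n+1} = \Phi(t_n, y_n; h)$, whereas an implicit method defines $y_{n+1}$ through an equation in which the unknown appears on both sides, such as the backward Euler method

$$y_{n+1} = y_n + h\, f(t_{n+1}, y_{n+1}),$$

which must be solved for $y_{n+1}$, typically by a Newton-type iteration, at every step.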
A loose rule of thumb dictates that stiff differential equations require the use of implicit schemes, whereas non-stiff problems can be solved more efficiently with explicit schemes.
The advantage of implicit methods such as the backward Euler method above is that they are usually more stable for solving a stiff equation, meaning that a larger step size h can be used.
The simplest scheme, the (forward) Euler method, advances the solution by treating the slope as constant over the full interval:

$$y_{n+1} = y_n + h\, f(t_n, y_n).$$

The Euler method is often not accurate enough.
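The stability difference between the two Euler variants can be seen on a stiff linear test equation. The following sketch assumes NumPy; the decay rate λ = −50 and step size h = 0.1 are illustrative choices for which forward Euler is unstable:

```python
import numpy as np

def forward_euler(f, y0, t0, t_end, h):
    """Explicit Euler: y_{n+1} = y_n + h*f(t_n, y_n)."""
    n = round((t_end - t0) / h)
    ts = t0 + h * np.arange(n + 1)
    ys = np.empty(n + 1)
    ys[0] = y0
    for i in range(n):
        ys[i + 1] = ys[i] + h * f(ts[i], ys[i])
    return ts, ys

def backward_euler_linear(lam, y0, t0, t_end, h):
    """Implicit Euler for y' = lam*y.  The update
    y_{n+1} = y_n + h*lam*y_{n+1} solves in closed form."""
    n = round((t_end - t0) / h)
    ts = t0 + h * np.arange(n + 1)
    ys = np.empty(n + 1)
    ys[0] = y0
    for i in range(n):
        ys[i + 1] = ys[i] / (1 - h * lam)
    return ts, ys

lam, h = -50.0, 0.1
_, y_exp = forward_euler(lambda t, y: lam * y, 1.0, 0.0, 1.0, h)
_, y_imp = backward_euler_linear(lam, 1.0, 0.0, 1.0, h)
print(y_exp[-1])  # explodes: the amplification factor |1 + h*lam| = 4
print(y_imp[-1])  # decays toward 0, like the exact solution e^(lam*t)
```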
Among multistep methods, perhaps the simplest is the leapfrog method, which is second order and (roughly speaking) relies on two previous time values.
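Written out, the leapfrog update steps from $t_{n-1}$ to $t_{n+1}$ using the slope at the midpoint $t_n$:

$$y_{n+1} = y_{n-1} + 2h\, f(t_n, y_n).$$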
Another possibility is to evaluate the slope at several intermediate points within each step; this leads to the family of Runge–Kutta methods, named after Carl Runge and Martin Kutta.
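The classical fourth-order method is the best-known member of the family. A minimal sketch of a single step, assuming f is a callable f(t, y) returning the slope:

```python
def rk4_step(f, t, y, h):
    """One step of the classical fourth-order Runge-Kutta method."""
    k1 = f(t, y)                          # slope at the start
    k2 = f(t + h / 2, y + h * k1 / 2)     # slope at the midpoint, using k1
    k3 = f(t + h / 2, y + h * k2 / 2)     # slope at the midpoint, using k2
    k4 = f(t + h, y + h * k3)             # slope at the end
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
```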
A good implementation of one of these methods for solving an ODE entails more than the time-stepping formula.
It is often inefficient to use the same step size all the time, so variable step-size methods have been developed.
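One simple way to vary the step size is step doubling: compare one full step against two half steps and use the discrepancy as an error estimate. The sketch below reuses the rk4_step sketch above, works for scalar problems, and deliberately simplifies the accept/reject logic:

```python
def adaptive_step(f, t, y, h, tol):
    """Step doubling: estimate the local error by comparing one
    full step with two half steps, then adjust h accordingly."""
    y_full = rk4_step(f, t, y, h)
    y_mid = rk4_step(f, t, y, h / 2)
    y_half = rk4_step(f, t + h / 2, y_mid, h / 2)
    err = abs(y_half - y_full)
    if err <= tol:
        return t + h, y_half, 2 * h   # accept, and try a larger step next
    return t, y, h / 2                # reject, retry with a smaller step
```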
Other desirable features include dense output (cheap numerical approximations of the solution between grid points) and event location (finding the times at which, say, a particular component of the solution vanishes). Many methods do not fall within the framework discussed here. Some classes of alternative methods are multiderivative methods, which use not only the function f but also its derivatives; methods for second-order ODEs, such as Nyström methods; and geometric integration methods, which are designed to preserve qualitative properties of the flow such as invariants.

Some IVPs require integration at such high temporal resolution and/or over such long time intervals that classical serial time-stepping methods become computationally infeasible to run in real-time (e.g. IVPs in numerical weather prediction, plasma modelling, and molecular dynamics).
Parallel-in-time (PinT) methods have been developed in response to these issues in order to reduce simulation runtimes through the use of parallel computing.
Early PinT methods (the earliest being proposed in the 1960s)[20] were initially overlooked by researchers because the parallel computing architectures they required were not yet widely available.
With more computing power available, interest was renewed in the early 2000s with the development of Parareal, a flexible, easy-to-use PinT algorithm that is suitable for solving a wide variety of IVPs.
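Parareal combines a cheap coarse propagator $G$ and an accurate fine propagator $F$ via the correction iteration $U^{k+1}_{n+1} = G(U^{k+1}_n) + F(U^k_n) - G(U^k_n)$. The sketch below emulates this serially; in practice the F evaluations run in parallel across the time slices, and both propagators here are placeholder callables:

```python
def parareal(F, G, y0, n_slices, n_iter):
    """Serial emulation of the Parareal iteration.  F(u) and G(u)
    propagate a state across one time slice (fine vs. coarse)."""
    U = [y0]                      # initial guess from a coarse sweep alone
    for n in range(n_slices):
        U.append(G(U[-1]))
    for _ in range(n_iter):
        F_old = [F(U[n]) for n in range(n_slices)]  # parallelizable part
        G_old = [G(U[n]) for n in range(n_slices)]
        for n in range(n_slices):                   # serial correction sweep
            U[n + 1] = G(U[n]) + F_old[n] - G_old[n]
    return U
```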
The advent of exascale computing has meant that PinT algorithms are attracting increasing research attention and are being developed in such a way that they can harness the world's most powerful supercomputers.
The most popular methods as of 2023 include Parareal, PFASST, ParaDiag, and MGRIT.
A numerical method is said to be convergent if the numerical solution approaches the exact solution as the step size h goes to 0. More precisely, we require that for every ODE of the form above with a Lipschitz function f and every t* > 0,

$$\lim_{h \to 0^+} \max_{n = 0, 1, \dots, \lfloor t^*/h \rfloor} \left\| y_{n,h} - y(t_n) \right\| = 0,$$

where $y_{n,h}$ denotes the numerical approximation after n steps of size h. All the methods mentioned above are convergent.
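Convergence rates can also be observed empirically. The snippet below reuses the forward_euler sketch from earlier together with the test problem y' = -y, chosen here purely for illustration because the exact solution e^(-t) is known; halving h should roughly halve the error, consistent with first-order convergence:

```python
import math

for k in range(1, 6):
    h = 0.1 / 2 ** k
    _, ys = forward_euler(lambda t, y: -y, 1.0, 0.0, 1.0, h)
    err = abs(ys[-1] - math.exp(-1.0))
    print(f"h = {h:.5f}  error = {err:.2e}")  # error shrinks roughly linearly in h
```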
This "difficult behaviour" in the equation (which may not necessarily be complex itself) is described as stiffness, and is often caused by the presence of different time scales in the underlying problem.
[23] For example, a collision in a mechanical system like in an impact oscillator typically occurs at much smaller time scale than the time for the motion of objects; this discrepancy makes for very "sharp turns" in the curves of the state parameters.
Stiff problems are ubiquitous in chemical kinetics, control theory, solid mechanics, weather forecasting, biology, plasma physics, and electronics.[26][27]

Boundary value problems (BVPs) are usually solved numerically by solving an approximately equivalent matrix problem obtained by discretizing the original BVP. The most commonly used approach for numerically solving BVPs in one dimension is the finite difference method.[3] This method takes advantage of linear combinations of point values to construct finite difference coefficients that describe derivatives of the function.
One then constructs a linear system that can be solved by standard matrix methods.
For example, suppose the equation to be solved is a two-point problem of the form

$$u''(x) = f(x), \qquad u(0) = a, \quad u(1) = b.$$

The next step would be to discretize the problem on a uniform grid $x_i = ih$ and use linear derivative approximations such as the second-order central difference

$$u''(x_i) \approx \frac{u_{i+1} - 2u_i + u_{i-1}}{h^2},$$

and solve the resulting system of linear equations.
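A minimal sketch of the whole procedure, assuming NumPy and using the illustrative test problem u'' = -π² sin(πx) with u(0) = u(1) = 0, whose exact solution is sin(πx):

```python
import numpy as np

n = 50                           # number of interior grid points
h = 1.0 / (n + 1)
x = h * np.arange(1, n + 1)      # interior points of [0, 1]

# Tridiagonal matrix encoding u'' ~ (u_{i+1} - 2u_i + u_{i-1}) / h^2.
A = (np.diag(-2.0 * np.ones(n))
     + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2

rhs = -np.pi**2 * np.sin(np.pi * x)  # f(x); the boundary terms vanish here

u = np.linalg.solve(A, rhs)          # dense solve, for brevity
print(np.max(np.abs(u - np.sin(np.pi * x))))  # small discretization error
```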