In numerical analysis, the Newton–Cotes formulas, also called the Newton–Cotes quadrature rules or simply Newton–Cotes rules, are a group of formulas for numerical integration (also called quadrature) based on evaluating the integrand at equally spaced points.
They are named after Isaac Newton and Roger Cotes.
Newton–Cotes formulas can be useful if the value of the integrand at equally spaced points is given.
If it is possible to change the points at which the integrand is evaluated, then other methods such as Gaussian quadrature and Clenshaw–Curtis quadrature are probably more suitable.
It is assumed that the value of a function f defined on [a, b] is known at n + 1 equally spaced points:

    a \le x_0 < x_1 < \cdots < x_n \le b.
There are two classes of Newton–Cotes quadrature: they are called "closed" when x_0 = a and x_n = b, i.e. they use the function values at the interval endpoints, and "open" when x_0 > a and x_n < b, i.e. they do not use the function values at the endpoints.
The Newton–Cotes formula using n + 1 points can be defined (for both classes) as[1]

    \int_a^b f(x)\,dx \approx \sum_{i=0}^{n} w_i\, f(x_i),

where x_i = a + i h with h = (b − a)/n in the closed case, and x_i = a + (i + 1) h with h = (b − a)/(n + 2) in the open case. The number h is called the step size, and the numbers w_i are called weights.
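The two node layouts can be made concrete with a short sketch (illustrative only; the function names are ours, not from the source):

```python
def closed_nodes(a, b, n):
    """Closed-rule nodes x_i = a + i*h with h = (b - a)/n: endpoints included."""
    h = (b - a) / n
    return [a + i * h for i in range(n + 1)]

def open_nodes(a, b, n):
    """Open-rule nodes x_i = a + (i + 1)*h with h = (b - a)/(n + 2):
    endpoints excluded."""
    h = (b - a) / (n + 2)
    return [a + (i + 1) * h for i in range(n + 1)]
```

For n = 2 on [0, 1], the closed nodes are 0, 0.5, 1, while the open nodes 0.25, 0.5, 0.75 stay strictly inside the interval.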
The weights can be computed as the integral of Lagrange basis polynomials. Let L(x) be the interpolation polynomial in the Lagrange form for the given data points (x_0, f(x_0)), …, (x_n, f(x_n)):

    L(x) = \sum_{i=0}^{n} f(x_i)\, \ell_i(x), \qquad \ell_i(x) = \prod_{\substack{j=0 \\ j \neq i}}^{n} \frac{x - x_j}{x_i - x_j}.

Integrating L in place of f gives

    \int_a^b f(x)\,dx \approx \int_a^b L(x)\,dx = \sum_{i=0}^{n} f(x_i) \int_a^b \ell_i(x)\,dx,

so the weights are w_i = \int_a^b \ell_i(x)\,dx.
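This computation can be carried out exactly in rational arithmetic. The sketch below (our own illustration, not from the source) builds each Lagrange basis polynomial over the normalized closed nodes 0, 1, …, n and integrates it, returning weights scaled so that the integral is approximated by h · Σ w_i f(a + i·h):

```python
from fractions import Fraction

def closed_newton_cotes_weights(n):
    """Closed Newton-Cotes weights, computed by exactly integrating each
    Lagrange basis polynomial l_i(t) over [0, n] with rational arithmetic.
    Scaled so that the rule reads h * sum(w_i * f(a + i*h)), h = (b - a)/n."""
    weights = []
    for i in range(n + 1):
        # Build the numerator of l_i(t) = prod_{j != i} (t - j) / (i - j),
        # as a coefficient list, lowest degree first.
        coeffs = [Fraction(1)]
        denom = Fraction(1)
        for j in range(n + 1):
            if j == i:
                continue
            new = [Fraction(0)] * (len(coeffs) + 1)
            for k, c in enumerate(coeffs):
                new[k + 1] += c      # t * c t^k
                new[k] -= j * c      # -j * c t^k
            coeffs = new
            denom *= i - j
        # Integrate term by term from 0 to n: sum of c_k * n^(k+1) / (k+1).
        integral = sum(c * Fraction(n) ** (k + 1) / (k + 1)
                       for k, c in enumerate(coeffs))
        weights.append(integral / denom)
    return weights
```

For n = 1 this yields the trapezoidal weights 1/2, 1/2, and for n = 2 the weights 1/3, 4/3, 1/3 of Simpson's rule; since the rule integrates constants exactly, the weights always sum to n.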
A Newton–Cotes formula of any degree n can be constructed.
However, for large n a Newton–Cotes rule can suffer from catastrophic Runge's phenomenon,[2] in which the error grows exponentially as n increases. Methods such as Gaussian quadrature and Clenshaw–Curtis quadrature, which use unequally spaced points (clustered at the endpoints of the integration interval), are stable and much more accurate, and are normally preferred to Newton–Cotes.
If these methods cannot be used, because the integrand is given only on a fixed equidistant grid, then Runge's phenomenon can be avoided by using a composite rule, as explained below.
Alternatively, stable Newton–Cotes formulas can be constructed using least-squares approximation instead of interpolation.
This allows building numerically stable formulas even for high degrees.[3][4]

This table lists some of the Newton–Cotes formulas of the closed type, writing f_i for f(x_i) with x_i = a + i h and h = (b − a)/n:

n = 1 (trapezoidal rule): (h/2)(f_0 + f_1), error term −(1/12) h^3 f^{(2)}(ξ)
n = 2 (Simpson's rule): (h/3)(f_0 + 4 f_1 + f_2), error term −(1/90) h^5 f^{(4)}(ξ)
n = 3 (Simpson's 3/8 rule): (3h/8)(f_0 + 3 f_1 + 3 f_2 + f_3), error term −(3/80) h^5 f^{(4)}(ξ)
n = 4 (Boole's rule): (2h/45)(7 f_0 + 32 f_1 + 12 f_2 + 32 f_3 + 7 f_4), error term −(8/945) h^7 f^{(6)}(ξ)
Boole's rule is sometimes mistakenly called Bode's rule, as a result of the propagation of a typographical error in Abramowitz and Stegun, an early reference book.[5]

The exponent of the step size h in the error term gives the rate at which the approximation error decreases.
The order of the derivative of f in the error term gives the lowest degree of a polynomial which can no longer be integrated exactly (i.e. with error equal to zero) with this rule.
The number ξ in the error term must be taken from the interval (a, b); therefore, the error bound is equal to the error term when ξ is chosen so that the derivative attains its maximum absolute value on (a, b).
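Both readings of the error term can be checked numerically with Simpson's rule (an illustrative sketch, not from the source):

```python
import math

def simpson(f, a, b):
    """Simpson's rule (closed Newton-Cotes rule with n = 2) on one interval."""
    h = (b - a) / 2.0
    return h / 3.0 * (f(a) + 4.0 * f(a + h) + f(b))

# The error term contains h^5 and the fourth derivative, so:
# 1) every polynomial of degree <= 3 is integrated exactly, while degree 4 is not;
cubic = simpson(lambda x: x**3, 0.0, 1.0)    # exact integral is 1/4
quartic = simpson(lambda x: x**4, 0.0, 1.0)  # exact integral is 1/5

# 2) halving the interval (and hence h) shrinks the error by roughly 2^5 = 32.
err = lambda b: abs(simpson(math.exp, 0.0, b) - (math.exp(b) - 1.0))
ratio = err(0.5) / err(0.25)
```

Here `cubic` matches 1/4 to machine precision, `quartic` visibly misses 1/5, and `ratio` comes out near 32, matching the h^5 factor.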
This table lists some of the Newton–Cotes formulas of the open type. Again f_i is shorthand for f(x_i), now with x_i = a + (i + 1) h and h = (b − a)/(n + 2):

n = 0 (midpoint rule): 2h f_0, error term (1/3) h^3 f^{(2)}(ξ)
n = 1: (3h/2)(f_0 + f_1), error term (3/4) h^3 f^{(2)}(ξ)
n = 2 (Milne's rule): (4h/3)(2 f_0 − f_1 + 2 f_2), error term (14/45) h^5 f^{(4)}(ξ)
n = 3: (5h/24)(11 f_0 + f_1 + f_2 + 11 f_3), error term (95/144) h^5 f^{(4)}(ξ)
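An open rule never evaluates the integrand at the endpoints, which is useful when f is singular or undefined there. A minimal sketch of the three-point open rule with n = 2 (Milne's rule; the function name is ours):

```python
import math

def milne(f, a, b):
    """Open Newton-Cotes rule with n = 2 (Milne's rule): h = (b - a)/4 and
    interior nodes a + h, a + 2h, a + 3h; the endpoints a and b are never used."""
    h = (b - a) / 4.0
    return 4.0 * h / 3.0 * (2.0 * f(a + h) - f(a + 2.0 * h) + 2.0 * f(a + 3.0 * h))

# Example: the exact value of the integral of sin over [0, pi] is 2.
approx = milne(math.sin, 0.0, math.pi)
```

With only three function evaluations the result is already within the (14/45) h^5 error bound of the exact value 2.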
For the Newton–Cotes rules to be accurate, the step size h needs to be small, which means that the interval of integration [a, b] must itself be small, which is not true most of the time.
For this reason, one usually performs numerical integration by splitting [a, b] into smaller subintervals, applying a Newton–Cotes rule on each subinterval, and adding up the results.
This is called a composite rule.
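A minimal sketch of a composite rule, assuming Simpson's rule is applied on each pair of adjacent panels (names and parameters are ours, not from the source):

```python
import math

def composite_simpson(f, a, b, m):
    """Composite Simpson's rule: split [a, b] into m equal panels (m even),
    apply Simpson's rule across each consecutive pair, and sum the results."""
    if m % 2:
        raise ValueError("m must be even")
    h = (b - a) / m
    total = f(a) + f(b)
    for i in range(1, m):
        # Interior points alternate between weight 4 (odd i) and 2 (even i).
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3.0

# Example: the integral of sin over [0, pi] is exactly 2.
approx = composite_simpson(math.sin, 0.0, math.pi, 32)
```

Because the step size is now h = (b − a)/m rather than (b − a)/2, the error shrinks as m grows even though each subinterval uses only a low-degree rule, avoiding the instability of high-degree Newton–Cotes formulas.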
See Numerical integration.