It can be used to approximate integrals by finite sums, or conversely to evaluate finite sums and infinite series using integrals and the machinery of calculus.
The formula was discovered independently by Leonhard Euler and Colin Maclaurin around 1735.
Euler needed it to compute slowly converging infinite series while Maclaurin used it to calculate integrals.
The Euler–Maclaurin formula provides expressions for the difference between the sum and the integral in terms of the higher derivatives f^{(k)}(x) evaluated at the endpoints of the interval, that is to say x = m and x = n. Explicitly, for p a positive integer and a function f(x) that is p times continuously differentiable on the interval [m,n], we have

$$\sum_{i=m+1}^{n} f(i) - \int_m^n f(x)\,dx = \sum_{k=1}^{p} \frac{B_k}{k!} \left(f^{(k-1)}(n) - f^{(k-1)}(m)\right) + R_p,$$

where B_k is the kth Bernoulli number (with B_1 = 1/2) and R_p is an error term which depends on n, m, p, and f and is usually small for suitable values of p. The formula is often written with the subscript taking only even values, since the odd Bernoulli numbers are zero except for B_1.
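As a concrete check, the even-index form of the formula, Σ_{i=m}^{n} f(i) = ∫_m^n f(x) dx + (f(m)+f(n))/2 + Σ_k B_{2k}/(2k)! (f^{(2k−1)}(n) − f^{(2k−1)}(m)) + R_p, can be verified numerically. A minimal sketch in Python, with f(x) = x³ chosen so that the remainder vanishes:

```python
from fractions import Fraction

# Check of the even-index Euler-Maclaurin formula for f(x) = x^3: since f is
# a cubic polynomial, the remainder vanishes once derivatives of order >= 4
# appear. B_2 = 1/6 and B_4 = -1/30 are hard-coded for this sketch.
def f(x):
    return x**3

def f1(x):          # f'
    return 3 * x**2

def f3(x):          # f'''
    return 6

m, n = 0, 10
direct_sum = sum(f(i) for i in range(m, n + 1))
integral = Fraction(n**4 - m**4, 4)                       # exact value of the integral of x^3
correction = (Fraction(f(m) + f(n), 2)
              + Fraction(1, 6) / 2 * (f1(n) - f1(m))      # B_2/2! term
              + Fraction(-1, 30) / 24 * (f3(n) - f3(m)))  # B_4/4! term
print(direct_sum, integral + correction)   # both equal 3025
```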
The remainder term arises because the finitely many derivative terms usually do not account exactly for the difference between the sum and the integral.
The formula may be derived by applying repeated integration by parts to successive intervals [r, r + 1] for r = m, m + 1, …, n − 1. This derivation yields expressions for the remainder term such as

$$R_p = (-1)^{p+1} \int_m^n f^{(p)}(x)\, \frac{B_p\left(x - \lfloor x \rfloor\right)}{p!}\, dx,$$

where B_p(x) denotes the pth Bernoulli polynomial and ⌊x⌋ denotes the largest integer less than or equal to x, so that x − ⌊x⌋ always lies in the interval [0,1).
The remainder can be bounded as

$$\left|R_p\right| \le \frac{2\,\zeta(p)}{(2\pi)^p} \int_m^n \left|f^{(p)}(x)\right| dx,$$

where ζ denotes the Riemann zeta function; one approach to prove this inequality is to obtain the Fourier series for the polynomials B_k(x). The factor ζ(p) may be omitted for odd p, but the proof in this case is more complex (see Lehmer).
Euler computed the sum Σ_{k=1}^∞ 1/k² to 20 decimal places with only a few terms of the Euler–Maclaurin formula in 1735.
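Euler's acceleration of this slowly converging series can be sketched as follows; the cutoff N = 10 and the truncation order are illustrative choices, not Euler's actual working:

```python
from math import pi

# Sketch of Euler's idea: sum the first few terms of 1/k^2 directly, then
# approximate the tail sum_{k=N}^inf 1/k^2 by Euler-Maclaurin corrections.
N = 10
partial = sum(1.0 / k**2 for k in range(1, N))   # k = 1, ..., N-1
tail = (1.0 / N                # integral of x^-2 from N to infinity
        + 1.0 / (2 * N**2)    # endpoint term f(N)/2
        + 1.0 / (6 * N**3)    # B_2 correction
        - 1.0 / (30 * N**5)   # B_4 correction
        + 1.0 / (42 * N**7))  # B_6 correction
estimate = partial + tail
print(estimate)   # agrees with pi^2/6 to about ten decimal places
```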
This probably convinced him that the sum equals π²/6, which he proved in the same year.[4]

If f is a polynomial and p is sufficiently large, then the remainder term vanishes.
Fix N, the number of points to use in the approximation, and denote the corresponding step size by h = (b − a)/(N − 1). Set x_i = a + (i − 1)h, so that x_1 = a and x_N = b. Then:

$$I = \int_a^b f(x)\,dx \sim h\left(\frac{f(x_1)}{2} + f(x_2) + \cdots + f(x_{N-1}) + \frac{f(x_N)}{2}\right) + \sum_{k=1}^{\lfloor p/2 \rfloor} \frac{B_{2k}}{(2k)!}\, h^{2k} \left(f^{(2k-1)}(a) - f^{(2k-1)}(b)\right)$$
This may be viewed as an extension of the trapezoid rule by the inclusion of correction terms.
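A minimal sketch of this corrected trapezoid rule, using f(x) = eˣ on [0, 1] (an arbitrary test integrand) and only the first Bernoulli correction term:

```python
import math

# Trapezoid rule plus the first Euler-Maclaurin correction term, sketched for
# the arbitrary test integrand f(x) = exp(x) on [0, 1] with N = 11 points.
def trapezoid(f, a, b, N):
    h = (b - a) / (N - 1)
    return h * (sum(f(a + i * h) for i in range(N)) - (f(a) + f(b)) / 2)

f = math.exp
df = math.exp                      # f' = f for the exponential
a, b, N = 0.0, 1.0, 11
h = (b - a) / (N - 1)
plain = trapezoid(f, a, b, N)
corrected = plain - h**2 / 12 * (df(b) - df(a))    # B_2/2! = 1/12 term
exact = math.e - 1
print(abs(plain - exact), abs(corrected - exact))  # correction gains roughly four digits
```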
Note that this asymptotic expansion is usually not convergent; there is some p, depending upon f and h, such that the terms past order p increase rapidly.
Thus, the remainder term generally demands close attention.[5]

The Euler–Maclaurin formula is also used for detailed error analysis in numerical quadrature.
It explains the superior performance of the trapezoidal rule on smooth periodic functions and is used in certain extrapolation methods.
Clenshaw–Curtis quadrature is essentially a change of variables that casts an arbitrary integral in terms of integrals of periodic functions, where the Euler–Maclaurin approach is very accurate; in that particular case the Euler–Maclaurin formula takes the form of a discrete cosine transform.
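The superior performance on smooth periodic functions can be illustrated directly: for a periodic integrand, all endpoint correction terms in the Euler–Maclaurin formula cancel, so the trapezoidal rule error decays faster than any power of the step size. A sketch with the test integrand 1/(2 + cos x), whose exact integral over a period is 2π/√3:

```python
import math

# By Euler-Maclaurin, for a smooth 2*pi-periodic integrand every endpoint
# correction term cancels, so the trapezoidal rule error decays faster than
# any power of h. Test integrand: 1/(2 + cos x), exact integral 2*pi/sqrt(3).
def periodic_trapezoid(f, N):
    h = 2 * math.pi / N
    return h * sum(f(i * h) for i in range(N))  # endpoints identified by periodicity

f = lambda x: 1.0 / (2.0 + math.cos(x))
exact = 2 * math.pi / math.sqrt(3)
errors = {N: abs(periodic_trapezoid(f, N) - exact) for N in (4, 8, 16, 32)}
for N, err in errors.items():
    print(N, err)   # error falls off geometrically in N
```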
In the context of computing asymptotic expansions of sums and series, usually the most useful form of the Euler–Maclaurin formula is

$$\sum_{n=a}^{b} f(n) \sim \int_a^b f(x)\,dx + \frac{f(a) + f(b)}{2} + \sum_{k=1}^{\infty} \frac{B_{2k}}{(2k)!} \left(f^{(2k-1)}(b) - f^{(2k-1)}(a)\right),$$

where a and b are integers.[6] Often the expansion remains valid even after taking the limits a → −∞ or b → +∞, or both.
In many cases the integral on the right-hand side can be evaluated in closed form in terms of elementary functions even though the sum on the left-hand side cannot.
Consider, for example, the asymptotic expansion obtained by applying the formula with f(x) = 1/(z + x)², a = 0, and b → ∞:

$$\sum_{k=0}^{\infty} \frac{1}{(z+k)^2} \sim \frac{1}{z} + \frac{1}{2z^2} + \sum_{k=1}^{\infty} \frac{B_{2k}}{z^{2k+1}}.$$

Here the left-hand side is equal to ψ^{(1)}(z), namely the first-order polygamma function defined by

$$\psi^{(1)}(z) = \frac{d^2}{dz^2} \log \Gamma(z);$$

the gamma function Γ(z) is equal to (z − 1)! when z is a positive integer.
That expansion, in turn, serves as the starting point for one of the derivations of precise error estimates for Stirling's approximation of the factorial function.
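As an illustration, a truncated Stirling series for ln(n!), of the kind produced by applying Euler–Maclaurin to ln(n!) = Σ_{k=1}^{n} ln k, can be compared against the standard library's log-gamma function; the truncation order here is an arbitrary choice:

```python
import math

# Truncated Stirling series for ln(n!); stopping after the 1/n^3 term is
# an illustrative choice, not a canonical one.
def stirling_log_factorial(n):
    return (n * math.log(n) - n
            + 0.5 * math.log(2 * math.pi * n)
            + 1 / (12 * n)
            - 1 / (360 * n**3))

n = 20
print(stirling_log_factorial(n), math.lgamma(n + 1))  # agree to better than 1e-9
```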
Collecting the constants into a value of the Riemann zeta function, we can write, for s ≠ 1, an asymptotic expansion:

$$\sum_{k=1}^{n} \frac{1}{k^s} \sim \zeta(s) - \frac{1}{(s-1)\,n^{s-1}} + \frac{1}{2n^s} - \frac{s}{12\,n^{s+1}} + \cdots$$
When s = 1, the corresponding technique gives an asymptotic expansion for the harmonic numbers:

$$\sum_{k=1}^{n} \frac{1}{k} \sim \ln n + \gamma + \frac{1}{2n} - \sum_{k=1}^{\infty} \frac{B_{2k}}{2k\,n^{2k}},$$

where γ ≈ 0.5772… is the Euler–Mascheroni constant.
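A quick numerical check of this expansion for the harmonic numbers (truncated after the n⁻⁴ term, an arbitrary choice), with the Euler–Mascheroni constant hard-coded:

```python
import math

# Asymptotic expansion of the harmonic numbers H_n from Euler-Maclaurin; the
# Euler-Mascheroni constant is hard-coded (it is not in the math module).
EULER_GAMMA = 0.5772156649015329

def harmonic_direct(n):
    return sum(1.0 / k for k in range(1, n + 1))

def harmonic_asymptotic(n):
    return (math.log(n) + EULER_GAMMA
            + 1 / (2 * n) - 1 / (12 * n**2) + 1 / (120 * n**4))

n = 100
print(harmonic_direct(n), harmonic_asymptotic(n))  # agree to near machine precision
```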
To continue the induction, we apply integration by parts to the error term:

$$\int_0^1 f'(x)\left(x - \tfrac{1}{2}\right) dx = \left[f'(x)\,\frac{B_2(x)}{2}\right]_0^1 - \frac{1}{2}\int_0^1 f''(x)\,B_2(x)\,dx = \frac{B_2}{2}\left(f'(1) - f'(0)\right) - \frac{1}{2}\int_0^1 f''(x)\,B_2(x)\,dx,$$

using B_2'(x) = 2(x − 1/2) and B_2(0) = B_2(1) = B_2 = 1/6.
Summing from k = 0 to k = n − 1 and substituting this for the lower order error term results in the p = 2 case of the formula,

$$\sum_{k=0}^{n} f(k) = \int_0^n f(x)\,dx + \frac{f(0) + f(n)}{2} + \frac{B_2}{2}\left(f'(n) - f'(0)\right) - \frac{1}{2}\int_0^n f''(x)\,B_2\!\left(x - \lfloor x \rfloor\right) dx.$$
In this way we get a proof of the Euler–Maclaurin summation formula which can be formalized by mathematical induction, in which the induction step relies on integration by parts and on identities for periodic Bernoulli functions.
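The p = 2 case of the formula, with its explicit remainder integral, can be checked numerically. A sketch for f(x) = 1/(1 + x) on [0, 5], approximating the remainder integral with a simple midpoint rule (the grid size is chosen arbitrarily):

```python
import math

# Numeric check of the p = 2 Euler-Maclaurin identity with explicit remainder,
#   sum_{k=0}^{n} f(k) = int_0^n f + (f(0)+f(n))/2 + (B_2/2)(f'(n)-f'(0))
#                        - (1/2) int_0^n B_2(x - floor(x)) f''(x) dx,
# for f(x) = 1/(1+x) on [0, 5], with B_2(t) = t^2 - t + 1/6.
f = lambda x: 1.0 / (1.0 + x)
df = lambda x: -1.0 / (1.0 + x)**2
d2f = lambda x: 2.0 / (1.0 + x)**3
B2 = lambda t: t * t - t + 1.0 / 6.0

n = 5
lhs = sum(f(k) for k in range(n + 1))
M = 200_000                                   # midpoint-rule subintervals
h = n / M
remainder = -0.5 * h * sum(B2((i + 0.5) * h % 1.0) * d2f((i + 0.5) * h)
                           for i in range(M))
rhs = math.log(1 + n) + (f(0) + f(n)) / 2 + (1 / 12) * (df(n) - df(0)) + remainder
print(abs(lhs - rhs))   # close to zero
```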