Alternating series

In mathematics, an alternating series is an infinite series of terms that alternate between positive and negative signs.
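In capital-sigma notation such a series can be written as

$$\sum_{n=0}^\infty (-1)^n a_n \qquad \text{or} \qquad \sum_{n=0}^\infty (-1)^{n+1} a_n,$$

with $a_n > 0$ for all $n$.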

The alternating series test guarantees that an alternating series is convergent if the terms $a_n$ converge to 0 monotonically, but this condition is not necessary for convergence.

The alternating harmonic series has a finite sum but the harmonic series does not.
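Concretely,

$$\sum_{n=1}^\infty \frac{(-1)^{n+1}}{n} = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots = \ln 2,$$

whereas the harmonic series $\sum_{n=1}^\infty \frac{1}{n}$ grows without bound.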

The Mercator series provides an analytic power series expression of the natural logarithm, given by

$$\ln(1+x) = \sum_{n=1}^\infty \frac{(-1)^{n+1}}{n} x^n = x - \frac{x^2}{2} + \frac{x^3}{3} - \cdots, \qquad -1 < x \le 1.$$

The functions sine and cosine used in trigonometry and introduced in elementary algebra as the ratio of sides of a right triangle can also be defined as alternating series in calculus.
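These are the Maclaurin series

$$\sin x = \sum_{n=0}^\infty \frac{(-1)^n}{(2n+1)!}\, x^{2n+1}, \qquad \cos x = \sum_{n=0}^\infty \frac{(-1)^n}{(2n)!}\, x^{2n}.$$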

When the alternating factor $(-1)^n$ is removed from these series one obtains the hyperbolic functions sinh and cosh used in calculus and statistics.
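Namely,

$$\sinh x = \sum_{n=0}^\infty \frac{x^{2n+1}}{(2n+1)!}, \qquad \cosh x = \sum_{n=0}^\infty \frac{x^{2n}}{(2n)!}.$$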

For an integer or positive index $\alpha$, the Bessel function of the first kind may be defined by the alternating series

$$J_\alpha(x) = \sum_{m=0}^\infty \frac{(-1)^m}{m!\,\Gamma(m+\alpha+1)} \left(\frac{x}{2}\right)^{2m+\alpha},$$

where $\Gamma(z)$ is the gamma function.

If $s$ is a complex number, the Dirichlet eta function is formed as the alternating series

$$\eta(s) = \sum_{n=1}^\infty \frac{(-1)^{n-1}}{n^s} = \frac{1}{1^s} - \frac{1}{2^s} + \frac{1}{3^s} - \frac{1}{4^s} + \cdots,$$

which is used in analytic number theory.

The theorem known as the "Leibniz Test" or the alternating series test states that an alternating series will converge if the terms $a_n$ converge to 0 monotonically.
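In symbols: if $a_{n+1} \le a_n$ for all $n$ and $a_n \to 0$, then the alternating series $\sum_{n=0}^\infty (-1)^n a_n$ converges.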

Proof: Suppose the sequence $a_n$ converges to 0 and is monotonically decreasing. Let $S_m = \sum_{k=0}^m (-1)^k a_k$ denote the partial sums. For $m < n$, grouping the terms of $S_n - S_m$ in consecutive pairs shows that $|S_n - S_m| \le a_{m+1}$. Since $a_n$ converges to 0, the partial sums $S_m$ form a Cauchy sequence (i.e., the series satisfies the Cauchy criterion) and therefore they converge.

Because this estimate does not depend on $n$, if $a_n$ is approaching 0 monotonically it also provides an error bound for approximating infinite sums by partial sums:

$$\left| \sum_{k=0}^\infty (-1)^k a_k - \sum_{k=0}^m (-1)^k a_k \right| \le a_{m+1}.$$

That does not mean, however, that this estimate always identifies the very first partial sum whose error is below a given tolerance.

Indeed, if one takes the alternating harmonic series $1 - \tfrac{1}{2} + \tfrac{1}{3} - \tfrac{1}{4} + \cdots = \ln 2$ and tries to find a partial sum whose error is at most 0.00005, the inequality above shows that the partial sum up through $a_{20000}$ is certainly enough; yet the error after summing the first 9999 terms is about 0.0000500025, so the partial sum up through $a_{10000}$ already suffices, roughly half as many terms. This series happens to have the property that forming a new series from the differences $a_n - a_{n+1}$ again gives an alternating series to which the Leibniz test applies, which is what makes this simple error bound not optimal.

This was improved by the Calabrese bound,[1] discovered in 1962, which shows that this property yields an error bound half the size of the Leibniz error bound.

In fact this is also not optimal for series where this property applies two or more times, which is described by the Johnsonbaugh error bound.[2]

If one can apply the property an infinite number of times, Euler's transform applies.
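As a quick numerical check of the $\ln 2$ example above, the following Python sketch compares the number of terms demanded by the Leibniz error bound with the number actually needed; the helper partial_sum is illustrative only.

```python
import math

ln2 = math.log(2)

def partial_sum(n):
    """Partial sum of 1 - 1/2 + 1/3 - ... through the term 1/n."""
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

# The Leibniz bound |ln 2 - S_n| <= 1/(n + 1) only guarantees an error of at most
# 0.00005 once n is about 20000 ...
print(abs(ln2 - partial_sum(20000)))   # about 2.5e-05, well under the target
# ... but the true error is roughly 1/(2n), so about 10000 terms already suffice.
print(abs(ln2 - partial_sum(9999)))    # about 0.0000500025, just above 0.00005
print(abs(ln2 - partial_sum(10000)))   # about 0.0000499975, just below 0.00005
```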

A series $\sum a_n$ converges absolutely if the series $\sum |a_n|$ converges. Every absolutely convergent series is convergent: since $0 \le a_n + |a_n| \le 2|a_n|$, the series $\sum (a_n + |a_n|)$ converges by the comparison test, and subtracting the convergent series $\sum |a_n|$ then shows that $\sum a_n$ converges.

A series is conditionally convergent if it converges but does not converge absolutely. For example, the harmonic series

$$\sum_{n=1}^\infty \frac{1}{n}$$

diverges, while the alternating version

$$\sum_{n=1}^\infty \frac{(-1)^{n+1}}{n}$$

converges by the alternating series test.

But the Riemann series theorem states that a conditionally convergent series can be rearranged to converge to any value at all, or even to diverge.[4]
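For example, rearranging the alternating harmonic series so that two positive terms are followed by one negative term yields

$$1 + \frac{1}{3} - \frac{1}{2} + \frac{1}{5} + \frac{1}{7} - \frac{1}{4} + \cdots = \frac{3}{2}\ln 2,$$

a different sum from the usual ordering.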

Agnew's theorem describes rearrangements that preserve convergence for all convergent series.

The general principle is that addition of infinite sums is only commutative for absolutely convergent series.

For example, one false proof that 1=0 exploits the failure of associativity for infinite sums.
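One common version of such an argument regroups the divergent series $1 - 1 + 1 - 1 + \cdots$:

$$0 = (1 - 1) + (1 - 1) + \cdots = 1 + (-1 + 1) + (-1 + 1) + \cdots = 1,$$

where the regrouping step is invalid because the series does not converge.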

In practice, the numerical summation of an alternating series may be sped up using any one of a variety of series acceleration techniques.

One of the oldest techniques is that of Euler summation, and there are many modern techniques that can offer even more rapid convergence.
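As a minimal sketch of the idea (not a full implementation of Euler summation), the following Python snippet accelerates the alternating harmonic series by repeatedly averaging adjacent partial sums, an elementary transformation in the same spirit; the helper names partial_sums and averaged are purely illustrative.

```python
import math

def partial_sums(terms):
    """Running partial sums of a sequence of signed terms."""
    total, sums = 0.0, []
    for t in terms:
        total += t
        sums.append(total)
    return sums

def averaged(sums, passes):
    """Repeatedly average adjacent partial sums; for a well-behaved alternating
    series each pass substantially reduces the error."""
    for _ in range(passes):
        sums = [(x + y) / 2 for x, y in zip(sums, sums[1:])]
    return sums

# Alternating harmonic series: 1 - 1/2 + 1/3 - ... = ln 2
terms = [(-1) ** k / (k + 1) for k in range(20)]
raw = partial_sums(terms)
fast = averaged(raw, passes=10)

print(abs(raw[-1] - math.log(2)))    # error of the 20-term partial sum, about 0.024
print(abs(fast[-1] - math.log(2)))   # error after averaging, smaller by several orders of magnitude
```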