Finite difference

A finite difference is a mathematical expression of the form f(x + b) − f(x + a). If a finite difference is divided by b − a, one gets a difference quotient.

Finite differences were introduced by Brook Taylor in 1715 and have also been studied as abstract self-standing mathematical objects in works by George Boole (1860), L. M. Milne-Thomson (1933), and Károly Jordan (1939).[1][2][3]

Finite differences trace their origins back to one of Jost Bürgi's algorithms (c. 1592) and work by others including Isaac Newton.

Three basic types are commonly considered: forward, backward, and central finite differences.[4] The forward difference is Δ_h[f](x) = f(x + h) − f(x), the backward difference is ∇_h[f](x) = f(x) − f(x − h), and the central difference is δ_h[f](x) = f(x + h/2) − f(x − h/2).

The approximation of derivatives by finite differences plays a central role in finite difference methods for the numerical solution of differential equations, especially boundary value problems.

The derivative of a function f at a point x is defined by the limit f′(x) = lim(h → 0) [f(x + h) − f(x)]/h. If h has a fixed (non-zero) value instead of approaching zero, then the right-hand side of the above equation would be written

[f(x + h) − f(x)]/h = Δ_h[f](x)/h.

However, the central (also called centered) difference yields a more accurate approximation.
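As a sketch of this accuracy difference, the following snippet (the helper names, test function, and step size are illustrative choices, not fixed by the text) compares the three difference quotients against the exact derivative of sin:

```python
import math

# Hypothetical helper names; any differentiable function and small step work.
def forward(f, x, h):
    return (f(x + h) - f(x)) / h       # error ~ O(h)

def backward(f, x, h):
    return (f(x) - f(x - h)) / h       # error ~ O(h)

def central(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)  # error ~ O(h^2)

x, h = 1.0, 1e-3
exact = math.cos(x)                    # d/dx sin(x) = cos(x)
err_fwd = abs(forward(math.sin, x, h) - exact)
err_bwd = abs(backward(math.sin, x, h) - exact)
err_ctr = abs(central(math.sin, x, h) - exact)
# The central difference error is roughly h times smaller than the one-sided ones.
```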

The main problem[citation needed] with the central difference method, however, is that oscillating functions can yield a zero derivative. For example, if f(nh) = 1 for n odd and f(nh) = 2 for n even, then the central difference computed at any even grid point compares two equal neighbouring values and evaluates to zero, even though f is not constant.
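A minimal illustration of this failure mode, using a grid function (chosen here for illustration) that alternates between two values:

```python
# Grid function alternating 2, 1, 2, 1, ... at the points n*h.
h = 1.0
f = lambda x: 1 if (round(x / h) % 2) else 2

n = 4                      # an even grid point
x = n * h
central = (f(x + h) - f(x - h)) / (2 * h)
# Both neighbours f(3h) and f(5h) equal 1, so the central difference is 0,
# even though f is strongly oscillating rather than constant.
```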

In an analogous way, one can obtain finite difference approximations to higher order derivatives and differential operators.[1][2][3] For example, using the central difference formula for f′ at x + h/2 and x − h/2 and then applying a central difference to f′ at x gives the central difference approximation of the second derivative of f: f″(x) ≈ [f(x + h) − 2f(x) + f(x − h)]/h².

More generally, the n-th order forward, backward, and central differences are given by, respectively,

Δ_h^n[f](x) = Σ_{i=0}^{n} (−1)^{n−i} (n choose i) f(x + ih),
∇_h^n[f](x) = Σ_{i=0}^{n} (−1)^{i} (n choose i) f(x − ih),
δ_h^n[f](x) = Σ_{i=0}^{n} (−1)^{i} (n choose i) f(x + (n/2 − i)h).

These equations use binomial coefficients after the summation sign, written here as (n choose i).
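Assuming the standard binomial-sum form of the n-th forward difference, this sketch checks it against the recursive definition Δ_h^n = Δ_h(Δ_h^(n−1)); the test function, point, and step are arbitrary choices:

```python
from math import comb

def forward_diff_n(f, x, h, n):
    """n-th order forward difference via the binomial-coefficient sum."""
    return sum((-1) ** (n - i) * comb(n, i) * f(x + i * h) for i in range(n + 1))

def iterated(f, x, h, n):
    """Same quantity via the recursion: difference of two (n-1)-th differences."""
    if n == 0:
        return f(x)
    return iterated(f, x + h, h, n - 1) - iterated(f, x, h, n - 1)

f = lambda x: x ** 3
a = forward_diff_n(f, 2.0, 0.5, 3)
b = iterated(f, 2.0, 0.5, 3)
# For a cubic a*x^3 the 3rd difference is the constant a * h**3 * 3! = 0.75 here.
```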

The integral representation for these types of series is interesting, because the integral can often be evaluated using asymptotic expansion or saddle-point techniques; by contrast, the forward difference series can be extremely hard to evaluate numerically, because the binomial coefficients grow rapidly for large n. The relationship of these higher-order differences with the respective derivatives is straightforward:

d^n f / dx^n (x) = Δ_h^n[f](x) / h^n + O(h).

This can be proven by expanding the above expression in Taylor series, or by using the calculus of finite differences, explained below.
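As a numerical sanity check of this relationship (the function, point, and step size are illustrative), the third forward difference of sin divided by h^3 should be close to the third derivative, −cos:

```python
import math
from math import comb

def forward_diff_n(f, x, h, n):
    """n-th order forward difference via the binomial-coefficient sum."""
    return sum((-1) ** (n - i) * comb(n, i) * f(x + i * h) for i in range(n + 1))

x, h, n = 1.0, 1e-2, 3
approx = forward_diff_n(math.sin, x, h, n) / h ** n
exact = -math.cos(x)           # third derivative of sin
# The difference between approx and exact shrinks like O(h).
```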

For a given polynomial P(x) = ax^n + bx^(n−1) + l.o.t. of degree n ≥ 1, with real numbers a ≠ 0 and b, and lower order terms (if any) marked as l.o.t., applying the forward difference n times with step size h ≠ 0 yields the constant

Δ_h^n[P](x) = a h^n n!,

and all higher differences vanish.

Given the number of pairwise differences needed to reach a constant row in a difference table (three, in the example considered), it can be surmised that the tabulated values come from a polynomial of degree 3.
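A short sketch of this difference-table reasoning, for an arbitrarily chosen cubic sampled at unit steps; the third differences come out constant and equal to a·h³·3! = 2·1·6 = 12:

```python
def difference_rows(values):
    """Build successive rows of pairwise differences of a sequence."""
    rows = [list(values)]
    while len(rows[-1]) > 1:
        prev = rows[-1]
        rows.append([b - a for a, b in zip(prev, prev[1:])])
    return rows

# Samples of P(x) = 2x^3 - x + 5 at x = 0, 1, ..., 6 (so h = 1, a = 2, n = 3).
P = lambda x: 2 * x ** 3 - x + 5
rows = difference_rows([P(x) for x in range(7)])
third = rows[3]
# Three rounds of differencing reach a constant row, so the degree is 3,
# and each entry equals a * h**3 * 3! = 12.
```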

The idea is to replace the derivatives appearing in the differential equation by finite differences that approximate them.
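As a sketch of this idea (the equation, boundary conditions, and grid size are illustrative choices, not from the text), the two-point boundary value problem u″ = g on [0, 1] with u(0) = u(1) = 0 can be discretized with the second central difference and solved as a tridiagonal linear system:

```python
import math

def solve_bvp(g, n):
    """Solve u'' = g on [0,1], u(0) = u(1) = 0, replacing u'' by
    (u[i-1] - 2u[i] + u[i+1]) / h**2 at the n-1 interior grid points."""
    h = 1.0 / n
    a = [1.0] * (n - 1)                     # sub-diagonal
    b = [-2.0] * (n - 1)                    # diagonal
    c = [1.0] * (n - 1)                     # super-diagonal
    d = [h * h * g(i * h) for i in range(1, n)]
    # Thomas algorithm: forward elimination, then back substitution.
    for i in range(1, n - 1):
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    u = [0.0] * (n - 1)
    u[-1] = d[-1] / b[-1]
    for i in range(n - 3, -1, -1):
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return [0.0] + u + [0.0]                # include boundary values

# u'' = -pi^2 sin(pi x) has the exact solution u(x) = sin(pi x).
u = solve_bvp(lambda x: -math.pi ** 2 * math.sin(math.pi * x), 50)
err = max(abs(ui - math.sin(math.pi * i / 50)) for i, ui in enumerate(u))
# err is O(h^2), far smaller than the solution's magnitude.
```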

For example, the function f(x) = sin(πx) vanishes at every integer; the corresponding Newton series (at integer nodes) is identically zero, as all finite differences are zero in this case, even though the function itself is not identically zero.

In this particular case, unit steps are assumed for the changes in the values of x, that is, h = 1 in the generalization below.

These identities, following from the Newton series and corresponding to the binomial theorem, are included in the observations that matured into the system of umbral calculus.

Newton series expansions can be superior to Taylor series expansions when applied to discrete quantities like quantum spins (see Holstein–Primakoff transformation), bosonic operator functions or discrete counting statistics.

To illustrate how one may use Newton's formula in actual practice, consider the first few terms of doubling the Fibonacci sequence f = 2, 2, 4, ...[12] One can find a polynomial that reproduces these values, by first computing a difference table, and then substituting the differences that correspond to x0 (underlined) into the formula as follows,
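The computation can be sketched in Python as follows; the values 2, 2, 4 are the ones from the text, while the helper names are ad hoc:

```python
from math import comb

# Difference table for the sample values f(0), f(1), f(2) = 2, 2, 4.
values = [2, 2, 4]
diffs = [values]
while len(diffs[-1]) > 1:
    prev = diffs[-1]
    diffs.append([b - a for a, b in zip(prev, prev[1:])])
leading = [row[0] for row in diffs]   # [2, 0, 2]: differences at x0 = 0

# Newton forward-difference formula with h = 1:
#   p(x) = sum_k (x choose k) * Delta^k f(0)
def p(x):
    return sum(comb(x, k) * leading[k] for k in range(len(leading)))

# p reproduces the three tabulated values exactly; it equals x^2 - x + 2,
# so p(3) = 8 (the quadratic through the three points, not necessarily
# the next term of the doubled Fibonacci sequence).
```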

For the case of nonuniform steps in the values of x, Newton computes the divided differences

[y_j] = y_j,   [y_j, …, y_{j+k}] = ([y_{j+1}, …, y_{j+k}] − [y_j, …, y_{j+k−1}]) / (x_{j+k} − x_j),

and the interpolating polynomial is built from these coefficients and the products (x − x_0)(x − x_1)⋯(x − x_{k−1}).
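A minimal sketch of the divided-difference computation for nonuniform nodes; the sample points, function, and helper names are arbitrary illustrative choices:

```python
def divided_differences(xs, ys):
    """Return the coefficients [y0], [y0,y1], ..., [y0,...,yn],
    computed in place column by column."""
    coeffs = list(ys)
    n = len(xs)
    for k in range(1, n):
        for i in range(n - 1, k - 1, -1):
            coeffs[i] = (coeffs[i] - coeffs[i - 1]) / (xs[i] - xs[i - k])
    return coeffs

def newton_poly(xs, coeffs, x):
    """Evaluate sum_k coeffs[k] * (x - x0)...(x - x_{k-1}) in Horner form."""
    result = coeffs[-1]
    for k in range(len(coeffs) - 2, -1, -1):
        result = result * (x - xs[k]) + coeffs[k]
    return result

xs = [0.0, 1.0, 4.0]               # nonuniform nodes
ys = [x * x for x in xs]           # samples of f(x) = x^2
c = divided_differences(xs, ys)
# The interpolant reproduces the quadratic exactly at any point, e.g. x = 2.5.
val = newton_poly(xs, c, 2.5)
```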

Carlson's theorem provides necessary and sufficient conditions for a Newton series to be unique, if it exists.

In a compressed and slightly more general form, with equidistant nodes x_k = a + kh, the formula reads

f(x) = Σ_{k=0}^{∞} ((x − a)/h choose k) Σ_{j=0}^{k} (−1)^{k−j} (k choose j) f(a + jh).

Finite differences of higher orders can be defined in a recursive manner as Δ_h^n ≡ Δ_h(Δ_h^{n−1}).

Formally applying the Taylor series with respect to h yields the operator equation

Δ_h = e^{hD} − I,

where D denotes the conventional continuous derivative operator, mapping f to its derivative f′, and I is the identity operator.

The expansion is valid when both sides act on analytic functions, for sufficiently small h. In the special case that the series of derivatives terminates (when the function operated on is a polynomial of finite degree), the expression is exact for all finite step sizes h.

This formula holds in the sense that both operators give the same result when applied to a polynomial.

For instance, retaining the first two terms of the series yields the second-order approximation to f′(x) mentioned at the end of the section § Higher-order differences.
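Assuming the operator series hD = ln(1 + Δ_h) = Δ_h − Δ_h²/2 + …, keeping the first two terms gives the one-sided second-order formula f′(x) ≈ [−3f(x) + 4f(x + h) − f(x + 2h)]/(2h); this sketch compares it with the plain forward difference (function and step chosen for illustration):

```python
import math

def forward_first_order(f, x, h):
    """One term of the series: Delta_h f / h, error ~ O(h)."""
    return (f(x + h) - f(x)) / h

def one_sided_second_order(f, x, h):
    """Two terms: (Delta_h - Delta_h^2 / 2) f / h, error ~ O(h^2)."""
    return (-3 * f(x) + 4 * f(x + h) - f(x + 2 * h)) / (2 * h)

x, h = 1.0, 1e-3
exact = math.cos(x)                       # d/dx sin(x)
err1 = abs(forward_first_order(math.sin, x, h) - exact)
err2 = abs(one_sided_second_order(math.sin, x, h) - exact)
# err2 is roughly h times smaller than err1.
```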

This remarkably systematic correspondence is due to the identity of the commutators of the umbral quantities with their continuum analogs (h → 0 limits).

For instance, the umbral analog of a monomial x^n is a generalization of the above falling factorial (Pochhammer k-symbol),

(x)_n = x(x − h)(x − 2h)⋯(x − (n − 1)h),

so that (Δ_h/h)(x)_n = n(x)_{n−1},
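A quick numerical check of the identity (Δ_h/h)(x)_n = n(x)_{n−1} for the h-falling factorial, at arbitrarily chosen values of x, h, and n:

```python
def falling(x, n, h):
    """h-falling factorial (x)_n = x (x-h) (x-2h) ... (x-(n-1)h)."""
    out = 1.0
    for k in range(n):
        out *= x - k * h
    return out

x, h, n = 2.0, 0.5, 3
lhs = (falling(x + h, n, h) - falling(x, n, h)) / h   # Delta_h/h applied to (x)_n
rhs = n * falling(x, n - 1, h)                        # n (x)_{n-1}
# The two sides agree exactly: the identity holds as polynomials in x.
```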

hence the above Newton interpolation formula (by matching coefficients in the expansion of an arbitrary function f(x) in such symbols), and so on.

The three types of finite differences. The central difference about x gives the best approximation of the derivative of the function at x.