Radial basis function

In mathematics, a radial basis function (RBF) is a real-valued function $\varphi$ whose value depends only on the distance between the input and some fixed point: either the origin, so that $\varphi(\mathbf{x}) = \hat{\varphi}(\lVert \mathbf{x} \rVert)$, or some other fixed point $\mathbf{c}$, called a center, so that $\varphi(\mathbf{x}) = \hat{\varphi}(\lVert \mathbf{x} - \mathbf{c} \rVert)$. Radial basis functions are often used as a collection $\{\varphi_k\}_k$ which forms a basis for some function space of interest, hence the name.
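For instance, a minimal Python sketch of a Gaussian radial kernel (the function name and shape parameter are illustrative, not from the source) returns the same value for any two inputs equidistant from the center:

```python
import numpy as np

def gaussian_rbf(x, c, eps=1.0):
    """Gaussian radial kernel: its value depends on x only through ||x - c||."""
    r = np.linalg.norm(np.asarray(x, dtype=float) - np.asarray(c, dtype=float))
    return np.exp(-(eps * r) ** 2)

# Both inputs lie at distance 1 from the center, so the values coincide.
print(gaussian_rbf([1.0, 0.0], [0.0, 0.0]))   # 0.3678...
print(gaussian_rbf([0.0, 1.0], [0.0, 0.0]))   # 0.3678...
```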

Sums of radial basis functions are typically used to approximate given functions.

This approximation process can also be interpreted as a simple kind of neural network; this was the context in which they were originally applied to machine learning, in work by David Broomhead and David Lowe in 1988,[1][2] which stemmed from Michael J. D. Powell's seminal research from 1977.[3][4][5]

RBFs are also used as a kernel in support vector classification.[6]

The technique has proven effective and flexible enough that radial basis functions are now applied in a variety of engineering applications.

A radial function is a function $\varphi : [0, \infty) \to \mathbb{R}$. When paired with a norm $\lVert \cdot \rVert : V \to [0, \infty)$ on a vector space $V$, a function of the form $\varphi_{\mathbf{c}}(\mathbf{x}) = \varphi(\lVert \mathbf{x} - \mathbf{c} \rVert)$ is said to be a radial kernel centered at $\mathbf{c} \in V$.

Commonly used types of radial basis functions include (writing $r = \lVert \mathbf{x} - \mathbf{x}_i \rVert$ and using $\varepsilon$ to indicate a shape parameter that can be used to scale the input of the radial kernel[11]):

Infinitely smooth RBFs. These radial basis functions are from $C^\infty(\mathbb{R})$ and are strictly positive definite functions[12] that require tuning a shape parameter $\varepsilon$:

Gaussian: $\varphi(r) = e^{-(\varepsilon r)^2}$

Multiquadric: $\varphi(r) = \sqrt{1 + (\varepsilon r)^2}$

Inverse quadratic: $\varphi(r) = \dfrac{1}{1 + (\varepsilon r)^2}$

Inverse multiquadric: $\varphi(r) = \dfrac{1}{\sqrt{1 + (\varepsilon r)^2}}$

Compactly supported RBFs. These RBFs are compactly supported and thus are non-zero only within a radius of $1/\varepsilon$, and thus have sparse differentiation matrices. An example is the bump function:

$$\varphi(r) = \begin{cases} \exp\!\left( -\dfrac{1}{1 - (\varepsilon r)^2} \right) & \text{for } r < \dfrac{1}{\varepsilon} \\[4pt] 0 & \text{otherwise.} \end{cases}$$

Radial basis functions are typically used to build up function approximations of the form

$$y(\mathbf{x}) = \sum_{i=1}^{N} w_i \, \varphi(\lVert \mathbf{x} - \mathbf{x}_i \rVert),$$

where the approximating function $y(\mathbf{x})$ is represented as a sum of $N$ radial basis functions, each associated with a different center $\mathbf{x}_i$ and weighted by an appropriate coefficient $w_i$. The weights $w_i$ can be estimated using the matrix methods of linear least squares, because the approximating function is linear in the weights $w_i$.
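As a concrete sketch of this least-squares fit, the following minimal Python example (the Gaussian kernel, shape parameter, target function, and node placement are all illustrative assumptions, not from the source) builds the design matrix $A_{ji} = \varphi(\lVert \mathbf{x}_j - \mathbf{x}_i \rVert)$ and solves for the weights:

```python
import numpy as np

def phi(r, eps=2.0):
    return np.exp(-(eps * r) ** 2)           # Gaussian RBF (illustrative choice)

centers = np.linspace(0.0, 1.0, 11)          # centers x_i
x_fit = np.linspace(0.0, 1.0, 41)            # fitting points
f = lambda x: np.sin(2 * np.pi * x)          # illustrative function to approximate

# Design matrix A[j, i] = phi(|x_j - x_i|); y(x) is linear in the weights w.
A = phi(np.abs(x_fit[:, None] - centers[None, :]))
w, *_ = np.linalg.lstsq(A, f(x_fit), rcond=None)

# Evaluate the approximant at new points.
x_new = np.linspace(0.0, 1.0, 101)
y_new = phi(np.abs(x_new[:, None] - centers[None, :])) @ w
print(np.max(np.abs(y_new - f(x_new))))      # max error of the approximation
```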

Approximation schemes of this kind have been particularly used[citation needed] in time series prediction, in the control of nonlinear systems exhibiting sufficiently simple chaotic behaviour, and in 3D reconstruction in computer graphics (for example, hierarchical RBF and Pose Space Deformation).

The sum can also be interpreted as a rather simple single-layer type of artificial neural network called a radial basis function network, with the radial basis functions taking on the role of the activation functions of the network.

It can be shown that any continuous function on a compact interval can in principle be interpolated with arbitrary accuracy by a sum of this form, if a sufficiently large number $N$ of radial basis functions is used.

The approximant $y(\mathbf{x})$ is differentiable with respect to the weights $w_i$. The weights could thus be learned using any of the standard iterative methods for neural networks.
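Since the sum is linear and differentiable in the weights, a plain gradient-descent loop on the mean squared error illustrates such iterative learning (the kernel, learning rate, iteration count, and target are illustrative assumptions):

```python
import numpy as np

def phi(r, eps=2.0):
    return np.exp(-(eps * r) ** 2)           # Gaussian RBF (illustrative choice)

centers = np.linspace(0.0, 1.0, 11)
x_fit = np.linspace(0.0, 1.0, 41)
t = np.sin(2 * np.pi * x_fit)                # training targets

A = phi(np.abs(x_fit[:, None] - centers[None, :]))
w = np.zeros(len(centers))
lr = 0.05                                    # illustrative learning rate
for _ in range(5000):
    err = A @ w - t                          # residual of the approximant
    w -= lr * (A.T @ err) / len(x_fit)       # gradient of the mean squared error
print(np.max(np.abs(A @ w - t)))             # maximum training residual
```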

Using radial basis functions in this manner yields a reasonable interpolation approach provided that the fitting set has been chosen such that it covers the entire range systematically (equidistant data points are ideal).

However, without a polynomial term that is orthogonal to the radial basis functions, estimates outside the fitting set tend to perform poorly.[citation needed]
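A common remedy, sketched here under illustrative assumptions (Gaussian kernel, interpolation nodes equal to the centers, a linear polynomial tail), is to augment the RBF expansion with low-degree polynomial terms and constrain the RBF weights to be orthogonal to them:

```python
import numpy as np

def phi(r, eps=2.0):
    return np.exp(-(eps * r) ** 2)           # Gaussian RBF (illustrative choice)

x = np.linspace(0.0, 1.0, 11)                # interpolation nodes, also centers
y = np.sin(2 * np.pi * x)                    # illustrative data to fit

A = phi(np.abs(x[:, None] - x[None, :]))     # RBF interpolation matrix
P = np.column_stack([np.ones_like(x), x])    # polynomial terms 1 and x

# Block system: interpolate the data, with the RBF weights w constrained
# to be orthogonal to the polynomial part (P^T w = 0).
n, m = len(x), P.shape[1]
M = np.block([[A, P], [P.T, np.zeros((m, m))]])
sol = np.linalg.solve(M, np.concatenate([y, np.zeros(m)]))
w, c = sol[:n], sol[n:]

def evaluate(x_new):
    x_new = np.atleast_1d(x_new).astype(float)
    B = phi(np.abs(x_new[:, None] - x[None, :]))
    Q = np.column_stack([np.ones_like(x_new), x_new])
    return B @ w + Q @ c                     # RBF part plus polynomial tail

print(evaluate([1.1, 1.2]))                  # mild extrapolation beyond the data
```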

Radial basis functions are used to approximate functions and so can be used to discretize and numerically solve partial differential equations (PDEs).

This was first done in 1990 by E. J. Kansa, who developed the first RBF-based numerical method.

It is called the Kansa method and was used to solve the elliptic Poisson equation and the linear advection-diffusion equation.

The function values at points $\mathbf{x}$ in the domain are approximated by a linear combination of RBFs:

$$u(\mathbf{x}) \approx \sum_{i=1}^{N} \lambda_i \, \varphi(\lVert \mathbf{x} - \mathbf{x}_i \rVert), \qquad \mathbf{x} \in \mathbb{R}^d,$$

where $N$ is the number of points in the discretized domain, $d$ the dimension of the domain, and $\lambda_i$ the scalar coefficients that are unchanged by the differential operator: derivatives of $u$ are obtained by applying the operator directly to the known basis functions $\varphi$.[13]
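As a rough illustration of this collocation idea (a simplified sketch, not Kansa's original formulation; the kernel, test problem, and boundary handling are assumptions), the following solves a 1D Poisson problem by enforcing the PDE at interior points and Dirichlet conditions at the boundary:

```python
import numpy as np

# Sketch of RBF collocation for u''(x) = f(x) on [0, 1], u(0) = u(1) = 0;
# the Gaussian kernel, shape parameter, and point count are illustrative.
eps = 5.0

def phi(r):
    return np.exp(-(eps * r) ** 2)            # Gaussian RBF

def phi_xx(x, xi):
    r = x - xi                                # second x-derivative of phi(|x - xi|)
    return (4 * eps**4 * r**2 - 2 * eps**2) * np.exp(-(eps * r) ** 2)

f = lambda x: -np.pi**2 * np.sin(np.pi * x)   # manufactured right-hand side
x = np.linspace(0.0, 1.0, 15)                 # collocation points = centers
interior = (x > 0) & (x < 1)

# Interior rows enforce the PDE via the differentiated basis; the two
# boundary rows enforce the Dirichlet conditions via the basis itself.
M = np.where(interior[:, None],
             phi_xx(x[:, None], x[None, :]),
             phi(np.abs(x[:, None] - x[None, :])))
rhs = np.where(interior, f(x), 0.0)

lam = np.linalg.solve(M, rhs)                 # coefficients lambda_i
u = phi(np.abs(x[:, None] - x[None, :])) @ lam
print(np.max(np.abs(u - np.sin(np.pi * x))))  # error vs. exact u = sin(pi x)
```

Such global collocation matrices become increasingly ill-conditioned as the shape parameter decreases, which is one practical limitation of the basic approach.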

Different numerical methods based on radial basis functions were developed thereafter.

[Figure: Gaussian radial basis function for several choices of the shape parameter $\varepsilon$.]
[Figure: Plot of the scaled bump function for several choices of $\varepsilon$.]
[Figure: Two unnormalized Gaussian radial basis functions in one input dimension, with the basis function centers located at $x_1$ and $x_2$.]