The method was developed from the 1970s to the 1990s by George Adomian, chair of the Center for Applied Mathematics at the University of Georgia.
[2] The method aims at a unified theory for the solution of partial differential equations (PDEs), an aim which has been superseded by the more general theory of the homotopy analysis method.
[3] The crucial aspect of the method is the use of the "Adomian polynomials", which allow for solution convergence of the nonlinear portion of the equation without simply linearizing the system.
An example of an initial value problem for an ordinary differential equation is a first-order equation of the form dy/dt + N(y) = g(t) with y(0) prescribed, where N(y) denotes the nonlinear term. To solve the problem, the highest-degree differential operator (written here as L) is put on the left side, in the following way: L y = g − N(y), with L = d/dt and its inverse L^{-1}(\cdot) = \int_0^t (\cdot)\, dt, so that applying L^{-1} to both sides gives y = y(0) + L^{-1} g − L^{-1} N(y).
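Written out, the recursion that results from this operator inversion is the standard Adomian scheme; the notation below is the one commonly used for the method rather than a formula quoted from this article (the general form also includes a term −L^{-1}(R y_n) when a remaining linear operator R is present):

$$ y = \sum_{n=0}^{\infty} y_n, \qquad y_0 = y(0) + L^{-1} g, \qquad y_{n+1} = -L^{-1} A_n, \quad n \ge 0, $$

where the A_n are the Adomian polynomials of the nonlinear term, introduced below. Each term y_{n+1} is therefore computed from the previous ones by a single integration, which is what makes the scheme easy to automate.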
If the solution is expanded as y = \sum_{n=0}^{\infty} y_n and the nonlinear term as f(y) = N(y) = \sum_{n=0}^{\infty} A_n, the Adomian polynomials that linearize the nonlinear term can be obtained systematically by using the following rule:

$$ A_n = \frac{1}{n!} \left[ \frac{d^n}{d\lambda^n} f\!\left( \sum_{k=0}^{\infty} \lambda^k y_k \right) \right]_{\lambda = 0}, $$

where \lambda is a formal grouping parameter, so that each A_n depends only on y_0, \ldots, y_n.
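As an illustration of how this rule works in practice, here is a minimal SymPy sketch that generates the polynomials symbolically and applies them to a hypothetical first-order problem, y' + y² = 0 with y(0) = 1 (chosen only for illustration; it is not an example taken from this article). Its exact solution 1/(1 + t) is recovered term by term:

```python
import sympy as sp

t, lam = sp.symbols('t lambda')

def adomian_polynomials(f, u_syms):
    # A_n = 1/n! * d^n/dlam^n f(sum_k lam^k u_k), evaluated at lam = 0
    series = sum(lam**k * uk for k, uk in enumerate(u_syms))
    return [sp.expand(sp.diff(f(series), lam, n).subs(lam, 0) / sp.factorial(n))
            for n in range(len(u_syms))]

n_terms = 6
u_syms = sp.symbols(f'u0:{n_terms}')
A = adomian_polynomials(lambda u: u**2, u_syms)   # nonlinearity f(y) = y**2

# Recursion y_0 = y(0), y_{n+1} = -L^{-1} A_n, with L^{-1} = integral from 0 to t
terms = [sp.Integer(1)]                           # y(0) = 1
for n in range(n_terms - 1):
    An = A[n].subs(dict(zip(u_syms, terms + [0] * (n_terms - len(terms)))))
    terms.append(sp.integrate(-An, (t, 0, t)))

print(sum(terms))   # 1 - t + t**2 - t**3 + t**4 - t**5, the series of 1/(1 + t)
```

For f(y) = y² the generator reproduces the familiar polynomials A_0 = y_0², A_1 = 2 y_0 y_1, A_2 = 2 y_0 y_2 + y_1², and so on.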
However, in our example, the three constants appear grouped from the beginning in the form shown in the formal solution above.
After applying the first two boundary conditions we obtain the so-called Blasius series. To obtain γ we then have to apply the boundary condition at ∞, which may be done by writing the truncated series as a Padé approximant,

$$ \frac{\sum_{n=0}^{L} a_n z^n}{\sum_{n=0}^{M} b_n z^n}, $$

where L = M, since only a diagonal approximant has a finite, nonzero limit at ∞, namely a_M / b_M. If we choose b_0 = 1, M linear equations for the b coefficients are obtained by requiring that the denominator times the original series reproduce the numerator up to order L + M. Then we obtain the a coefficients by means of the sequence a_n = \sum_{k=0}^{n} b_k c_{n-k}, where the c_n are the coefficients of the original series. In our example, with γ = 0.0408 the limit of the approximant at ∞ is approximately equal to 1 (the value required by boundary condition (3)) with an accuracy of 4/1000.
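The construction just described can be sketched in a few lines of NumPy; this is only an outline of the diagonal Padé matching and of the limit a_M / b_M at infinity, not the article's actual Blasius computation (the coefficients c below stand for whatever truncated series one wants to resum):

```python
import numpy as np

def diagonal_pade_limit(c, M):
    """[M/M] Pade approximant of the series with coefficients c[0..2M] (b0 = 1);
    returns its limit a_M / b_M as the argument tends to infinity."""
    # M linear equations for b_1..b_M:  sum_{k=0..M} b_k c_{n-k} = 0,  n = M+1..2M
    A = np.array([[c[n - k] for k in range(1, M + 1)]
                  for n in range(M + 1, 2 * M + 1)], dtype=float)
    rhs = -np.array([c[n] for n in range(M + 1, 2 * M + 1)], dtype=float)
    b = np.concatenate(([1.0], np.linalg.solve(A, rhs)))
    # Numerator coefficients: a_n = sum_{k<=n} b_k c_{n-k}
    a = np.array([sum(b[k] * c[n - k] for k in range(n + 1)) for n in range(M + 1)])
    return a[M] / b[M]

# Toy check: the series 1 + z - z**2 + ... is (1 + 2z)/(1 + z), which tends to 2
print(diagonal_pade_limit([1, 1, -1], M=1))   # -> 2.0
```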
One of the most frequent problems in physical sciences is to obtain the solution of a (linear or nonlinear) partial differential equation which satisfies a set of functional values on a rectangular boundary.
An example is a nonlinear partial differential equation with boundary conditions defined on the four sides of a rectangle. This kind of partial differential equation appears frequently, coupled with others, in science and engineering.
Here the partial sum of the first n terms of the decomposition series is the nth-order approximant to the solution, and the nonlinear term N u has been consistently expanded in Adomian polynomials,

$$ N u = \sum_{n=0}^{\infty} A_n(u_0, \ldots, u_n), $$

where the A_n are obtained from the general rule given above.
This is only a rule of thumb for ordering the decomposition systematically, so as to ensure that all the combinations that appear are eventually used.
Cherruault established that the series terms obtained by Adomian's method approach zero as 1/(mn)!
An example that clarifies this point is the solution of a Poisson problem with prescribed functional boundary conditions. By using Adomian's method and a symbolic processor (such as Mathematica or Maple) it is easy to obtain the third-order approximant to the solution. This approximant has an error lower than 5×10⁻¹⁶ at any point, as can be proved by substituting it back into the initial problem and displaying the absolute value of the residual as a function of (x, y).
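As a hint of how such symbolic computations go, the sketch below runs an Adomian recursion for a Laplace/Poisson-type problem in which the ∂²/∂x² operator is inverted by double integration along x (the approach referred to further below as "integrating along the x-direction"). The boundary data are purely illustrative and not those of the article's example:

```python
import sympy as sp

x, y = sp.symbols('x y')

def adomian_poisson_x(f, g0, g1, n_terms=4):
    """Partial sum of the Adomian terms for u_xx + u_yy = f, integrating along x:
    u_0 = g0(y) + x*g1(y) + Lx_inv(f),  u_{k+1} = -Lx_inv(d^2 u_k / dy^2),
    where g0 = u(0, y), g1 = u_x(0, y) and Lx_inv is the double integral from 0 to x."""
    Lx_inv = lambda expr: sp.integrate(sp.integrate(expr, (x, 0, x)), (x, 0, x))
    u_k = g0 + x * g1 + Lx_inv(f)
    total = u_k
    for _ in range(n_terms - 1):
        u_k = -Lx_inv(sp.diff(u_k, y, 2))
        total += u_k
    return sp.expand(total)

# Illustrative data: f = 0, u(0, y) = sin(y), u_x(0, y) = 0;
# the partial sums approach the exact solution sin(y)*cosh(x).
print(adomian_poisson_x(sp.Integer(0), sp.sin(y), sp.Integer(0), n_terms=4))
```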
Some people are surprised by these results; it seems strange that not all initial-boundary conditions must be explicitly used to solve a differential system.
However, it is a well-established fact that any elliptic equation has one and only one solution for any functional conditions on the four sides of a rectangle, provided there are no discontinuities at the edges.
The cause of the misconception is that scientists and engineers normally think of a boundary condition in terms of weak convergence in a Hilbert space (the distance to the boundary function is small enough for practical purposes). In contrast, Cauchy problems impose point-to-point convergence to a given boundary function and to all its derivatives (and this is quite a strong condition!).
The Poisson problem discussed above does not have a solution for arbitrary functional boundary conditions f1, f2, g1, g2; however, given f1, f2 it is always possible to find boundary functions g1*, g2* as close to g1, g2 as desired (in the sense of weak convergence) for which the problem has a solution.
The reader can verify the high sensitivity of PDE solutions to small changes in the boundary conditions by solving this problem integrating along the x-direction, with boundary functions that are slightly different even though visually indistinguishable. For instance, the solution with one set of boundary conditions at x = 0 and x = 0.5, and the solution with a slightly perturbed set at the same positions, produce lateral functions whose convexity has different sign, even though both sets of boundary functions are visually indistinguishable.
Solutions of elliptic problems and other partial differential equations are highly sensitive to small changes in the boundary functions imposed when only two sides are used. This sensitivity is not easily compatible with models that are supposed to represent real systems, which are described by measurements containing experimental errors and are normally expressed as initial-boundary value problems in a Hilbert space.
At least three methods have been reported [6] [7] [8] to obtain the boundary functions g1*, g2* that are compatible with any lateral set of conditions {f1, f2} imposed.
This makes it possible to find the analytical solution of any PDE boundary problem on a closed rectangle with the required accuracy, thereby allowing the solution of a wide range of problems that the standard Adomian method was not able to address.
In this way the problem is reduced to the global minimization of a function F(c1, c2, ..., cN), which has a global minimum for some combination of the parameters ci, i = 1, ..., N. This minimum may be found by means of a genetic algorithm or by some other optimization method, such as the one proposed by Cherruault (1999).
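A schematic of this minimization step, assuming a population-based global optimizer such as SciPy's differential evolution in place of a genetic algorithm, might look as follows; the objective F here is only a stand-in for the actual boundary-mismatch functional, which depends on the specific problem:

```python
import numpy as np
from scipy.optimize import differential_evolution

def F(c):
    # Stand-in objective: in the real method this would measure how far the
    # Adomian solution generated by the trial coefficients c_1..c_N is from
    # satisfying the remaining boundary data.
    return float(np.sum((c - np.array([0.2, -0.1, 0.05])) ** 2))

bounds = [(-1.0, 1.0)] * 3            # one search interval per coefficient c_i
result = differential_evolution(F, bounds, seed=0)
print(result.x, result.fun)           # coefficients at the global minimum
```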
[7] Finally, the third method, proposed by García-Olivares, is based on imposing analytic solutions at the four boundaries: the original differential operator is modified in such a way that it differs from the original only in a narrow region close to the boundaries, where it forces the solution to satisfy the analytic boundary conditions exactly.
[8] The Adomian decomposition method may also be applied to linear and nonlinear integral equations to obtain solutions.
[10] The Adomian decomposition method for a nonhomogeneous Fredholm integral equation of the second kind goes as follows:[10] Given an integral equation of the form

$$ u(x) = f(x) + \lambda \int_a^b K(x, t)\, u(t)\, dt, $$

we assume that the solution may be expressed in series form, u(x) = \sum_{n=0}^{\infty} u_n(x). Plugging the series form into the integral equation then yields

$$ \sum_{n=0}^{\infty} u_n(x) = f(x) + \lambda \int_a^b K(x, t) \sum_{n=0}^{\infty} u_n(t)\, dt. $$

Assuming that the sum converges absolutely, summation and integration may be interchanged, and matching terms gives the recursion u_0(x) = f(x), u_{n+1}(x) = \lambda \int_a^b K(x, t)\, u_n(t)\, dt.
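A minimal SymPy sketch of this recursion for a hypothetical Fredholm equation of the second kind (f(x) = x, K(x, t) = x t, λ = 1 on [0, 1], whose exact solution is 3x/2) might look as follows; the kernel and data are illustrative, not taken from the article:

```python
import sympy as sp

x, t = sp.symbols('x t')

def adm_fredholm(f, K, lam, a, b, n_terms=6):
    """phi_0 = f(x), phi_{n+1}(x) = lam * Integral_a^b K(x, t) phi_n(t) dt."""
    phi = [f]
    for _ in range(n_terms - 1):
        prev_at_t = phi[-1].subs(x, t)               # previous term evaluated at t
        phi.append(sp.integrate(lam * K * prev_at_t, (t, a, b)))
    return phi

terms = adm_fredholm(x, x * t, 1, 0, 1, n_terms=6)
print(sum(terms))   # x*(1 + 1/3 + 1/9 + ...) = 364*x/243, approaching 3*x/2
```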