In mathematics, quasilinearization is a technique which replaces a nonlinear differential equation or operator equation (or system of such equations) with a sequence of linear problems, which are presumed to be easier, and whose solutions approximate the solution of the original nonlinear problem with increasing accuracy.
It is a generalization of Newton's method; the word "quasilinearization" is commonly used when the differential equation is a boundary value problem.[1][2]
Quasilinearization replaces a given nonlinear operator N with a certain linear operator which, being simpler, can be used in an iterative fashion to approximately solve equations containing the original nonlinear operator.
For quasilinearization to work, the reference solution needs to exist uniquely (at least locally).
The process starts with an initial approximation y0 that satisfies the boundary conditions and is "sufficiently close" to the reference solution y in a sense to be defined more precisely later.
The first step is to take the Fréchet derivative of the nonlinear operator N at that initial approximation, in order to find the linear operator L(y0) which best approximates N(y) − N(y0) locally.
At a general iterate yk this gives the local approximation N(y) ≈ N(yk) + L(yk)(y − yk); setting this to zero, ignoring the higher-order terms, and imposing zero boundary conditions on the correction y − yk gives the linear equation L(yk)(y − yk) = −N(yk).
The solution of this linear equation (with zero boundary conditions on the correction) is taken as the next iterate yk+1.
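As an illustrative special case (a standard scalar instance, not spelled out in the text above), for a two-point boundary value problem y'' = f(x, y) the steps above read:

```latex
% Nonlinear operator for the BVP  y'' = f(x, y)  with given boundary values:
%   N(y) = y'' - f(x, y).
% Frechet derivative (linearization) of N at the iterate y_k:
\[
  L(y_k)\,v = v'' - f_y(x, y_k)\,v ,
\]
% so the quasilinearization step  L(y_k)(y_{k+1} - y_k) = -N(y_k)  becomes
\[
  y_{k+1}'' - f_y(x, y_k)\,y_{k+1} = f(x, y_k) - f_y(x, y_k)\,y_k ,
\]
% a linear ODE for y_{k+1}, solved with the original boundary conditions
% (equivalently, zero boundary conditions on the correction y_{k+1} - y_k).
```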
Computation of yk for k=1, 2, 3,... by solving these linear equations in sequence is analogous to Newton's iteration for a single equation, and requires recomputation of the Fréchet derivative at each yk.
The process can converge quadratically to the reference solution, under the right conditions.
Just as with Newton's method for nonlinear algebraic equations, however, difficulties may arise: for instance, the original nonlinear equation may have no solution, or more than one solution, or a multiple solution, in which cases the iteration may converge only very slowly, may not converge at all, or may converge instead to the wrong solution.
In practice, the phrase "sufficiently close" used earlier means precisely that the iteration converges to the correct solution.
Just as in the case of Newton iteration, there are theorems stating conditions under which one can know ahead of time when the initial approximation is "sufficiently close".
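The iteration can be sketched concretely in code. The ODE below is a standard textbook example chosen for illustration (it is not the example treated later in this article): y'' = (3/2)y² with y(0) = 4, y(1) = 1, whose exact solution is y = 4/(1+x)². Each quasilinearization step solves the linear BVP y''ₖ₊₁ − 3yₖ yₖ₊₁ = −(3/2)yₖ², here discretized by second-order finite differences:

```python
import numpy as np

def quasilinearize(n=200, tol=1e-10, maxit=25):
    """Quasilinearization for y'' = 1.5*y**2, y(0)=4, y(1)=1.

    Illustrative example (exact solution: y = 4/(1+x)**2).
    Each step solves the linear BVP
        y'' - 3*yk*y = -1.5*yk**2
    on a uniform grid with second-order finite differences.
    """
    x = np.linspace(0.0, 1.0, n + 1)
    h = x[1] - x[0]
    y = 4.0 - 3.0 * x          # initial guess: straight line meeting the BCs
    for _ in range(maxit):
        # Tridiagonal system for the interior unknowns y_1 .. y_{n-1}
        main = -2.0 / h**2 - 3.0 * y[1:-1]
        off = np.full(n - 2, 1.0 / h**2)
        A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
        b = -1.5 * y[1:-1]**2
        b[0] -= y[0] / h**2    # known boundary value y(0) = 4
        b[-1] -= y[-1] / h**2  # known boundary value y(1) = 1
        ynew = y.copy()
        ynew[1:-1] = np.linalg.solve(A, b)
        done = np.max(np.abs(ynew - y)) < tol
        y = ynew
        if done:               # Newton-like quadratic convergence
            break
    return x, y
```

With the straight-line start the iterates converge in a handful of steps; as with Newton's method, a poor initial approximation could instead stall, diverge, or find a different solution of the BVP.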
One could instead discretize the original nonlinear operator and generate a (typically large) set of nonlinear algebraic equations for the unknowns, and then use Newton's method proper on this system of equations.
Generally speaking, the convergence behavior is similar: a similarly good initial approximation will produce similarly good approximate discrete solutions.
However, the quasilinearization approach (linearizing the operator equation instead of the discretized equations) seems to be simpler to think about, and has allowed such techniques as adaptive spatial meshes to be used as the iteration proceeds.[3]
As an example to illustrate the process of quasilinearization, we can approximately solve the two-point boundary value problem for the nonlinear ODE
The exact solution of the differential equation can be expressed using the Weierstrass elliptic function ℘, like so:
where the vertical bar notation means that the invariants are
Choosing the free parameters so that the boundary conditions are satisfied requires solving two simultaneous nonlinear equations for the two unknowns
This can be done, in an environment where ℘ and its derivatives are available, for instance by Newton's method.[a]
Applying the technique of quasilinearization instead, one takes the Fréchet derivative at an unknown approximation to obtain a linear differential equation for the next iterate; the first iteration (at least) can be solved exactly, but its solution is already somewhat complicated.
A numerical solution can be computed instead, for instance by a Chebyshev spectral method using
Other values of the parameters give other continuous solutions to this nonlinear two-point boundary-value problem for the ODE, such as
The solution corresponding to these values, plotted in the figure, is called
Yet other values of the parameters can give discontinuous solutions because ℘ has a double pole at zero and so
Finding other continuous solutions by quasilinearization requires different initial approximations to the ones used here.
and can be used to generate a sequence of approximations converging to
Both approximations are plotted in the accompanying figure.
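A Chebyshev spectral treatment of the kind mentioned above can be sketched in the same way (again using the illustrative ODE y'' = (3/2)y², y(0) = 4, y(1) = 1 rather than this article's example): build a Chebyshev differentiation matrix, replace the two boundary rows of each linearized system by the boundary conditions, and iterate.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix and points (Trefethen's construction)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.ones(N + 1); c[0] = c[N] = 2.0
    c *= (-1.0) ** np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

def solve_bvp_cheb(N=32, tol=1e-12, maxit=25):
    """Quasilinearization with Chebyshev collocation for the
    illustrative BVP y'' = 1.5*y**2, y(0) = 4, y(1) = 1."""
    D, xc = cheb(N)
    s = (xc + 1.0) / 2.0        # map [-1, 1] -> [0, 1]; note s[0] = 1, s[-1] = 0
    D2 = 4.0 * (D @ D)          # chain rule: d2/ds2 = 4 * d2/dx2
    y = 4.0 - 3.0 * s           # straight-line initial guess meeting the BCs
    for _ in range(maxit):
        A = D2 - 3.0 * np.diag(y)       # linearized operator L(y_k)
        b = -1.5 * y**2
        A[0, :] = 0.0;  A[0, 0] = 1.0;   b[0] = 1.0   # boundary row: y(1) = 1
        A[-1, :] = 0.0; A[-1, -1] = 1.0; b[-1] = 4.0  # boundary row: y(0) = 4
        ynew = np.linalg.solve(A, b)
        done = np.max(np.abs(ynew - y)) < tol
        y = ynew
        if done:
            break
    return s, y
```

Because each step solves a fresh linear problem on whatever grid is current, this structure is also what makes adaptive grids between iterations straightforward.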