Separation of variables

A separable first-order ordinary differential equation is one that can be written in the form

dy/dx = g(x) h(y).

So now, as long as h(y) ≠ 0, we can rearrange terms to obtain:

dy/h(y) = g(x) dx,

where the two variables x and y have been separated.

Note that dx (and dy) can be viewed, at a simple level, as just a convenient notation that provides a handy mnemonic aid for manipulations.

A formal definition of dx as a differential (infinitesimal) is somewhat advanced.

Those who dislike Leibniz's notation may prefer to write this as

(1/h(y)) dy/dx = g(x),

but that fails to make it quite as obvious why this is called "separation of variables".

Integrating both sides with respect to x gives

∫ (1/h(y)) dy = ∫ g(x) dx + C.

If one can evaluate the two integrals, one can find a solution to the differential equation.
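As a concrete sketch (an illustrative example, not from the text above): take g(x) = x and h(y) = y, so dy/dx = x·y. Separating and integrating gives ln|y| = x²/2 + C, i.e. y(x) = y(0)·e^(x²/2). The short script below checks this closed form against a direct Runge–Kutta integration of the original equation:

```python
import math

def f(x, y):
    # Right-hand side of the separable ODE dy/dx = g(x) * h(y),
    # with the illustrative choice g(x) = x and h(y) = y.
    return x * y

def rk4(f, x0, y0, x1, steps=1000):
    """Integrate dy/dx = f(x, y) from x0 to x1 with classical RK4."""
    h = (x1 - x0) / steps
    x, y = x0, y0
    for _ in range(steps):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h * k1 / 2)
        k3 = f(x + h / 2, y + h * k2 / 2)
        k4 = f(x + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

# Solution obtained by separation of variables: y(x) = y(0) * exp(x**2 / 2)
y_separated = math.exp(1.0 ** 2 / 2)   # y(1) with y(0) = 1
y_numeric = rk4(f, 0.0, 1.0, 1.0)

print(y_separated, y_numeric)          # the two values agree closely
```

The agreement of the two values illustrates that the function produced by separating and integrating really does solve the original equation.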

This allows us to solve separable differential equations more conveniently, as demonstrated in the example below.

Population growth is often modeled by the "logistic" differential equation

dP/dt = kP(1 − P/K),

where P is the population as a function of time t, k is the rate of growth, and K is the carrying capacity of the environment.
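Separating variables in the logistic equation and using partial fractions yields the well-known closed form P(t) = K / (1 + A e^(−kt)) with A = (K − P₀)/P₀. The sketch below (with illustrative parameter values of my choosing) checks that closed form against a direct numerical integration of the equation:

```python
import math

K, k, P0 = 100.0, 0.3, 10.0   # carrying capacity, growth rate, initial population

def logistic_exact(t):
    """Closed form obtained by separation of variables."""
    A = (K - P0) / P0
    return K / (1.0 + A * math.exp(-k * t))

def logistic_numeric(t_end, steps=10000):
    """Direct RK4 integration of dP/dt = k*P*(1 - P/K)."""
    h = t_end / steps
    P = P0
    f = lambda P: k * P * (1.0 - P / K)
    for _ in range(steps):
        k1 = f(P)
        k2 = f(P + h * k1 / 2)
        k3 = f(P + h * k2 / 2)
        k4 = f(P + h * k3)
        P += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return P

print(logistic_exact(20.0), logistic_numeric(20.0))   # close agreement
```

Note how the solution rises from P₀ toward the carrying capacity K, as the model intends.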

Consider the separable first-order ODE

dy/dx = g(x) h(y).

The derivative can alternatively be written the following way to underscore that it is an operator working on the unknown function, y:

(d/dx) y = g(x) h(y).

Thus, when one separates variables for first-order equations, one in fact moves the dx denominator of the operator to the side with the x variable, and the d(y) is left on the side with the y variable.

Thus, much like a first-order separable ODE is reducible to the form

dy/dx = g(x) h(y),

a separable second-order ODE is reducible to the form

d²y/dx² = g(x) h(y′),

and an nth-order separable ODE is reducible to

dⁿy/dxⁿ = g(x) h(y⁽ⁿ⁻¹⁾).

Consider the simple nonlinear second-order differential equation

y″ = (y′)².

This equation involves only y″ and y′, so it is reducible to the general form above and is therefore separable. Separating the variables and integrating once gives

d(y′)/(y′)² = dx,   so   −1/y′ = x + C₁,   i.e.   y′ = −1/(x + C₁).

This is now a simple integral problem that gives the final answer:

y = C₂ − ln |x + C₁|.
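Separating y″ = (y′)² twice gives the family y = C₂ − ln|x + C₁| with arbitrary constants C₁, C₂. The snippet below (with arbitrary values C₁ = 2, C₂ = 5 chosen for the check) verifies numerically, by finite differences at a sample point, that this family satisfies y″ = (y′)²:

```python
import math

C1, C2 = 2.0, 5.0   # arbitrary integration constants for the check

def y(x):
    # Solution family obtained above by separating variables twice.
    return C2 - math.log(abs(x + C1))

def d(f, x, h=1e-5):
    """Central finite-difference first derivative."""
    return (f(x + h) - f(x - h)) / (2 * h)

def d2(f, x, h=1e-4):
    """Central finite-difference second derivative."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

x0 = 1.0
lhs = d2(y, x0)        # y''
rhs = d(y, x0) ** 2    # (y')^2
print(lhs, rhs)        # both close to 1/(x0 + C1)**2
```

Analytically, y′ = −1/(x + C₁) and y″ = 1/(x + C₁)², so both sides equal 1/(x + C₁)² exactly.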

The analytical method of separation of variables for solving partial differential equations has also been generalized into a computational method of decomposition in invariant structures that can be used to solve systems of partial differential equations.

Consider the one-dimensional heat equation

∂u/∂t − α ∂²u/∂x² = 0.     (1)

The boundary condition is homogeneous, that is

u(0, t) = u(L, t) = 0.     (2)

Let us attempt to find a solution which is not identically zero satisfying the boundary conditions but with the following property: u is a product in which the dependence of u on x, t is separated, that is:

u(x, t) = X(x) T(t).     (3)

Substituting u back into equation (1) and using the product rule,

T′(t) / (α T(t)) = X″(x) / X(x).     (4)

Since the right hand side depends only on x and the left hand side only on t, both sides are equal to some constant value −λ. Thus:

T′(t) = −λ α T(t),     (5)

X″(x) = −λ X(x).     (6)

Here −λ is the eigenvalue for both differential operators, and T(t) and X(x) are corresponding eigenfunctions. The boundary condition (2) implies

X(0) = 0 = X(L).     (7)

We will now show that nontrivial solutions for X(x) for values of λ ≤ 0 cannot occur:

1. Suppose that λ < 0. Then there exist real numbers B, C such that

X(x) = B e^(√(−λ) x) + C e^(−√(−λ) x).

From (7) we get B = 0 = C, and therefore u is identically 0.

2. Suppose that λ = 0. Then there exist real numbers B, C such that

X(x) = B x + C.

From (7) we conclude in the same manner as in 1 that u is identically 0.

3. Suppose then that λ > 0. Then there exist real numbers A, B, C such that

T(t) = A e^(−λαt)

and

X(x) = B sin(√λ x) + C cos(√λ x).

From (7) we get C = 0 and that, for some positive integer n,

√λ = nπ/L.

This solves the heat equation in the special case that the dependence of u has the special form of (3).

In general, sums of solutions to (1) which satisfy the boundary conditions (2) also satisfy (1) and (2). Hence a complete solution can be given as

u(x, t) = Σ_{n=1}^∞ Dₙ sin(nπx/L) exp(−n²π²αt/L²),

where Dₙ are coefficients determined by the initial condition.

Given the initial condition

u(x, 0) = f(x),

we can get

f(x) = Σ_{n=1}^∞ Dₙ sin(nπx/L).

This is the sine series expansion of f(x), which is amenable to Fourier analysis. Multiplying both sides with sin(nπx/L) and integrating over [0, L] results in

Dₙ = (2/L) ∫₀^L f(x) sin(nπx/L) dx.
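As a numerical illustration of the sine series expansion (with the illustrative choices L = 1 and f(x) = x(L − x), which satisfies the boundary conditions), the coefficients Dₙ can be computed by quadrature, and the partial sums then reproduce f:

```python
import math

L = 1.0

def f(x):
    # Illustrative initial condition with f(0) = f(L) = 0.
    return x * (L - x)

def D(n, samples=2000):
    """D_n = (2/L) * integral over [0, L] of f(x) sin(n pi x / L) dx,
    computed with the composite trapezoidal rule (endpoints vanish)."""
    h = L / samples
    s = sum(f(i * h) * math.sin(n * math.pi * i * h / L)
            for i in range(1, samples))
    return (2.0 / L) * s * h

def series(x, terms=50):
    """Partial sum of the sine series expansion of f."""
    return sum(D(n) * math.sin(n * math.pi * x / L)
               for n in range(1, terms + 1))

x0 = 0.3
print(f(x0), series(x0))   # the partial sum approximates f closely
```

For this f the coefficients work out to Dₙ = 8/(n³π³) for odd n and 0 for even n, so the series converges quickly.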

Suppose next that the equation is nonhomogeneous,

∂u/∂t − α ∂²u/∂x² = h(x, t),     (8)

with the boundary condition the same as (2). Expand h(x, t), u(x, t) and f(x) into

h(x, t) = Σ_{n=1}^∞ hₙ(t) sin(nπx/L),
u(x, t) = Σ_{n=1}^∞ uₙ(t) sin(nπx/L),     (9)

and

f(x) = Σ_{n=1}^∞ bₙ sin(nπx/L),     (10)

where hₙ(t) and bₙ can be calculated by integration, while uₙ(t) is to be determined.

Substituting (9) and (10) back into (8) and considering the orthogonality of sine functions, we get

u′ₙ(t) + α (n²π²/L²) uₙ(t) = hₙ(t),   uₙ(0) = bₙ,

which are a sequence of linear differential equations that can be readily solved with, for instance, the Laplace transform or an integrating factor.
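For instance (an illustrative special case of my choosing), if hₙ(t) is a constant hₙ, the integrating-factor solution of u′ₙ + ω uₙ = hₙ with uₙ(0) = bₙ and ω = α(nπ/L)² is uₙ(t) = hₙ/ω + (bₙ − hₙ/ω) e^(−ωt). The sketch below checks this against a direct integration of the mode equation:

```python
import math

alpha, L, n = 0.5, 1.0, 2        # illustrative diffusivity, length, mode index
omega = alpha * (n * math.pi / L) ** 2
h_n, b_n = 3.0, 1.0              # constant forcing and initial coefficient

def u_exact(t):
    """Integrating-factor solution of u' + omega*u = h_n, u(0) = b_n."""
    return h_n / omega + (b_n - h_n / omega) * math.exp(-omega * t)

def u_numeric(t_end, steps=5000):
    """Direct RK4 integration of u' = h_n - omega*u."""
    h = t_end / steps
    u = b_n
    f = lambda u: h_n - omega * u
    for _ in range(steps):
        k1 = f(u)
        k2 = f(u + h * k1 / 2)
        k3 = f(u + h * k2 / 2)
        k4 = f(u + h * k3)
        u += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return u

print(u_exact(2.0), u_numeric(2.0))   # close agreement
```

Each mode relaxes toward its steady value hₙ/ω at the rate ω, which grows like n², so high modes equilibrate fastest.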

Finally, we can get

u(x, t) = Σ_{n=1}^∞ uₙ(t) sin(nπx/L).

If the boundary condition is nonhomogeneous, then the expansions (9) and (10) are no longer valid.

One has to find a function v that satisfies the boundary condition only, and subtract it from u.
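For example (an illustration of mine, with constant boundary values u(0, t) = a and u(L, t) = b), the linear function v(x) = a + (b − a)x/L satisfies the boundary condition, and since v″ = 0 it contributes no extra source term to the heat equation; the difference then satisfies the homogeneous problem with initial condition f(x) − v(x):

```python
a, b, L = 2.0, 5.0, 1.0   # illustrative boundary values and interval length

def v(x):
    """Linear function matching the nonhomogeneous boundary values."""
    return a + (b - a) * x / L

def d2(f, x, h=1e-4):
    """Central finite-difference second derivative."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

print(v(0.0), v(L))   # a and b: the boundary condition is met
print(d2(v, 0.5))     # ~0: v adds nothing to the heat equation
```

Any function matching the boundary values would do; the linear choice is convenient precisely because its second derivative vanishes.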

The function u − v then satisfies the homogeneous boundary condition, and can be solved with the above method.[3]

Below is an outline of an argument demonstrating the applicability of the method to certain linear equations, although the precise method may differ in individual cases (for instance in the biharmonic equation above).

Substituting a separated product solution into such an equation gives two ordinary differential equations, which we can recognize as eigenvalue problems for the corresponding differential operators in the two variables.

If these operators are compact and self-adjoint on a suitable space of functions satisfying the relevant boundary conditions, then by the spectral theorem there exists a basis for that space consisting of their eigenfunctions.

Hence, the spectral theorem ensures that the separation of variables will (when it is possible) find all the solutions.

While these operators may not be compact, their inverses (when they exist) may be, as in the case of the wave equation, and these inverses have the same eigenfunctions and eigenvalues as the original operator (with the possible exception of zero).[4]

The matrix form of the separation of variables is the Kronecker sum.

For example, the 2D discrete Laplacian on a regular grid can be written as

L = D_xx ⊕ D_yy = D_xx ⊗ I + I ⊗ D_yy,

where D_xx and D_yy are 1D discrete Laplacians in the x- and y-directions, correspondingly, and I are the identities of appropriate sizes.
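A minimal sketch with NumPy (on a small grid of my choosing): the eigenvalues of the Kronecker sum are exactly all pairwise sums of the 1D eigenvalues, which is the discrete counterpart of separating a 2D problem into two 1D problems:

```python
import numpy as np

def lap1d(n):
    """1D discrete Laplacian (Dirichlet) on n interior grid points."""
    return (np.diag(-2.0 * np.ones(n))
            + np.diag(np.ones(n - 1), 1)
            + np.diag(np.ones(n - 1), -1))

nx, ny = 3, 4
Dxx, Dyy = lap1d(nx), lap1d(ny)

# Kronecker sum: L = Dxx (x) I  +  I (x) Dyy
Lap = np.kron(Dxx, np.eye(ny)) + np.kron(np.eye(nx), Dyy)

# The 2D spectrum is the set of all sums of 1D eigenvalues.
ev_2d = np.sort(np.linalg.eigvalsh(Lap))
ev_sum = np.sort([lx + ly
                  for lx in np.linalg.eigvalsh(Dxx)
                  for ly in np.linalg.eigvalsh(Dyy)])

print(np.allclose(ev_2d, ev_sum))   # True: the 2D spectrum separates
```

The corresponding eigenvectors are Kronecker products of the 1D eigenvectors, mirroring the product form u(x, y) = X(x)Y(y) in the continuous setting.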

See the main article Kronecker sum of discrete Laplacians for details.

Some mathematical programs are able to do separation of variables: Xcas[5] among others.
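In Python, SymPy's dsolve can likewise solve separable equations symbolically; a minimal sketch with an illustrative equation of my choosing:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# A separable ODE: dy/dx = x * y(x)
ode = sp.Eq(y(x).diff(x), x * y(x))

# dsolve recognizes the separable structure and integrates each side
sol = sp.dsolve(ode, y(x), hint='separable')
print(sol)   # y(x) = C1*exp(x**2/2)
```

The returned solution can be verified by substituting it back into the equation (e.g. with sympy's checkodesol).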