In mathematics, a linear combination or superposition is an expression constructed from a set of terms by multiplying each term by a constant and adding the results (e.g. a linear combination of x and y would be any expression of the form ax + by, where a and b are constants).
If v1,...,vn are vectors and a1,...,an are scalars, then the linear combination of those vectors with those scalars as coefficients is

a1v1 + a2v2 + a3v3 + ⋯ + anvn.

There is some ambiguity in the use of the term "linear combination" as to whether it refers to the expression or to its value.
In most cases the value is emphasized, as in the assertion "the set of all linear combinations of v1,...,vn always forms a subspace".
In any case, even when viewed as expressions, all that matters about a linear combination is the coefficient of each vi; trivial modifications such as permuting the terms or adding terms with zero coefficient do not produce distinct linear combinations.
In that case, we often speak of a linear combination of the vectors v1,...,vn, with the coefficients unspecified (except that they must belong to K).
Finally, we may speak simply of a linear combination, where nothing is specified (except that the vectors must belong to V and the coefficients must belong to K); in this case one is probably referring to the expression, since every vector in V is certainly the value of some linear combination.
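The operation itself is elementary to compute. The following sketch (the function name is illustrative, not from any particular library) forms a1v1 + ⋯ + anvn for vectors represented as plain lists of numbers:

```python
def linear_combination(scalars, vectors):
    """Return a1*v1 + ... + an*vn for equal-length lists of
    scalars and vectors (each vector a list of numbers).

    By the usual convention, the empty combination (n = 0) is the
    zero vector; with no dimension available it is returned as [].
    """
    if len(scalars) != len(vectors):
        raise ValueError("need exactly one scalar per vector")
    if not vectors:
        return []
    result = [0.0] * len(vectors[0])
    for a, v in zip(scalars, vectors):
        for i, component in enumerate(v):
            result[i] += a * component
    return result

# 2*(1, 0) + 3*(0, 1) = (2, 3)
print(linear_combination([2, 3], [[1, 0], [0, 1]]))  # [2.0, 3.0]
```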
Note that by definition, a linear combination involves only finitely many vectors (except as described in the § Generalizations section).
Also, there is no reason that n cannot be zero; in that case, we declare by convention that the result of the linear combination is the zero vector in V.

For a first example, let the field K be the set R of real numbers, and let the vector space V be the Euclidean space R3. Then every vector in R3 is a linear combination of the standard basis vectors e1 = (1,0,0), e2 = (0,1,0), and e3 = (0,0,1). To see that this is so, take an arbitrary vector (a1,a2,a3) in R3, and write

(a1, a2, a3) = a1(1,0,0) + a2(0,1,0) + a3(0,0,1) = a1e1 + a2e2 + a3e3.

For a second example, let K be the set C of all complex numbers, and let V be the set CC(R) of all continuous functions from the real line R to the complex plane C. Consider the vectors (functions) f and g defined by f(t) := e^{it} and g(t) := e^{−it}.
(Here, e is the base of the natural logarithm, about 2.71828..., and i is the imaginary unit, a square root of −1.)
The constant function 3 is, however, not a linear combination of f and g. For if it were, there would exist complex scalars a and b such that ae^{it} + be^{−it} = 3 for all real numbers t. Setting t = 0 and t = π gives the equations a + b = 3 and a + b = −3, which clearly cannot both hold.
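The contradiction can be checked numerically. This minimal sketch evaluates ae^{it} + be^{−it} at the two sample points used above: any pair with a + b = 3 satisfies the t = 0 equation but misses the target value 3 at t = π by a wide margin:

```python
import cmath

def combo(a, b, t):
    """Evaluate a*e^{it} + b*e^{-it} at a real number t."""
    return a * cmath.exp(1j * t) + b * cmath.exp(-1j * t)

# At t = 0 the expression equals a + b; at t = pi it equals -(a + b).
# So a + b would have to be both 3 and -3 at once.
a, b = 1.5, 1.5                          # any pair with a + b = 3
print(abs(combo(a, b, 0) - 3))           # ~0: the t = 0 equation holds
print(abs(combo(a, b, cmath.pi) - 3))    # ~6: the t = pi equation fails
```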
Picking arbitrary coefficients a1, a2, and a3, we want the resulting linear combination to equal the target polynomial. Multiplying the polynomials out and collecting like powers of x, we can compare coefficients: two polynomials are equal if and only if their corresponding coefficients are equal. This yields a system of linear equations in a1, a2, and a3, which can easily be solved.
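As a concrete illustration (the specific polynomials p1 = 1, p2 = x + 1, p3 = x^2 + x + 1 and the target x^2 are assumed here for the example), representing each polynomial by its vector of coefficients turns the coefficient-matching step into a linear system that NumPy can solve:

```python
import numpy as np

# Coefficient vectors in the order (constant, x, x^2).
# Example polynomials, assumed for illustration:
p1 = np.array([1, 0, 0])      # 1
p2 = np.array([1, 1, 0])      # x + 1
p3 = np.array([1, 1, 1])      # x^2 + x + 1
target = np.array([0, 0, 1])  # x^2

# Columns of M are the pi; solving M @ a = target equates the
# coefficient of each power of x on both sides.
M = np.column_stack([p1, p2, p3])
a = np.linalg.solve(M, target)
print(a)  # [ 0. -1.  1.], i.e. x^2 = 0*p1 - 1*p2 + 1*p3
```

Checking by hand: −(x + 1) + (x^2 + x + 1) = x^2, as required.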
We write the span of S as span(S)[5][6] or sp(S).

Suppose that, for some set of vectors v1,...,vn, a single vector v can be written in two different ways as a linear combination of them:

v = a1v1 + ⋯ + anvn = b1v1 + ⋯ + bnvn, where the lists of coefficients (a1,...,an) and (b1,...,bn) are not identical.

This is equivalent, by subtracting these, to saying that

(a1 − b1)v1 + ⋯ + (an − bn)vn = 0

is a non-trivial linear combination equal to the zero vector; that is, the vectors v1,...,vn are linearly dependent.
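The subtraction argument can be seen in action with small example data (the vectors and coefficients below are assumed for illustration):

```python
import numpy as np

# Rows are v1 = (1,0), v2 = (0,1), v3 = (1,1).  The vector (2,3)
# has two different representations as a linear combination:
# 2*v1 + 3*v2 + 0*v3 and 1*v1 + 2*v2 + 1*v3.
V = np.array([[1, 0], [0, 1], [1, 1]])
a = np.array([2, 3, 0])
b = np.array([1, 2, 1])
print(a @ V, b @ V)   # both [2 3]

# Subtracting the coefficient lists yields a non-trivial linear
# combination equal to the zero vector -- a linear dependence.
c = a - b             # [ 1  1 -1], not all zero
print(c @ V)          # [0 0]
```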
Similarly, we can speak of linear dependence or independence of an arbitrary set S of vectors.
Linear and affine combinations can be defined over any field (or ring), but conical and convex combinations require a notion of "positive", and hence can only be defined over an ordered field (or ordered ring), generally the real numbers.
From this point of view, we can think of linear combinations as the most general sort of operation on a vector space – saying that a vector space is an algebra over the operad of linear combinations is precisely the statement that all possible algebraic operations in a vector space are linear combinations.
Ultimately, this fact lies at the heart of the usefulness of linear combinations in the study of vector spaces.
Such infinite linear combinations do not always make sense; we call them convergent when they do.
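A familiar instance is a convergent power series, which can be read as an infinite linear combination of the monomials 1, x, x^2, ... . A minimal sketch of the convergence, using the series for e^x:

```python
import math

# The series sum of x^n / n! is an infinite linear combination of
# the monomials x^n with coefficients 1/n!; for every real x its
# partial sums converge to e^x.  Illustrated here at x = 1:
x = 1.0
partial = 0.0
for n in range(20):
    partial += x**n / math.factorial(n)
print(abs(partial - math.e))  # very small: the combination converges
```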
If K is a commutative ring instead of a field, then everything that has been said above about linear combinations generalizes to this case without change.