Resultant

The resultant of two polynomials is a polynomial expression of their coefficients that is equal to zero if and only if the polynomials have a common root (possibly in a field extension) or, equivalently, a common factor over their field of coefficients. It is used, among other applications, for cylindrical algebraic decomposition, integration of rational functions and drawing of curves defined by a bivariate polynomial equation.

The resultant of two univariate polynomials over a field or over a commutative ring is commonly defined as the determinant of their Sylvester matrix.

If A and B are polynomials of respective degrees d and e, their Sylvester matrix is the (d + e) × (d + e) matrix, on the monomial bases, of the linear map (E, F) ↦ AE + BF from P_e × P_d to P_{d+e}, where P_i denotes the vector space (or free module if the coefficients belong to a commutative ring) of dimension i whose elements are the polynomials of degree strictly less than i.
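
As a concrete illustration of this definition, here is a small Python sketch (written for this section, not taken from it) that builds the Sylvester matrix from coefficient lists and evaluates its determinant exactly with rational arithmetic; the helper names sylvester_matrix, determinant and resultant are ad hoc.

```python
from fractions import Fraction

def sylvester_matrix(a, b):
    """Sylvester matrix of polynomials a, b given as coefficient lists,
    highest-degree coefficient first (leading coefficients nonzero)."""
    m, n = len(a) - 1, len(b) - 1          # degrees of a and b
    size = m + n
    M = [[Fraction(0)] * size for _ in range(size)]
    for i in range(n):                      # n shifted copies of a
        for j, c in enumerate(a):
            M[i][i + j] = Fraction(c)
    for i in range(m):                      # m shifted copies of b
        for j, c in enumerate(b):
            M[n + i][i + j] = Fraction(c)
    return M

def determinant(M):
    """Exact determinant by Gaussian elimination with Fraction arithmetic."""
    M = [row[:] for row in M]
    n, sign = len(M), 1
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != col:
            M[col], M[pivot] = M[pivot], M[col]
            sign = -sign
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= factor * M[col][c]
    det = Fraction(sign)
    for i in range(n):
        det *= M[i][i]
    return det

def resultant(a, b):
    return determinant(sylvester_matrix(a, b))

# x^2 - 1 and x^2 - 3x + 2 share the root x = 1, so the resultant is 0.
print(resultant([1, 0, -1], [1, -3, 2]))    # 0
# 2x^2 - 3x + 1 and x^2 - 4 have no common root; the resultant is 45.
print(resultant([2, -3, 1], [1, 0, -4]))    # 45
```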

When the coefficients of the polynomials belong to an integral domain, the resultant is given by

$$\operatorname{res}(A,B) = a_d^{\,e}\, b_e^{\,d} \prod_{i=1}^{d} \prod_{j=1}^{e} (\lambda_i - \mu_j),$$

where $a_d$ and $b_e$ are the leading coefficients of A and B, and $\lambda_1, \dots, \lambda_d$ and $\mu_1, \dots, \mu_e$ are respectively the roots, counted with their multiplicities, of A and B in any algebraically closed field containing the integral domain.
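
A quick numerical check of this product formula, using NumPy to approximate the roots (the two polynomials are arbitrary choices made for this sketch), compares it with the Sylvester determinant:

```python
import itertools
import numpy as np

A = [2.0, -3.0, 1.0]     # 2x^2 - 3x + 1, leading coefficient a_d = 2
B = [1.0,  0.0, -4.0]    # x^2 - 4,       leading coefficient b_e = 1

# res(A, B) from the Sylvester determinant ...
d, e = len(A) - 1, len(B) - 1
S = np.zeros((d + e, d + e))
for i in range(e):
    S[i, i:i + d + 1] = A
for i in range(d):
    S[e + i, i:i + e + 1] = B
via_determinant = np.linalg.det(S)

# ... and from the product of differences of numerically computed roots.
lam, mu = np.roots(A), np.roots(B)
via_roots = A[0] ** e * B[0] ** d * np.prod(
    [l - m for l, m in itertools.product(lam, mu)]
)

print(via_determinant, via_roots)   # both close to 45
```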

The following properties hold for the resultant of two polynomials with coefficients in a commutative ring R. If R is a field or more generally an integral domain, the resultant is the unique function of the coefficients of two polynomials that satisfies these properties.

This means that the property of the resultant being zero is invariant under linear and projective changes of the variable.

These properties imply that in the Euclidean algorithm for polynomials, and all its variants (pseudo-remainder sequences), the resultant of two successive remainders (or pseudo-remainders) differs from the resultant of the initial polynomials by a factor which is easy to compute.

Conversely, this allows one to deduce the resultant of the initial polynomials from the value of the last remainder or pseudo-remainder.
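
A minimal sketch of that idea over the rationals (using exact Fraction arithmetic rather than the fraction-free pseudo-remainder variants) relies on the relation res(A, B) = (−1)^(deg A · deg B) · lc(B)^(deg A − deg R) · res(B, R), where R is the remainder of the division of A by B and lc denotes the leading coefficient; the function names below are ad hoc.

```python
from fractions import Fraction

def poly_rem(A, B):
    """Remainder of the division of A by B (coefficient lists, highest first)."""
    A = [Fraction(c) for c in A]
    B = [Fraction(c) for c in B]
    while len(A) >= len(B):
        if A[0] == 0:
            A.pop(0)
            continue
        q = A[0] / B[0]
        for i in range(len(B)):
            A[i] -= q * B[i]
        A.pop(0)                       # leading coefficient is now zero
    while A and A[0] == 0:
        A.pop(0)
    return A

def resultant_euclid(A, B):
    """res(A, B) computed along the Euclidean algorithm, over the rationals."""
    m, n = len(A) - 1, len(B) - 1
    if n == 0:                         # res(A, b) = b**deg(A) for a constant b
        return Fraction(B[0]) ** m
    R = poly_rem(A, B)
    if not R:                          # B divides A: common factor, resultant 0
        return Fraction(0)
    return ((-1) ** (m * n)
            * Fraction(B[0]) ** (m - (len(R) - 1))
            * resultant_euclid(B, R))

# Same polynomials as above: the Euclidean route also gives 45.
print(resultant_euclid([2, -3, 1], [1, 0, -4]))   # 45
```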

It follows that the resultant of two polynomials of degrees d and e may be computed in O(de) arithmetic operations, while the computation of the determinant of the Sylvester matrix with standard algorithms requires O((d + e)^3) arithmetic operations.

Theoretically, the resultant could be computed by using the formula expressing it as a product of differences of roots.

However, as the roots may generally not be computed exactly, such an algorithm would be inefficient and numerically unstable.

When working over the integers or over polynomial rings, the Euclidean algorithm either introduces fractions or, if pseudo-remainders are used, makes the coefficients grow very quickly. The subresultant pseudo-remainder sequences were introduced to solve this problem while avoiding both fractions and GCD computations on the coefficients.

In fact, after a linear change of variables, one may suppose that, for each root x of the resultant, there is exactly one value of y such that (x, y) is a common zero of P and Q.

At first glance, it seems that resultants may be applied to a general polynomial system of equations by eliminating the variables one after the other through successive resultant computations. However, this iterated approach generally produces spurious solutions and intermediate polynomials of very large degree, which limits its practical use.

Given two plane algebraic curves defined as the zeros of the polynomials P(x, y) and Q(x, y), the resultant allows the computation of their intersection.
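
As an illustration, the following SymPy sketch eliminates y and then recovers the matching y-values; the two curves are chosen only for this example, and resultant, roots and gcd are SymPy's general-purpose routines rather than anything specific to this article.

```python
from sympy import symbols, resultant, roots, gcd, factor

x, y = symbols('x y')

P = x**2 + y**2 - 5      # a circle
Q = x*y - 2              # a hyperbola; the curves meet in four points

# Eliminating y: the resultant with respect to y is a polynomial in x alone,
# whose roots are the x-coordinates of the common zeros of P and Q.
R = resultant(P, Q, y)
print(factor(R))          # factors as (x - 2)*(x - 1)*(x + 1)*(x + 2)

# For each such x-coordinate, the matching y-values are the roots of
# gcd(P(x0, y), Q(x0, y)); here each x0 gives exactly one y.
for x0 in sorted(roots(R, x)):
    g = gcd(P.subs(x, x0), Q.subs(x, x0))
    print(x0, roots(g, y))
```

Here the leading coefficient of Q with respect to y is x, which does not vanish at any root of the resultant, so each x-root corresponds to an actual intersection point.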

The antiderivative of such a function (a rational fraction P/Q) necessarily involves logarithms and, in general, algebraic numbers (the roots of Q).
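
The resultant is what makes those logarithm coefficients explicit. The following SymPy sketch, in the spirit of the Rothstein–Trager method and with an integrand chosen only for this example (Q squarefree and deg P < deg Q), uses the fact that the roots in t of the resultant of Q and P − t·Q′ with respect to x are the coefficients of the logarithms in the antiderivative.

```python
from sympy import symbols, resultant, roots, diff, integrate

x, t = symbols('x t')

P = 1                      # numerator
Q = x**2 - 2               # squarefree denominator

# Rothstein-Trager resultant: its roots in t are the logarithm coefficients,
# and the corresponding arguments are gcd(Q, P - c*Q') at each root c.
R = resultant(Q, P - t * diff(Q, x), x)
print(R)                   # 1 - 8*t**2 (up to normalization)
print(roots(R, t))         # the two roots are +sqrt(2)/4 and -sqrt(2)/4

# For comparison, SymPy's own integrator:
print(integrate(P / Q, x))
```

The logarithm arguments gcd(Q, P − c·Q′) are omitted here because that GCD must be computed over the algebraic extension generated by the root c.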

All preceding applications, and many others, show that the resultant is a fundamental tool in computer algebra.

The homogeneous resultant has essentially the same properties as the usual resultant, with two main differences: instead of polynomial roots, one considers zeros in the projective line, and the degree of a homogeneous polynomial cannot change under a ring homomorphism (whereas the degree of an ordinary polynomial may drop).

Like the homogeneous resultant, Macaulay's resultant may be defined by means of determinants, and thus it behaves well under ring homomorphisms.

As one is working with polynomials with integer coefficients, this greatest common divisor is defined up to its sign.

The generic Macaulay resultant is the greatest common divisor that becomes 1 when, for each i, zero is substituted for all coefficients of the i-th generic polynomial except the coefficient of $x_i^{d_i}$, for which one is substituted (here $d_i$ denotes the degree of the i-th polynomial).

However, the generic resultant is a polynomial of very high degree (exponential in n) depending on a huge number of indeterminates.

In the case of input polynomials with coefficients in a field, the exact value of the resultant is rarely important; only whether or not it is zero matters.

The U-resultant factorizes, over an algebraically closed extension of k, into a product of linear forms, in which each linear factor corresponds to a common zero of the polynomials, and the multiplicity of a factor equals the multiplicity of the corresponding zero.

In other words, the U-resultant provides a completely explicit version of Bézout's theorem.

The U-resultant as defined by Macaulay requires the number of homogeneous polynomials in the system of equations to be n, where n + 1 is the number of variables.

In 1981, Daniel Lazard extended the notion to the case where the number of polynomials may differ from n.

runs over the linear space consisting of zero and the homogeneous polynomials of degree

It follows that all solutions of a system of polynomial equations with a finite number of projective zeros can be determined in time $d^{O(n)}$, where d is the maximal degree of the input polynomials and n the number of variables.

Although this bound is large, it is nearly optimal in the following sense: if all input degrees are equal, then the time complexity of the procedure is polynomial in the expected number of solutions (Bézout's theorem).