When specialized to solving feasible linear optimization problems with rational data, the ellipsoid method is an algorithm which finds an optimal solution in a number of steps that is polynomial in the input size.
As an iterative method, a preliminary version was introduced by Naum Z. Shor.
In 1972, an approximation algorithm for real convex minimization was studied by Arkadi Nemirovski and David B. Yudin (Judin).
As an algorithm for solving linear programming problems with rational data, the ellipsoid algorithm was studied by Leonid Khachiyan; Khachiyan's achievement was to prove the polynomial-time solvability of linear programs.
This was a notable step from a theoretical perspective: the standard algorithm for solving linear programs at the time was the simplex algorithm, whose run time is typically linear in the size of the problem, but for which there exist examples on which it is exponential in the size of the problem.
Khachiyan's work showed, for the first time, that there can be algorithms for solving linear programs whose runtime can be proven to be polynomial.
The ellipsoid algorithm allows complexity theorists to achieve (worst-case) bounds that depend on the dimension of the problem and on the size of the data, but not on the number of rows, so it remained important in combinatorial optimization theory for many years.[1][2][3][4] Only in the 21st century have interior-point algorithms with similar complexity properties appeared.[citation needed]

A convex minimization problem consists of the following ingredients:
- a convex function f(x): R^n → R to be minimized over the vector x;
- convex inequality constraints of the form g_i(x) ≤ 0, where the functions g_i are convex; these constraints define a convex feasible set G;
- an initial ellipsoid E^(0) that is known to contain a minimizer x*.
Finally, we require the existence of a separation oracle for the convex set G: given a point x, the oracle should return one of two answers:[5] either a statement that x belongs to G, or a hyperplane that separates x from G. The output of the ellipsoid method is either a feasible point whose objective value is within the desired accuracy of the optimum, or a certificate that no feasible point exists (to within the desired accuracy). Inequality-constrained minimization of a function that is zero everywhere corresponds to the problem of simply identifying any feasible point.
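As an illustrative sketch (not part of the source), a separation oracle for the simple case of a polyhedron K = {x : Ax ≤ b} can be implemented by checking each inequality; the function name and the return convention are assumptions of this example:

```python
import numpy as np

def make_separation_oracle(A, b):
    """Separation oracle for the polyhedron K = {x : A x <= b}.

    Given a query point x, returns ("feasible", None) if x is in K,
    or ("separated", (a, beta)) with a hyperplane a @ y = beta such
    that a @ y <= beta for all y in K while a @ x > beta.
    """
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)

    def oracle(x):
        violations = A @ x - b
        worst = int(np.argmax(violations))   # most violated constraint
        if violations[worst] <= 0:
            return ("feasible", None)        # every inequality holds at x
        return ("separated", (A[worst], b[worst]))

    return oracle
```

For the unit square {x : x1 ≤ 1, x2 ≤ 1, −x1 ≤ 0, −x2 ≤ 0}, the oracle certifies (0.5, 0.5) as feasible and separates (2, 0.5) from the square using the violated constraint x1 ≤ 1.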
One way to do this is by combining the primal and dual linear programs together into one program, and adding the additional (linear) constraint that the value of the primal solution is no worse than the value of the dual solution.[6]: 84  Another way is to treat the objective of the linear program as an additional constraint, and use binary search to find the optimum value.
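The binary-search approach can be sketched as follows; the oracle `feasible` is a stand-in (an assumption of this sketch) for a routine, such as the ellipsoid method itself, that decides whether {x in K : cᵀx ≤ t} is nonempty:

```python
def lp_optimum_by_bisection(feasible, lo, hi, tol=1e-6):
    """Locate t* = min { c @ x : x in K } by binary search on t.

    feasible(t) must decide whether {x in K : c @ x <= t} is nonempty;
    the caller guarantees lo <= t* <= hi.
    """
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if feasible(mid):
            hi = mid    # some point of K attains value mid or less
        else:
            lo = mid    # no point of K attains value mid or less
    return hi           # within tol of the optimum value t*

# Toy example: K is the unit square [0,1]^2 and c = (1, 1), so
# min c @ x over K is 0, attained at the origin; the sublevel set
# {x in K : x1 + x2 <= t} is nonempty exactly when t >= 0.
t_star = lp_optimum_by_bisection(lambda t: t >= 0, lo=-10.0, hi=10.0)
```

Each oracle call halves the interval, so the number of feasibility problems solved is logarithmic in the initial range divided by the tolerance.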
At the k-th iteration of the algorithm for unconstrained minimization, we have a point x^(k) at the center of an ellipsoid

E^{(k)} = \{ x \in \mathbb{R}^n : (x - x^{(k)})^\top P_{(k)}^{-1} (x - x^{(k)}) \le 1 \}.

We query the cutting-plane oracle to obtain a vector g^{(k+1)} in R^n such that

g^{(k+1)\top} (x^* - x^{(k)}) \le 0.

The update is given by

x^{(k+1)} = x^{(k)} - \frac{1}{n+1} P_{(k)} \tilde{g}^{(k+1)}, \qquad P_{(k+1)} = \frac{n^2}{n^2-1} \left( P_{(k)} - \frac{2}{n+1} P_{(k)} \tilde{g}^{(k+1)} \tilde{g}^{(k+1)\top} P_{(k)} \right),

where

\tilde{g}^{(k+1)} = \frac{g^{(k+1)}}{\sqrt{g^{(k+1)\top} P_{(k)} g^{(k+1)}}}.

The stopping criterion is given by the property that

\sqrt{g^{(k)\top} P_{(k)} g^{(k)}} \le \epsilon \quad \Longrightarrow \quad f(x^{(k)}) - f(x^*) \le \epsilon.

At the k-th iteration of the algorithm for constrained minimization, we have a point x^(k) at the center of an ellipsoid E^(k) as before, and we additionally maintain a value f_best^(k) recording the smallest objective value of feasible iterates so far.
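The iteration described above can be sketched in NumPy as follows; the quadratic test function, the initial ellipsoid radius, and the tolerances are assumptions of the example, not part of the source:

```python
import numpy as np

def ellipsoid_method(grad, x0, P0, eps=1e-6, max_iter=5000):
    """Central-cut ellipsoid method for unconstrained convex minimization.

    grad(x) is the cutting-plane oracle: a (sub)gradient g of f at x,
    which satisfies g @ (x_star - x) <= 0. The current ellipsoid is
    E = {y : (y - x) @ inv(P) @ (y - x) <= 1}; requires n >= 2.
    """
    x = np.asarray(x0, dtype=float).copy()
    P = np.asarray(P0, dtype=float).copy()
    n = x.size
    for _ in range(max_iter):
        g = grad(x)
        gPg = g @ P @ g
        if np.sqrt(gPg) <= eps:          # stopping criterion: f(x) - f* <= eps
            break
        g_tilde = g / np.sqrt(gPg)       # normalized subgradient
        Pg = P @ g_tilde
        x = x - Pg / (n + 1)             # re-center in the half-ellipsoid
        P = (n * n / (n * n - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(Pg, Pg))
    return x

# Example: minimize f(x) = ||x - c||^2, whose gradient is 2 (x - c),
# starting from an ellipsoid of radius 100 around the origin.
c = np.array([1.0, 2.0])
x_min = ellipsoid_method(lambda x: 2.0 * (x - c),
                         x0=np.zeros(2), P0=100.0**2 * np.eye(2))
```

Since the minimizer always stays inside the current ellipsoid, stopping when sqrt(gᵀPg) ≤ ε guarantees f(x) − f(x*) ≤ ε at the returned point.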
The run-time complexity guarantee of the ellipsoid method in the real RAM model is given by the following theorem.
Each problem p in the family is represented by a data-vector Data(p), e.g., the real-valued coefficients in matrices and vectors representing the function f and the feasible region G. The size of a problem p, Size(p), is defined as the number of elements (real numbers) in Data(p).
This means that there exists a polynomial Poly such that, for every problem-instance p and every approximation-ratio ε > 0, the method finds a solution x satisfying

f(x) - \min_{y \in G} f(y) \le \varepsilon \cdot \left[ \max_{y \in G} f(y) - \min_{y \in G} f(y) \right],

using a number of arithmetic operations on real numbers that is at most Poly(Size(p)) · log(1/ε).
Intuitively, it means that the number of operations required for each additional digit of accuracy is polynomial in Size(p).
However, in problems with many variables, the ellipsoid method is very inefficient, as the number of iterations grows as O(n²).
Even on "small"-sized problems, it suffers from numerical instability and poor performance in practice.[citation needed]
The ellipsoid method is an important theoretical technique in combinatorial optimization.
In computational complexity theory, the ellipsoid algorithm is attractive because its complexity depends on the number of columns and the digital size of the coefficients, but not on the number of rows.
The ellipsoid method can be used to show that many algorithmic problems on convex sets are polynomial-time equivalent.
Leonid Khachiyan applied the ellipsoid method to the special case of linear programming: minimize c^T x s.t. Ax ≤ b, where all coefficients are rational numbers. He showed that such a program can be solved through a sequence of decision problems of the form: does a system of linear inequalities Rz ≤ r have a solution? If the first decision problem answers "no", the program is infeasible. Otherwise, take the first inequality constraint R1z ≤ r1; replace it with an equality R1z = r1; and apply the decision problem again.
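The tightening loop can be sketched generically; here `is_feasible` is a hypothetical stand-in for the decision procedure (realized by the ellipsoid method in Khachiyan's analysis), and the box system used in the demonstration is an assumption of the example:

```python
def tighten_to_equalities(num_rows, is_feasible):
    """Turn inequalities of R z <= r into equalities one at a time.

    is_feasible(eq) decides whether the system remains solvable when
    the rows listed in eq are forced to equality (the others stay as
    inequalities). Each row whose tightening preserves feasibility is
    fixed as an equality; the final set pins down a basic solution.
    """
    eq = []
    for i in range(num_rows):
        if is_feasible(eq + [i]):
            eq.append(i)      # keep row i as an equality
        # otherwise leave row i as an inequality and move on
    return eq

# Demo system: the unit square, with rows 0..3 meaning
#   z1 <= 1, z2 <= 1, -z1 <= 0, -z2 <= 0.
# Forcing rows 0 and 2 together (z1 = 1 and z1 = 0) is contradictory,
# likewise rows 1 and 3; every other combination is feasible.
def box_oracle(eq):
    return not ({0, 2} <= set(eq) or {1, 3} <= set(eq))

vertex_rows = tighten_to_equalities(4, box_oracle)
```

On the demo system the loop fixes rows 0 and 1 as equalities, which corresponds to the vertex z = (1, 1) of the square.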
The ellipsoid method has several variants, depending on what cuts exactly are used in each step.[1] In the central-cut variant, each cutting hyperplane passes through the center of the current ellipsoid; deep-cut variants use hyperplanes that cut off more than half of the ellipsoid, which shrinks the ellipsoid faster and can speed up detecting that the target set K is empty. However, when K is nonempty, there are examples in which the central-cut method finds a feasible point faster.
There is also a distinction between the circumscribed-ellipsoid and the inscribed-ellipsoid methods:[8] the circumscribed method maintains an ellipsoid that contains the feasible set, while the inscribed method maintains an ellipsoid contained in it. The methods differ in their runtime complexity (below, n is the number of variables and ε is the accuracy). The relative efficiency of the methods depends on the time required for finding a separating hyperplane, which depends on the application: if the runtime is