Fixed-point computation

Various algorithms have been devised for computing an approximate fixed point.

Such algorithms are used in economics for computing a market equilibrium, in game theory for computing a Nash equilibrium, and in the analysis of dynamical systems.

For general functions, however, it is impossible to compute a fixed point exactly, since it can be an arbitrary real number.

Fixed-point computation algorithms look for approximate fixed points.

The most basic step of a fixed-point computation algorithm is a value query: given any point x in the domain, the algorithm is given access to an oracle that returns the value f(x).

The accuracy of the approximate fixed point depends upon the error of the oracle that evaluates f.

The run-time complexity of an algorithm is usually given by the number of required evaluations.
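To make these notions concrete, the following minimal sketch (the names ValueOracle and is_residual_fixed_point are invented here for illustration, not taken from the literature) wraps a function in a query-counting oracle and tests the ε-residual condition |f(x) − x| ≤ ε:

```python
class ValueOracle:
    """Wraps a function f and counts how many value queries are made."""
    def __init__(self, f):
        self.f = f
        self.queries = 0

    def __call__(self, x):
        self.queries += 1          # each call is one value query
        return self.f(x)

def is_residual_fixed_point(oracle, x, eps):
    """Check the eps-residual condition |f(x) - x| <= eps at the point x."""
    return abs(oracle(x) - x) <= eps
```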

Every contractive function satisfying Brouwer's conditions has a unique fixed point, and it can be approximated to any desired accuracy by fixed-point iteration, that is, by applying f repeatedly from an arbitrary starting point.

Sikorski and Wozniakowski[4] showed that Banach's algorithm is optimal when the dimension is large.

In particular, when the dimension is sufficiently large, the number of evaluations required by any algorithm for computing a δ-relative fixed-point is larger than 50% of the number of evaluations required by the iteration algorithm.
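As a concrete sketch of the iteration algorithm for a contraction (illustrative code assuming a one-dimensional domain of diameter at most 1; the name banach_iteration is not from the cited papers): each application of f shrinks the distance to the unique fixed point by a factor of L, so roughly log(1/δ)/log(1/L) evaluations suffice for a δ-absolute approximation.

```python
import math

def banach_iteration(f, x0, L, delta):
    """Fixed-point iteration for a contraction f with Lipschitz constant L < 1.

    Each application of f shrinks the distance to the unique fixed point by a
    factor of L, so ceil(log(1/delta) / log(1/L)) steps give a delta-absolute
    approximation when the domain has diameter at most 1.
    """
    steps = math.ceil(math.log(1.0 / delta) / math.log(1.0 / L))
    x = x0
    for _ in range(steps):
        x = f(x)
    return x

# Example: f(x) = 0.5 * cos(x) is a contraction on [0, 1] with L <= 0.5.
approx = banach_iteration(lambda x: 0.5 * math.cos(x), 0.5, 0.5, 1e-6)
```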

Shellman and Sikorski[8] presented an algorithm called BEFix (Bisection Envelope Fixed-point) for computing an ε-residual fixed-point of a two-dimensional function with L ≤ 1, using 2⌈log₂(1/ε)⌉ + 1 queries.

They later[9] presented an improvement called BEDFix (Bisection Envelope Deep-cut Fixed-point), with the same worst-case guarantee but better empirical performance.

Shellman and Sikorski[2] presented an algorithm called PFix for computing an ε-residual fixed-point of a d-dimensional function with L ≤ 1, using O(⌈log₂(1/ε)⌉^d) queries.

In the one-dimensional case, an ε-residual fixed point can be found with O(log₂(1/ε)) queries using the bisection method: start with the interval [0, 1]; at each step, evaluate f at the midpoint x of the current interval; if f(x) > x, recurse on the right half, otherwise recurse on the left half.
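A minimal sketch of this bisection procedure (assuming f maps [0, 1] to itself with Lipschitz constant at most 1; the function name is illustrative):

```python
def bisection_fixed_point(f, eps):
    """Compute an eps-residual fixed point of f: [0,1] -> [0,1] with L <= 1.

    Invariant: f(lo) >= lo and f(hi) <= hi, so the interval [lo, hi] always
    contains a fixed point.  Once the interval is short enough, the midpoint x
    satisfies |f(x) - x| <= eps, using about log2(1/eps) evaluations of f.
    """
    lo, hi = 0.0, 1.0
    x = 0.5 * (lo + hi)
    while hi - lo > eps / 2:
        x = 0.5 * (lo + hi)
        if f(x) > x:       # a fixed point lies to the right of x
            lo = x
        else:              # a fixed point lies to the left of x (or at x)
            hi = x
    return x
```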

For any integer T > 1 and any sequence of T evaluation queries (possibly adaptive), one can construct two functions that are Lipschitz-continuous with constant L > 1, agree with each other on the values of all T queried points, and yet whose fixed points are more than 2δ apart.

Any algorithm using T evaluations cannot distinguish between these two functions, and since no single point is within distance δ of both fixed points, it cannot guarantee finding a δ-absolute fixed-point.

This is true for any finite integer T. Several algorithms based on function evaluations have been developed for finding an ε-residual fixed-point. In the worst case, the number of function evaluations required by all these algorithms is exponential in the binary representation of the accuracy, that is, polynomial in 1/ε rather than in log(1/ε).

Hirsch, Papadimitriou and Vavasis[3] proved that any algorithm based on function evaluations that finds an ε-residual fixed-point of f requires a number of evaluations that grows exponentially with the dimension d.

The latter result leaves a gap in the exponent. Chen and Deng[18] closed this gap: for any fixed d ≥ 2, the number of queries required for computing an ε-residual fixed-point is in Θ((L/ε)^(d−1)), where L is the Lipschitz constant of f.

Chen and Deng[18] also study a discrete variant of the problem, in which the function is defined on the points of a d-dimensional grid with side length n. They prove that, for any d ≥ 2 and n > 48d, computing such a discrete fixed point requires Θ(n^(d−1)) queries.

Chen and Deng[19] define a different discrete fixed-point problem, which they call 2D-BROUWER. Here the function is defined on the points of a two-dimensional grid, and at each grid point the displacement f(x) − x takes one of three possible values, which serve as labels.

The goal is to find a square in the grid in which all three labels occur.
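A brute-force illustration of this search problem is sketched below; the specific displacement values and the function names are assumptions made here for the sake of the example, and exhaustive search is far less efficient than the algorithms discussed above.

```python
from itertools import product

# Assumed label set: the three possible values of f(x) - x serve as the labels.
LABELS = {(1, 0): 0, (0, 1): 1, (-1, -1): 2}

def find_panchromatic_square(f, n):
    """Exhaustively search the n-by-n grid for a unit square with all 3 labels.

    f maps a grid point (i, j), 0 <= i, j <= n, to a grid point such that
    f(i, j) - (i, j) is one of the three displacements above.
    """
    def label(p):
        fi, fj = f(*p)
        return LABELS[(fi - p[0], fj - p[1])]

    for i, j in product(range(n), repeat=2):
        corners = [(i, j), (i + 1, j), (i, j + 1), (i + 1, j + 1)]
        if {label(c) for c in corners} == {0, 1, 2}:
            return (i, j)   # lower-left corner of a fully-labeled square
    return None
```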

The problem can be reduced to 2D-SPERNER (computing a fully-labeled triangle in a triangulation satisfying the conditions of Sperner's lemma), and therefore it is PPAD-complete.

This implies that computing an approximate fixed-point is PPAD-complete even for very simple functions.

Fixed-point computation is a special case of root-finding: given a function f on [0, 1]^d, define g(x) := f(x) − x; then x is a fixed point of f if and only if x is a root of g, and an ε-residual fixed point of f is a point at which the magnitude of g is at most ε. Therefore, any root-finding algorithm can be used to find an approximate fixed point.

The converse is not true: finding an approximate root of a general function may be harder than finding an approximate fixed point.
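For example, in one dimension the reduction can be carried out with any off-the-shelf root finder; the sketch below uses SciPy's brentq purely as an illustration, and assumes f(0) > 0 and f(1) < 1 so that g changes sign on [0, 1]:

```python
import math
from scipy.optimize import brentq

def fixed_point_via_root_finding(f, eps=1e-8):
    """Find a fixed point of f: [0,1] -> [0,1] as a root of g(x) = f(x) - x."""
    g = lambda x: f(x) - x      # g(0) >= 0 and g(1) <= 0; assumed strict here
    return brentq(g, 0.0, 1.0, xtol=eps)

# Example: the fixed point of cos on [0, 1] (approximately 0.739).
x_star = fixed_point_via_root_finding(math.cos)
```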

Roughgarden and Weinstein[21] studied the communication complexity of computing an approximate fixed-point.

In their model, there are two parties, one holding a function f and the other holding a function g; both functions are Lipschitz continuous and satisfy Brouwer's conditions. The goal is to compute an approximate fixed point of the composite function g ∘ f using as little communication as possible.

[Figure: The graph of an example function with three fixed points.]

[Figure: Computing a fixed point using function iteration.]