Continuous game

A continuous game is a mathematical concept, used in game theory, that generalizes the idea of an ordinary game like tic-tac-toe (noughts and crosses) or checkers (draughts).

In other words, it extends the notion of a discrete game, where the players choose from a finite set of pure strategies.

The concept of a continuous game allows games to include more general sets of pure strategies, which may be uncountably infinite.

In general, a game with uncountably infinite strategy sets will not necessarily have a Nash equilibrium solution.

If, however, the strategy sets are required to be compact and the utility functions continuous, then a Nash equilibrium is guaranteed to exist; this follows from Glicksberg's generalization of the Kakutani fixed point theorem.

The class of continuous games is for this reason usually defined and studied as a subset of the larger class of infinite games (i.e. games with infinite strategy sets) in which the strategy sets are compact and the utility functions continuous.

Define the $n$-player continuous game $G = (P, C, U)$ where

$P = \{1, 2, \ldots, n\}$ is the set of players,

$C = (C_1, C_2, \ldots, C_n)$, where each $C_i$ is a compact set corresponding to the $i$-th player's set of pure strategies,

$U = (u_1, u_2, \ldots, u_n)$, where $u_i : C \to \mathbb{R}$ is the $i$-th player's utility function.

Define $\Delta_i$ to be the set of Borel probability measures on $C_i$, giving us the mixed strategy space of player $i$. Define the strategy profile $\sigma = (\sigma_1, \sigma_2, \ldots, \sigma_n)$ where $\sigma_i \in \Delta_i$, and let $\sigma_{-i}$ be the profile of strategies of all players except player $i$.

As with discrete games, we can define a best response correspondence $b_i$ for player $i$: it is a relation from the set of all probability distributions over opponent player profiles to a set of player $i$'s strategies, such that each element of $b_i(\sigma_{-i})$ is a best response to $\sigma_{-i}$. Define $b(\sigma) = b_1(\sigma_{-1}) \times b_2(\sigma_{-2}) \times \cdots \times b_n(\sigma_{-n})$. Then a strategy profile $\sigma^*$ is a Nash equilibrium if and only if $\sigma^* \in b(\sigma^*)$.

The existence of a Nash equilibrium for any continuous game with continuous utility functions can be proven using Irving Glicksberg's generalization of the Kakutani fixed point theorem.[1] In general, there may not be a solution if we allow strategy spaces $C_i$ that are not compact, or if we allow utility functions that are not continuous.

A separable game is a continuous game in which each player's utility function can be expressed as a finite sum of products of continuous functions of the individual players' strategies. A polynomial game is a separable game where each $C_i$ is a compact interval on $\mathbb{R}$ and each utility function can be written as a multivariate polynomial.

In general, mixed Nash equilibria of separable games are easier to compute than those of non-separable games: whereas an equilibrium strategy for a non-separable game may require an uncountably infinite support, a separable game is guaranteed to have at least one Nash equilibrium with finitely supported mixed strategies.

As an example of a polynomial game, consider a zero-sum 2-player game between players X and Y, with $C_X = C_Y = [0, 1]$. Denote elements of $C_X$ and $C_Y$ as $x$ and $y$ respectively, and define the utility functions $H(x, y) = u_x(x, y) = -u_y(x, y)$ where $H(x, y) = (x - y)^2$. The pure strategy best response relations are:

$b_X(y) = 1$ if $y \in [0, 1/2)$; $b_X(y) \in \{0, 1\}$ if $y = 1/2$; $b_X(y) = 0$ if $y \in (1/2, 1]$

$b_Y(x) = x$

$b_X(y)$ and $b_Y(x)$ do not intersect, so there is no pure strategy Nash equilibrium.
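For this payoff $H(x, y) = (x - y)^2$, the absence of a pure equilibrium can be checked numerically on a grid. The following is an illustrative sketch of my own, not part of the source:

```python
# Tabulate pure-strategy best responses for H(x, y) = (x - y)^2 on a
# grid over [0, 1] and confirm the two relations never meet, so the
# game has no pure strategy Nash equilibrium.

def H(x, y):
    return (x - y) ** 2

grid = [i / 100 for i in range(101)]

def best_response_X(y):
    # X maximizes H(x, y); on a tie, max() returns the first maximizer.
    return max(grid, key=lambda x: H(x, y))

def best_response_Y(x):
    # Y's utility is -H, so Y minimizes H(x, y); the minimizer is y = x.
    return min(grid, key=lambda y: H(x, y))

brX = {y: best_response_X(y) for y in grid}
brY = {x: best_response_Y(x) for x in grid}

# A pure equilibrium on the grid would be a point (x, y) with
# x = brX[y] and y = brY[x]; no such point exists.
fixed_points = [(x, y) for x in grid for y in grid
                if brX[y] == x and brY[x] == y]
print(fixed_points)  # [] -- no pure strategy Nash equilibrium
```

The grid makes the kink in $b_X$ visible: X jumps from playing 1 to playing 0 as $y$ crosses 1/2, while Y always imitates $x$.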

However, there should be a mixed strategy equilibrium. To find it, express the expected value of the payoff $v = \mathbb{E}[H(x, y)]$ as a linear combination of the first and second moments of the probability distributions of X and Y:

$v = \mu_{X2} - 2\mu_{X1}\mu_{Y1} + \mu_{Y2}$

(where $\mu_{XN} = \mathbb{E}[x^N]$, and similarly for Y). The constraints on $\mu_{X1}$ and $\mu_{X2}$ (with similar constraints for $y$) are given by Hausdorff as:

$\mu_{X1} \geq \mu_{X2}$ and $\mu_{X1}^2 \leq \mu_{X2}$.

Each pair of constraints defines a compact convex subset in the plane. Since $v$ is linear, any extrema with respect to a player's first two moments will lie on the boundary of this subset. Player $i$'s equilibrium strategy will therefore lie on

$\mu_{i1} = \mu_{i2}$ or $\mu_{i1}^2 = \mu_{i2}$.

Note that the first equation only permits mixtures of 0 and 1, whereas the second equation only permits pure strategies.
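Because the payoff $(x - y)^2$ is a polynomial, the expected payoff of any pair of mixed strategies depends only on their first two moments. A small check of the identity $v = \mu_{X2} - 2\mu_{X1}\mu_{Y1} + \mu_{Y2}$; the discrete distributions below are arbitrary choices of mine for illustration:

```python
# Verify that for H(x, y) = (x - y)^2 the expected payoff of a pair of
# finitely supported mixed strategies equals
#     mu_X2 - 2 * mu_X1 * mu_Y1 + mu_Y2

def moments(dist):
    # dist: list of (value, probability) pairs on [0, 1]
    m1 = sum(p * v for v, p in dist)
    m2 = sum(p * v * v for v, p in dist)
    return m1, m2

def expected_payoff(dist_x, dist_y):
    # Direct expectation by enumerating the joint support.
    return sum(px * py * (x - y) ** 2
               for x, px in dist_x for y, py in dist_y)

# Two arbitrary finitely supported mixed strategies (illustrative).
X = [(0.0, 0.3), (0.4, 0.2), (1.0, 0.5)]
Y = [(0.2, 0.6), (0.9, 0.4)]

mX1, mX2 = moments(X)
mY1, mY2 = moments(Y)
v_direct = expected_payoff(X, Y)
v_moments = mX2 - 2 * mX1 * mY1 + mY2
print(v_direct, v_moments)  # equal up to floating point rounding
```

This is what makes polynomial games tractable: the infinite-dimensional strategy spaces collapse to a few moment coordinates.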

Moreover, if the best response at a certain point to player $i$ lies on $\mu_{i1} = \mu_{i2}$, it will lie on the whole line, so that any mixture of 0 and 1 is a best response. The condition $\mu_{i1}^2 = \mu_{i2}$ simply gives the pure strategy $s_i = \mu_{i1}$, since it forces the variance to zero.

A Nash equilibrium exists when both players' moment pairs are simultaneously best responses on these boundaries, which occurs at $(\mu_{X1}, \mu_{X2}, \mu_{Y1}, \mu_{Y2}) = (1/2, 1/2, 1/2, 1/4)$. This determines one unique equilibrium where Player X plays a random mixture of 0 for 1/2 of the time and 1 the other 1/2 of the time.

Player Y plays the pure strategy of 1/2.
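At this equilibrium the value of the game is 1/4, since $\mu_{X2} - 2\mu_{X1}\mu_{Y1} + \mu_{Y2} = 1/2 - 1/2 + 1/4$. A quick numerical check of the equilibrium (a sketch of my own, not part of the source):

```python
# Verify the claimed equilibrium of H(x, y) = (x - y)^2 numerically:
# X plays 0 and 1 with probability 1/2 each; Y plays the pure strategy 1/2.

def H(x, y):
    return (x - y) ** 2

X = [(0.0, 0.5), (1.0, 0.5)]   # X's mixed strategy
y_star = 0.5                   # Y's pure strategy

value = sum(p * H(x, y_star) for x, p in X)
print(value)  # 0.25 -- the value of the game

grid = [i / 1000 for i in range(1001)]

# Y cannot gain by deviating: Y pays H, and every pure y costs at
# least the value against X's mixture (minimized at y = 1/2).
assert all(sum(p * H(x, y) for x, p in X) >= value - 1e-12 for y in grid)

# X cannot gain by deviating: against y = 1/2 every pure x earns at
# most the value, with equality exactly at x = 0 and x = 1.
assert all(H(x, y_star) <= value + 1e-12 for x in grid)
```

Note that X is indifferent only between 0 and 1, exactly the support of its equilibrium mixture, as the best-response analysis above requires.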

As an example of a non-separable game, consider a zero-sum 2-player game between players X and Y, with $C_X = C_Y = [0, 1]$. Denote elements of $C_X$ and $C_Y$ as $x$ and $y$ respectively, and define the utility functions $H(x, y) = u_x(x, y) = -u_y(x, y)$ where

$H(x, y) = \dfrac{(1 + x)(1 + y)(1 - xy)}{(1 + xy)^2}.$

This game has no pure strategy Nash equilibrium. It can be shown[3] that a unique mixed strategy Nash equilibrium exists with the following pair of cumulative distribution functions:

$F^*(x) = G^*(x) = \frac{4}{\pi} \arctan\sqrt{x}.$

Or, equivalently, the following pair of probability density functions:

$f^*(x) = g^*(x) = \frac{2}{\pi\sqrt{x}\,(1 + x)}.$

The value of the game is $4/\pi$.
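As a numerical check (my own sketch, not part of the source), one can verify that the density $f^*(x) = 2/(\pi\sqrt{x}\,(1+x))$ is an equalizer: against it, every pure strategy $y$ yields the same expected payoff $4/\pi$. The substitution $x = t^2$ removes the $1/\sqrt{x}$ singularity at the origin:

```python
import math

def H(x, y):
    return (1 + x) * (1 + y) * (1 - x * y) / (1 + x * y) ** 2

def payoff_vs_fstar(y, n=20000):
    # E[H(X, y)] with X ~ f*(x) = 2 / (pi * sqrt(x) * (1 + x)).
    # Substituting x = t^2 turns the integral into
    #   (4/pi) * integral_0^1 H(t^2, y) / (1 + t^2) dt,
    # whose integrand is smooth; evaluate it with the midpoint rule.
    total = 0.0
    for k in range(n):
        t = (k + 0.5) / n
        total += H(t * t, y) / (1 + t * t)
    return (4 / math.pi) * total / n

for y in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(y, payoff_vs_fstar(y))  # each value is approximately 4/pi
```

The constant expected payoff across all $y$ is what certifies that $f^*$ is optimal and that the value of the game is $4/\pi \approx 1.2732$.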

Finally, consider a zero-sum 2-player game between players X and Y, with $C_X = C_Y = [0, 1]$ and payoff function $H(x, y) = u_x(x, y) = -u_y(x, y)$. For a suitable choice of analytic $H$, this game has a unique mixed strategy equilibrium where each player plays a mixed strategy with the Cantor singular function as the cumulative distribution function; that is, each player's equilibrium strategy is the Cantor distribution.
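The Cantor function named here as the equilibrium CDF can be evaluated to any precision from its self-similar definition. A short illustrative sketch of my own, not from the source:

```python
def cantor_cdf(x, depth=40):
    # Cantor (singular) function c on [0, 1], via self-similarity:
    #   c(x) = c(3x) / 2           for x in [0, 1/3]
    #   c(x) = 1/2                 for x in [1/3, 2/3]
    #   c(x) = 1/2 + c(3x - 2)/2   for x in [2/3, 1]
    # The recursion halves the remaining uncertainty each level, so
    # truncating at `depth` levels gives error at most 2**(-depth).
    if x <= 0:
        return 0.0
    if x >= 1:
        return 1.0
    if depth == 0:
        return 0.5
    if x < 1 / 3:
        return cantor_cdf(3 * x, depth - 1) / 2
    if x <= 2 / 3:
        return 0.5
    return 0.5 + cantor_cdf(3 * x - 2, depth - 1) / 2

# c is constant on each gap of the Cantor set, e.g. c = 1/2 on [1/3, 2/3].
print(cantor_cdf(1 / 3), cantor_cdf(1 / 9), cantor_cdf(0.7))
```

The resulting distribution is continuous but singular: it has no density, which is why this equilibrium cannot be described by a probability density function as in the previous example.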