In the mathematical field of graph theory, the Erdős–Rényi model refers to one of two closely related models for generating random graphs or the evolution of a random network. The models are named after the Hungarian mathematicians Paul Erdős and Alfréd Rényi, who introduced one of them in 1959.[1][2] Edgar Gilbert introduced the other model contemporaneously with and independently of Erdős and Rényi.[3] In the model of Erdős and Rényi, all graphs on a fixed vertex set with a fixed number of edges are equally likely. In the model introduced by Gilbert, also called the Erdős–Rényi–Gilbert model,[4] each edge has a fixed probability of being present or absent, independently of the other edges.
These models can be used in the probabilistic method to prove the existence of graphs satisfying various properties, or to provide a rigorous definition of what it means for a property to hold for almost all graphs.
There are two closely related variants of the Erdős–Rényi random graph model. In the G(n, M) model, a graph is chosen uniformly at random from the collection of all graphs which have n labelled nodes and M edges. In the G(n, p) model, a graph on n labelled nodes is constructed by including each of the n(n − 1)/2 possible edges independently with probability p. The behavior of random graphs is often studied in the case where n, the number of vertices, tends to infinity. The expected number of edges in G(n, p) is n(n − 1)p/2, and by the law of large numbers any graph in G(n, p) will almost surely have approximately this many edges (provided the expected number of edges tends to infinity).
In practice, the G(n, p) model is the one more commonly used today, in part due to the ease of analysis allowed by the independence of the edges.
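Both variants can be sampled directly. A minimal sketch in Python (the function names gnp and gnm are illustrative, not standard):

```python
import random

def gnp(n, p, seed=None):
    """Sample G(n, p): include each of the n(n-1)/2 possible edges
    independently with probability p."""
    rng = random.Random(seed)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p]

def gnm(n, m, seed=None):
    """Sample G(n, M): choose M edges uniformly at random from the set
    of all possible edges, so every graph on n labelled nodes with
    exactly M edges is equally likely."""
    rng = random.Random(seed)
    all_edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return rng.sample(all_edges, m)
```

Note the difference: gnm always returns exactly M edges, while gnp returns a random number of edges concentrated around n(n − 1)p/2.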
The distribution of the degree of any particular vertex v is binomial: P(deg(v) = k) = C(n − 1, k) p^k (1 − p)^(n − 1 − k), where n is the total number of vertices in the graph.[5] Since P(deg(v) = k) → (np)^k e^(−np) / k! as n → ∞ with np held constant, this distribution is Poisson for large n and np = const.
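The binomial degree law is easy to check empirically. In the sketch below (the helper name degree_of_vertex is ours), the degree of one fixed vertex is simulated as a sum of n − 1 independent Bernoulli(p) trials; the sample mean should be near (n − 1)p and, in this near-Poisson regime, the sample variance near the binomial value (n − 1)p(1 − p), which is itself close to the Poisson value np:

```python
import random

def degree_of_vertex(n, p, rng):
    # Degree of one fixed vertex in G(n, p): each of the other
    # n - 1 vertices is a neighbor independently with probability p.
    return sum(rng.random() < p for _ in range(n - 1))

rng = random.Random(0)
n, p, trials = 200, 0.05, 2000
degrees = [degree_of_vertex(n, p, rng) for _ in range(trials)]
mean = sum(degrees) / trials
var = sum((d - mean) ** 2 for d in degrees) / trials
# mean should be close to (n - 1) p = 9.95, and var close to
# (n - 1) p (1 - p) = 9.45, nearly the Poisson value np = 10.
```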
In a 1960 paper, Erdős and Rényi[6] described the behavior of G(n, p) very precisely for various values of p. Their results included that:
- If np < 1, then a graph in G(n, p) will almost surely have no connected components of size larger than O(log(n)).
- If np = 1, then a graph in G(n, p) will almost surely have a largest component whose size is of order n^(2/3).
- If np → c > 1, where c is a constant, then a graph in G(n, p) will almost surely have a unique giant component containing a positive fraction of the vertices; no other component will contain more than O(log(n)) vertices.
- If p < (1 − ε) ln(n)/n, then a graph in G(n, p) will almost surely contain isolated vertices, and thus be disconnected.
- If p > (1 + ε) ln(n)/n, then a graph in G(n, p) will almost surely be connected.
Thus ln(n)/n is a sharp threshold for the connectedness of G(n, p).
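These thresholds are easy to observe numerically. A sketch using a simple union-find over the sampled edges (all names here are illustrative): with np = 0.5 the largest component of a 2000-vertex graph stays small, with np = 2 a giant component containing a positive fraction of the vertices appears, and p = 3 ln(n)/n, comfortably above the connectivity threshold, almost surely yields a connected graph:

```python
import math
import random

def largest_component(n, p, rng):
    """Size of the largest connected component of one G(n, p) sample,
    computed with union-find over the randomly included edges."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                parent[find(i)] = find(j)   # union the two components
    counts = {}
    for v in range(n):
        r = find(v)
        counts[r] = counts.get(r, 0) + 1
    return max(counts.values())

rng = random.Random(42)
n = 2000
small = largest_component(n, 0.5 / n, rng)             # np = 0.5: subcritical
giant = largest_component(n, 2.0 / n, rng)             # np = 2: supercritical
conn = largest_component(n, 3 * math.log(n) / n, rng)  # above ln(n)/n
```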
For example, there is a k(n) (approximately equal to 2log2(n)) such that the largest clique in G(n, 0.5) almost surely has size either k(n) or k(n) + 1.[7] Thus, even though finding the size of the largest clique in a graph is NP-complete, the size of the largest clique in a "typical" graph (according to this model) is very well understood.[8] In percolation theory one examines a finite or infinite graph and removes edges (or links) randomly.
Thus the Erdős–Rényi process is in fact unweighted link percolation on the complete graph.
As percolation theory has much of its roots in physics, much of the research done was on lattices in Euclidean space.
The transition at np = 1 from many small components to a single giant component has analogs for these graphs, but for lattices the transition point is difficult to determine. Physicists often refer to the study of the complete graph as a mean-field theory.
Some significant work was also done on percolation on random graphs.
From a physicist's point of view this would still be a mean-field model, so the justification of the research is often formulated in terms of the robustness of the graph, viewed as a communication network.
Given a random graph of n ≫ 1 nodes with an average degree ⟨k⟩, remove a randomly chosen fraction 1 − p′ of the nodes, leaving only a fraction p′ of the network. There exists a critical percolation threshold p′_c = 1/⟨k⟩ below which the network becomes fragmented, while above p′_c a giant connected component of order n exists.
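This node-percolation threshold, p′_c = 1/⟨k⟩, can be observed in a small simulation (a sketch; the helper name is ours). Starting from a random graph with average degree ⟨k⟩ = 4, so that p′_c = 0.25, keeping a fraction p′ = 0.1 of the nodes leaves only fragments, while keeping p′ = 0.6 leaves a giant component:

```python
import random

def largest_after_node_removal(n, k_mean, p_keep, rng):
    """Build one G(n, p) sample with average degree k_mean, keep each
    node independently with probability p_keep, and return the size of
    the largest connected component of the surviving subgraph."""
    p = k_mean / (n - 1)
    kept = [rng.random() < p_keep for _ in range(n)]
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i in range(n):
        for j in range(i + 1, n):
            # an edge only matters if both endpoints survived
            if kept[i] and kept[j] and rng.random() < p:
                parent[find(i)] = find(j)
    counts = {}
    for v in range(n):
        if kept[v]:
            r = find(v)
            counts[r] = counts.get(r, 0) + 1
    return max(counts.values()) if counts else 0

rng = random.Random(7)
n, k = 2000, 4.0
frag = largest_after_node_removal(n, k, 0.10, rng)   # below p'_c = 1/4
giant = largest_after_node_removal(n, k, 0.60, rng)  # above p'_c
```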
The relative size of the giant component, P∞, is given by[6][1][2][9]

P∞ = p′[1 − exp(−⟨k⟩P∞)].

Both of the two major assumptions of the G(n, p) model (that edges are independent and that each edge is equally likely) may be inappropriate for modeling certain real-life phenomena.
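The self-consistency equation P∞ = p′[1 − exp(−⟨k⟩P∞)] has no closed-form solution in general, but it can be solved numerically by fixed-point iteration (a sketch; giant_component_fraction is our name):

```python
import math

def giant_component_fraction(k_mean, p_keep, iters=200):
    """Iterate x -> p' * (1 - exp(-<k> * x)) starting from x = 1.
    The fixed point is P_inf, the fraction of the original n nodes
    lying in the giant component after a fraction p' of the nodes
    is kept."""
    x = 1.0
    for _ in range(iters):
        x = p_keep * (1.0 - math.exp(-k_mean * x))
    return x
```

For ⟨k⟩ = 4, keeping p′ = 0.2 < p′_c = 0.25 drives P∞ to 0, while p′ = 0.5 gives P∞ ≈ 0.40; with no removal (p′ = 1) and ⟨k⟩ = 2 one recovers the giant-component fraction ≈ 0.80 of the supercritical random graph.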
Erdős–Rényi graphs have low clustering, unlike many social networks.
Another alternative family of random graph models, capable of reproducing many real-life phenomena, is the family of exponential random graph models.
The G(n, p) model was first introduced by Edgar Gilbert in a 1959 paper studying the connectivity threshold mentioned above.[3] The G(n, M) model was introduced by Erdős and Rényi in their 1959 paper.
As with Gilbert, their first investigations concerned the connectivity of G(n, M), with the more detailed analysis following in 1960.
The limit object can be constructed by a procedure that yields a sequence of random infinite graphs of decreasing sizes. The theorem[11] states that this sequence corresponds in a certain sense to the limit object of G(n, p) as n → ∞.