Graphon

If we instead start with a graphon that is piecewise constant, obtained by dividing the unit square into $k \times k$ blocks and setting $W$ equal to $p_{lm}$ on the $(l, m)$th block, the resulting exchangeable random graph model is the $k$-community stochastic block model, a generalization of the Erdős–Rényi model.
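
As a concrete illustration, here is a minimal sketch of this sampling scheme in Python; the block matrix `P`, the graph size, and all function names are choices made for this example rather than anything standard:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_graph(W, n):
    """Sample an n-vertex exchangeable random graph from a graphon W."""
    u = rng.uniform(size=n)                    # latent value for each vertex
    probs = W(u[:, None], u[None, :])          # edge probability W(u_i, u_j)
    upper = rng.uniform(size=(n, n)) < probs   # independent coin flips
    A = np.triu(upper, k=1)                    # keep i < j only, no loops
    return (A | A.T).astype(int)               # symmetrize

# A piecewise-constant graphon: a 2-community stochastic block model.
P = np.array([[0.8, 0.1],
              [0.1, 0.6]])

def W_block(x, y):
    k = P.shape[0]
    i = np.minimum((x * k).astype(int), k - 1)  # which block row
    j = np.minimum((y * k).astype(int), k - 1)  # which block column
    return P[i, j]

print(sample_graph(W_block, n=10))
```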

In order to impose consistency (in the sense of projectivity) between random graphs of different sizes, it is natural to study the sequence of adjacency matrices arising as the upper-left $n \times n$ sub-matrices of some infinite array of random variables; this allows us to generate $G_n$ by adding a node to $G_{n-1}$ and sampling the edges $(j, n)$ for $j < n$.
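
The following toy sketch (written for this text, with an arbitrary example graphon) grows such a graph one node at a time; adding a vertex never changes the distribution of the existing graph, which is exactly the projectivity property:

```python
import numpy as np

rng = np.random.default_rng(1)
W = lambda x, y: x * y     # an example graphon

us, edges = [], set()

def add_vertex():
    """Extend G_{n-1} to G_n: sample u_n, then the edges (j, n) for j < n."""
    us.append(rng.uniform())
    n = len(us) - 1                    # index of the new vertex
    for j in range(n):
        if rng.uniform() < W(us[j], us[n]):
            edges.add((j, n))

for _ in range(5):
    add_vertex()
print(sorted(edges))
```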

Following the fundamental importance of exchangeable sequences in classical probability, it is natural to look for an analogous notion in the random graph setting.

This is a special case of the Aldous–Hoover theorem for jointly exchangeable arrays and, in this setting, asserts that the random matrix $(X_{ij})$ is generated by sampling a (possibly random) graphon $W$, assigning independent uniform random variables $u_1, u_2, \ldots \sim U[0,1]$ to the vertices, and then including each edge $(i, j)$ independently with probability $W(u_i, u_j)$.

Due to identifiability issues, it is impossible to estimate either the graphon function $W$ or the node latent positions $u_j$.

It turns out that for sequences of dense graphs, several apparently distinct notions of convergence are equivalent and under all of them the natural limit object is a graphon.

In the space of graphons, it turns out that such a sequence converges almost surely to the constant graphon $W(x, y) = p$, where $p$ is the edge probability of the Erdős–Rényi model generating the sequence.

For example, if we order the vertices of the complete bipartite graph $K_{n,n}$ by placing all vertices of one part before those of the other, the adjacency matrix has two corners of "half square" block matrices filled with ones, with the rest of the entries equal to zero.

As $n$ gets larger, this block structure of the adjacency matrix remains constant, so that this sequence of graphs converges to a "complete bipartite" graphon $W$ defined by $W(x, y) = 1$ if $\min(x, y) \le 1/2$ and $\max(x, y) > 1/2$, and $W(x, y) = 0$ otherwise.

If we instead order the vertices of $K_{n,n}$ by alternating between parts, the adjacency matrix has a chessboard structure of zeros and ones, giving a picture that looks completely different, even though the underlying graphs are the same up to relabeling.
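
A short script (again written for illustration; the helper name is arbitrary) makes the two orderings concrete:

```python
import numpy as np

def bipartite_adjacency(n, order):
    """Adjacency matrix of K_{n,n} with vertices relabeled by `order`.

    Vertices 0..n-1 form one part and n..2n-1 the other.
    """
    A = np.zeros((2 * n, 2 * n), dtype=int)
    A[:n, n:] = 1
    A[n:, :n] = 1
    return A[np.ix_(order, order)]

n = 3
blocks = list(range(2 * n))   # one part first, then the other
alternating = [v for pair in zip(range(n), range(n, 2 * n)) for v in pair]

print(bipartite_adjacency(n, blocks))       # two "half square" blocks of ones
print(bipartite_adjacency(n, alternating))  # chessboard pattern
```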

This means that when we formally define convergence for a sequence of graphs, the definition of a limit should be agnostic to relabelings of the vertices.

For example, the edge density (i.e. average degree divided by number of vertices) of a graph drawn from $G(n, p)$ concentrates around $p$ for large $n$.

Hence, if we draw two graphs independently from the Erdős–Rényi model $G(n, p)$ for some fixed $p$, the distance between these two graphs under a "reasonable" metric should be close to zero with high probability for large $n$.
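
As a quick sanity check (an illustration with arbitrary parameters, not taken from the article), two independent samples from $G(n, p)$ indeed have nearly identical edge densities:

```python
import numpy as np

rng = np.random.default_rng(2)

def edge_density(n, p):
    """Edge density of one sample from the Erdős–Rényi model G(n, p)."""
    upper = np.triu(rng.uniform(size=(n, n)) < p, k=1)
    return upper.sum() / (n * (n - 1) / 2)

n, p = 2000, 0.5
print(edge_density(n, p), edge_density(n, p))  # both close to p = 0.5
```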

There are two natural metrics that behave well on dense random graphs in the sense that we want.

Rather than dealing directly with subgraphs, however, it turns out to be easier to work with graph homomorphisms.

Graphons offer a simple way to compute homomorphism densities: given a graph $F$ and a graphon $W$, the homomorphism density is $t(F, W) = \int_{[0,1]^{V(F)}} \prod_{(i,j) \in E(F)} W(x_i, x_j) \prod_{i \in V(F)} dx_i$.
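
The integral above is easy to approximate by Monte Carlo; in this sketch (added for illustration) the triangle and the graphon are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(3)

def hom_density(edges, num_vertices, W, samples=200_000):
    """Monte Carlo estimate of t(F, W): average of prod W(x_i, x_j) over edges."""
    x = rng.uniform(size=(samples, num_vertices))  # one coordinate per vertex of F
    prod = np.ones(samples)
    for i, j in edges:
        prod *= W(x[:, i], x[:, j])
    return prod.mean()

W = lambda x, y: (x + y) / 2          # an example graphon
triangle = [(0, 1), (1, 2), (0, 2)]
print(hom_density(triangle, 3, W))    # estimated triangle density of W
```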

Because these graphs share the same vertices, one way to measure their distance is to restrict to subsets $X, Y$ of the vertex set, and for each such pair of subsets compare the number of edges $e_G(X, Y)$ from $X$ to $Y$ in $G$ with the corresponding count $e_{G'}(X, Y)$ in $G'$. This leads to the labeled cut distance $d_\square(G, G') = \frac{1}{|V(G)|^2} \max_{X, Y} \left| e_G(X, Y) - e_{G'}(X, Y) \right|$.

In other words, the labeled cut distance encodes the maximum discrepancy of the edge densities between $G$ and $G'$.
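
For very small graphs this maximum can be computed by brute force; the following sketch (illustrative only, and exponential in the number of vertices) follows the definition directly:

```python
import numpy as np
from itertools import chain, combinations

def subsets(n):
    """All subsets of range(n), as tuples."""
    return chain.from_iterable(combinations(range(n), r) for r in range(n + 1))

def labeled_cut_distance(A, B):
    """max over vertex subsets X, Y of |e_A(X,Y) - e_B(X,Y)| / n^2."""
    n = len(A)
    best = 0.0
    for X in subsets(n):
        for Y in subsets(n):
            if X and Y:
                diff = abs(A[np.ix_(X, Y)].sum() - B[np.ix_(X, Y)].sum())
                best = max(best, diff / n**2)
    return best

path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])      # path 0-1-2
triangle = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])  # triangle
print(labeled_cut_distance(path, triangle))
```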

Note that this definition can still be used even when the graphs being compared do not share a vertex set.

This motivates the cut norm $\|W\|_\square = \sup_{S, T \subseteq [0,1]} \left| \int_{S \times T} W(x, y) \, dx \, dy \right|$, where the supremum is taken over measurable subsets $S$ and $T$ of the unit interval.[6] This captures our earlier notion of labeled cut distance, as we have the equality $d_\square(G, G') = \|W_G - W_{G'}\|_\square$ for graphs $G$ and $G'$ on a common vertex set.

To make sure isomorphic graphs have distance zero, we should compute the minimum cut norm over all possible "relabellings" of the vertices; this yields the cut distance $\delta_\square(W, U) = \inf_{\varphi} \|W - U^\varphi\|_\square$, where the infimum ranges over all measure-preserving bijections $\varphi \colon [0,1] \to [0,1]$ and $U^\varphi(x, y) = U(\varphi(x), \varphi(y))$.
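
Continuing the toy code above (and reusing its `labeled_cut_distance` helper), the minimization over relabelings can likewise be brute-forced for very small graphs:

```python
import numpy as np
from itertools import permutations

def cut_distance(A, B):
    """Minimum labeled cut distance over all vertex relabelings of B."""
    n = len(A)
    return min(labeled_cut_distance(A, B[np.ix_(p, p)])  # from the sketch above
               for p in permutations(range(n)))
```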

Although not a direct consequence of the definition, if such a sequence of graphs is Cauchy, then it always converges to some graphon $W$.

In fact, both notions of convergence are related more strongly through what are called counting lemmas.
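
For instance, one standard counting lemma states that for every graph $F$ and all graphons $W$ and $U$,

$$|t(F, W) - t(F, U)| \le e(F) \, \delta_\square(W, U),$$

where $e(F)$ denotes the number of edges of $F$; this direction shows that convergence under the cut distance implies left-convergence.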

A converse result, known as the inverse counting lemma, shows that left-convergence implies convergence under the cut distance.

The space of graphons (identifying graphons at cut distance zero) is compact under the cut distance. Moreover, it contains the set of all finite graphs, represented by their associated graphons, as a dense subset.

The proof of the strong regularity lemma is similar in concept to Corollary 1 above.

The analytic nature of graphons allows greater flexibility in attacking inequalities related to homomorphisms.

For example, Sidorenko's conjecture asserts that for any bipartite graph $H$, the random graph achieves (in expectation) the minimum number of copies of $H$ among all graphs of the same edge density; in graphon form, this is the inequality $t(H, W) \ge t(K_2, W)^{e(H)}$ for all graphons $W$.

There are extensions of this model to dense directed weighted graphs, often referred to as decorated graphons.

A realization of an exchangeable random graph defined by a graphon $W$. The graphon is shown as a magenta heatmap (lower right). A random graph of size $n$ is generated by independently assigning to each vertex $j$ a latent random variable $u_j \sim U[0,1]$ (values along the vertical axis) and including each edge $(i, j)$ independently with probability $W(u_i, u_j)$. For example, the highlighted edge (green, dotted) is present with probability given by $W$ evaluated at the latent values of its two endpoints; the green boxes in the right square mark those two values. The upper left panel shows the graph realization as an adjacency matrix.