Laplacian matrix

Named after Pierre-Simon Laplace, the graph Laplacian matrix can be viewed as a matrix form of the negative discrete Laplace operator on a graph approximating the negative continuous Laplacian obtained by the finite difference method.

Kirchhoff's theorem can be used to calculate the number of spanning trees for a given graph.
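As a minimal sketch of how this is applied (the 4-cycle graph is an assumed toy example), one deletes any one row and the corresponding column of the Laplacian and takes the determinant of the remaining matrix:

```python
import numpy as np

# Adjacency matrix of a 4-cycle (an assumed toy example).
A = np.array([
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
])
D = np.diag(A.sum(axis=1))  # degree matrix
L = D - A                   # Laplacian matrix

# Kirchhoff's theorem: any cofactor of L equals the number of spanning
# trees; delete the first row and column and take the determinant.
t = round(np.linalg.det(L[1:, 1:]))
print(t)  # a 4-cycle has 4 spanning trees (remove any one of its edges)
```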

The spectral decomposition of the Laplacian matrix allows the construction of low-dimensional embeddings that appear in many machine learning applications and determines a spectral layout in graph drawing.
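A minimal illustration of such an embedding, assuming a small path graph as toy input: the eigenvector for the second-smallest eigenvalue (the Fiedler vector) already gives a one-dimensional spectral embedding of the vertices.

```python
import numpy as np

# Path graph on 5 vertices (an assumed toy example).
n = 5
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A  # graph Laplacian

# eigh returns eigenvalues in ascending order; the eigenvector for the
# second-smallest eigenvalue (the Fiedler vector) embeds the path in 1-D,
# placing the vertices in their natural order along the line.
w, v = np.linalg.eigh(L)
embedding = v[:, 1]
print(np.round(embedding, 3))
```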

Graph-based signal processing is based on the graph Fourier transform, which extends the traditional discrete Fourier transform by replacing the standard basis of complex sinusoids with the eigenvectors of the Laplacian matrix of the graph corresponding to the signal.

Imbalanced weights may undesirably affect the matrix spectrum, leading to the need for normalization, a column or row scaling of the matrix entries, which results in normalized adjacency and Laplacian matrices.

For a directed graph, either the column sums or the row sums of its Laplacian matrix are zero, depending on whether the indegree or the outdegree has been used.
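This can be checked numerically; the following sketch (the small directed graph is an assumed example) builds both the out-degree and the in-degree Laplacian:

```python
import numpy as np

# Small directed graph (an assumed example): edges 0->1, 0->2, 1->2, 2->0.
A = np.array([
    [0, 1, 1],
    [0, 0, 1],
    [1, 0, 0],
])
D_out = np.diag(A.sum(axis=1))  # out-degrees from row sums
D_in = np.diag(A.sum(axis=0))   # in-degrees from column sums

L_out = D_out - A  # out-degree Laplacian: every row sums to 0
L_in = D_in - A    # in-degree Laplacian: every column sums to 0
print(L_out.sum(axis=1), L_in.sum(axis=0))
```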

Given an arbitrary orientation of its edges, a graph determines an oriented incidence matrix B with element B_ve for the vertex v and the edge e (connecting vertices v_i and v_j, with i ≠ j): B_ve = 1 if v = v_i, B_ve = −1 if v = v_j, and B_ve = 0 otherwise. The Laplacian matrix then satisfies L = B B^T, regardless of the chosen orientation.
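A small check of the identity L = B B^T, assuming a triangle graph with an arbitrarily chosen edge orientation:

```python
import numpy as np

# Triangle graph with an arbitrary edge orientation (an assumed example):
# edges e0: 0->1, e1: 1->2, e2: 2->0.
B = np.array([
    [ 1,  0, -1],
    [-1,  1,  0],
    [ 0, -1,  1],
])  # B[v, e] = +1 at the tail of e, -1 at the head

L = B @ B.T  # Laplacian from the oriented incidence matrix

# The same Laplacian from degrees and adjacency:
A = np.ones((3, 3)) - np.eye(3)  # triangle adjacency matrix
assert np.allclose(L, np.diag(A.sum(axis=1)) - A)
print(L)
```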

For simple graphs, the entries of the adjacency matrix are treated as logical, rather than numerical, values. A vertex with a large degree, also called a heavy node, results in a large diagonal entry in the Laplacian matrix that dominates the matrix properties.

The matrix D^{-1}A is right stochastic, and hence is the matrix of a random walk, so that the left normalized Laplacian D^{-1}L = I − D^{-1}A has every row summing to zero.

For a non-symmetric adjacency matrix of a directed graph, one also needs to choose the indegree or the outdegree for normalization: the left out-degree normalized Laplacian with row sums all 0 relates to the right-stochastic matrix D_out^{-1}A, while the right in-degree normalized Laplacian with column sums all 0 contains the left-stochastic matrix A D_in^{-1}.
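The following sketch (the small directed graph is an assumed example) verifies that D_out^{-1}A is right stochastic and that the corresponding left-normalized Laplacian has zero row sums:

```python
import numpy as np

# Directed graph in which every vertex has a positive out-degree
# (an assumed example), so D_out is invertible.
A = np.array([
    [0, 1, 1],
    [1, 0, 0],
    [0, 1, 0],
], dtype=float)
P = np.diag(1.0 / A.sum(axis=1)) @ A  # D_out^{-1} A, right stochastic
L_left = np.eye(3) - P                # left out-degree normalized Laplacian

print(P.sum(axis=1))       # each row of P sums to 1
print(L_left.sum(axis=1))  # each row of the Laplacian sums to 0
```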

Graphs with weighted edges, common in applications, are conveniently defined by their adjacency matrices, where the values of the entries are numeric and no longer limited to zeros and ones.

In spectral clustering and graph-based signal processing, where graph vertices represent data points, the edge weights can be computed, e.g., as inversely proportional to the distances between pairs of data points, leading to all weights being non-negative with larger values informally corresponding to more similar pairs of data points.
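One common way to sketch this (the Gaussian kernel and the toy data points are assumptions, not prescribed by the text) is to pass the pairwise distances through a decaying kernel, so that nearby points get large non-negative weights:

```python
import numpy as np

# Toy data points (assumed); two are close together, one is far away.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])

# Gaussian kernel of pairwise distances (an assumed choice): closer
# points receive larger, always non-negative, similarity weights.
d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
W = np.exp(-d ** 2)
np.fill_diagonal(W, 0.0)  # no self-loops

L = np.diag(W.sum(axis=1)) - W  # weighted graph Laplacian
print(np.round(W, 3))
```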

Using correlation and anti-correlation between the data points naturally leads to both positive and negative weights.

An alternative cleaner approach, described here, is to separate the weights from the connectivity: continue using the incidence matrix as for regular graphs and introduce a matrix just holding the values of the weights.

A spring system is an example of this model used in mechanics to describe a system of springs of given stiffnesses and unit length, where the values of the stiffnesses play the role of the weights of the graph edges.
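A sketch of this separated construction, L = B W B^T, for a chain of three masses joined by two springs (the stiffness values are assumed for illustration):

```python
import numpy as np

# Three masses in a chain joined by two springs of assumed stiffnesses
# 2.0 and 0.5; the stiffnesses play the role of the edge weights.
B = np.array([
    [ 1,  0],
    [-1,  1],
    [ 0, -1],
])                       # oriented incidence matrix, one column per spring
W = np.diag([2.0, 0.5])  # diagonal matrix holding only the weights

L = B @ W @ B.T          # weighted Laplacian: connectivity times weights
print(L)
```

Keeping the connectivity in B and the weights in W means the same incidence matrix serves both the unweighted and the weighted graph.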

The entries of the adjacency matrix of a weighted graph are treated as numerical, rather than logical, values, explaining the difference in the results: for simple graphs, the symmetrized graph still needs to be simple, with its symmetrized adjacency matrix having only logical, not numerical, values, e.g., the logical sum is 1 ∨ 1 = 1, while the numeric sum is 1 + 1 = 2.

Graph self-loops, i.e., non-zero entries on the main diagonal of the adjacency matrix, do not affect the values of the graph Laplacian, but they may need to be counted when calculating the normalization factors.

The random walk normalized Laplacian is defined as L^rw := D^{-1}L = I − D^{-1}A, where D is the degree matrix.

The matrix D^{-1}A is simply the transition matrix of a random walker on the graph, assuming non-negative weights.

For example, if e_i denotes the i-th standard basis vector, then e_i^T (D^{-1}A) is a probability vector representing the distribution of a random walker's locations after taking a single step from vertex i.

More generally, if the vector x is a probability distribution of the location of a random walker on the vertices of the graph, then x^T (D^{-1}A)^t is the probability distribution of the walker after t steps.
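The step-by-step evolution of such a distribution can be sketched as follows (the triangle graph and the number of steps are assumed for illustration):

```python
import numpy as np

# Undirected triangle graph (an assumed example).
A = np.array([
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
], dtype=float)
D = np.diag(A.sum(axis=1))
P = np.linalg.inv(D) @ A  # transition matrix D^{-1} A of the random walk
L_rw = np.eye(3) - P      # random walk normalized Laplacian

# Start at vertex 0 and take 50 steps; x is a row probability vector.
x = np.array([1.0, 0.0, 0.0])
for _ in range(50):
    x = x @ P
print(np.round(x, 4))  # approaches the stationary distribution
```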


Negative weights present several challenges for normalization; for example, they may naturally result in zero row- and/or column-sums for vertices that are incident to edges, making the corresponding normalized Laplacian undefined. By contrast, for an (undirected) graph G with non-negative edge weights, the Laplacian matrix L is symmetric positive semi-definite, its smallest eigenvalue is 0, and the multiplicity of the eigenvalue 0 equals the number of connected components of G.

The graph Laplacian matrix can be further viewed as a matrix form of the negative discrete Laplace operator on a graph approximating the negative continuous Laplacian operator obtained by the finite difference method.

Such an interpretation allows one, for example, to generalize the Laplacian matrix to graphs with an infinite number of vertices and edges, leading to a Laplacian matrix of infinite size.

In an alternating current (AC) electrical network, real-valued resistances are replaced by complex-valued impedances.

The weight of edge (i, j) is, by convention, minus the reciprocal of the impedance directly between i and j.

This is one of the rare applications that give rise to complex symmetric matrices.

The magnetic Laplacian is constructed as the Hadamard product of the real symmetric matrix of the symmetrized Laplacian and the Hermitian phase matrix, whose complex entries encode the edge direction into a phase in the complex plane.
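A sketch of this construction (the directed 3-cycle and the phase parameter value are assumptions for illustration): the entrywise product of the symmetrized Laplacian with a Hermitian phase matrix yields a Hermitian magnetic Laplacian.

```python
import numpy as np

# Directed 3-cycle 0->1->2->0 (an assumed example) and an assumed
# phase parameter q controlling the strength of the encoded phases.
q = 0.25
A = np.array([
    [0, 1, 0],
    [0, 0, 1],
    [1, 0, 0],
], dtype=float)
A_sym = A + A.T  # symmetrized adjacency
L_sym = np.diag(A_sym.sum(axis=1)) - A_sym  # symmetrized Laplacian

# Hermitian phase matrix: exp(+/- 2*pi*i*q) on each edge encodes its
# direction into a phase; 1 on the diagonal, 0 on non-edges.
Gamma = np.exp(2j * np.pi * q * (A - A.T)) * (A_sym > 0) + np.eye(3)

L_magnetic = L_sym * Gamma  # Hadamard (entrywise) product
assert np.allclose(L_magnetic, L_magnetic.conj().T)  # Hermitian
print(np.round(L_magnetic, 3))
```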

In the context of quantum physics, the magnetic Laplacian can be interpreted as the operator that describes the phenomenology of a free charged particle on a graph that is subject to the action of a magnetic field, with the parameter of the phase matrix playing the role of the electric charge.

(Figure: a 2-dimensional spring system.)