Laplacian matrix
In the mathematical field of graph theory, the Laplacian matrix, also called the graph Laplacian, admittance matrix, Kirchhoff matrix or discrete Laplacian, is a matrix representation of a graph. The Laplacian matrix can be used to find many useful properties of a graph. Together with Kirchhoff's theorem, it can be used to calculate the number of spanning trees for a given graph. The sparsest cut of a graph can be approximated through the second smallest eigenvalue of its Laplacian by Cheeger's inequality. It can also be used to construct low dimensional embeddings, which can be useful for a variety of machine learning applications.
Definition
Laplacian matrix for simple graphs
Given a simple graph $G$ with $n$ vertices, its Laplacian matrix $L_{n\times n}$ is defined as:

$L = D - A,$
where D is the degree matrix and A is the adjacency matrix of the graph. Since $G$ is a simple graph, $A$ only contains 0s and 1s and its diagonal elements are all 0s.
In the case of directed graphs, either the indegree or outdegree might be used, depending on the application.
The elements of $L$ are given by

$L_{i,j} = \begin{cases} \deg(v_i) & \text{if } i = j \\ -1 & \text{if } i \neq j \text{ and } v_i \text{ is adjacent to } v_j \\ 0 & \text{otherwise,} \end{cases}$

where $\deg(v_i)$ is the degree of the vertex $v_i$.
Symmetric normalized Laplacian
The symmetric normalized Laplacian matrix is defined as:

$L^{\text{sym}} := D^{-1/2} L D^{-1/2} = I - D^{-1/2} A D^{-1/2}.$

The elements of $L^{\text{sym}}$ are given by

$L^{\text{sym}}_{i,j} = \begin{cases} 1 & \text{if } i = j \text{ and } \deg(v_i) \neq 0 \\ -\frac{1}{\sqrt{\deg(v_i)\deg(v_j)}} & \text{if } i \neq j \text{ and } v_i \text{ is adjacent to } v_j \\ 0 & \text{otherwise.} \end{cases}$
Random walk normalized Laplacian
The random-walk normalized Laplacian matrix is defined as:

$L^{\text{rw}} := D^{-1} L = I - D^{-1} A.$

The elements of $L^{\text{rw}}$ are given by

$L^{\text{rw}}_{i,j} = \begin{cases} 1 & \text{if } i = j \text{ and } \deg(v_i) \neq 0 \\ -\frac{1}{\deg(v_i)} & \text{if } i \neq j \text{ and } v_i \text{ is adjacent to } v_j \\ 0 & \text{otherwise.} \end{cases}$
Generalized Laplacian
The generalized Laplacian $Q$ is defined as:

$\begin{cases} Q_{i,j} < 0 & \text{if } i \neq j \text{ and } v_i \text{ is adjacent to } v_j \\ Q_{i,j} = 0 & \text{if } i \neq j \text{ and } v_i \text{ is not adjacent to } v_j \\ \text{any number} & \text{otherwise.} \end{cases}$

Notice the ordinary Laplacian is a generalized Laplacian.
Example
Here is a simple example of a labelled, undirected graph and its Laplacian matrix. Consider the graph on six labelled vertices with edge set {(1,2), (1,5), (2,3), (2,5), (3,4), (4,5), (4,6)}.

Degree matrix:

$D = \begin{pmatrix} 2&0&0&0&0&0 \\ 0&3&0&0&0&0 \\ 0&0&2&0&0&0 \\ 0&0&0&3&0&0 \\ 0&0&0&0&3&0 \\ 0&0&0&0&0&1 \end{pmatrix}$

Adjacency matrix:

$A = \begin{pmatrix} 0&1&0&0&1&0 \\ 1&0&1&0&1&0 \\ 0&1&0&1&0&0 \\ 0&0&1&0&1&1 \\ 1&1&0&1&0&0 \\ 0&0&0&1&0&0 \end{pmatrix}$

Laplacian matrix:

$L = \begin{pmatrix} 2&-1&0&0&-1&0 \\ -1&3&-1&0&-1&0 \\ 0&-1&2&-1&0&0 \\ 0&0&-1&3&-1&-1 \\ -1&-1&0&-1&3&0 \\ 0&0&0&-1&0&1 \end{pmatrix}$
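These matrices can also be assembled programmatically. The following minimal MATLAB sketch (in the same style as the code example later in this article) builds D, A and L from the edge list of the graph above.

% Build the degree, adjacency and Laplacian matrices of the example graph
edges = [1 2; 1 5; 2 3; 2 5; 3 4; 4 5; 4 6]; % edge list of the labelled graph
n = 6;
A = zeros(n, n);
for k = 1:size(edges, 1)
    A(edges(k, 1), edges(k, 2)) = 1; % undirected: fill both directions
    A(edges(k, 2), edges(k, 1)) = 1;
end
D = diag(sum(A, 2)); % degree matrix
L = D - A            % Laplacian matrix, matches the table above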
Properties
For an (undirected) graph G and its Laplacian matrix L with eigenvalues $\lambda_0 \le \lambda_1 \le \cdots \le \lambda_{n-1}$:
- L is symmetric.
- L is positive-semidefinite (that is, $\lambda_i \ge 0$ for all $i$). This is verified in the incidence matrix section below. This can also be seen from the fact that the Laplacian is symmetric and diagonally dominant.
- L is an M-matrix.
- Every row sum and column sum of L is zero. Indeed, in the sum, the degree of the vertex is summed with a "−1" for each neighbor.
- In consequence, $\lambda_0 = 0$, because the vector $\mathbf{v}_0 = (1, 1, \dots, 1)$ satisfies $L\mathbf{v}_0 = \mathbf{0}$. This also implies that the Laplacian matrix is singular.
- The number of connected components in the graph is the dimension of the nullspace of the Laplacian and the algebraic multiplicity of the 0 eigenvalue.
- The smallest non-zero eigenvalue of L is called the spectral gap.
- The second smallest eigenvalue of L is the algebraic connectivity of G and approximates the sparsest cut of a graph.
- The Laplacian is an operator on the n-dimensional vector space of functions $f : V \to \mathbb{R}$, where $V$ is the vertex set of G, and $n = |V|$.
- When G is k-regular, the normalized Laplacian is $\mathcal{L} = \tfrac{1}{k}L = I - \tfrac{1}{k}A$, where A is the adjacency matrix and I is an identity matrix.
- For a graph with multiple connected components, L is a block diagonal matrix, where each block is the respective Laplacian matrix for each component, possibly after reordering the vertices.
- The trace of the Laplacian matrix L is equal to $2m$, where $m$ is the number of edges of the considered graph.
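Several of these properties are easy to check numerically. The following MATLAB sketch, reusing the adjacency matrix of the example graph above, verifies the zero row sums, the non-negative spectrum with smallest eigenvalue 0, and the trace identity.

% Numerical check of some Laplacian properties on the example graph
A = [0 1 0 0 1 0; 1 0 1 0 1 0; 0 1 0 1 0 0;
     0 0 1 0 1 1; 1 1 0 1 0 0; 0 0 0 1 0 0];
L = diag(sum(A, 2)) - A;

sum(L, 2)      % every row sum is zero
sort(eig(L))   % eigenvalues are non-negative; the smallest is 0
trace(L)       % equals 2*m ...
nnz(A) / 2     % ... where m is the number of edges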
Incidence matrix
Define an $|E| \times |V|$ oriented incidence matrix $M$ with element $M_{ev}$ for edge $e$ (connecting vertices $i$ and $j$, with $i > j$) given by

$M_{ev} = \begin{cases} 1 & \text{if } v = i \\ -1 & \text{if } v = j \\ 0 & \text{otherwise.} \end{cases}$

Then the Laplacian matrix L satisfies

$L = M^{\textsf{T}} M,$

where $M^{\textsf{T}}$ is the matrix transpose of $M$.
Now consider an eigendecomposition of $L$, with unit-norm eigenvectors $\mathbf{v}_i$ and corresponding eigenvalues $\lambda_i$:

$\lambda_i = \mathbf{v}_i^{\textsf{T}} L \mathbf{v}_i = \mathbf{v}_i^{\textsf{T}} M^{\textsf{T}} M \mathbf{v}_i = (M\mathbf{v}_i)^{\textsf{T}} (M\mathbf{v}_i).$

Because $\lambda_i$ can be written as the inner product of the vector $M\mathbf{v}_i$ with itself, this shows that $\lambda_i \ge 0$ and so the eigenvalues of $L$ are all non-negative.
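The factorization can be checked numerically. The following MATLAB sketch builds an oriented incidence matrix M for the example graph above, with an arbitrary orientation chosen for each edge, and confirms that $L = M^{\textsf{T}} M$ up to rounding.

% Oriented incidence matrix of the example graph and the check L = M'*M
edges = [1 2; 1 5; 2 3; 2 5; 3 4; 4 5; 4 6];
n = 6;
m = size(edges, 1);
M = zeros(m, n);
for e = 1:m
    M(e, edges(e, 1)) = 1;   % arbitrary orientation: edge leaves its first endpoint
    M(e, edges(e, 2)) = -1;  % and enters its second endpoint
end
A = [0 1 0 0 1 0; 1 0 1 0 1 0; 0 1 0 1 0 0;
     0 0 1 0 1 1; 1 1 0 1 0 0; 0 0 0 1 0 0];
L = diag(sum(A, 2)) - A;
norm(L - M' * M)   % zero (up to rounding): L = M'*M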
Deformed Laplacian
The deformed Laplacian is commonly defined as

$\Delta(s) = I - sA + s^2 (D - I),$

where I is the unit matrix, A is the adjacency matrix, D is the degree matrix, and s is a number. The standard Laplacian is just $\Delta(1)$.
Signless Laplacian
The signless Laplacian is defined as

$Q = D + A,$

where $D$ is the degree matrix and $A$ is the adjacency matrix. Like the signed Laplacian, the signless Laplacian also is positive semi-definite, as it can be factored as

$Q = R R^{\textsf{T}},$

where $R$ is the incidence matrix. $Q$ has a 0-eigenvector if and only if the graph has a bipartite connected component other than isolated vertices. This can be shown as

$\mathbf{x}^{\textsf{T}} Q \mathbf{x} = \mathbf{x}^{\textsf{T}} R R^{\textsf{T}} \mathbf{x} = 0 \;\Rightarrow\; R^{\textsf{T}} \mathbf{x} = \mathbf{0}.$

This has a solution with $\mathbf{x} \neq \mathbf{0}$ if and only if the graph has a bipartite connected component.
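The link between bipartiteness and the zero eigenvalue can be illustrated numerically. The following MATLAB sketch (using two small graphs chosen here purely as examples) compares the signless Laplacian of a 4-cycle, which is bipartite, with that of a triangle, which is not.

% Signless Laplacians Q = D + A of a 4-cycle (bipartite) and a triangle (not bipartite)
A_cycle = [0 1 0 1; 1 0 1 0; 0 1 0 1; 1 0 1 0];
A_tri   = [0 1 1; 1 0 1; 1 1 0];
Q_cycle = diag(sum(A_cycle, 2)) + A_cycle;
Q_tri   = diag(sum(A_tri, 2)) + A_tri;
min(eig(Q_cycle))   % 0: the bipartite graph has a zero eigenvalue
min(eig(Q_tri))     % > 0: the non-bipartite graph does not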
Symmetric normalized Laplacian
The (symmetric) normalized Laplacian is defined as

$L^{\text{sym}} := D^{-1/2} L D^{-1/2} = I - D^{-1/2} A D^{-1/2},$

where L is the Laplacian, A is the adjacency matrix and D is the degree matrix. Since the degree matrix D is diagonal and positive, its reciprocal square root $D^{-1/2}$ is just the diagonal matrix whose diagonal entries are the reciprocals of the positive square roots of the diagonal entries of D. The symmetric normalized Laplacian is a symmetric matrix.
One has

$L^{\text{sym}} = S S^{\textsf{T}},$

where S is the matrix whose rows are indexed by the vertices and whose columns are indexed by the edges of G such that each column corresponding to an edge e = {u, v} has an entry $\frac{1}{\sqrt{d_u}}$ in the row corresponding to u, an entry $-\frac{1}{\sqrt{d_v}}$ in the row corresponding to v, and has 0 entries elsewhere.
All eigenvalues of the normalized Laplacian are real and non-negative. We can see this as follows. Since $L^{\text{sym}}$ is symmetric, its eigenvalues are real. They are also non-negative: consider an eigenvector $g$ of $L^{\text{sym}}$ with eigenvalue λ and suppose $g = D^{1/2} f$. Then:

$\lambda = \frac{\langle g, L^{\text{sym}} g\rangle}{\langle g, g\rangle} = \frac{\langle D^{1/2} f, D^{-1/2} L f\rangle}{\langle D^{1/2} f, D^{1/2} f\rangle} = \frac{\langle f, L f\rangle}{\langle f, D f\rangle} = \frac{\sum_{u \sim v} (f(u) - f(v))^2}{\sum_v f(v)^2 d_v} \;\ge\; 0,$

where we use the inner product $\langle f, f\rangle = \sum_v f(v)^2$, a sum over all vertices v, and $\sum_{u\sim v}$ denotes the sum over all unordered pairs of adjacent vertices {u, v}. The quantity $\sum_{u\sim v}(f(u)-f(v))^2$ is called the Dirichlet sum of f, whereas the expression $\frac{\langle g, L^{\text{sym}} g\rangle}{\langle g, g\rangle}$ is called the Rayleigh quotient of g.
Let 1 be the function which assumes the value 1 on each vertex. Then $D^{1/2}\mathbf{1}$ is an eigenfunction of $L^{\text{sym}}$ with eigenvalue 0.
In fact, the eigenvalues of the normalized symmetric Laplacian satisfy 0 = μ0 ≤ … ≤ μn−1 ≤ 2. These eigenvalues relate well to other graph invariants for general graphs.
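These facts are easy to verify numerically. The following MATLAB sketch, again using the example graph from above, computes the symmetric normalized Laplacian and confirms that its eigenvalues lie in [0, 2] and that $D^{1/2}\mathbf{1}$ is an eigenvector with eigenvalue 0.

% Symmetric normalized Laplacian of the example graph
A = [0 1 0 0 1 0; 1 0 1 0 1 0; 0 1 0 1 0 0;
     0 0 1 0 1 1; 1 1 0 1 0 0; 0 0 0 1 0 0];
d = sum(A, 2);
L = diag(d) - A;
Lsym = diag(1 ./ sqrt(d)) * L * diag(1 ./ sqrt(d));
sort(eig(Lsym))       % eigenvalues lie in [0, 2], smallest is 0
norm(Lsym * sqrt(d))  % ~0: D^(1/2)*1 is an eigenvector with eigenvalue 0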
Random walk normalized Laplacian
The random walk normalized Laplacian is defined as

$L^{\text{rw}} := D^{-1} L = I - D^{-1} A,$

where D is the degree matrix. Since the degree matrix D is diagonal, its inverse $D^{-1}$ is simply defined as a diagonal matrix, having diagonal entries which are the reciprocals of the corresponding positive diagonal entries of D.
For the isolated vertices (those with degree 0), a common choice is to set the corresponding element $L^{\text{rw}}_{i,i}$ to 0.
This convention results in a nice property that the multiplicity of the eigenvalue 0 is equal to the number of connected components in the graph.
The matrix elements of $L^{\text{rw}}$ are given by

$L^{\text{rw}}_{i,j} = \begin{cases} 1 & \text{if } i = j \text{ and } \deg(v_i) \neq 0 \\ -\frac{1}{\deg(v_i)} & \text{if } i \neq j \text{ and } v_i \text{ is adjacent to } v_j \\ 0 & \text{otherwise.} \end{cases}$
The name of the random-walk normalized Laplacian comes from the fact that this matrix is $L^{\text{rw}} = I - P$, where $P = D^{-1}A$ is simply the transition matrix
of a random walker on the graph. For example, let $\mathbf{e}_i$ denote the i-th standard basis vector. Then $\mathbf{x} = \mathbf{e}_i^{\textsf{T}} P$ is a probability vector representing the distribution of a random walker's locations after taking a single step from vertex $i$; i.e., $x_j = \mathbb{P}(v_i \to v_j)$. More generally, if the vector $\mathbf{x}$ is a probability distribution of the location of a random walker on the vertices of the graph, then $\mathbf{x}' = \mathbf{x} P^t$ is the probability distribution of the walker after $t$ steps.
One can check that

$L^{\text{rw}} = D^{-1/2} L^{\text{sym}} D^{1/2},$

i.e., $L^{\text{rw}}$ is similar to the normalized Laplacian $L^{\text{sym}}$. For this reason, even if $L^{\text{rw}}$ is in general not hermitian, it has real eigenvalues. Indeed, its eigenvalues agree with those of $L^{\text{sym}}$.
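The following MATLAB sketch, once more for the example graph above, computes the random-walk normalized Laplacian and confirms both the similarity transform and the agreement of the two spectra.

% Random-walk normalized Laplacian of the example graph and its similarity to Lsym
A = [0 1 0 0 1 0; 1 0 1 0 1 0; 0 1 0 1 0 0;
     0 0 1 0 1 1; 1 1 0 1 0 0; 0 0 0 1 0 0];
d = sum(A, 2);
L = diag(d) - A;
Lrw  = diag(1 ./ d) * L;                                % I - D^(-1)*A
Lsym = diag(1 ./ sqrt(d)) * L * diag(1 ./ sqrt(d));
norm(Lrw - diag(1 ./ sqrt(d)) * Lsym * diag(sqrt(d)))   % ~0: Lrw = D^(-1/2) Lsym D^(1/2)
sort(real(eig(Lrw))) - sort(eig(Lsym))                  % ~0: the spectra agree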
Graphs
As an aside about random walks on graphs, consider a simple undirected graph. Consider the probability that the walker is at the vertex i at time t, given the probability distribution that he was at vertex j at time t − 1 (assuming a uniform chance of taking a step along any of the edges attached to a given vertex):

$p_i(t) = \sum_j \frac{A_{ij}}{\deg(v_j)} p_j(t-1),$

or in matrix-vector notation:

$p(t) = A D^{-1} p(t-1).$

We can rewrite this relation as

$D^{-1/2} p(t) = \left[D^{-1/2} A D^{-1/2}\right] D^{-1/2} p(t-1).$

$\mathcal{A} := D^{-1/2} A D^{-1/2}$ is a symmetric matrix called the reduced adjacency matrix. So, taking steps on this random walk requires taking powers of $\mathcal{A}$, which is a simple operation because $\mathcal{A}$ is symmetric.
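As an illustrative MATLAB sketch (for the example graph above), a three-step random-walk distribution can be computed either directly from the transition matrix or through powers of the reduced adjacency matrix, with identical results.

% Three random-walk steps computed directly and via the reduced adjacency matrix
A = [0 1 0 0 1 0; 1 0 1 0 1 0; 0 1 0 1 0 0;
     0 0 1 0 1 1; 1 1 0 1 0 0; 0 0 0 1 0 0];
d = sum(A, 2);
p0 = zeros(6, 1); p0(1) = 1;                        % walker starts at vertex 1
Ared = diag(1 ./ sqrt(d)) * A * diag(1 ./ sqrt(d)); % reduced adjacency matrix
p_direct  = (A * diag(1 ./ d))^3 * p0;              % p(t) = A*D^(-1)*p(t-1), three times
p_reduced = diag(sqrt(d)) * Ared^3 * diag(1 ./ sqrt(d)) * p0;
norm(p_direct - p_reduced)                          % ~0: identical distributions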
Interpretation as the discrete Laplace operator
The Laplacian matrix can be interpreted as a matrix representation of a particular case of the discrete Laplace operator. Such an interpretation allows one, e.g., to generalise the Laplacian matrix to the case of graphs with an infinite number of vertices and edges, leading to a Laplacian matrix of an infinite size.

Suppose $\phi$ describes a heat distribution across a graph, where $\phi_i$ is the heat at vertex $i$. According to Newton's law of cooling, the heat transferred between nodes $i$ and $j$ is proportional to $\phi_i - \phi_j$ if nodes $i$ and $j$ are connected (if they are not connected, no heat is transferred). Then, for heat capacity $k$,

$\frac{d\phi_i}{dt} = -k \sum_j A_{ij}(\phi_i - \phi_j) = -k\left(\phi_i \deg(v_i) - \sum_j A_{ij}\phi_j\right) = -k \sum_j \left(\delta_{ij}\deg(v_i) - A_{ij}\right)\phi_j = -k \sum_j L_{ij}\phi_j.$
In matrix-vector notation,

$\frac{d\phi}{dt} = -k(D - A)\phi = -kL\phi,$

which gives

$\frac{d\phi}{dt} + kL\phi = 0.$
Notice that this equation takes the same form as the heat equation, where the matrix −L is replacing the Laplacian operator $\nabla^2$; hence, the "graph Laplacian".
To find a solution to this differential equation, apply standard techniques for solving a first-order matrix differential equation. That is, write $\phi$ as a linear combination of eigenvectors $\mathbf{v}_i$ of L (so that $L\mathbf{v}_i = \lambda_i \mathbf{v}_i$), with time-dependent coefficients $c_i(t)$:

$\phi = \sum_i c_i \mathbf{v}_i.$

Plugging into the original expression:

$\frac{d}{dt}\left(\sum_i c_i \mathbf{v}_i\right) + kL\left(\sum_i c_i \mathbf{v}_i\right) = 0 \;\Rightarrow\; \sum_i \left(\frac{dc_i}{dt} + k\lambda_i c_i\right)\mathbf{v}_i = 0 \;\Rightarrow\; \frac{dc_i}{dt} + k\lambda_i c_i = 0,$

whose solution is

$c_i(t) = c_i(0)\, e^{-k\lambda_i t}.$
As shown before, the eigenvalues of L are non-negative, showing that the solution to the diffusion equation approaches an equilibrium, because it only exponentially decays or remains constant. This also shows that given $\lambda_i$ and the initial condition $c_i(0)$, the solution at any time t can be found.
To find $c_i(0)$ for each $i$ in terms of the overall initial condition $\phi(0)$, simply project $\phi(0)$ onto the unit-norm eigenvectors $\mathbf{v}_i$:

$c_i(0) = \left\langle \mathbf{v}_i, \phi(0) \right\rangle.$
In the case of undirected graphs, this works because L is symmetric, and by the spectral theorem, its eigenvectors are all orthogonal. So the projection onto the eigenvectors of L is simply an orthogonal coordinate transformation of the initial condition to a set of coordinates which decay exponentially and independently of each other.
Equilibrium behavior
To understand $\lim_{t\to\infty}\phi(t)$, the only terms $c_i(t) = c_i(0) e^{-k\lambda_i t}$ that remain are those where $\lambda_i = 0$, since

$\lim_{t\to\infty} e^{-k\lambda_i t} = \begin{cases} 0 & \text{if } \lambda_i > 0 \\ 1 & \text{if } \lambda_i = 0. \end{cases}$

In other words, the equilibrium state of the system is determined completely by the kernel of $L$.
Since by definition $\sum_j L_{ij} = 0$, the vector $\mathbf{v}^1$ of all ones is in the kernel. If there are $k$ disjoint connected components in the graph, then this vector of all ones can be split into the sum of $k$ independent $\lambda = 0$ eigenvectors of ones and zeros, where each connected component corresponds to an eigenvector with ones at the elements in the connected component and zeros elsewhere.
The consequence of this is that for a given initial condition $c(0)$ for a graph with $N$ vertices,

$\lim_{t\to\infty}\phi(t) = \left\langle c(0), \mathbf{v}^1 \right\rangle \mathbf{v}^1,$

where

$\mathbf{v}^1 = \frac{1}{\sqrt{N}}[1, 1, \dots, 1].$

For each element $\phi_j$ of $\phi$, i.e. for each vertex $j$ in the graph, it can be rewritten as

$\lim_{t\to\infty}\phi_j(t) = \frac{1}{N}\sum_{i=1}^{N}\phi_i(0).$
In other words, at steady state, the value of converges to the same value at each of the vertices of the graph, which is the average of the initial values at all of the vertices. Since this is the solution to the heat diffusion equation, this makes perfect sense intuitively. We expect that neighboring elements in the graph will exchange energy until that energy is spread out evenly throughout all of the elements that are connected to each other.
Example of the operator on a grid
This section shows an example of a function diffusing over time through a graph. The graph in this example is constructed on a 2D discrete grid, with points on the grid connected to their eight neighbors. Three regions of the grid are given positive initial values, while the rest of the values in the grid are zero. Over time, the exponential decay acts to distribute the values at these points evenly throughout the entire grid.

The complete MATLAB source code that was used to generate this animation is provided below. It shows the process of specifying initial conditions, projecting these initial conditions onto the eigenvectors of the Laplacian matrix, and simulating the exponential decay of these projected initial conditions.
N = 20; % The number of pixels along a dimension of the image
A = zeros(N, N); % The image
Adj = zeros(N*N, N*N); % The adjacency matrix

% Use 8 neighbors, and fill in the adjacency matrix
dx = [-1, 0, 1, -1, 1, -1, 0, 1];
dy = [-1, -1, -1, 0, 0, 1, 1, 1];
for x = 1:N
   for y = 1:N
      index = (x - 1) * N + y;
      for ne = 1:length(dx)
         newx = x + dx(ne);
         newy = y + dy(ne);
         if newx > 0 && newx <= N && newy > 0 && newy <= N
            index2 = (newx - 1) * N + newy;
            Adj(index, index2) = 1;
         end
      end
   end
end

% BELOW IS THE KEY CODE THAT COMPUTES THE SOLUTION TO THE DIFFERENTIAL EQUATION
Deg = diag(sum(Adj, 2)); % Compute the degree matrix
L = Deg - Adj; % Compute the Laplacian matrix in terms of the degree and adjacency matrices
[V, D] = eig(L); % Compute the eigenvalues/vectors of the Laplacian matrix
D = diag(D); % Extract the eigenvalues into a column vector

% Initial condition: three rectangular patches with positive values
% (the exact patches and values are illustrative choices)
C0 = zeros(N, N);
C0(2:5, 2:5) = 5;
C0(10:15, 10:15) = 10;
C0(2:5, 8:13) = 7;
C0 = C0(:);

C0V = V' * C0; % Transform the initial condition into the coordinate system
% of the eigenvectors
for t = 0:0.05:5
   % Loop through times and decay each initial component
   Phi = C0V .* exp(-D * t); % Exponential decay for each component
   Phi = V * Phi; % Transform from eigenvector coordinate system to original coordinate system
   Phi = reshape(Phi, N, N);
   % Display the results and write to GIF file ('out.gif' is an arbitrary filename)
   imagesc(Phi);
   caxis([0, 10]);
   title(sprintf('Diffusion t = %3f', t));
   frame = getframe(1);
   im = frame2im(frame);
   [imind, cm] = rgb2ind(im, 256);
   if t == 0
      imwrite(imind, cm, 'out.gif', 'gif', 'LoopCount', inf, 'DelayTime', 0.1);
   else
      imwrite(imind, cm, 'out.gif', 'gif', 'WriteMode', 'append', 'DelayTime', 0.1);
   end
end
Approximation to the negative continuous Laplacian
The graph Laplacian matrix can be further viewed as a matrix form of an approximation to the Laplacian operator obtained by the finite difference method.

In this interpretation, every graph vertex is treated as a grid point; the local connectivity of the vertex determines the finite difference approximation stencil at this grid point, the grid size is always one for every edge, and there are no constraints on any grid points, which corresponds to the case of the homogeneous Neumann boundary condition, i.e., free boundary.
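For instance, the Laplacian matrix of a path graph reproduces the standard three-point finite-difference stencil [−1, 2, −1] for $-\frac{d^2}{dx^2}$ on a unit grid with free ends, as the following minimal MATLAB sketch shows.

% Laplacian of a path graph on n vertices: the 1D finite-difference matrix
% for -d^2/dx^2 with grid size 1 and free (Neumann-type) boundaries
n = 6;
A = diag(ones(n - 1, 1), 1) + diag(ones(n - 1, 1), -1); % path graph adjacency
L = diag(sum(A, 2)) - A                                  % interior rows contain [-1 2 -1]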
Directed multigraphs
An analogue of the Laplacian matrix can be defined for directed multigraphs. In this case the Laplacian matrix L is defined as

$L = D - A,$

where D is a diagonal matrix with $D_{i,i}$ equal to the outdegree of vertex i and A is a matrix with $A_{i,j}$ equal to the number of edges from i to j.
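A minimal MATLAB sketch with an arbitrarily chosen three-vertex directed multigraph illustrates the construction; the row sums are still zero, but the column sums need not be.

% Laplacian of a small directed multigraph: out-degrees on the diagonal,
% A(i,j) counting the edges from i to j
A = [0 2 0;   % two parallel edges 1 -> 2
     0 0 1;   % one edge 2 -> 3
     1 0 0];  % one edge 3 -> 1
D = diag(sum(A, 2)); % out-degree matrix
L = D - A
sum(L, 2)            % row sums are zero
sum(L, 1)            % column sums need not be zero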