Power iteration


In mathematics, power iteration is an eigenvalue algorithm: given a diagonalizable matrix $A$, the algorithm will produce a number $\lambda$, which is the greatest (in absolute value) eigenvalue of $A$, and a nonzero vector $v$, which is a corresponding eigenvector of $\lambda$, that is, $Av = \lambda v$.
The algorithm is also known as the Von Mises iteration.
Power iteration is a very simple algorithm, but it may converge slowly. The most time-consuming operation of the algorithm is the multiplication of the matrix $A$ by a vector, so it is effective for a very large sparse matrix with an appropriate implementation.

The method

The power iteration algorithm starts with a vector $b_0$, which may be an approximation to the dominant eigenvector or a random vector. The method is described by the recurrence relation
$$b_{k+1} = \frac{Ab_k}{\|Ab_k\|}.$$
So, at every iteration, the vector $b_k$ is multiplied by the matrix $A$ and normalized.
If we assume $A$ has an eigenvalue that is strictly greater in magnitude than its other eigenvalues and the starting vector $b_0$ has a nonzero component in the direction of an eigenvector associated with the dominant eigenvalue, then a subsequence $\left(b_k\right)$ converges to an eigenvector associated with the dominant eigenvalue.
Without the two assumptions above, the sequence $\left(b_k\right)$ does not necessarily converge. In this sequence,
$$b_k = e^{i\phi_k}\frac{v_1}{\|v_1\|} + r_k,$$
where $v_1$ is an eigenvector associated with the dominant eigenvalue, and $\|r_k\| \to 0$. The presence of the term $e^{i\phi_k}$ implies that $\left(b_k\right)$ does not converge unless $e^{i\phi_k} = 1$. Under the two assumptions listed above, the sequence $\left(\mu_k\right)$ defined by
$$\mu_k = \frac{b_k^* A b_k}{b_k^* b_k}$$
converges to the dominant eigenvalue $\lambda_1$.
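As a concrete illustration (a minimal sketch; the matrix and the starting vector are illustrative choices, not from the original text), a real matrix with a negative dominant eigenvalue makes the iterates flip sign at every step, while the Rayleigh quotients $\mu_k$ still converge:

import numpy as np

A = np.diag([-2.0, 1.0])  # dominant eigenvalue is -2, so (lambda_1/|lambda_1|)^k alternates sign
b_k = np.array([1.0, 1.0]) / np.sqrt(2.0)
for k in range(10):
    b_k = A @ b_k
    b_k = b_k / np.linalg.norm(b_k)
    mu_k = b_k @ A @ b_k  # Rayleigh quotient; b_k already has unit norm
    print(k, b_k, mu_k)
# b_k alternates between two opposite directions, but mu_k converges to -2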
The power iteration itself may be computed with the following algorithm (in Python with NumPy):

#!/usr/bin/env python
import numpy as np

def power_iteration(A, num_iterations: int):
    # Ideally choose a random vector
    # to decrease the chance that our vector
    # is orthogonal to the eigenvector
    b_k = np.random.rand(A.shape[1])

    for _ in range(num_iterations):
        # calculate the matrix-by-vector product A b_k
        b_k1 = np.dot(A, b_k)

        # calculate the norm
        b_k1_norm = np.linalg.norm(b_k1)

        # re-normalize the vector
        b_k = b_k1 / b_k1_norm

    return b_k

power_iteration(np.array([[0.5, 0.5], [0.2, 0.8]]), 10)

The vector $b_k$ converges to an associated eigenvector. Ideally, one should use the Rayleigh quotient in order to get the associated eigenvalue.
This algorithm is used to calculate the Google PageRank.
The method can also be used to calculate the spectral radius (the eigenvalue largest in magnitude, for a square matrix) by computing the Rayleigh quotient
$$\rho(A) = \max\left\{|\lambda_1|, \dotsc, |\lambda_n|\right\} = \frac{b_k^\top A b_k}{b_k^\top b_k}.$$
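As a minimal sketch of this step (the test matrix is an illustrative assumption; power_iteration is the function defined above):

import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b_k = power_iteration(A, 100)               # approximate dominant eigenvector
eigenvalue = (b_k @ A @ b_k) / (b_k @ b_k)  # Rayleigh quotient
spectral_radius = abs(eigenvalue)           # rho(A): the dominant eigenvalue is the largest in magnitude
print(eigenvalue, spectral_radius)          # approximately 3.618 for this matrix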

Analysis

Let $A$ be decomposed into its Jordan canonical form: $A = VJV^{-1}$, where the first column of $V$ is an eigenvector of $A$ corresponding to the dominant eigenvalue $\lambda_1$. Since the dominant eigenvalue of $A$ is unique, the first Jordan block of $J$ is the $1 \times 1$ matrix $[\lambda_1]$, where $\lambda_1$ is the largest eigenvalue of $A$ in magnitude. The starting vector $b_0$ can be written as a linear combination of the columns of $V$:
$$b_0 = c_1 v_1 + c_2 v_2 + \cdots + c_n v_n.$$
By assumption, $b_0$ has a nonzero component in the direction of the dominant eigenvalue, so $c_1 \neq 0$.
The computationally useful recurrence relation for $b_{k+1}$ can be rewritten as:
$$b_{k+1} = \frac{Ab_k}{\|Ab_k\|} = \frac{A^{k+1}b_0}{\left\|A^{k+1}b_0\right\|},$$
where the expression $\frac{A^{k+1}b_0}{\left\|A^{k+1}b_0\right\|}$ is more amenable to the following analysis:
$$b_k = \frac{A^k b_0}{\left\|A^k b_0\right\|} = \frac{VJ^kV^{-1}b_0}{\left\|VJ^kV^{-1}b_0\right\|} = \frac{VJ^k\left(c_1e_1 + c_2e_2 + \cdots + c_ne_n\right)}{\left\|VJ^k\left(c_1e_1 + c_2e_2 + \cdots + c_ne_n\right)\right\|} = \left(\frac{\lambda_1}{|\lambda_1|}\right)^k \frac{c_1v_1 + V\left(\frac{1}{\lambda_1}J\right)^k\left(c_2e_2 + \cdots + c_ne_n\right)}{\left\|c_1v_1 + V\left(\frac{1}{\lambda_1}J\right)^k\left(c_2e_2 + \cdots + c_ne_n\right)\right\|}.$$
The expression above simplifies as $k \to \infty$:
$$\left(\frac{1}{\lambda_1}J\right)^k = \begin{bmatrix} [1] & & & \\ & \left(\frac{1}{\lambda_1}J_2\right)^k & & \\ & & \ddots & \\ & & & \left(\frac{1}{\lambda_1}J_m\right)^k \end{bmatrix} \to \begin{bmatrix} 1 & & & \\ & 0 & & \\ & & \ddots & \\ & & & 0 \end{bmatrix} \quad \text{as } k \to \infty.$$
The limit follows from the fact that the eigenvalue of $\frac{1}{\lambda_1}J_i$ is less than 1 in magnitude for $i > 1$, so
$$\left(\frac{1}{\lambda_1}J_i\right)^k \to 0 \quad \text{as } k \to \infty.$$
It follows that:
$$V\left(\frac{1}{\lambda_1}J\right)^k\left(c_2e_2 + \cdots + c_ne_n\right) \to 0 \quad \text{as } k \to \infty.$$
Using this fact, $b_k$ can be written in a form that emphasizes its relationship with $v_1$ when $k$ is large:
$$b_k = e^{i\phi_k}\frac{v_1}{\|v_1\|} + r_k,$$
where $e^{i\phi_k} = \left(\frac{\lambda_1}{|\lambda_1|}\right)^k \frac{c_1}{|c_1|}$ and $\|r_k\| \to 0$ as $k \to \infty$.
The sequence $\left(b_k\right)$ is bounded, so it contains a convergent subsequence. Note that the eigenvector corresponding to the dominant eigenvalue is only unique up to a scalar, so although the sequence $\left(b_k\right)$ may not converge, $b_k$ is nearly an eigenvector of $A$ for large $k$.
Alternatively, if $A$ is diagonalizable, then the following proof yields the same result.
Let $\lambda_1, \lambda_2, \dotsc, \lambda_m$ be the $m$ eigenvalues of $A$ and let $v_1, v_2, \dotsc, v_m$ be the corresponding eigenvectors. Suppose that $\lambda_1$ is the dominant eigenvalue, so that $|\lambda_1| > |\lambda_j|$ for $j > 1$.
The initial vector $b_0$ can be written:
$$b_0 = c_1 v_1 + c_2 v_2 + \cdots + c_m v_m.$$
If $b_0$ is chosen randomly, then $c_1 \neq 0$ with probability 1. Now,
$$A^k b_0 = c_1 A^k v_1 + c_2 A^k v_2 + \cdots + c_m A^k v_m = c_1\lambda_1^k v_1 + c_2\lambda_2^k v_2 + \cdots + c_m\lambda_m^k v_m = c_1\lambda_1^k\left(v_1 + \frac{c_2}{c_1}\left(\frac{\lambda_2}{\lambda_1}\right)^k v_2 + \cdots + \frac{c_m}{c_1}\left(\frac{\lambda_m}{\lambda_1}\right)^k v_m\right),$$
and the expression in parentheses converges to $v_1$ because $\left|\frac{\lambda_j}{\lambda_1}\right| < 1$ for $j > 1$.
On the other hand:
$$b_k = \frac{A^k b_0}{\left\|A^k b_0\right\|}.$$
Therefore, $b_k$ converges to (a multiple of) the eigenvector $v_1$. The convergence is geometric, with ratio
$$\left|\frac{\lambda_2}{\lambda_1}\right|,$$
where $\lambda_2$ denotes the second dominant eigenvalue. Thus, the method converges slowly if there is an eigenvalue close in magnitude to the dominant eigenvalue.
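The geometric rate can be observed numerically. In the following minimal sketch (the diagonal test matrix is an illustrative assumption), $\left|\lambda_2/\lambda_1\right| = 1/2$, so the error in the eigenvector direction shrinks by roughly a factor of two per iteration:

import numpy as np

A = np.diag([2.0, 1.0])                 # lambda_1 = 2, lambda_2 = 1, ratio 1/2
v1 = np.array([1.0, 0.0])               # dominant eigenvector
b_k = np.array([1.0, 1.0]) / np.sqrt(2.0)
for k in range(8):
    b_k = A @ b_k
    b_k = b_k / np.linalg.norm(b_k)
    print(k, np.linalg.norm(b_k - v1))  # error roughly halves each step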

Applications

Although the power iteration method approximates only one eigenvalue of a matrix, it remains useful for certain computational problems. For instance, Google uses it to calculate the PageRank of documents in their search engine, and Twitter uses it to show users recommendations of whom to follow. The power iteration method is especially suitable for sparse matrices, such as the web matrix, or as a matrix-free method that does not require storing the coefficient matrix $A$ explicitly, but can instead access a function evaluating matrix-vector products. For non-symmetric matrices that are well-conditioned, the power iteration method can outperform the more complex Arnoldi iteration. For symmetric matrices, the power iteration method is rarely used, since its convergence speed can be easily increased without sacrificing the small cost per iteration; see, e.g., Lanczos iteration and LOBPCG.
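As an example of the matrix-free setting (a minimal sketch; the function name power_iteration_matvec and the rank-one test operator are illustrative assumptions), the iteration only needs a routine that evaluates matrix-vector products:

import numpy as np

def power_iteration_matvec(matvec, n, num_iterations=100):
    # Same iteration as above, but A is available only through a
    # function computing A @ x, so A is never stored explicitly.
    b_k = np.random.rand(n)
    for _ in range(num_iterations):
        b_k = matvec(b_k)
        b_k = b_k / np.linalg.norm(b_k)
    return b_k

# Example operator: A = I + ones * ones^T, applied without forming the matrix.
# Its dominant eigenvalue is n + 1, with eigenvector proportional to ones.
n = 1000
b = power_iteration_matvec(lambda x: x + x.sum() * np.ones(n), n)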
Some of the more advanced eigenvalue algorithms can be understood as variations of the power iteration. For instance, the inverse iteration method applies power iteration to the matrix $A^{-1}$. Other algorithms look at the whole subspace generated by the vectors $b_k$. This subspace is known as the Krylov subspace. It can be computed by Arnoldi iteration or Lanczos iteration.
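As an illustration, a minimal sketch of unshifted inverse iteration (an illustrative simplification; in practice one applies power iteration to $(A - \mu I)^{-1}$ for a shift $\mu$ near the eigenvalue of interest):

import numpy as np

def inverse_iteration(A, num_iterations=50):
    # Power iteration on A^{-1}: each step solves A x = b_k rather than
    # forming the inverse explicitly, and converges to an eigenvector for
    # the eigenvalue of A that is smallest in magnitude.
    b_k = np.random.rand(A.shape[0])
    for _ in range(num_iterations):
        b_k = np.linalg.solve(A, b_k)
        b_k = b_k / np.linalg.norm(b_k)
    return b_k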