In the mathematical discipline of numerical linear algebra, a matrix splitting is an expression which represents a given matrix as a sum or difference of matrices. Many iterative methods depend upon the direct solution of matrix equations involving matrices more general than tridiagonal matrices. These matrix equations can often be solved directly and efficiently when written as a matrix splitting. The technique was devised by Richard S. Varga in 1960.
Regular splittings
We seek to solve the matrix equation

Ax = k,  (1)

where A is a given n × n non-singular matrix, and k is a given column vector with n components. We split the matrix A into

A = B − C,  (2)

where B and C are n × n matrices. If, for an arbitrary n × n matrix M, M has nonnegative entries, we write M ≥ 0. If M has only positive entries, we write M > 0. Similarly, if the matrix M1 − M2 has nonnegative entries, we write M1 ≥ M2.

Definition: A = B − C is a regular splitting of A if B−1 ≥ 0 and C ≥ 0.

We assume that matrix equations of the form

Bx = g,  (3)

where g is a given column vector, can be solved directly for the vector x. If (2) represents a regular splitting of A, then the iterative method

x(m+1) = B−1C x(m) + B−1k,  m = 0, 1, 2, …,  (4)

where x(0) is an arbitrary vector, can be carried out. Equivalently, we write (4) in the form

x(m+1) = D x(m) + B−1k,  m = 0, 1, 2, …,  (5)

where D = B−1C. The matrix D has nonnegative entries if (2) represents a regular splitting of A.

It can be shown that if A−1 > 0, then ρ(D) < 1, where ρ(D) represents the spectral radius of D, and thus D is a convergent matrix. As a consequence, the iterative method (5) is necessarily convergent.

If, in addition, the splitting (2) is chosen so that the matrix B is a diagonal matrix (with all diagonal entries nonzero, so that B is invertible), then B can be inverted in linear time.
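The iteration above can be sketched numerically. The following is a minimal sketch, assuming NumPy; the 2 × 2 system is illustrative and not taken from the text.

```python
import numpy as np

def regular_splitting_solve(B, C, k, x0, num_iters=50):
    """Iterate x(m+1) = B^-1 C x(m) + B^-1 k for the splitting A = B - C."""
    x = np.asarray(x0, dtype=float)
    D = np.linalg.solve(B, C)          # iteration matrix D = B^-1 C
    g = np.linalg.solve(B, k)          # B^-1 k
    for _ in range(num_iters):
        x = D @ x + g
    return x

# Illustrative system (not from the text): A = B - C with B = diag(A).
A = np.array([[ 4.0, -1.0],
              [-2.0,  5.0]])
k = np.array([3.0, 3.0])
B = np.diag(np.diag(A))                # B^-1 >= 0 since diag(A) > 0
C = B - A                              # C = [[0, 1], [2, 0]] >= 0: a regular splitting
rho = max(abs(np.linalg.eigvals(np.linalg.solve(B, C))))  # spectral radius of D
x = regular_splitting_solve(B, C, k, np.zeros(2))
```

Since ρ(D) < 1 here, the iterates converge to the exact solution of Ax = k regardless of the starting vector.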
Matrix iterative methods
Many iterative methods can be described as a matrix splitting. If the diagonal entries of the matrix A are all nonzero, and we express the matrix A as the matrix sum

A = D + L + U,

where D is the diagonal part of A, and U and L are respectively strictly upper and strictly lower triangular n × n matrices, then we have the following.

The Jacobi method can be represented in matrix form as a splitting

x(m+1) = D−1(−(L + U)) x(m) + D−1k,  m = 0, 1, 2, …

The Gauss-Seidel method can be represented in matrix form as a splitting

x(m+1) = (D + L)−1(−U) x(m) + (D + L)−1k,  m = 0, 1, 2, …

The method of successive over-relaxation, with relaxation parameter ω, can be represented in matrix form as a splitting

x(m+1) = (D + ωL)−1[(1 − ω)D − ωU] x(m) + ω(D + ωL)−1k,  m = 0, 1, 2, …
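As a concrete sketch of these three splittings, assuming NumPy (the 2 × 2 test system is illustrative, not from the text):

```python
import numpy as np

def split_dlu(A):
    """Split A = D + L + U into diagonal, strictly lower, and strictly upper parts."""
    return np.diag(np.diag(A)), np.tril(A, k=-1), np.triu(A, k=1)

def iteration_step(A, k, x, method, omega=1.0):
    """One sweep of the named splitting method for A x = k."""
    D, L, U = split_dlu(A)
    if method == "jacobi":            # B = D, C = -(L + U)
        return np.linalg.solve(D, -(L + U) @ x + k)
    if method == "gauss-seidel":      # B = D + L, C = -U
        return np.linalg.solve(D + L, -U @ x + k)
    if method == "sor":               # B = D + omega*L, C = (1 - omega)*D - omega*U
        return np.linalg.solve(D + omega * L,
                               ((1 - omega) * D - omega * U) @ x + omega * k)
    raise ValueError(method)

# Illustrative diagonally dominant system with exact solution (1, 1).
A = np.array([[ 4.0, -1.0],
              [-2.0,  5.0]])
k = np.array([3.0, 3.0])
x_j = x_gs = x_sor = np.zeros(2)
for _ in range(40):
    x_j = iteration_step(A, k, x_j, "jacobi")
    x_gs = iteration_step(A, k, x_gs, "gauss-seidel")
    x_sor = iteration_step(A, k, x_sor, "sor", omega=1.1)
```

All three sweeps solve a system with matrix B at each step; for Gauss-Seidel and successive over-relaxation that matrix is triangular, so the solve amounts to forward substitution.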
Example
Regular splitting
In the matrix equation Ax = k, let

A = (  6  −2  −3
      −1   4  −2
      −3  −1   5 ),    k = ( 5, −12, 10 )T.

Let us apply the splitting A = B − C which is used in the Jacobi method: we split A in such a way that B consists of all of the diagonal elements of A, and C consists of all of the off-diagonal elements of A, negated. We have

B = ( 6  0  0
      0  4  0
      0  0  5 ),    C = ( 0  2  3
                          1  0  2
                          3  1  0 ).

Since B−1 ≥ 0 and C ≥ 0, the splitting is a regular splitting. Since A−1 > 0, the spectral radius of D = B−1C satisfies ρ(D) < 1. Hence, the matrix D is convergent and the method necessarily converges for the problem. Note that the diagonal elements of A are all greater than zero, the off-diagonal elements of A are all less than zero and A is strictly diagonally dominant.

The Jacobi method applied to the problem then takes the form

x(m+1) = D x(m) + B−1k,  m = 0, 1, 2, …,

with

D = B−1C = ( 0    1/3  1/2
             1/4  0    1/2
             3/5  1/5  0  ),    B−1k = ( 5/6, −3, 2 )T.

The exact solution to the equation is

x = ( 2, −1, 3 )T.

The first few iterates are listed in the table below, beginning with x(0) = (0, 0, 0)T. From the table one can see that the method is evidently converging to the solution, albeit rather slowly.
 m    x1(m)     x2(m)     x3(m)
 0    0.0       0.0       0.0
 1    0.83333   -3.0000   2.0000
 2    0.83333   -1.7917   1.9000
 3    1.1861    -1.8417   2.1417
 4    1.2903    -1.6326   2.3433
 5    1.4608    -1.5058   2.4477
 6    1.5553    -1.4110   2.5753
 7    1.6507    -1.3235   2.6510
 8    1.7177    -1.2618   2.7257
 9    1.7756    -1.2077   2.7783
10    1.8199    -1.1670   2.8238
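The iterates in the table can be reproduced with a short script, assuming NumPy. The system A, k below is the one consistent with the iterates shown: its first Jacobi iterate B−1k = (5/6, −3, 2)T matches the m = 1 row.

```python
import numpy as np

# Example system: first Jacobi iterate B^-1 k = (5/6, -3, 2) matches the table.
A = np.array([[ 6.0, -2.0, -3.0],
              [-1.0,  4.0, -2.0],
              [-3.0, -1.0,  5.0]])
k = np.array([5.0, -12.0, 10.0])

B = np.diag(np.diag(A))        # diagonal part of A
C = B - A                      # negated off-diagonal part; C >= 0

x = np.zeros(3)
history = [x]
for m in range(10):
    x = np.linalg.solve(B, C @ x + k)   # x(m+1) = B^-1 (C x(m) + k)
    history.append(x)
```

Printing `history` reproduces the eleven rows of the table, m = 0 through m = 10.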
Jacobi method
The Jacobi method is precisely the regular splitting demonstrated above, in which B is the diagonal part of A.
Gauss-Seidel method
Since the diagonal entries of the matrix A in the problem above are all nonzero, we can express A as the sum A = D + L + U, where

D = ( 6  0  0
      0  4  0
      0  0  5 ),

L = (  0  0  0
      −1  0  0
      −3 −1  0 ),

U = ( 0  −2  −3
      0   0  −2
      0   0   0 ).

We then have the iteration

x(m+1) = (D + L)−1(−U) x(m) + (D + L)−1k,  m = 0, 1, 2, …

The Gauss-Seidel method applied to the problem takes this form. The first few iterates, beginning with x(0) = (0, 0, 0)T, show that the method converges to the solution somewhat faster than the Jacobi method described above.
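A sketch of these Gauss-Seidel iterates, assuming NumPy; A and k are the example system used for the Jacobi splitting above.

```python
import numpy as np

# Example system from the regular-splitting section above.
A = np.array([[ 6.0, -2.0, -3.0],
              [-1.0,  4.0, -2.0],
              [-3.0, -1.0,  5.0]])
k = np.array([5.0, -12.0, 10.0])

D = np.diag(np.diag(A))
L = np.tril(A, k=-1)           # strictly lower part of A
U = np.triu(A, k=1)            # strictly upper part of A

x = np.zeros(3)
for m in range(10):
    # x(m+1) = (D + L)^-1 (-U x(m) + k); D + L is lower triangular,
    # so this solve amounts to forward substitution.
    x = np.linalg.solve(D + L, -U @ x + k)
```

After ten sweeps the iterate is markedly closer to the exact solution (2, −1, 3)T than the tenth Jacobi iterate above.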
Successive over-relaxation method

Let ω = 1.1. Using the splitting A = D + L + U of the matrix A in the problem above, the successive over-relaxation method takes the form

x(m+1) = (D + ωL)−1[(1 − ω)D − ωU] x(m) + ω(D + ωL)−1k,  m = 0, 1, 2, …

The first few iterates, beginning with x(0) = (0, 0, 0)T, show that the method converges to the solution slightly faster than the Gauss-Seidel method described above.
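A sketch of the successive over-relaxation iterates with ω = 1.1, assuming NumPy and the same example system as above.

```python
import numpy as np

# Example system from above; relaxation parameter omega = 1.1 as in the text.
A = np.array([[ 6.0, -2.0, -3.0],
              [-1.0,  4.0, -2.0],
              [-3.0, -1.0,  5.0]])
k = np.array([5.0, -12.0, 10.0])
omega = 1.1

D = np.diag(np.diag(A))
L = np.tril(A, k=-1)
U = np.triu(A, k=1)

x = np.zeros(3)
for m in range(10):
    # x(m+1) = (D + omega L)^-1 ([(1 - omega) D - omega U] x(m) + omega k)
    x = np.linalg.solve(D + omega * L,
                        ((1 - omega) * D - omega * U) @ x + omega * k)
```

Setting ω = 1 recovers the Gauss-Seidel sweep, so this loop generalizes the previous example.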