Cramer's rule


In linear algebra, Cramer's rule is an explicit formula for the solution of a system of linear equations with as many equations as unknowns, valid whenever the system has a unique solution. It expresses the solution in terms of the determinants of the coefficient matrix and of matrices obtained from it by replacing one column by the column vector of right-hand sides of the equations. It is named after Gabriel Cramer, who published the rule for an arbitrary number of unknowns in 1750, although Colin Maclaurin also published special cases of the rule in 1748.
Cramer's rule implemented in a naïve way is computationally inefficient for systems of more than two or three equations. In the case of $n$ equations in $n$ unknowns, it requires computation of $n+1$ determinants, while Gaussian elimination produces the result with the same computational complexity as the computation of a single determinant. Cramer's rule can also be numerically unstable even for 2×2 systems. However, it has recently been shown that Cramer's rule can be implemented in $O(n^3)$ time, which is comparable to more common methods of solving systems of linear equations, such as Gaussian elimination, while exhibiting comparable numeric stability in most cases.

General case

Consider a system of $n$ linear equations for $n$ unknowns, represented in matrix multiplication form as follows:
\[ A\mathbf{x} = \mathbf{b} \]
where the $n \times n$ matrix $A$ has a nonzero determinant, and the vector $\mathbf{x} = (x_1, \ldots, x_n)^{\mathsf{T}}$ is the column vector of the variables. Then the theorem states that in this case the system has a unique solution, whose individual values for the unknowns are given by:
\[ x_i = \frac{\det(A_i)}{\det(A)}, \qquad i = 1, \ldots, n, \]
where $A_i$ is the matrix formed by replacing the $i$-th column of $A$ by the column vector $\mathbf{b}$.
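As an illustration, the following short Python sketch (using NumPy; the matrix and right-hand side are example values chosen here, not taken from the text) applies the formula $x_i = \det(A_i)/\det(A)$ directly.

```python
import numpy as np

def cramer_solve(A, b):
    """Solve A x = b by Cramer's rule: x_i = det(A_i) / det(A)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    det_A = np.linalg.det(A)
    if np.isclose(det_A, 0.0):
        raise ValueError("Cramer's rule requires det(A) != 0")
    x = np.empty(len(b))
    for i in range(len(b)):
        A_i = A.copy()
        A_i[:, i] = b                      # replace the i-th column of A by b
        x[i] = np.linalg.det(A_i) / det_A
    return x

A = [[2.0, 1.0, -1.0], [-3.0, -1.0, 2.0], [-2.0, 1.0, 2.0]]
b = [8.0, -11.0, -3.0]
print(cramer_solve(A, b))          # [ 2.  3. -1.]
print(np.linalg.solve(A, b))       # same result from Gaussian elimination
```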
A more general version of Cramer's rule considers the matrix equation
\[ AX = B \]
where the $n \times n$ matrix $A$ has a nonzero determinant, and $X$, $B$ are $n \times m$ matrices. Given sequences $1 \le i_1 < \cdots < i_k \le n$ and $1 \le j_1 < \cdots < j_k \le m$, let $X_{I,J}$ be the $k \times k$ submatrix of $X$ with rows in $I := (i_1, \ldots, i_k)$ and columns in $J := (j_1, \ldots, j_k)$. Let $A_B(I,J)$ be the $n \times n$ matrix formed by replacing the $i_s$ column of $A$ by the $j_s$ column of $B$, for all $s = 1, \ldots, k$. Then
\[ \det X_{I,J} = \frac{\det(A_B(I,J))}{\det(A)}. \]
In the case $k = 1$, this reduces to the normal Cramer's rule.
The rule holds for systems of equations with coefficients and unknowns in any field, not just in the real numbers.
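The block version above can also be checked numerically; the following sketch (random example data and arbitrarily chosen index sets, not from the text) verifies $\det X_{I,J} = \det(A_B(I,J))/\det(A)$ for one choice of $I$ and $J$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 4, 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
X = np.linalg.solve(A, B)              # X is the solution of A X = B

I, J = [0, 2], [1, 2]                  # k = 2: columns i_s of A, columns j_s of B
X_IJ = X[np.ix_(I, J)]                 # k x k submatrix of X (rows I, columns J)

A_B = A.copy()
A_B[:, I] = B[:, J]                    # replace column i_s of A by column j_s of B
lhs = np.linalg.det(X_IJ)
rhs = np.linalg.det(A_B) / np.linalg.det(A)
print(np.isclose(lhs, rhs))            # True
```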

Proof

The proof for Cramer's rule uses just two properties of determinants: linearity with respect to any given column, and the fact that the determinant is zero whenever two columns are equal.
Fix the index j of a column. Linearity means that if we consider only column $j$ as variable (fixing the others arbitrarily), the resulting function can be given by a matrix, with one row and $n$ columns, that acts on column $j$. In fact this is precisely what Laplace expansion does, writing $\det(A) = C_1 a_{1j} + \cdots + C_n a_{nj}$ for certain coefficients $C_1, \ldots, C_n$ that depend on the columns of $A$ other than column $j$. The value $\det(A)$ is then the result of applying the one-line matrix $L_{(j)} = (C_1 \; C_2 \; \cdots \; C_n)$ to column $j$ of $A$. If $L_{(j)}$ is applied to any other column $k$ of $A$, then the result is the determinant of the matrix obtained from $A$ by replacing column $j$ by a copy of column $k$, so the resulting determinant is 0 (the case of two equal columns).
Now consider a system of $n$ linear equations in $n$ unknowns $x_1, \ldots, x_n$, whose coefficient matrix is $A$, with $\det(A)$ assumed to be nonzero:
\[
\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1 \\
&\;\;\vdots \\
a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n &= b_n.
\end{aligned}
\]
If one combines these equations by taking $C_1$ times the first equation, plus $C_2$ times the second, and so forth until $C_n$ times the last, then the coefficient of $x_j$ will become $C_1 a_{1j} + \cdots + C_n a_{nj} = \det(A)$, while the coefficients of all other unknowns become 0; the left hand side becomes simply $\det(A)\,x_j$. The right hand side is $C_1 b_1 + \cdots + C_n b_n$, which is $L_{(j)}$ applied to the column vector $\mathbf{b}$ of the right hand side. In fact what has been done here is multiply the matrix equation $A\mathbf{x} = \mathbf{b}$ on the left by $L_{(j)}$. Dividing by the nonzero number $\det(A)$ one finds the following equation, necessary to satisfy the system:
\[ x_j = \frac{L_{(j)} \cdot \mathbf{b}}{\det(A)}. \]
But by construction the numerator is the determinant of the matrix obtained from $A$ by replacing column $j$ by $\mathbf{b}$, so we get the expression of Cramer's rule as a necessary condition for a solution. The same procedure can be repeated for other values of $j$ to find values for the other unknowns.
The only point that remains to prove is that these values for the unknowns, the only possible ones, do indeed together form a solution. But if the matrix $A$ is invertible with inverse $A^{-1}$, then $\mathbf{x} = A^{-1}\mathbf{b}$ will be a solution, thus showing its existence. To see that $A$ is invertible when $\det(A)$ is nonzero, consider the $n \times n$ matrix $M$ obtained by stacking the one-line matrices $L_{(j)}$ on top of each other for $j = 1, \ldots, n$. It was shown that $L_{(j)} A = (0 \; \cdots \; 0 \; \det(A) \; 0 \; \cdots \; 0)$ where $\det(A)$ appears at the position $j$; from this it follows that $M A = \det(A)\, I_n$. Therefore,
\[ \frac{1}{\det(A)}\, M = A^{-1}, \]
completing the proof.
For other proofs, see below.
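The key step of the proof above, namely $L_{(j)} A = (0 \;\cdots\; \det(A) \;\cdots\; 0)$ and hence $MA = \det(A) I_n$, can be checked numerically; the sketch below builds the rows of cofactors explicitly for an example matrix (indices are 0-based here, unlike the 1-based indices in the text).

```python
import numpy as np

A = np.array([[2.0, 1.0, -1.0],
              [-3.0, -1.0, 2.0],
              [-2.0, 1.0, 2.0]])
n = A.shape[0]

def cofactor_row(A, j):
    """L_(j): the row of cofactors C_1, ..., C_n belonging to column j of A."""
    return np.array([(-1) ** (i + j) * np.linalg.det(np.delete(np.delete(A, i, 0), j, 1))
                     for i in range(A.shape[0])])

M = np.vstack([cofactor_row(A, j) for j in range(n)])       # stack L_(1), ..., L_(n)
print(np.allclose(M @ A, np.linalg.det(A) * np.eye(n)))     # True: M A = det(A) I
print(np.allclose(M / np.linalg.det(A), np.linalg.inv(A)))  # True: M / det(A) = A^{-1}
```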

Finding inverse matrix

Let $A$ be an $n \times n$ matrix with entries in a commutative ring $R$. Then
\[ A\,\operatorname{adj}(A) = \operatorname{adj}(A)\,A = \det(A)\, I \]
where $\operatorname{adj}(A)$ denotes the adjugate matrix of $A$, $\det(A)$ is the determinant, and $I$ is the identity matrix. If $\det(A)$ is invertible in $R$, then the inverse matrix of $A$ is
\[ A^{-1} = \frac{1}{\det(A)} \operatorname{adj}(A). \]
If $R$ is a field, then this gives a formula for the inverse of $A$, provided $\det(A) \neq 0$. In fact, this formula works whenever $R$ is a commutative ring, provided that $\det(A)$ is a unit. If $\det(A)$ is not a unit, then $A$ is not invertible over $R$.
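For a concrete (made-up) example over the commutative ring $\mathbb{Z}$, the sketch below uses SymPy's adjugate: since $\det(A) = -1$ is a unit in $\mathbb{Z}$, the formula yields an inverse with integer entries.

```python
import sympy as sp

A = sp.Matrix([[2, 3], [3, 4]])        # det(A) = -1, a unit in the ring of integers
A_inv = A.adjugate() / A.det()         # adj(A) / det(A)
print(A_inv)                           # Matrix([[-4, 3], [3, -2]]): integer entries
print(A * A_inv == sp.eye(2))          # True
```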

Applications

Explicit formulas for small systems

Consider the linear system
\[
\begin{aligned}
a_1 x + b_1 y &= c_1 \\
a_2 x + b_2 y &= c_2
\end{aligned}
\]
which in matrix format is
\[ \begin{pmatrix} a_1 & b_1 \\ a_2 & b_2 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} c_1 \\ c_2 \end{pmatrix}. \]
Assume $a_1 b_2 - b_1 a_2$ is nonzero. Then, with the help of determinants, $x$ and $y$ can be found with Cramer's rule as
\[
x = \frac{\begin{vmatrix} c_1 & b_1 \\ c_2 & b_2 \end{vmatrix}}{\begin{vmatrix} a_1 & b_1 \\ a_2 & b_2 \end{vmatrix}} = \frac{c_1 b_2 - b_1 c_2}{a_1 b_2 - b_1 a_2}, \qquad
y = \frac{\begin{vmatrix} a_1 & c_1 \\ a_2 & c_2 \end{vmatrix}}{\begin{vmatrix} a_1 & b_1 \\ a_2 & b_2 \end{vmatrix}} = \frac{a_1 c_2 - c_1 a_2}{a_1 b_2 - b_1 a_2}.
\]
The rules for $3 \times 3$ matrices are similar. Given
\[
\begin{aligned}
a_1 x + b_1 y + c_1 z &= d_1 \\
a_2 x + b_2 y + c_2 z &= d_2 \\
a_3 x + b_3 y + c_3 z &= d_3
\end{aligned}
\]
which in matrix format is
\[ \begin{pmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} d_1 \\ d_2 \\ d_3 \end{pmatrix}, \]
then the values of $x$, $y$ and $z$ can be found as follows:
\[
x = \frac{\begin{vmatrix} d_1 & b_1 & c_1 \\ d_2 & b_2 & c_2 \\ d_3 & b_3 & c_3 \end{vmatrix}}{\begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix}}, \qquad
y = \frac{\begin{vmatrix} a_1 & d_1 & c_1 \\ a_2 & d_2 & c_2 \\ a_3 & d_3 & c_3 \end{vmatrix}}{\begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix}}, \qquad
z = \frac{\begin{vmatrix} a_1 & b_1 & d_1 \\ a_2 & b_2 & d_2 \\ a_3 & b_3 & d_3 \end{vmatrix}}{\begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix}}.
\]
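Plugging a concrete (made-up) system into the $2\times 2$ formulas, say $3x + 2y = 7$ and $x - y = -1$:

```python
a1, b1, c1 = 3.0, 2.0, 7.0     # 3x + 2y = 7
a2, b2, c2 = 1.0, -1.0, -1.0   #  x -  y = -1

det = a1 * b2 - b1 * a2        # -5.0, nonzero
x = (c1 * b2 - b1 * c2) / det  # (-7 + 2) / -5 = 1.0
y = (a1 * c2 - c1 * a2) / det  # (-3 - 7) / -5 = 2.0
print(x, y)                    # 1.0 2.0
```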

Differential geometry

Ricci calculus

Cramer's rule is used in the Ricci calculus in various calculations involving the Christoffel symbols of the first and second kind.
In particular, Cramer's rule can be used to prove that the divergence operator on a Riemannian manifold is invariant with respect to change of coordinates. We give a direct proof, suppressing the role of the Christoffel symbols.
Let $(M, g)$ be a Riemannian manifold equipped with local coordinates $(x^1, x^2, \ldots, x^n)$. Let $A = A^i \frac{\partial}{\partial x^i}$ be a vector field. We use the summation convention throughout. The claim is that the divergence of $A$,
\[ \operatorname{div} A = \frac{1}{\sqrt{\det g}} \frac{\partial}{\partial x^i} \left( A^i \sqrt{\det g} \right), \]
is invariant under change of coordinates.
Let $(x^1, \ldots, x^n) \to (\bar{x}^1, \ldots, \bar{x}^n)$ be a coordinate transformation with non-singular Jacobian. Then the classical transformation laws imply that $A = \bar{A}^k \frac{\partial}{\partial \bar{x}^k}$ where $\bar{A}^k = \frac{\partial \bar{x}^k}{\partial x^j} A^j$. Similarly, if $g = g_{mk}\, dx^m \otimes dx^k = \bar{g}_{ij}\, d\bar{x}^i \otimes d\bar{x}^j$, then $\bar{g}_{ij} = \frac{\partial x^m}{\partial \bar{x}^i} \frac{\partial x^k}{\partial \bar{x}^j} g_{mk}$.
Writing this transformation law in terms of matrices yields $\bar{g} = \left( \frac{\partial x}{\partial \bar{x}} \right)^{\mathsf{T}} g \left( \frac{\partial x}{\partial \bar{x}} \right)$, which implies $\det \bar{g} = \left( \det \frac{\partial x}{\partial \bar{x}} \right)^2 \det g$.
Now one computes
\[
\operatorname{div} A = \frac{1}{\sqrt{\det g}} \frac{\partial}{\partial x^i} \left( A^i \sqrt{\det g} \right)
= \frac{\det \frac{\partial x}{\partial \bar{x}}}{\sqrt{\det \bar{g}}} \, \frac{\partial \bar{x}^m}{\partial x^i} \frac{\partial}{\partial \bar{x}^m} \left( \frac{\partial x^i}{\partial \bar{x}^k}\, \bar{A}^k \, \frac{\sqrt{\det \bar{g}}}{\det \frac{\partial x}{\partial \bar{x}}} \right).
\]
In order to show that this equals
\[ \frac{1}{\sqrt{\det \bar{g}}} \frac{\partial}{\partial \bar{x}^k} \left( \bar{A}^k \sqrt{\det \bar{g}} \right), \]
it is necessary and sufficient to show that
\[ \frac{\partial \bar{x}^m}{\partial x^i} \frac{\partial}{\partial \bar{x}^m} \left( \frac{\partial x^i}{\partial \bar{x}^k} \, \frac{1}{\det \frac{\partial x}{\partial \bar{x}}} \right) = 0 \qquad \text{for all } k, \]
which is equivalent to
\[ \frac{\partial}{\partial \bar{x}^k} \det \frac{\partial x}{\partial \bar{x}} = \det \frac{\partial x}{\partial \bar{x}} \, \frac{\partial \bar{x}^j}{\partial x^i} \frac{\partial^2 x^i}{\partial \bar{x}^j \partial \bar{x}^k}.
\]
Carrying out the differentiation on the left-hand side, we get:
\[
\frac{\partial}{\partial \bar{x}^k} \det \frac{\partial x}{\partial \bar{x}} = (-1)^{i+j} \frac{\partial^2 x^i}{\partial \bar{x}^k \partial \bar{x}^j} \det M(i \mid j)
= \det \frac{\partial x}{\partial \bar{x}} \, \frac{\partial^2 x^i}{\partial \bar{x}^k \partial \bar{x}^j} \, \frac{(-1)^{i+j} \det M(i \mid j)}{\det \frac{\partial x}{\partial \bar{x}}},
\]
where $M(i \mid j)$ denotes the matrix obtained from $\frac{\partial x}{\partial \bar{x}}$ by deleting the $i$-th row and $j$-th column.
But Cramer's rule says that
\[ \frac{(-1)^{i+j} \det M(i \mid j)}{\det \frac{\partial x}{\partial \bar{x}}} \]
is the $(j, i)$-th entry of the matrix $\frac{\partial \bar{x}}{\partial x}$.
Thus
\[ \frac{\partial}{\partial \bar{x}^k} \det \frac{\partial x}{\partial \bar{x}} = \det \frac{\partial x}{\partial \bar{x}} \, \frac{\partial^2 x^i}{\partial \bar{x}^k \partial \bar{x}^j} \, \frac{\partial \bar{x}^j}{\partial x^i}, \]
completing the proof.
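The cofactor-expansion step above is an instance of Jacobi's formula, $\partial_t \det M = \det M \, \operatorname{tr}(M^{-1} \partial_t M)$; a small symbolic check with SymPy is sketched below, with a generic $2\times 2$ matrix standing in for the Jacobian $\partial x/\partial\bar{x}$ along one coordinate.

```python
import sympy as sp

t = sp.symbols('t')                      # stands in for one barred coordinate
entries = [sp.Function(f'm{i}{j}')(t) for i in (1, 2) for j in (1, 2)]
M = sp.Matrix(2, 2, entries)             # generic invertible 2x2 matrix M(t)

lhs = sp.diff(M.det(), t)                        # d/dt det M, via cofactor expansion
rhs = M.det() * (M.inv() * M.diff(t)).trace()    # det M * tr(M^{-1} dM/dt)
print(sp.simplify(lhs - rhs))                    # 0
```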

Computing derivatives implicitly

Consider the two equations $F(x, y, u, v) = 0$ and $G(x, y, u, v) = 0$. When $u$ and $v$ are independent variables, we can define $x = X(u, v)$ and $y = Y(u, v)$.
An equation for $\dfrac{\partial x}{\partial u}$ can be found by applying Cramer's rule.
First, calculate the first derivatives of $F$, $G$, $x$, and $y$:
\[
\begin{aligned}
dF &= \frac{\partial F}{\partial x} dx + \frac{\partial F}{\partial y} dy + \frac{\partial F}{\partial u} du + \frac{\partial F}{\partial v} dv = 0, \\
dG &= \frac{\partial G}{\partial x} dx + \frac{\partial G}{\partial y} dy + \frac{\partial G}{\partial u} du + \frac{\partial G}{\partial v} dv = 0, \\
dx &= \frac{\partial X}{\partial u} du + \frac{\partial X}{\partial v} dv, \\
dy &= \frac{\partial Y}{\partial u} du + \frac{\partial Y}{\partial v} dv.
\end{aligned}
\]
Substituting $dx$, $dy$ into $dF$ and $dG$, we have:
\[
\begin{aligned}
dF &= \left( \frac{\partial F}{\partial x} \frac{\partial x}{\partial u} + \frac{\partial F}{\partial y} \frac{\partial y}{\partial u} + \frac{\partial F}{\partial u} \right) du + \left( \frac{\partial F}{\partial x} \frac{\partial x}{\partial v} + \frac{\partial F}{\partial y} \frac{\partial y}{\partial v} + \frac{\partial F}{\partial v} \right) dv = 0, \\
dG &= \left( \frac{\partial G}{\partial x} \frac{\partial x}{\partial u} + \frac{\partial G}{\partial y} \frac{\partial y}{\partial u} + \frac{\partial G}{\partial u} \right) du + \left( \frac{\partial G}{\partial x} \frac{\partial x}{\partial v} + \frac{\partial G}{\partial y} \frac{\partial y}{\partial v} + \frac{\partial G}{\partial v} \right) dv = 0.
\end{aligned}
\]
Since $u$, $v$ are both independent, the coefficients of $du$, $dv$ must be zero. So we can write out equations for the coefficients:
\[
\begin{aligned}
\frac{\partial F}{\partial x} \frac{\partial x}{\partial u} + \frac{\partial F}{\partial y} \frac{\partial y}{\partial u} &= -\frac{\partial F}{\partial u}, &
\frac{\partial G}{\partial x} \frac{\partial x}{\partial u} + \frac{\partial G}{\partial y} \frac{\partial y}{\partial u} &= -\frac{\partial G}{\partial u}, \\
\frac{\partial F}{\partial x} \frac{\partial x}{\partial v} + \frac{\partial F}{\partial y} \frac{\partial y}{\partial v} &= -\frac{\partial F}{\partial v}, &
\frac{\partial G}{\partial x} \frac{\partial x}{\partial v} + \frac{\partial G}{\partial y} \frac{\partial y}{\partial v} &= -\frac{\partial G}{\partial v}.
\end{aligned}
\]
Now, by Cramer's rule, we see that:
\[
\frac{\partial x}{\partial u} = \frac{\begin{vmatrix} -\frac{\partial F}{\partial u} & \frac{\partial F}{\partial y} \\ -\frac{\partial G}{\partial u} & \frac{\partial G}{\partial y} \end{vmatrix}}{\begin{vmatrix} \frac{\partial F}{\partial x} & \frac{\partial F}{\partial y} \\ \frac{\partial G}{\partial x} & \frac{\partial G}{\partial y} \end{vmatrix}}.
\]
This is now a formula in terms of two Jacobians:
\[ \frac{\partial x}{\partial u} = -\frac{\left( \dfrac{\partial (F, G)}{\partial (u, y)} \right)}{\left( \dfrac{\partial (F, G)}{\partial (x, y)} \right)}. \]
Similar formulas can be derived for $\dfrac{\partial x}{\partial v}$, $\dfrac{\partial y}{\partial u}$, $\dfrac{\partial y}{\partial v}$.
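A quick symbolic check with SymPy, using the made-up pair $F = x^2 + y^2 - u$ and $G = x + y - v$ (any smooth pair with $\partial(F,G)/\partial(x,y) \neq 0$ would do), compares the Jacobian formula with direct implicit differentiation.

```python
import sympy as sp

x, y, u, v = sp.symbols('x y u v')
F = x**2 + y**2 - u                      # example choice, not from the text
G = x + y - v

def jac(f, g, a, b):
    """Jacobian determinant d(f, g)/d(a, b)."""
    return sp.Matrix([[sp.diff(f, a), sp.diff(f, b)],
                      [sp.diff(g, a), sp.diff(g, b)]]).det()

dxdu_cramer = -jac(F, G, u, y) / jac(F, G, x, y)   # formula from the text

# Direct check: treat x, y as functions X(u, v), Y(u, v) and differentiate implicitly
X, Y = sp.Function('X')(u, v), sp.Function('Y')(u, v)
eqs = [F.subs({x: X, y: Y}), G.subs({x: X, y: Y})]
sol = sp.solve([sp.diff(e, u) for e in eqs], [sp.diff(X, u), sp.diff(Y, u)])
dxdu_direct = sol[sp.diff(X, u)].subs({X: x, Y: y})
print(sp.simplify(dxdu_cramer - dxdu_direct))      # 0
```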

Integer programming

Cramer's rule can be used to prove that an integer programming problem whose constraint matrix is totally unimodular and whose right-hand side is integer has integer basic solutions. This makes the integer program substantially easier to solve.
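A small numerical illustration (example matrix chosen here): for a totally unimodular basis matrix every determinant appearing in Cramer's rule is $0$ or $\pm 1$, so with an integer right-hand side each $x_i = \det(A_i)/\det(A)$ is an integer.

```python
import numpy as np

A = np.array([[1, 1, 0],     # consecutive-ones ("interval") matrix: totally unimodular
              [0, 1, 1],
              [0, 0, 1]], dtype=float)
b = np.array([3.0, 5.0, 2.0])

det_A = round(np.linalg.det(A))          # +1 or -1 for a nonsingular TU basis
x = []
for i in range(3):
    A_i = A.copy()
    A_i[:, i] = b                        # replace column i by b
    x.append(round(np.linalg.det(A_i)) / det_A)
print(det_A, x)                          # 1 [0.0, 3.0, 2.0]  -- all integers
```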

Ordinary differential equations

Cramer's rule is used to derive the general solution to an inhomogeneous linear differential equation by the method of variation of parameters.
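For instance (an example chosen here, not from the text), solving $y'' + y = \sec t$ by variation of parameters leads to a $2\times 2$ linear system for $u_1'$, $u_2'$ whose coefficient matrix is the Wronskian matrix; the sketch below solves it with Cramer's rule using SymPy.

```python
import sympy as sp

t = sp.symbols('t')
y1, y2, g = sp.cos(t), sp.sin(t), sp.sec(t)     # homogeneous solutions and forcing term

W  = sp.Matrix([[y1, y2], [sp.diff(y1, t), sp.diff(y2, t)]])   # Wronskian matrix
W1 = W.copy(); W1[:, 0] = sp.Matrix([0, g])     # replace 1st column by (0, g)
W2 = W.copy(); W2[:, 1] = sp.Matrix([0, g])     # replace 2nd column by (0, g)

u1p = sp.simplify(W1.det() / W.det())           # -tan(t)
u2p = sp.simplify(W2.det() / W.det())           #  1
yp = sp.integrate(u1p, t) * y1 + sp.integrate(u2p, t) * y2
print(sp.simplify(sp.diff(yp, t, 2) + yp - g))  # 0: yp is a particular solution
```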

Geometric interpretation

Cramer's rule has a geometric interpretation that can be regarded as a proof in its own right, or simply as giving insight into its geometric nature. These geometric arguments work in general and not only in the case of two equations with two unknowns presented here.
Given the system of equations
\[
\begin{aligned}
a_{11} x_1 + a_{12} x_2 &= b_1 \\
a_{21} x_1 + a_{22} x_2 &= b_2
\end{aligned}
\]
it can be considered as an equation between vectors
\[ x_1 \binom{a_{11}}{a_{21}} + x_2 \binom{a_{12}}{a_{22}} = \binom{b_1}{b_2}. \]
The area of the parallelogram determined by $\binom{a_{11}}{a_{21}}$ and $\binom{a_{12}}{a_{22}}$ is given by the determinant of the system of equations:
\[ \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}. \]
In general, when there are more variables and equations, the determinant of $n$ vectors of length $n$ will give the volume of the parallelepiped determined by those vectors in the $n$-dimensional Euclidean space.
Therefore, the area of the parallelogram determined by $x_1 \binom{a_{11}}{a_{21}}$ and $\binom{a_{12}}{a_{22}}$ has to be $x_1$ times the area of the first one, since one of the sides has been multiplied by this factor. Now, this last parallelogram, by Cavalieri's principle, has the same area as the parallelogram determined by $x_1 \binom{a_{11}}{a_{21}} + x_2 \binom{a_{12}}{a_{22}} = \binom{b_1}{b_2}$ and $\binom{a_{12}}{a_{22}}$.
Equating the areas of this last and the second parallelogram gives the equation
\[ \begin{vmatrix} b_1 & a_{12} \\ b_2 & a_{22} \end{vmatrix} = \begin{vmatrix} x_1 a_{11} & a_{12} \\ x_1 a_{21} & a_{22} \end{vmatrix} = x_1 \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}, \]
from which Cramer's rule follows.
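A quick numerical check of the area argument (example vectors chosen here): the parallelogram spanned by $\mathbf{b}$ and the second column has $x_1$ times the signed area of the one spanned by the two columns.

```python
import numpy as np

a1, a2 = np.array([2.0, 1.0]), np.array([1.0, 3.0])   # columns of the coefficient matrix
x1, x2 = 1.5, -0.5
b = x1 * a1 + x2 * a2

area_columns  = np.linalg.det(np.column_stack([a1, a2]))   # parallelogram (a1, a2)
area_replaced = np.linalg.det(np.column_stack([b, a2]))    # parallelogram (b, a2)
print(np.isclose(area_replaced, x1 * area_columns))        # True: x1 = det[b a2] / det[a1 a2]
```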

Other proofs

A proof by abstract linear algebra

This is a restatement of the proof above in abstract language.
Consider the map
\[ \mathbf{x} = (x_1, \ldots, x_n) \mapsto \left( \frac{\det(A_1)}{\det A}, \ldots, \frac{\det(A_n)}{\det A} \right), \]
where $A_i$ is the matrix $A$ with $\mathbf{x}$ substituted in the $i$-th column, as in Cramer's rule. Because of the linearity of the determinant in every column, this map is linear. Observe that it sends the $i$-th column of $A$ to the $i$-th basis vector $\mathbf{e}_i = (0, \ldots, 1, \ldots, 0)$ (with 1 in the $i$-th place), because the determinant of a matrix with a repeated column is 0. So we have a linear map which agrees with the inverse of $A$ on the column space; hence it agrees with $A^{-1}$ on the span of the column space. Since $A$ is invertible, the column vectors span the whole space, so our map really is the inverse of $A$. Cramer's rule follows.

A short proof

A short proof of Cramer's rule can be given by noticing that $x_1$ is the determinant of the matrix
\[
X_1 = \begin{pmatrix}
x_1 & 0 & 0 & \cdots & 0 \\
x_2 & 1 & 0 & \cdots & 0 \\
x_3 & 0 & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
x_n & 0 & 0 & \cdots & 1
\end{pmatrix}.
\]
On the other hand, assuming that our original matrix $A$ is invertible, this matrix $X_1$ has columns $A^{-1}\mathbf{b}, A^{-1}\mathbf{v}_2, \ldots, A^{-1}\mathbf{v}_n$, where $\mathbf{v}_k$ is the $k$-th column of the matrix $A$. Recall that the matrix $A_1$ has columns $\mathbf{b}, \mathbf{v}_2, \ldots, \mathbf{v}_n$, and therefore $X_1 = A^{-1} A_1$. Hence, by using that the determinant of the product of two matrices is the product of the determinants, we have
\[ x_1 = \det(X_1) = \det(A^{-1}) \det(A_1) = \frac{\det(A_1)}{\det(A)}. \]
The proof for the other $x_j$ is similar.
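The factorization $X_1 = A^{-1} A_1$ behind this proof is easy to check numerically; the sketch below uses random example data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)
x = np.linalg.solve(A, b)

X1 = np.eye(n); X1[:, 0] = x                   # identity with x in the first column
A1 = A.copy();  A1[:, 0] = b                   # A with b in the first column

print(np.allclose(A @ X1, A1))                 # True: A X1 = A1, i.e. X1 = A^{-1} A1
print(np.isclose(np.linalg.det(X1), x[0]))     # True: det(X1) = x_1
print(np.isclose(x[0], np.linalg.det(A1) / np.linalg.det(A)))  # Cramer's rule for x_1
```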

Proof using Clifford algebra

Consider the system of three scalar equations in three unknown scalars $x_1, x_2, x_3$
\[
\begin{aligned}
a_{11} x_1 + a_{12} x_2 + a_{13} x_3 &= c_1 \\
a_{21} x_1 + a_{22} x_2 + a_{23} x_3 &= c_2 \\
a_{31} x_1 + a_{32} x_2 + a_{33} x_3 &= c_3
\end{aligned}
\]
and assign an orthonormal vector basis $\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3$ for $\mathbb{R}^3$.
Let the vectors
\[
\begin{aligned}
\mathbf{a}_1 &= a_{11} \mathbf{e}_1 + a_{21} \mathbf{e}_2 + a_{31} \mathbf{e}_3, \\
\mathbf{a}_2 &= a_{12} \mathbf{e}_1 + a_{22} \mathbf{e}_2 + a_{32} \mathbf{e}_3, \\
\mathbf{a}_3 &= a_{13} \mathbf{e}_1 + a_{23} \mathbf{e}_2 + a_{33} \mathbf{e}_3, \\
\mathbf{c} &= c_1 \mathbf{e}_1 + c_2 \mathbf{e}_2 + c_3 \mathbf{e}_3.
\end{aligned}
\]
Adding the system of equations, it is seen that
\[ \mathbf{c} = x_1 \mathbf{a}_1 + x_2 \mathbf{a}_2 + x_3 \mathbf{a}_3. \]
Using the exterior product, each unknown scalar $x_k$ can be solved as
\[
x_1 = \frac{\mathbf{c} \wedge \mathbf{a}_2 \wedge \mathbf{a}_3}{\mathbf{a}_1 \wedge \mathbf{a}_2 \wedge \mathbf{a}_3}, \qquad
x_2 = \frac{\mathbf{a}_1 \wedge \mathbf{c} \wedge \mathbf{a}_3}{\mathbf{a}_1 \wedge \mathbf{a}_2 \wedge \mathbf{a}_3}, \qquad
x_3 = \frac{\mathbf{a}_1 \wedge \mathbf{a}_2 \wedge \mathbf{c}}{\mathbf{a}_1 \wedge \mathbf{a}_2 \wedge \mathbf{a}_3}.
\]
For $n$ equations in $n$ unknowns, the solution for the $k$-th unknown $x_k$ generalizes to
\[ x_k = \frac{\mathbf{a}_1 \wedge \cdots \wedge (\mathbf{c})_k \wedge \cdots \wedge \mathbf{a}_n}{\mathbf{a}_1 \wedge \cdots \wedge \mathbf{a}_k \wedge \cdots \wedge \mathbf{a}_n}. \]
If the $\mathbf{a}_k$ are linearly independent, then the $x_k$ can be expressed in determinant form identical to Cramer's rule as
\[
x_k = \frac{\begin{vmatrix} a_{11} & \cdots & c_1 & \cdots & a_{1n} \\ \vdots & & \vdots & & \vdots \\ a_{n1} & \cdots & c_n & \cdots & a_{nn} \end{vmatrix}}{\begin{vmatrix} a_{11} & \cdots & a_{1k} & \cdots & a_{1n} \\ \vdots & & \vdots & & \vdots \\ a_{n1} & \cdots & a_{nk} & \cdots & a_{nn} \end{vmatrix}},
\]
where $(\mathbf{c})_k$ denotes the substitution of the vector $\mathbf{a}_k$ with the vector $\mathbf{c}$ in the $k$-th numerator position.

Incompatible and indeterminate cases

A system of equations is said to be incompatible or inconsistent when there are no solutions and it is called indeterminate when there is more than one solution. For linear equations, an indeterminate system will have infinitely many solutions, since the solutions can be expressed in terms of one or more parameters that can take arbitrary values.
Cramer's rule applies to the case where the coefficient determinant is nonzero. In the 2×2 case, if the coefficient determinant is zero, then the system is incompatible if the numerator determinants are nonzero, or indeterminate if the numerator determinants are zero.
For 3×3 or higher systems, the only thing one can say when the coefficient determinant equals zero is that if any of the numerator determinants are nonzero, then the system must be incompatible. However, having all determinants zero does not imply that the system is indeterminate. A simple example where all determinants vanish but the system is still incompatible is the 3×3 system x+y+z=1, x+y+z=2, x+y+z=3.
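The sketch below checks this example numerically: all four determinants vanish, yet the ranks of the coefficient matrix and the augmented matrix differ, so the system has no solution.

```python
import numpy as np

A = np.ones((3, 3))                       # coefficient matrix of x + y + z = 1, 2, 3
b = np.array([1.0, 2.0, 3.0])

dets = [np.linalg.det(A)]
for i in range(3):
    A_i = A.copy()
    A_i[:, i] = b
    dets.append(np.linalg.det(A_i))
print(np.allclose(dets, 0.0))                                  # True: all determinants are 0
print(np.linalg.matrix_rank(A),                                # 1
      np.linalg.matrix_rank(np.column_stack([A, b])))          # 2  -> incompatible
```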