Matrices
Computing determinants
We describe a number of properties of the determinant of a matrix that help in calculations.
Let \(A\) be a square matrix. Then:
- if \(A\) contains a null row or a null column, then \(\det(A)=0\);
- if \(A\) contains two identical rows or two identical columns, then \(\det(A)=0\);
- if \(A\) is an upper or lower triangular matrix, then \(\det(A)\) is equal to the product of the diagonal elements;
- \(\det(A^{\top})=\det(A)\).
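These properties can be checked numerically. The sketch below uses NumPy's `numpy.linalg.det` (an external tool, not part of the text) on small sample matrices:

```python
import numpy as np

# A null row forces det(A) = 0.
A = np.array([[1.0, 2.0],
              [0.0, 0.0]])

# Two identical columns force det(B) = 0.
B = np.array([[1.0, 1.0],
              [3.0, 3.0]])

# For a triangular matrix, det equals the product of the diagonal entries.
T = np.array([[2.0, 5.0,  1.0],
              [0.0, 3.0,  4.0],
              [0.0, 0.0, -1.0]])

# det(A^T) = det(A).
M = np.array([[3.0, -2.0],
              [4.0,  7.0]])

print(np.isclose(np.linalg.det(A), 0.0))                    # True
print(np.isclose(np.linalg.det(B), 0.0))                    # True
print(np.isclose(np.linalg.det(T), np.prod(np.diag(T))))    # True (both -6)
print(np.isclose(np.linalg.det(M.T), np.linalg.det(M)))     # True
```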
Let \(B\) be a matrix that is obtained from the square matrix \(A\) by
- multiplying a row or column of \(A\) by a scalar \(c\); then \(\det(B)=c\cdot\det(A)\);
- interchanging two rows or two columns of \(A\), then \(\det(B)=-\det(A)\);
- adding a scalar multiple of a row (column) of \(A\) to another row (column), then \(\det(B)=\det(A)\).
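The effect of each of the three elementary operations can be verified on a random matrix; a minimal NumPy sketch (the matrix and the scalars are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 6, size=(4, 4)).astype(float)
d = np.linalg.det(A)

# Multiplying a row by c multiplies the determinant by c.
B = A.copy()
B[1] *= 3.0
assert np.isclose(np.linalg.det(B), 3.0 * d)

# Interchanging two rows changes the sign of the determinant.
B = A.copy()
B[[0, 2]] = B[[2, 0]]
assert np.isclose(np.linalg.det(B), -d)

# Adding a scalar multiple of one row to another leaves it unchanged.
B = A.copy()
B[3] += 2.0 * B[0]
assert np.isclose(np.linalg.det(B), d)

print("all three rules confirmed")
```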
The rules above describe how the determinant changes under elementary row and column operations. If you keep track of these changes consistently during reduction and arrive, for example, at an upper or lower triangular form, then the determinant is easy to calculate. If during this reduction you reach a row or column that has a \(1\) at position \((i,j)\) and zeros elsewhere in that row or column, then the determinant equals \((-1)^{i+j}\) times the determinant of the matrix obtained by omitting the \(i\)th row and \(j\)th column.
We give an example of the calculation of a determinant.
Rows and columns are referred to as \(R_1,\ldots, R_4\) and \(C_1,\ldots, C_4\).
\[\begin{array}{rlll} \left|\begin{array}{rrrr}3 & -2 & -5 & 4\\ -5 & 2 & 8 & -5\\ -2 & 4 & 7 & -3\\ 2 & -3 & -5 & 8\end{array}\right| &=& \left|\begin{array}{rrrr}1 & 2 & 2 & 1\\ -5 & 2 & 8 & -5\\ -2 & 4 & 7 & -3\\ 0 & 1 & 2 & 5\end{array}\right| & \blue{\begin{array}{l}R_1+R_3\\ \\ \\ R_4+R_3\end{array}}\\ \\
&=& \left|\begin{array}{rrrr}1 & 2 & 2 & 1\\ 0 & 12 & 18 & 0\\ 0 & 8 & 11 & -1\\ 0 & 1 & 2 & 5\end{array}\right| & \blue{\begin{array}{l} \\ R_2+5R_1\\ R_3+2R_1\\ \\ \end{array}}\\ \\
&=& \left|\begin{array}{rrr} 12 & 18 & 0\\ 8 & 11 & -1\\ 1 & 2 & 5\end{array}\right| & \blue{\text{expansion along }C_1}\\ \\
&=& 6\cdot \left|\begin{array}{rrr} 2 & 3 & 0\\ 8 & 11 & -1\\ 1 & 2 & 5\end{array}\right| & \blue{\text{factor }6\text{ out of }R_1}\\ \\
&=& 6\cdot \left|\begin{array}{rrr} 2 & 0 & 0\\ 8 & -1 & -1\\ 1 & \frac{1}{2} & 5\end{array}\right| & \blue{\begin{array}{l}\\ C_2\rightarrow C_2-\frac{3}{2}C_1 \\ \\ \end{array}}\\ \\
&=& 6\cdot 2\cdot \left|\begin{array}{rr} -1 & -1\\ \frac{1}{2} & 5\end{array}\right| & \blue{\text{expansion along }R_1}\\ \\
&=& 6\cdot 2\cdot \left(-1\cdot 5- (-1)\cdot \dfrac{1}{2}\right) & \\
&=& 6\cdot 2\cdot (-\dfrac{9}{2}) & \\
&=& -54 &
\end{array}\]
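The outcome of the reduction above can be double-checked numerically; a quick sketch using NumPy as an external check, independent of the hand derivation:

```python
import numpy as np

# The original 4x4 matrix from the worked example.
A = np.array([[ 3, -2, -5,  4],
              [-5,  2,  8, -5],
              [-2,  4,  7, -3],
              [ 2, -3, -5,  8]], dtype=float)

# A numerical routine should reproduce the hand-computed value.
print(round(np.linalg.det(A)))  # -54
```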
The following theorems are two of the most important theorems about determinants.
Let \(A\) be an \(n\times n\) matrix. Then the following statements are equivalent:
- \(A\) is invertible.
- \(\text{rank}(A)=n\), i.e., the equation \(A\vec{x}=\vec{0}\) has only \(\vec{0}\) as a solution.
- \(\det(A)\neq 0\).
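The equivalence of the three statements can be illustrated on a small example; in the sketch below (using NumPy, with arbitrarily chosen matrices) an invertible matrix passes all three tests and a singular one fails all three:

```python
import numpy as np

# An invertible 2x2 matrix: full rank, nonzero determinant, inverse exists.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(np.linalg.matrix_rank(A))           # 2 = n
print(abs(np.linalg.det(A)) > 1e-12)      # True: det is nonzero (here -2)
print(np.linalg.inv(A) is not None)       # inverse can be computed

# A singular matrix (second row is twice the first) fails all three.
S = np.array([[1.0, 2.0],
              [2.0, 4.0]])
print(np.linalg.matrix_rank(S))           # 1 < n
print(np.isclose(np.linalg.det(S), 0.0))  # True
```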
Let \(A\) and \(B\) be square matrices of the same size. Then: \[\det(A\,B)=\det(A)\cdot \det(B).\]
If \(A\) is invertible, then it follows from this theorem that \[\det(A^{-1})=\bigl(\det(A)\bigr)^{-1}= \dfrac{1}{\det(A)}.\]
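Both the product rule and the resulting formula for \(\det(A^{-1})\) can be checked numerically; a short NumPy sketch with two arbitrarily chosen matrices:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])   # det(A) = 6
B = np.array([[1.0, 4.0],
              [-2.0, 5.0]])  # det(B) = 13

# det(AB) = det(A) * det(B)
assert np.isclose(np.linalg.det(A @ B),
                  np.linalg.det(A) * np.linalg.det(B))

# det(A^{-1}) = 1 / det(A)
assert np.isclose(np.linalg.det(np.linalg.inv(A)),
                  1.0 / np.linalg.det(A))

print("product rule and inverse rule confirmed")
```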
The following theorem shows that some properties of the determinant can be generalized to matrices whose structure has been defined by submatrices.
The determinant of a square matrix of the form \[M = \matrix{A&C\\ 0&B}\] where \(A\) and \(B\) are square submatrices, and \(C\) is an arbitrary submatrix of appropriate size, is equal to the product of the determinants of the two submatrices along the diagonal: \[\det(M) = \det(A)\cdot\det(B)\]
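The block-triangular rule can be illustrated by assembling such a matrix explicitly; a sketch using NumPy's `numpy.block` with arbitrarily chosen \(2\times 2\) blocks:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])   # det(A) = 5
B = np.array([[4.0, 2.0],
              [1.0, 1.0]])   # det(B) = 2
C = np.array([[7.0, -1.0],
              [0.0, 6.0]])   # arbitrary block; its entries do not matter

# Assemble M = [[A, C], [0, B]] and compare with det(A) * det(B).
M = np.block([[A, C],
              [np.zeros((2, 2)), B]])
assert np.isclose(np.linalg.det(M),
                  np.linalg.det(A) * np.linalg.det(B))  # both equal 10

print("block-triangular rule confirmed")
```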