The remaining question is how to compute the eigenvalues of a matrix. The answer was already hinted at in the earlier examples, where we computed eigenvectors for a given eigenvalue.
Let \(A\) be a square matrix and \(\lambda\) a scalar. Then the following statements are equivalent:
- \(\lambda\) is an eigenvalue of \(A\).
- \(\text{ker}(A-\lambda I)\neq \{\vec{0}\}\).
- \(\det(A-\lambda I)=0\).
When \(\lambda\) is an eigenvalue, the eigenspace of \(\lambda\), denoted by \(E_{\lambda}\), is equal to the kernel of \(A-\lambda I\).
Let \(\lambda\) be an eigenvalue of the square matrix \(A\). Then there is a column vector \(\vec{v}\), unequal to the zero vector, such that \(A\vec{v}=\lambda \vec{v}\); we have called such a vector an eigenvector and \(\lambda\) the corresponding eigenvalue. In other words, \((A-\lambda I) \vec{v}=\vec{0}\). The matrix \(A-\lambda I\) must therefore be singular (non-invertible): if it were invertible, then only the zero vector would be mapped onto the zero vector, contradicting \(\vec{v}\neq\vec{0}\). Finally, \(A-\lambda I\) is singular if and only if \(\det(A-\lambda I)=0\). The eigenspace \(E_{\lambda}\) consists precisely of the solutions of \((A-\lambda I)\vec{v}=\vec{0}\), that is, it is equal to the kernel of \(A-\lambda I\).
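To see the three equivalent statements at work, here is a minimal sympy sketch; the matrix \(A\) below is our own illustrative choice, not one from the text.

```python
import sympy as sp

# Illustrative (hypothetical) matrix; any square matrix works here.
A = sp.Matrix([[2, 1],
               [1, 2]])
lam = sp.symbols('lambda')

# Statement 3: the scalars lambda with det(A - lambda*I) = 0.
char_poly = (A - lam * sp.eye(2)).det()
eigenvalues = sp.solve(sp.Eq(char_poly, 0), lam)   # [1, 3]

# Statements 1 and 2: for each such scalar, ker(A - lambda*I)
# contains a nonzero vector, i.e. an eigenvector.
for ev in eigenvalues:
    basis = (A - ev * sp.eye(2)).nullspace()       # nonempty basis of E_lambda
    print(ev, basis)
```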
Let \(A\) be a square matrix. Then \(\det(A-\lambda I)\) is called the characteristic polynomial of \(A\), and the equation \(\det(A-\lambda I)=0\) is called the characteristic equation of \(A\).
In some linear algebra books, the characteristic polynomial of a matrix \(A\) is defined as \(\det(\lambda I-A)\). This is equally valid: the two polynomials differ only by a factor \((-1)^n\), where \(n\) is the size of \(A\), so they have the same roots and hence yield the same eigenvalues.
The above statement about an eigenvalue translates into the following theorem.
Let \(A\) be a square matrix and \(\lambda\) a scalar. Then \(\lambda\) is an eigenvalue of \(A\) if and only if it is a solution of the characteristic equation, in other words, if and only if it is a root of the characteristic polynomial.
The algebraic multiplicity of an eigenvalue is its multiplicity as a root of the characteristic polynomial. The geometric multiplicity of an eigenvalue is the dimension of its eigenspace, that is, the maximal number of linearly independent eigenvectors for that eigenvalue.
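The two multiplicities need not be equal. A minimal sympy sketch with a shear matrix (our own example, not from the text) exhibits an eigenvalue with algebraic multiplicity \(2\) but geometric multiplicity \(1\):

```python
import sympy as sp

# Hypothetical shear matrix: eigenvalue 1 is a double root of the
# characteristic polynomial, yet its eigenspace is only a line.
A = sp.Matrix([[1, 1],
               [0, 1]])
lam = sp.symbols('lambda')

char_poly = (A - lam * sp.eye(2)).det()     # (1 - lambda)**2
print(sp.roots(char_poly, lam))             # {1: 2} -> algebraic multiplicity 2
print(len((A - sp.eye(2)).nullspace()))     # 1     -> geometric multiplicity 1
```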
In order to find the eigenvalues of a matrix, it suffices to determine the characteristic polynomial and calculate all its roots. We give a few examples.
Consider in \(\mathbb{R}^3\) the orthogonal projection \(P\) onto the base plane (the \(x,y\)-plane).
In other words, the projection is the matrix mapping \(L_P\) determined by the matrix \[P=\matrix{1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 0}\] Determine the eigenvalues and eigenspaces of \(P\).
The characteristic equation is \[\det\matrix{1-\lambda & 0 & 0\\ 0 & 1-\lambda & 0\\ 0 & 0 & -\lambda}=0\] that is, \[-\lambda(1-\lambda)^2=0\] with roots \(\lambda =0\) (a simple root) and \(\lambda=1\) (a double root). The eigenspace for \(\lambda=0\) is the set of solutions of the system \[\matrix{1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 0}\cv{x\\y\\z}=\cv{0\\0\\0}\] and consists of the vectors whose first two components are equal to zero. This is a line, the \(z\)-axis: \[E_{0}=\sbspmatrix{\cv{0\\0\\1}}\] The eigenspace for \(\lambda=1\) is the set of solutions of the system \[\matrix{0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & -1}\cv{x\\y\\z}=\cv{0\\0\\0}\] and consists of the vectors whose third component is equal to zero. This is the plane with equation \(z=0\): \[E_{1}=\sbspmatrix{\cv{1\\0\\0}, \cv{0\\1\\0}}\]
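As a check on this worked example, sympy reproduces the eigenvalues, their algebraic multiplicities, and bases of the eigenspaces found above (a sketch; any CAS with an eigenvector routine would do):

```python
import sympy as sp

# The projection matrix from the example above.
P = sp.Matrix([[1, 0, 0],
               [0, 1, 0],
               [0, 0, 0]])

# Each triple is (eigenvalue, algebraic multiplicity, eigenspace basis).
for ev, mult, basis in P.eigenvects():
    print(ev, mult, [list(v) for v in basis])
# 0 1 [[0, 0, 1]]
# 1 2 [[1, 0, 0], [0, 1, 0]]
```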