Eigenvalues and eigenvectors in Python
You can compute eigenvalues and eigenvectors in Python both symbolically, with SymPy, and numerically, with the NumPy function numpy.linalg.eig. Below we give two examples: \[A=\matrix{1 & -3 & 3 \\ 3 & -5 & 3\\ 6 & -6 & 4}\quad\text{and}\quad B=\matrix{-3 & 1 & -1 \\ -7 & 5 & -1\\ -6 & 6 & -2}\] These two matrices have the same characteristic polynomial \((t+2)^2(t - 4)\), but \(A\) is diagonalizable while \(B\) is not. The reason is that the eigenspace of \(A\) for eigenvalue \(-2\) is two-dimensional (it contains two eigenvectors that are not multiples of each other), while the corresponding eigenspace of \(B\) is only one-dimensional.
>>> import sympy as sy
>>> sy.init_printing(use_unicode=True) # for pretty printing
>>> A = sy.Matrix([[1, -3, 3], [3, -5, 3], [6, -6, 4]]); A
⎡1 -3 3⎤
⎢ ⎥
⎢3 -5 3⎥
⎢ ⎥
⎣6 -6 4⎦
>>> B = sy.Matrix([[-3, 1, -1], [-7, 5, -1], [-6, 6, -2]]); B
⎡-3 1 -1⎤
⎢ ⎥
⎢-7 5 -1⎥
⎢ ⎥
⎣-6 6 -2⎦
>>> t = sy.symbols('t')
>>> sy.factor(A.charpoly(t)) # characteristic polynomial of A in t
2
(t - 4)⋅(t + 2)
>>> A.eigenvals() # eigenvalues of A with algebraic multiplicities
{-2: 2, 4: 1}
>>> A.eigenvects() # eigenvectors and eigenvalues of A
⎡⎛ ⎡⎡1⎤ ⎡-1⎤⎤⎞ ⎛ ⎡⎡1/2⎤⎤⎞⎤
⎢⎜ ⎢⎢ ⎥ ⎢ ⎥⎥⎟ ⎜ ⎢⎢ ⎥⎥⎟⎥
⎢⎜-2, 2, ⎢⎢1⎥, ⎢0 ⎥⎥⎟, ⎜4, 1, ⎢⎢1/2⎥⎥⎟⎥
⎢⎜ ⎢⎢ ⎥ ⎢ ⎥⎥⎟ ⎜ ⎢⎢ ⎥⎥⎟⎥
⎣⎝ ⎣⎣0⎦ ⎣1 ⎦⎦⎠ ⎝ ⎣⎣ 1 ⎦⎦⎠⎦
>>> sy.factor(B.charpoly(t)) # characteristic polynomial of B in t
2
(t - 4)⋅(t + 2)
>>> B.eigenvals() # eigenvalues of B
{-2: 2, 4: 1}
>>> B.eigenvects() # eigenvectors and eigenvalues of B
⎡⎛ ⎡⎡1⎤⎤⎞ ⎛ ⎡⎡0⎤⎤⎞⎤
⎢⎜ ⎢⎢ ⎥⎥⎟ ⎜ ⎢⎢ ⎥⎥⎟⎥
⎢⎜-2, 2, ⎢⎢1⎥⎥⎟, ⎜4, 1, ⎢⎢1⎥⎥⎟⎥
⎢⎜ ⎢⎢ ⎥⎥⎟ ⎜ ⎢⎢ ⎥⎥⎟⎥
⎣⎝ ⎣⎣0⎦⎦⎠ ⎝ ⎣⎣1⎦⎦⎠⎦
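SymPy can also test the diagonalizability claim directly, without inspecting the eigenvectors by hand. A minimal sketch using SymPy's `is_diagonalizable` and `diagonalize` methods:

```python
import sympy as sy

A = sy.Matrix([[1, -3, 3], [3, -5, 3], [6, -6, 4]])
B = sy.Matrix([[-3, 1, -1], [-7, 5, -1], [-6, 6, -2]])

# A has two independent eigenvectors for eigenvalue -2, so it is diagonalizable
print(A.is_diagonalizable())  # True
# B has only one, so it is not
print(B.is_diagonalizable())  # False

# For A we can compute P and D with A = P*D*P**(-1)
P, D = A.diagonalize()
print(D)                                  # diagonal matrix of the eigenvalues
print(sy.simplify(P * D * P.inv() - A))   # zero matrix: the factorization checks out
```

The columns of P are exactly the eigenvectors returned by A.eigenvects() above, and D carries the corresponding eigenvalues on its diagonal.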
Numerical computations are more efficient, but the output looks different: instead of exact values you get floating-point approximations, and the eigenvectors are returned normalized to unit length.
>>> import numpy as np
>>> import numpy.linalg as la
>>> A = np.array([[1, -3, 3], [3, -5, 3], [6, -6, 4]]); print(A)
[[ 1 -3 3]
[ 3 -5 3]
[ 6 -6 4]]
>>> B = np.array([[-3, 1, -1], [-7, 5, -1], [-6, 6, -2]]); print(B)
[[-3 1 -1]
[-7 5 -1]
[-6 6 -2]]
>>> t, v = la.eig(A) # eigenvectors and eigenvalues of A
>>> print(t) # eigenvalues of A
[ 4. -2. -2.]
>>> print(v) # eigenvectors of A as columns in a matrix
[[-0.40824829 -0.81034214 0.1932607 ]
[-0.40824829 -0.31851537 -0.59038328]
[-0.81649658 0.49182677 -0.78364398]]
>>> Avmintv = A@(v[:,0])-t[0]*v[:,0]; print(Avmintv) # check 1st eigenvector
[-8.8817842e-16 -4.4408921e-16 -8.8817842e-16]
>>> Avmintv = np.round(Avmintv); print(Avmintv)
[-0. -0. -0.]
>>> t, v = la.eig(B) # eigenvectors and eigenvalues of B
>>> print(t) # eigenvalues of B
[ 4. -1.99999996 -2.00000004]
>>> t = np.round(t, 6)
>>> print(t) # eigenvalues of B
[ 4. -2. -2.]
>>> print(v) # eigenvectors of B as columns in a matrix
[[ 8.80956890e-17 -7.07106781e-01 7.07106781e-01]
[-7.07106781e-01 -7.07106781e-01 7.07106781e-01]
[-7.07106781e-01 2.63837509e-08 2.63837487e-08]]
>>> v = np.round(v, 3)
>>> print(v)
[[ 0. -0.707 0.707]
[-0.707 -0.707 0.707]
[-0.707 0. 0. ]]
In this case you must notice by yourself that the last two columns of v are, up to rounding error, multiples of each other: the eigenspace of \(B\) for eigenvalue \(-2\) is spanned by a single eigenvector, even though the numerical output appears to list two distinct ones, which differ only by numerical rounding error.
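One way to detect this programmatically is to look at the numerical rank of the eigenvector matrix: for a diagonalizable matrix the columns of v are linearly independent, while for a defective matrix like \(B\) two columns are nearly parallel. A sketch, where the tolerance 1e-6 is an assumption chosen to absorb rounding errors of the order 1e-8 seen above:

```python
import numpy as np
import numpy.linalg as la

A = np.array([[1, -3, 3], [3, -5, 3], [6, -6, 4]])
B = np.array([[-3, 1, -1], [-7, 5, -1], [-6, 6, -2]])

_, vA = la.eig(A)
_, vB = la.eig(B)

# A is diagonalizable: its eigenvector matrix has full rank 3
print(la.matrix_rank(vA, tol=1e-6))
# B is defective: two of the computed eigenvectors are nearly parallel,
# so with this tolerance the rank drops to 2
print(la.matrix_rank(vB, tol=1e-6))
```

Without an explicit tolerance, matrix_rank would still report rank 3 for vB, because the default tolerance is far smaller than the rounding error in the nearly parallel columns.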