Systems of differential equations: Linear systems of differential equations
Reflection on the solution method
The differential equation of exponential growth \[\frac{\dd y}{\dd t}=r\cdot y\] has the solution \[y(t)=C\cdot e^{r\cdot t}\] with constant \(C\). The exponential function can be written as a power series: \[e^{x}=1+x+\frac{1}{2!}x^2+\frac{1}{3!}x^3+\cdots=\sum_{k=0}^{\infty}\frac{1}{k!}x^k\]
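The power series converges for every \(x\), and its partial sums can be evaluated directly. As a numerical illustration (a sketch, not part of the text; the function name `exp_series` is our own), one can accumulate the terms \(x^k/k!\) iteratively:

```python
import math

def exp_series(x, terms=30):
    """Approximate e^x by the partial sum of x^k / k! for k = 0 .. terms-1."""
    total = 0.0
    term = 1.0  # holds x^k / k!, starting with k = 0
    for k in range(terms):
        total += term
        term *= x / (k + 1)  # next term: multiply by x and divide by (k+1)
    return total

print(exp_series(1.0))  # close to e = 2.71828...
```

Note that each term is obtained from the previous one by a single multiplication, which avoids computing factorials explicitly.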
We consider again a pair of coupled homogeneous linear first-order differential equations with constant coefficients of the form \[\left\{\begin{aligned} \frac{\dd x}{\dd t} &= a\, x+ b\, y\\[0.25cm] \frac{\dd y}{\dd t} &= c\, x + d\, y\end{aligned}\right.\] and write it in the matrix-vector form \[\frac{\dd}{\dd t}\cv{x\\y}=\matrix{a & b\\ c & d}\cv{x\\y }\] We denote the matrix by \[A=\matrix{a & b\\ c & d}\] We thus obtain an equation of the form \[\frac{\dd}{\dd t}\vec{v}=A\vec{v}\] It is tempting to write the solution as \[\vec{v}=e^{tA}\vec{C}\] with a certain vector \(\vec{C}\) of constants. But what is meant by the exponential function applied to a matrix, and how do you calculate it?
The answer to the first question comes from the series expansion of the exponential function:
Definition of exp(A) For a square matrix \(A\) we can define \(\exp(A)=e^A\) as \[e^A=I+\sum_{k=1}^{\infty}\frac{1}{k!}A^k\]
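This definition can be turned directly into a (naive) computation by truncating the series. Below is a minimal numerical sketch, assuming numpy; the function name `expm_series` is our own, and in practice a library routine such as `scipy.linalg.expm` is the more robust choice:

```python
import numpy as np

def expm_series(A, terms=30):
    """Approximate e^A = I + sum_{k>=1} A^k / k! by a truncated series."""
    n = A.shape[0]
    result = np.eye(n)
    term = np.eye(n)  # holds A^k / k!, starting with k = 0
    for k in range(1, terms):
        term = term @ A / k   # update A^(k-1)/(k-1)! -> A^k/k!
        result = result + term
    return result

# For a diagonal matrix the result is e applied entrywise to the diagonal:
print(expm_series(np.diag([1.0, 2.0])))
```

As the next paragraph shows analytically, for a diagonal matrix \(\Lambda\) the series collapses to the scalar exponential on each diagonal entry.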
Calculation of exp(A) For the calculation of \(e^A\) we first consider once again the case where \(A\) is a \(2\times 2\) diagonal matrix, say \(A=\matrix{\lambda_1 & 0\\ 0 & \lambda_2}\). Then \[\begin{aligned}e^A &=I + \sum_{k=1}^{\infty}\frac{1}{k!}\matrix{\lambda_1 & 0\\ 0 & \lambda_2}^k\\ \\ &= I + \sum_{k=1}^{\infty}\frac{1}{k!}\matrix{\lambda_1^k & 0\\ 0 & \lambda_2^k} \\ \\ &= \matrix{\sum_{k=0}^{\infty}\frac{1}{k!}\lambda_1^k & 0\\ 0 & \sum_{k=0}^{\infty}\frac{1}{k!}\lambda_2^k } \\ \\ &= \matrix{e^{\lambda_1} & 0\\ 0 & e^{\lambda_2}}\end{aligned} \] When \(A\) can be brought via a similarity transformation \(T\) into the diagonal form \(\Lambda\), say \(T^{-1}AT=\Lambda\), then you can use \( (T^{-1}AT)^k = T^{-1}A^k T\) and understand that: \[T^{-1}e^AT=e^{T^{-1}AT}=e^{\Lambda}\] Thus: \[e^A=Te^{\Lambda}T^{-1}\] From linear algebra we already know how to determine \(T\) when the two eigenvalues are different, or if there is only one eigenvalue with a 2-dimensional eigenspace: write the eigenvectors as columns in the matrix \(T\). We do not discuss here the case of one eigenvalue with a one-dimensional eigenspace, but the computational work is then doable, too.
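The recipe \(e^A=Te^{\Lambda}T^{-1}\) can be sketched numerically as well. Below is a minimal example, assuming numpy and a diagonalizable matrix \(A\); the function name `expm_diag` is our own:

```python
import numpy as np

def expm_diag(A):
    """Compute e^A = T e^Lambda T^{-1}, assuming A is diagonalizable."""
    eigvals, T = np.linalg.eig(A)      # eigenvalues and eigenvectors (as columns of T)
    e_Lambda = np.diag(np.exp(eigvals))  # exponential of the diagonal form
    return T @ e_Lambda @ np.linalg.inv(T)

# Example: A = [[0, 1], [1, 0]] has eigenvalues 1 and -1,
# and e^A = [[cosh(1), sinh(1)], [sinh(1), cosh(1)]].
A = np.array([[0.0, 1.0], [1.0, 0.0]])
print(expm_diag(A))
```

Here `np.linalg.eig` returns the eigenvalues together with a matrix whose columns are the corresponding eigenvectors, exactly the matrix \(T\) described above; the non-diagonalizable case (one eigenvalue with a one-dimensional eigenspace) would need a different approach, as noted in the text.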