### Linear mappings: Matrices and coordinate transformations

### Similar matrices

Two square matrices \(A\) and \(B\) are **similar** if there exists an invertible matrix \(T\) such that \(B=T^{-1}A\,T\).
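As a quick illustration, similarity is easy to explore numerically. The following minimal numpy sketch (the matrices \(A\) and \(T\) are illustrative choices of our own) forms \(B=T^{-1}A\,T\) and checks that \(A\) and \(B\) share their eigenvalues:

```python
import numpy as np

# Illustrative matrices of our own choosing; any invertible T works.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
T = np.array([[1.0, 2.0],
              [1.0, 1.0]])   # det(T) = -1, so T is invertible

# B is similar to A by construction.
B = np.linalg.inv(T) @ A @ T

# Similar matrices share their eigenvalues.
print(np.sort(np.linalg.eigvals(A)))
print(np.sort(np.linalg.eigvals(B)))
```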

In fact, in the previous section we established the following theorem.

Two matrices represent the same linear mapping if and only if they are similar.

When we solve a problem about a linear mapping, we often try to find a 'pretty' matrix corresponding to the given linear mapping; 'pretty' means, for example, 'in diagonal form'.

Let \(A\) be a matrix corresponding to a linear mapping \(L\). Then \(L\) is called **diagonalisable** if and only if there exists an invertible matrix \(T\) such that \(T^{-1}A\,T\) is a diagonal matrix.
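When \(A\) has a basis of eigenvectors, such a \(T\) can be built by putting those eigenvectors in its columns. A minimal numpy sketch (the matrix \(A\) is an illustrative choice of our own):

```python
import numpy as np

# Illustrative matrix with two distinct eigenvalues (5 and 2),
# hence diagonalisable.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# np.linalg.eig returns the eigenvalues and a matrix T whose
# columns are corresponding eigenvectors.
eigenvalues, T = np.linalg.eig(A)

# With eigenvectors as the columns of T, T^{-1} A T is diagonal,
# with the eigenvalues on the diagonal.
D = np.linalg.inv(T) @ A @ T
print(np.round(D, 10))
```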

Not every linear mapping or matrix is diagonalisable. For example, \(\matrix{1 & 1\\ 0 & 1}\) is not diagonalisable. The following theorem gives an example of a class of matrices that are diagonalisable.
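The failure can be made concrete: the matrix above is triangular, so its only eigenvalue is \(1\), and a diagonalising \(T\) would need two independent eigenvectors for this eigenvalue. But the eigenspace \(\ker(A-I)\) is only one-dimensional, as the following numpy check (a sketch, with \(A\) the matrix from the text) confirms:

```python
import numpy as np

# The matrix from the text; triangular, so its only eigenvalue is 1.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# Dimension of the eigenspace ker(A - 1*I), via rank-nullity.
eigenspace_dim = 2 - np.linalg.matrix_rank(A - np.eye(2))
print(eigenspace_dim)  # 1: no basis of eigenvectors, so A is not diagonalisable
```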

Every real symmetric matrix is diagonalisable.
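For a real symmetric matrix, the diagonalising \(T\) can even be chosen orthogonal, so that \(T^{-1}=T^{\top}\). A minimal numpy sketch (the symmetric matrix is an illustrative choice of our own), using `np.linalg.eigh`, which is intended for symmetric input:

```python
import numpy as np

# Illustrative real symmetric matrix.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# For symmetric input, eigh returns real eigenvalues and an
# orthogonal eigenvector matrix T, so T^{-1} = T^T.
eigenvalues, T = np.linalg.eigh(A)

D = T.T @ A @ T
print(np.round(D, 10))  # diagonal, with the eigenvalues on the diagonal
```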

Finally, we note that some functions yield the same value on similar matrices; the determinant of a matrix is such a function:

\[\det(T^{-1}A\,T)=\det(A)\] for every square matrix \(A\) and every invertible matrix \(T\) of the same dimension as \(A\). This follows from the multiplicativity of the determinant: \(\det(T^{-1}A\,T)=\det(T^{-1})\det(A)\det(T)=\det(A)\), since \(\det(T^{-1})=1/\det(T)\).
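This invariance is easy to confirm numerically. A sketch with randomly generated matrices (our own illustrative choice; a random Gaussian matrix is invertible with probability one):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
T = rng.standard_normal((4, 4))  # generically invertible

# det is multiplicative, so det(T^{-1} A T) = det(A).
lhs = np.linalg.det(np.linalg.inv(T) @ A @ T)
print(np.isclose(lhs, np.linalg.det(A)))  # True
```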

Another function with this property is the **trace** of a matrix, denoted as \(\text{tr}(A)\) and defined by \[\text{tr}\matrix{a_{11} & a_{12} & \ldots & a_{1n}\\ a_{21} & a_{22} & \ldots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots\\ a_{n1} & a_{n2} & \ldots & a_{nn}}=a_{11}+a_{22}+\cdots+a_{nn}=\sum_{i=1}^n a_{ii}\]

For the trace of a matrix, the following applies:

\[\text{tr}(T^{-1}A\,T)=\text{tr}(A)\] for every square matrix \(A\) and every invertible matrix \(T\) of the same dimension as \(A\); this follows from the identity \(\text{tr}(XY)=\text{tr}(YX)\), applied with \(X=T^{-1}\) and \(Y=A\,T\).
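The same kind of numerical check works for the trace (again with illustrative random matrices of our own):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
T = rng.standard_normal((4, 4))  # generically invertible

# tr(XY) = tr(YX), hence tr(T^{-1} A T) = tr(A T T^{-1}) = tr(A).
lhs = np.trace(np.linalg.inv(T) @ A @ T)
print(np.isclose(lhs, np.trace(A)))  # True
```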