Under certain conditions we can calculate with matrices. Here we discuss addition and scalar multiplication, as well as reflection about the main diagonal (transposition).
If \(A\) and \(B\) are matrices of the same size, then the sum matrix \(A+B\) is the matrix obtained by adding corresponding elements. Just as for numbers, this operation is called addition.
Written out in coordinates this definition reads as follows:
Let \(A\) and \(B\) both be \(m\times n\) matrices with elements \(a_{ij}\) and \(b_{ij}\), respectively. For \( 1\leq i\leq m\) and \(1\leq j\leq n\) define \[
c_{ij}=a_{ij}+b_{ij}
\] The \(m\times n\) matrix \(C\) with elements \(c_{ij}\) is the sum of the matrices \(A\) and \(B\).
Below are two examples of matrix sums.
\[\begin{aligned}\matrix{0 & 2 & 1 \\ 0 & 2 & 1 \\ 4 & 1 & 2 \\ } + \matrix{2 & 2 & 4 \\ 2 & 1 & 3 \\ 3 & 1 & 5 \\ } &= \matrix{0+2 & 2+2 & 1+4 \\ 0+2 & 2+1 & 1+3 \\ 4+3 & 1+1 & 2+5 \\ } \\ \\ &= \matrix{2 & 4 & 5 \\ 2 & 3 & 4 \\ 7 & 2 & 7 \\ }\end{aligned}\] \[\matrix{0 & 2 & 1 \\ 0 & 2 & 1 \\ 4 & 1 & 2 \\ } + \matrix{2 & 2 \\ 2 & 1 \\ 3 & 1 \\ } \text{does not exist because the matrix sizes differ.}\]
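The element-wise definition above can also be illustrated in code. The following sketch uses NumPy; the arrays and names are chosen here purely for illustration and are not part of the text.

```python
import numpy as np

A = np.array([[0, 2, 1],
              [0, 2, 1],
              [4, 1, 2]])
B = np.array([[2, 2, 4],
              [2, 1, 3],
              [3, 1, 5]])

# The sum is defined because A and B have the same size (3x3):
# each element of the result is the sum of the corresponding elements.
print(A + B)
# [[2 4 5]
#  [2 3 4]
#  [7 2 7]]

# Adding a 3x3 and a 3x2 matrix is not defined; NumPy raises an error.
C = np.array([[2, 2],
              [2, 1],
              [3, 1]])
try:
    A + C
except ValueError as err:
    print("not defined:", err)
```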
Addition of matrices satisfies the following two properties, where \(A=(a_{ij})\), \(B=(b_{ij})\) and \(C=(c_{ij})\) are three \(m\times n\) matrices:
\[
\begin{array}{ll}
A+B=B+A & (\textit{commutativity}) \\
(A+B)+C=A+(B+C) & (\textit{associativity})
\end{array}
\]
Both rules are a direct result of the same rules for numbers:
- The first property, commutativity of matrix addition, follows directly from the fact that addition of numbers is commutative: if #c_{ij}=a_{ij}+b_{ij}#, then also #c_{ij}=b_{ij}+a_{ij}#, so \[A+B = (a_{ij})+(b_{ij})=(c_{ij}) = (b_{ij}) + (a_{ij}) = B+A\]
- The second property, associativity of matrix addition, we establish as follows. First we observe that \(A+B\) and \(B+C\), and hence #(A+B)+C# and #A+(B+C)#, have equal size, namely \(m\times n\). Next, we observe that at position \(ij\) the matrix \((A+B)+C\) has the element \[((A+B)+C)_{ij}=(A+B)_{ij} +c_{ij}=(a_{ij}+b_{ij})+c_{ij}\] and that the matrix \(A+(B+C)\) has the element \(a_{ij}+(b_{ij}+c_{ij})\); these two numbers are equal because addition of numbers is associative.
Associativity enables us to talk about \(A+B+C\) without specifying how we determine the matrix: as \((A+B)+C\) or as \(A+(B+C)\). The result is the same either way.
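As a quick numerical illustration of these two rules (a sketch only, with arbitrarily chosen matrices), one can check them with NumPy:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, -1], [2, 5]])
C = np.array([[7, 1], [-2, 0]])

# Commutativity: the order of the summands does not matter.
print(np.array_equal(A + B, B + A))              # True

# Associativity: the grouping of the additions does not matter.
print(np.array_equal((A + B) + C, A + (B + C)))  # True
```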
If \(A\) is a matrix and \(\lambda\) a number, then \(\lambda\cdot A\), or simply #\lambda A#, is the matrix obtained by multiplying all the elements of \(A\) by \(\lambda\). We call this operation the scalar multiplication of the scalar #\lambda# by the matrix #A#, and the result the scalar product.
If #\lambda = -1#, we often write #-A# rather than #(-1)\, A#. This matrix is called the opposite matrix of #A#.
Below is an example of the scalar multiplication of a matrix by a number.
\[\begin{aligned} -4 \matrix{5 & 3 \\ 4 & 1 \\ } &= \matrix{\left(-4\right)\cdot 5 & \left(-4\right)\cdot 3 \\ \left(-4\right)\cdot 4 & \left(-4\right)\cdot 1 \\ }\\ \\ &= \matrix{-20 & -12 \\ -16 & -4 \\ }\end{aligned}\]
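The same computation can be carried out in NumPy; this is an illustrative sketch only, with the matrix taken from the example above.

```python
import numpy as np

A = np.array([[5, 3],
              [4, 1]])

# Scalar multiplication: every element of A is multiplied by -4.
print(-4 * A)
# [[-20 -12]
#  [-16  -4]]

# The opposite matrix -A is the scalar product of -1 and A.
print(-A)
```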
The following calculation rules apply to scalar multiplication.
If \(A\) and \(B\) are matrices of equal size, and \(\lambda\) and \(\mu\) are scalars, then:
\[
\begin{array}{rl}
1\,A\!\!\! & =A \\
(\lambda+\mu)\,A\!\!\! &= \lambda\, A+\mu\, A \\
\lambda\,(A+B)\!\!\! &= \lambda A+\lambda B \\
\lambda(\mu\, A)\!\!\! &= (\lambda\, \mu)\, A
\end{array}
\]
Addition and scalar multiplication of matrices both act element by element: at each position, the familiar addition of numbers or multiplication by a number takes place. Thus, the rules above follow directly from the corresponding well-known calculation rules for numbers.
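The four rules can also be verified numerically; the following NumPy sketch (with arbitrarily chosen matrices and scalars) checks each of them element by element:

```python
import numpy as np

A = np.array([[1, -2, 0],
              [3,  4, 5]])
B = np.array([[2,  2, -1],
              [0, -3,  6]])
lam, mu = 2, -3

print(np.array_equal(1 * A, A))                           # 1 A = A
print(np.array_equal((lam + mu) * A, lam * A + mu * A))   # (lambda + mu) A = lambda A + mu A
print(np.array_equal(lam * (A + B), lam * A + lam * B))   # lambda (A + B) = lambda A + lambda B
print(np.array_equal(lam * (mu * A), (lam * mu) * A))     # lambda (mu A) = (lambda mu) A
```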
For vectors, we have defined a scalar multiplication. If #A# is a #1\times n# matrix (a row vector of length #n#) or an #m\times 1# matrix (a column vector of length #m#), then the scalar multiplication of #\lambda# by #A# corresponds to the scalar multiplication of vectors.
The transposed matrix of \(A\), denoted as \(A^{\top}\), is the matrix obtained by reflecting \(A\) about its main diagonal; in other words, the #(i,j)#-element of \(A^{\top}\) is \(a_{ji}\). If \(A\) is an \(m\times n\) matrix, then \(A^{\top}\) is an \(n\times m\) matrix. Other commonly used ways of denoting the transposed matrix of #A# are \(A^{t}\) and \(A'\).
Below are two examples of transposed matrices.
\[\matrix{0 & 4 & 4 \\ 1 & 0 & 4 \\ }^{\!\top} = \matrix{0 & 1 \\ 4 & 0 \\ 4 & 4 \\ }\qquad\text{and}\qquad \matrix{0 \\ 4 \\ }^{\!\top}=\matrix{0 & 4 \\ }\]
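In NumPy, transposition is available directly; the sketch below reproduces the two examples above (for illustration only).

```python
import numpy as np

A = np.array([[0, 4, 4],
              [1, 0, 4]])

# Transposition swaps rows and columns: a 2x3 matrix becomes 3x2.
print(A.T)
# [[0 1]
#  [4 0]
#  [4 4]]

# A 2x1 column vector transposes to a 1x2 row vector.
v = np.array([[0],
              [4]])
print(v.T)   # [[0 4]]
```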
For matrices \(A\) and \(B\) of equal size and each scalar \(\lambda\) we have \[
\begin{array}{rl}
(A+B)^{\top}\!\!\! & = A^{\top}+B^{\top} \\
(\lambda A)^{\top}\!\!\! & = \lambda A^{\top} \\
(A^{\top})^{\top}\!\!\! & =A
\end{array}
\]
As an example we prove the second rule. The #(i,j)#-element of #\lambda A# is #\lambda\cdot a_{ij}#, so the #(i,j)#-element of #\left(\lambda A\right)^{\top}# is #\lambda\cdot a_{ji}#, but that is also the #(i,j)#-element of #\lambda A^{\top}#, so #\left(\lambda A\right)^{\top}=\lambda A^{\top}#.
The proofs of the other rules are similar.
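For a concrete check of the three rules, the following NumPy sketch (with arbitrarily chosen data) verifies each identity element by element:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])
B = np.array([[0, -1, 2],
              [7,  3, 1]])
lam = 3

print(np.array_equal((A + B).T, A.T + B.T))    # (A + B)^T = A^T + B^T
print(np.array_equal((lam * A).T, lam * A.T))  # (lambda A)^T = lambda A^T
print(np.array_equal(A.T.T, A))                # (A^T)^T = A
```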
A symmetric matrix is a square matrix that is equal to its transpose. An antisymmetric matrix (also called a skew-symmetric matrix) is a square matrix that is equal to the opposite of its transpose.
In other words, a matrix \(A\) is symmetric if \(A^{\top}=A\), and antisymmetric if \(A^{\top}=-A\).
Below are two examples: a symmetric matrix, and an antisymmetric matrix.
\(\matrix{4 & 6 & 2 \\ 6 & 2 & 3 \\ 2 & 3 & 10 \\ }\) is a symmetric matrix because \(\matrix{4 & 6 & 2 \\ 6 & 2 & 3 \\ 2 & 3 & 10 \\ }^{\!\top} = \matrix{4 & 6 & 2 \\ 6 & 2 & 3 \\ 2 & 3 & 10 \\ }\).
\(\matrix{0 & -4 & 2 \\ 4 & 0 & -1 \\ -2 & 1 & 0 \\ }\) is an antisymmetric matrix because \(\matrix{0 & -4 & 2 \\ 4 & 0 & -1 \\ -2 & 1 & 0 \\ }^{\!\top} =\matrix{0 & 4 & -2 \\ -4 & 0 & 1 \\ 2 & -1 & 0 \\ }=-\matrix{0 & -4 & 2 \\ 4 & 0 & -1 \\ -2 & 1 & 0 \\ }\).
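Both checks can also be carried out in NumPy; the sketch below tests the defining equations \(A^{\top}=A\) and \(A^{\top}=-A\) for the two example matrices.

```python
import numpy as np

S = np.array([[4, 6,  2],
              [6, 2,  3],
              [2, 3, 10]])
K = np.array([[ 0, -4,  2],
              [ 4,  0, -1],
              [-2,  1,  0]])

# Symmetric: the matrix equals its transpose.
print(np.array_equal(S.T, S))    # True
# Antisymmetric: the transpose equals the opposite matrix.
print(np.array_equal(K.T, -K))   # True
```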