Linear Algebra
Determinants
In mathematics, a determinant is a scalar value that is a function of a matrix $A$, denoted as $\det(A)$ or $|A|$. Intuitively, we can think of a determinant as a measure of how much the unit area enclosed by the original vectors changes under a transformation. To visualize this, let's assume that we have a transformation of the following form:

$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, \qquad \det(A) = ad - bc.$$
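As a minimal numerical sketch of this intuition (using NumPy, with an arbitrarily chosen illustrative matrix), we can check that the absolute value of the determinant equals the factor by which the transformation scales the area of the unit square:

```python
import numpy as np

# A 2x2 transformation matrix (chosen arbitrarily for illustration).
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])

# The unit square is spanned by the basis vectors e1 and e2. After the
# transformation, it becomes the parallelogram spanned by the columns
# of A, whose area is |det(A)| = |3*2 - 1*1| = 5.
area_scaling = abs(np.linalg.det(A))
print(area_scaling)
```

Any region of the plane, not just the unit square, has its area scaled by this same factor.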
In this simple example, we were dealing with a $2 \times 2$ matrix, so we were calculating the areas of parallelograms. In the case of a $3 \times 3$ matrix, we would be calculating the ratio of volumes of the parallelepipeds spanned by the vectors, and generally, for an $n \times n$ matrix, we would calculate the $n$-dimensional volume. Now, let's come up with some constraints which naturally follow from the intuition provided above.
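For the 3D case, a short sketch (again using NumPy, with arbitrary illustrative vectors) can cross-check the determinant-as-volume claim against the classical scalar triple product formula for the volume of a parallelepiped:

```python
import numpy as np

# Three vectors spanning a parallelepiped (arbitrary illustrative values).
a = np.array([1.0, 0.0, 1.0])
b = np.array([0.0, 2.0, 0.0])
c = np.array([1.0, 1.0, 3.0])

# The 3x3 determinant of the matrix with these vectors as rows gives
# the signed volume of the parallelepiped they span.
vol_det = abs(np.linalg.det(np.array([a, b, c])))

# Cross-check with the scalar triple product |a . (b x c)|.
vol_triple = abs(np.dot(a, np.cross(b, c)))
print(vol_det, vol_triple)  # both approximately 4
```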
Firstly, we can see that determinants can only be defined for square matrices. To justify this claim, let's think about how non-square matrices act on vectors. Let's imagine we have a mapping from $\mathbb{R}^n$ to $\mathbb{R}^m$ (i.e. we have an $m \times n$ matrix). This means that the input vector is $n$-dimensional, while the output vector is $m$-dimensional. If we wish to calculate the ratio of volumes, we very quickly run into a problem. In the original space, we had an $n$-dimensional volume, while in the output space, we have an $m$-dimensional volume, and we cannot calculate the ratio of the two because we are dealing with different kinds of objects. For example, if $n = 3$ and $m = 2$, then the determinant would be the ratio of a volume and an area, which does not make sense. So, the first constraint is that only square matrices have a determinant.
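Numerical libraries enforce this constraint. As a sketch (using NumPy), asking for the determinant of a non-square matrix raises an error:

```python
import numpy as np

# A 2x3 matrix maps 3-dimensional vectors to 2-dimensional vectors.
M = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

# Asking for its determinant fails, consistent with the argument above:
# there is no meaningful ratio between an n-dimensional volume and an
# m-dimensional one.
try:
    np.linalg.det(M)
except np.linalg.LinAlgError as e:
    print("no determinant:", e)
```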
Secondly, not only does the matrix have to be square, it also has to be invertible in order to have a nonzero determinant. To see why this is the case, let's assume that the matrix $A$ isn't invertible. As we know, a matrix $A$ being invertible implies that there is a transformation $A^{-1}$ such that $A^{-1}A = I$, i.e. there exists a matrix which uniquely undoes the transformation $A$. Visually, if a matrix is not invertible, then some vectors are mapped to the same axis in the output space, i.e. we cannot disentangle them. For example, imagine a $3 \times 3$ matrix that maps two basis vectors to the same axis (i.e. their images are collinear). This mapping isn't invertible, because we cannot distinguish input vectors that were mapped to the same axis in the output space, so there is no unique way to map output vectors back to the input space. We can visualize this as taking a box and folding it so it "crumples" into a flat, 2D-shaped object. Although this is a mapping from 3D space to 3D space, two of the dimensions coincide, so we effectively have a 3D-to-2D mapping, and we know from the previous constraint that in this case we cannot calculate the determinant. Although this condition can be shown more formally, this is the main intuition behind it. To summarize, the second constraint is that the matrix whose determinant we wish to calculate has to be invertible (otherwise its determinant is zero).
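We can sketch this numerically as well (using NumPy, with an arbitrary singular matrix): a matrix with two collinear columns flattens 3D space onto a plane, its determinant is zero, and attempting to invert it fails:

```python
import numpy as np

# A 3x3 matrix whose first two columns are collinear: it "flattens"
# 3D space onto a plane, so it cannot be inverted.
A = np.array([[1.0, 2.0, 0.0],
              [2.0, 4.0, 0.0],
              [0.0, 0.0, 1.0]])

# The image of the unit cube has zero 3D volume, so det(A) = 0.
print(np.linalg.det(A))

# Inversion fails: there is no unique way back to the input space.
try:
    np.linalg.inv(A)
except np.linalg.LinAlgError:
    print("matrix is singular, no inverse exists")
```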
We have shown a formula for calculating the determinant of a $2 \times 2$ matrix, but how would we generalize this to $n \times n$ matrices? There is a general formula, but it is quite elaborate; in practice, you will almost never calculate determinants of large matrices by hand (only in theoretical work).
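To give a sense of why the general formula is elaborate: one standard form of it (the Leibniz formula) sums over all $n!$ permutations of the columns. The sketch below implements it directly and compares it against a numerical library; it is for illustration only, since the factorial cost makes it hopeless for large $n$:

```python
import numpy as np
from itertools import permutations

def det_leibniz(A):
    """Determinant via the Leibniz formula: a signed sum over all n!
    permutations. Illustrative only: far too slow for large n."""
    n = A.shape[0]
    total = 0.0
    for perm in permutations(range(n)):
        # Sign of the permutation: +1 if it has an even number of
        # inversions, -1 if odd.
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        prod = 1.0
        for i, p in enumerate(perm):
            prod *= A[i, p]
        total += sign * prod
    return total

A = np.random.default_rng(0).random((4, 4))
print(det_leibniz(A), np.linalg.det(A))  # agree up to rounding
```

In practice, libraries compute determinants via an LU factorization instead, which costs $O(n^3)$ rather than $O(n \cdot n!)$.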
Summary
In mathematics, the determinant of a matrix is a scalar value that measures how much the unit area (or volume) spanned by the original vectors changes under a transformation. We have intuitively explained why it can only be defined for square matrices, and why it is nonzero only for invertible ones. The formula for the determinant of an arbitrary $n \times n$ matrix is quite elaborate, but in practice it is usually computed with numerical libraries.