Linear Algebra
Orthogonal Projection and Orthonormal Bases
Motivation When dealing with vectors, we often wish to know how much one vector is composed of another. This description fits well with the properties of the dot product. An intuitive way to measure this is the orthogonal projection. For example, imagine we have two vectors, a vector $\vec{a}$ and a vector $\vec{b}$, and we wish to project the vector $\vec{a}$ onto $\vec{b}$. We can visualize the orthogonal projection in the following way. First, we turn on a light and point it perpendicular to the vector we are projecting onto (vector $\vec{b}$) and place it behind the vector we are projecting (vector $\vec{a}$). After turning on the light, vector $\vec{a}$ will cast a shadow onto vector $\vec{b}$, and this shadow corresponds to the projection. An example of the projection can be seen in the figure below; here we can think of the green vector as the aforementioned shadow.
Projections
Figure: A visualization of the orthogonal projection of the vector $\vec{a}$ onto the vector $\vec{b}$, denoted as $\mathrm{proj}_{\vec{b}}\,\vec{a}$. The black dashed line visually represents the projection operation.
Formally, a projection on a vector space is a linear operator $P$ such that $P^2 = P$. In other words, projections are operators which do nothing new if applied more than once. In the context of the orthogonal projection shown in the figure above, once we project the vector $\vec{a}$ onto $\vec{b}$, performing another projection will yield the same vector $\mathrm{proj}_{\vec{b}}\,\vec{a}$.
Orthogonal projection This definition of a projection is very general, and orthogonal projections are only a subset of possible projections. Namely, orthogonal projections are operators which additionally satisfy the property $P^{\top} = P$. Orthogonal projections also do not have to be projections of vectors onto another vector. For example, we could project a vector in $\mathbb{R}^{3}$ onto the $xy$-plane using the following operator:

$$P = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}.$$
Exercise Verify that the operator $P$ defined above is indeed an orthogonal projection, i.e. that it satisfies both $P^2 = P$ and $P^{\top} = P$.
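If we want to check this numerically rather than by hand, a minimal NumPy sketch might look like the following (the variable names and the example vector are chosen only for illustration):

```python
import numpy as np

# Orthogonal projection onto the xy-plane (the operator from the text).
P = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]])

# A projection must be idempotent: applying it twice changes nothing.
assert np.allclose(P @ P, P)

# An orthogonal projection must additionally be symmetric.
assert np.allclose(P.T, P)

# Acting on an example vector simply drops the z-component.
v = np.array([3.0, -2.0, 5.0])
print(P @ v)  # [ 3. -2.  0.]
```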
Orthogonal projection on a vector However, in most cases we will be interested in orthogonal projections onto vectors. As motivated at the beginning of the section, we denote the projection of the vector $\vec{a}$ onto the vector $\vec{b}$ as $\mathrm{proj}_{\vec{b}}\,\vec{a}$. The projection itself is a vector pointing in the same direction as $\vec{b}$, whose length is the length of the component of $\vec{a}$ which is parallel to $\vec{b}$. For this reason, we can also denote the projection as $\vec{a}_{\parallel}$, in which the symbol $\parallel$ denotes that this is the component of the vector $\vec{a}$ which is parallel to $\vec{b}$. From trigonometry, we know that the magnitude of the projection is equal to

$$\lVert \mathrm{proj}_{\vec{b}}\,\vec{a} \rVert = \lVert \vec{a} \rVert \cos\theta = \frac{\vec{a} \cdot \vec{b}}{\lVert \vec{b} \rVert},$$

where $\theta$ is the angle between $\vec{a}$ and $\vec{b}$. Since the projection points in the direction of $\vec{b}$, the projection itself is

$$\mathrm{proj}_{\vec{b}}\,\vec{a} = \frac{\vec{a} \cdot \vec{b}}{\lVert \vec{b} \rVert^{2}}\,\vec{b}.$$
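For concreteness, here is a minimal NumPy sketch of this formula; the function name `proj` and the example vectors are assumptions chosen purely for illustration:

```python
import numpy as np

def proj(a, b):
    """Orthogonal projection of vector a onto vector b."""
    return (np.dot(a, b) / np.dot(b, b)) * b

a = np.array([2.0, 3.0])
b = np.array([4.0, 0.0])

p = proj(a, b)
print(p)  # [2. 0.]  -> the component of a parallel to b

# The magnitude matches ||a|| cos(theta) = (a . b) / ||b||.
print(np.linalg.norm(p), np.dot(a, b) / np.linalg.norm(b))  # 2.0 2.0
```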
Orthogonal projection on a unit vector In most cases, however, we will project onto unit vectors (vectors whose magnitude is equal to $1$), as this allows for neater calculations. We can make any vector have unit length through a process called normalization. In order to normalize a vector, we divide it by its magnitude. If we denote the normalized version of the vector $\vec{b}$ as $\hat{b} = \vec{b} / \lVert \vec{b} \rVert$, then the orthogonal projection onto the unit vector can be rewritten as

$$\mathrm{proj}_{\hat{b}}\,\vec{a} = (\vec{a} \cdot \hat{b})\,\hat{b}.$$
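Continuing the same illustrative example, a short sketch of normalization and of projecting onto the resulting unit vector (again, the example values are assumptions, not part of the original text):

```python
import numpy as np

a = np.array([2.0, 3.0])
b = np.array([4.0, 0.0])

# Normalize b by dividing it by its magnitude.
b_hat = b / np.linalg.norm(b)
print(np.linalg.norm(b_hat))  # 1.0

# Projection onto the unit vector: (a . b_hat) * b_hat.
p = np.dot(a, b_hat) * b_hat
print(p)  # [2. 0.]  -> same result as projecting onto b itself
```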
Exercise We have discussed that the orthogonal projection is a linear operator, and we know that linear operators can be written in matrix form. As an exercise, verify that the orthogonal projection of an arbitrary vector $\vec{a}$ onto a normalized vector $\hat{b}$ can be written as:

$$\mathrm{proj}_{\hat{b}}\,\vec{a} = \left(\hat{b}\,\hat{b}^{\top}\right)\vec{a},$$

i.e. that the matrix of the operator is the outer product $\hat{b}\,\hat{b}^{\top}$.
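The following sketch only checks the claim numerically for one example vector rather than proving it; it builds the matrix as an outer product and confirms it behaves like the projection above:

```python
import numpy as np

a = np.array([2.0, 3.0])
b = np.array([4.0, 0.0])
b_hat = b / np.linalg.norm(b)

# Matrix of the projection operator: the outer product b_hat b_hat^T.
P = np.outer(b_hat, b_hat)

# Applying the matrix reproduces the projection formula...
assert np.allclose(P @ a, np.dot(a, b_hat) * b_hat)

# ...and P has the defining properties of an orthogonal projection.
assert np.allclose(P @ P, P)
assert np.allclose(P.T, P)
```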
Orthonormal basis
Orthonormal basis In linear algebra, an orthonormal basis is a special type of basis that has two important properties: all vectors have unit lengths, and all basis vectors are perpendicular to each other. The name orthonormal means orthogonal and normalized at the same time.
A vector as a linear combination of an orthonormal basis Formally, let's denote the orthonormal basis in $\mathbb{R}^{n}$ as a set $\{\hat{e}_{1}, \hat{e}_{2}, \ldots, \hat{e}_{n}\}$. Then, these properties can be written compactly as:

$$\hat{e}_{i} \cdot \hat{e}_{j} = \delta_{ij} = \begin{cases} 1, & i = j \\ 0, & i \neq j, \end{cases}$$

where $\delta_{ij}$ is the Kronecker delta. Any vector $\vec{v}$ can then be written as a linear combination of the basis vectors, where each coefficient is the dot product of $\vec{v}$ with the corresponding basis vector:

$$\vec{v} = \sum_{i=1}^{n} (\vec{v} \cdot \hat{e}_{i})\,\hat{e}_{i}.$$
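As a concrete check, here is a small sketch using one assumed example basis of $\mathbb{R}^2$ (the standard basis rotated by 45 degrees): it verifies the orthonormality conditions and reconstructs a vector from its expansion coefficients.

```python
import numpy as np

# An example orthonormal basis of R^2: the standard basis rotated by 45 degrees.
e1 = np.array([1.0, 1.0]) / np.sqrt(2)
e2 = np.array([-1.0, 1.0]) / np.sqrt(2)
basis = [e1, e2]

# Orthonormality: e_i . e_j equals 1 when i == j and 0 otherwise.
for i, ei in enumerate(basis):
    for j, ej in enumerate(basis):
        expected = 1.0 if i == j else 0.0
        assert np.isclose(np.dot(ei, ej), expected)

# Expansion coefficients are dot products with the basis vectors,
# and summing the weighted basis vectors reconstructs the original vector.
v = np.array([3.0, -2.0])
coeffs = [np.dot(v, e) for e in basis]
reconstructed = sum(c * e for c, e in zip(coeffs, basis))
assert np.allclose(reconstructed, v)
print(coeffs)  # [0.7071..., -3.5355...]
```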
Summary In linear algebra, an orthogonal projection measures how much one vector is composed of another. An orthogonal projection is a linear operator that projects vectors onto a subspace and does nothing new if applied more than once. An orthonormal basis is a special type of basis with two important properties: all basis vectors have unit length, and all basis vectors are perpendicular to each other. Orthonormal bases provide a consistent way to represent vectors and allow for simpler calculations in certain cases. For example, the coefficients in the expansion of an arbitrary vector in an orthonormal basis are equal to the dot product of the vector with the corresponding basis vector.