We are quite used to elementary operations on numbers, such as addition, multiplication, and division. Not all of them can be sensibly extended to the vector case. Still, we will find some of them quite useful, namely:
- Vector addition
- Multiplication by a scalar
- Inner product
Vector addition Addition is defined for pairs of vectors having the same dimension. If $\mathbf{u}, \mathbf{v} \in \mathbb{R}^n$, we define:

$$\mathbf{u} + \mathbf{v} = [u_1 + v_1, u_2 + v_2, \ldots, u_n + v_n]^T$$
For instance,

$$\begin{bmatrix} 1 \\ 2 \end{bmatrix} + \begin{bmatrix} 3 \\ -1 \end{bmatrix} = \begin{bmatrix} 4 \\ 1 \end{bmatrix}$$
Since vector addition boils down to elementwise addition, it enjoys commutativity and associativity:

$$\mathbf{u} + \mathbf{v} = \mathbf{v} + \mathbf{u}, \qquad (\mathbf{u} + \mathbf{v}) + \mathbf{w} = \mathbf{u} + (\mathbf{v} + \mathbf{w}),$$
and we may add an arbitrary number of vectors by just summing up their elements.
We will denote by $\mathbf{0}$ (note the boldface character) a vector whose elements are all 0; sometimes, we might make notation a bit clearer by making the number of elements explicit, as in $\mathbf{0}_n$. Of course, $\mathbf{u} + (-\mathbf{u}) = \mathbf{0}$.
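As a side illustration (not part of the original text), the following minimal sketch uses NumPy to show elementwise vector addition, its commutativity, and the zero vector:

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, -1.0, 0.5])

# Elementwise addition: each component of the sum is u_i + v_i.
s = u + v

# Commutativity and the zero vector 0_n.
assert np.allclose(u + v, v + u)
assert np.allclose(u + (-u), np.zeros(3))
```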
Multiplication by a scalar Given a scalar $\alpha \in \mathbb{R}$ and a vector $\mathbf{v} \in \mathbb{R}^n$, we define the product of a vector and a scalar:

$$\alpha \mathbf{v} = [\alpha v_1, \alpha v_2, \ldots, \alpha v_n]^T$$
For instance,

$$2 \begin{bmatrix} 3 \\ -1 \\ 2 \end{bmatrix} = \begin{bmatrix} 6 \\ -2 \\ 4 \end{bmatrix}$$
Again, familiar properties of multiplication carry over to multiplication by a scalar.
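A minimal NumPy sketch (again an illustration outside the original text, with arbitrary numbers) showing componentwise scalar multiplication and two of these familiar properties, distributivity over vector addition and over scalar addition:

```python
import numpy as np

alpha, beta = 2.0, -0.5
u = np.array([1.0, 2.0])
v = np.array([3.0, -1.0])

# Scalar multiplication acts componentwise: alpha * v = [alpha*v1, alpha*v2].
print(alpha * v)  # [ 6. -2.]

# Distributivity, just as for ordinary numbers.
assert np.allclose(alpha * (u + v), alpha * u + alpha * v)
assert np.allclose((alpha + beta) * v, alpha * v + beta * v)
```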
Now, what about defining a product between vectors, provided that they are of the same dimension? It is tempting to think of componentwise vector multiplication. Unfortunately, this idea does not lead to an operation with properties similar to ordinary products between scalars. For instance, if we multiply two nonzero numbers, we get a nonzero number. This is not the case for vectors. To see why, consider the vectors

$$\mathbf{u} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \qquad \mathbf{v} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$$
If we multiply them componentwise, we get the two-dimensional zero vector $\mathbf{0}_2$. This, by the way, does not allow us to define division in a sensible way. Nevertheless, we may define a useful concept of product between vectors, the inner product.
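To see this numerically, here is a small NumPy check (illustrative only): the componentwise product of two nonzero vectors can be the zero vector, which is why no sensible division can be built on it.

```python
import numpy as np

u = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])

# Componentwise product of two nonzero vectors...
print(u * v)  # [0. 0.]  ...is the zero vector 0_2.
```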
Inner product The inner product is defined by multiplying vectors componentwise and summing the resulting terms:

$$\mathbf{u} \cdot \mathbf{v} = \sum_{i=1}^{n} u_i v_i$$
Since the inner product is denoted by a dot, it is also known as the dot product; you might also find a notation like $\mathbf{u}^T \mathbf{v}$ or $\langle \mathbf{u}, \mathbf{v} \rangle$. The inner product yields a scalar. Clearly, an inner product is defined only for vectors of the same dimension. For instance,

$$\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} \cdot \begin{bmatrix} 4 \\ 0 \\ -2 \end{bmatrix} = 1 \cdot 4 + 2 \cdot 0 + 3 \cdot (-2) = -2$$
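A NumPy sketch of the same computation (an illustration, not from the original text); np.dot and the @ operator both compute the inner product of one-dimensional arrays:

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 0.0, -2.0])

# Inner (dot) product: multiply componentwise, then sum the terms.
print(np.dot(u, v))   # -2.0
print(u @ v)          # same result
print(np.sum(u * v))  # the literal "componentwise product, then sum"
```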
One could wonder why such a product should be useful at all. In fact, the inner product is one of the most pervasive concepts in mathematics, with far-reaching consequences quite beyond the scope of this book. Nevertheless, a few geometric examples suggest why the inner product is so useful.
Example 3.5 (Euclidean length, norm, and distance) The inner-product concept is related to the concept of vector length. We know from elementary geometry that the length of a vector is the square root of the sum of its squared elements. Consider the two-dimensional vector $\mathbf{v} = [2, -1]^T$, depicted in Fig. 3.4. Its length (in the Euclidean sense) is just $\sqrt{2^2 + (-1)^2} = \sqrt{5}$. We may generalize the idea to n dimensions by defining the Euclidean norm of a vector:

$$\|\mathbf{v}\| = \sqrt{\mathbf{v} \cdot \mathbf{v}} = \sqrt{\sum_{i=1}^{n} v_i^2}$$
The Euclidean norm is a function mapping vectors to nonnegative real numbers, built on the inner product. Alternative definitions of norm can be proposed, provided that they preserve the intuitive properties associated with vector length. By the same token, we may consider the distance between two points. Referring again to a two-dimensional case, if we are given two points in the plane,

$$\mathbf{x} = [x_1, x_2]^T, \qquad \mathbf{y} = [y_1, y_2]^T,$$

their Euclidean distance is

$$d(\mathbf{x}, \mathbf{y}) = \sqrt{(x_1 - y_1)^2 + (x_2 - y_2)^2}.$$

This can be related to the norm and the inner product:

$$d(\mathbf{x}, \mathbf{y}) = \|\mathbf{x} - \mathbf{y}\| = \sqrt{(\mathbf{x} - \mathbf{y}) \cdot (\mathbf{x} - \mathbf{y})}.$$

In such a case, we are measuring distance by assigning the same importance to each dimension (component of the difference vector). Later, we will generalize distance by allowing for a vector of weights $\mathbf{w}$, such as in

$$d_{\mathbf{w}}(\mathbf{x}, \mathbf{y}) = \sqrt{\sum_{i=1}^{n} w_i (x_i - y_i)^2}.$$
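The following NumPy sketch (illustrative, with made-up numbers) checks the norm of $\mathbf{v} = [2, -1]^T$ against the inner-product formula, and then computes a plain and a weighted Euclidean distance; the weight vector w is a hypothetical example.

```python
import numpy as np

v = np.array([2.0, -1.0])

# Euclidean norm: sqrt(v . v), here sqrt(5).
print(np.linalg.norm(v), np.sqrt(v @ v))

# Euclidean distance between two points in the plane.
x = np.array([1.0, 3.0])
y = np.array([4.0, -1.0])
print(np.linalg.norm(x - y))              # 5.0

# Weighted distance: weight each squared component of the difference.
w = np.array([0.5, 2.0])                  # hypothetical weights
print(np.sqrt(np.sum(w * (x - y) ** 2)))
```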
Fig. 3.5 Orthogonal vectors illustrating the Pythagorean theorem.
A vector $\mathbf{e}$ such that $\|\mathbf{e}\| = 1$ is called a unit vector. Given a vector $\mathbf{v}$, we may obtain a unit vector parallel to $\mathbf{v}$ by considering the vector $\mathbf{v} / \|\mathbf{v}\|$.
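A one-line NumPy illustration (not from the text) of normalizing a vector to unit length:

```python
import numpy as np

v = np.array([2.0, -1.0])
e = v / np.linalg.norm(v)  # unit vector parallel to v

print(np.linalg.norm(e))   # 1.0 (up to floating-point rounding)
```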
Example 3.6 (Orthogonal vectors) Consider the following vectors:

$$\mathbf{e}_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \qquad \mathbf{e}_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$$

They are unit-length vectors parallel to the two coordinate axes, and they are orthogonal. We immediately see that

$$\mathbf{e}_1 \cdot \mathbf{e}_2 = 1 \cdot 0 + 0 \cdot 1 = 0.$$
The same thing happens, for instance, with $\mathbf{v}_1 = [1, 1]^T$ and $\mathbf{v}_2 = [-1, 1]^T$. These vectors are orthogonal as well (please draw them and check).
Orthogonality is not an intuitive concept in an n-dimensional space, but in fact we may define orthogonality in terms of the inner product: We say that two vectors are orthogonal if their inner product is zero.
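A brief NumPy check (illustrative only) of the inner-product criterion for the vectors of Example 3.6:

```python
import numpy as np

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
v1, v2 = np.array([1.0, 1.0]), np.array([-1.0, 1.0])

# Two vectors are orthogonal if their inner product is zero.
print(e1 @ e2)  # 0.0
print(v1 @ v2)  # 0.0
```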
Example 3.7 (Orthogonal projection) From elementary geometry, we know that if two vectors $\mathbf{u}$ and $\mathbf{v}$ are orthogonal, then we must have

$$\|\mathbf{u} + \mathbf{v}\|^2 = \|\mathbf{u}\|^2 + \|\mathbf{v}\|^2.$$
This is simply the Pythagorean theorem in disguise, as illustrated in Fig. 3.5. Now consider two vectors $\mathbf{u}$ and $\mathbf{v}$ that are not orthogonal, as illustrated in Fig. 3.6. We may decompose $\mathbf{u}$ into the sum of two vectors, say, $\mathbf{u}_1$ and $\mathbf{u}_2$, such that $\mathbf{u}_1$ is parallel to $\mathbf{v}$ and $\mathbf{u}_2$ is orthogonal to $\mathbf{v}$. The vector $\mathbf{u}_1$ is the orthogonal projection of $\mathbf{u}$ on $\mathbf{v}$. Basically, we are decomposing the “information” in $\mathbf{u}$ into the sum of two components. One contains the same information as $\mathbf{v}$, whereas the other one is completely “independent.”
Fig. 3.6 Orthogonal projection of vector u on vector v.
Since $\mathbf{u}_1$ is parallel to $\mathbf{v}$, for some scalar $\alpha$ we have:

$$\mathbf{u}_1 = \alpha \mathbf{v}, \qquad \mathbf{u} = \alpha \mathbf{v} + \mathbf{u}_2.$$
The first equality derives from the fact that two parallel vectors must be related by a proportionality constant; the second one states that $\mathbf{u}$ is the sum of the two component vectors. What is the right value of $\alpha$ that makes them orthogonal? Let us apply the Pythagorean theorem:

$$\|\mathbf{u}\|^2 = \|\mathbf{u}_1\|^2 + \|\mathbf{u}_2\|^2 = \alpha^2 \|\mathbf{v}\|^2 + \|\mathbf{u} - \alpha \mathbf{v}\|^2 = 2\alpha^2 \|\mathbf{v}\|^2 - 2\alpha\,(\mathbf{u} \cdot \mathbf{v}) + \|\mathbf{u}\|^2.$$
This, in turn, implies (discarding the trivial root $\alpha = 0$)

$$\alpha = \frac{\mathbf{u} \cdot \mathbf{v}}{\|\mathbf{v}\|^2} = \frac{\mathbf{u} \cdot \mathbf{v}}{\mathbf{v} \cdot \mathbf{v}}.$$
If $\mathbf{v}$ is a unit vector, we see that the inner product $\mathbf{u} \cdot \mathbf{v}$ gives the length of the projection of $\mathbf{u}$ on $\mathbf{v}$. We also see that if $\mathbf{u}$ and $\mathbf{v}$ are orthogonal, then this projection is null; in some sense, the “information” in $\mathbf{u}$ is independent of what is provided by $\mathbf{v}$.
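A NumPy sketch of the projection formula just derived (the numbers are arbitrary illustrations):

```python
import numpy as np

u = np.array([3.0, 1.0])
v = np.array([2.0, 0.0])

# alpha = (u . v) / (v . v); u1 = alpha * v is the projection of u on v.
alpha = (u @ v) / (v @ v)
u1 = alpha * v
u2 = u - u1

print(u1, u2)    # [3. 0.] [0. 1.]
print(u2 @ v)    # 0.0: u2 is orthogonal to v
# Pythagorean check: ||u||^2 = ||u1||^2 + ||u2||^2
print(np.isclose(u @ u, u1 @ u1 + u2 @ u2))  # True
```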