Operations on matrices are defined much along the lines used for vectors. In the following, we will denote a generic element of a matrix by a lowercase letter with two subscripts; the first one refers to the row, and the second one refers to the column. So, element $a_{ij}$ is the element in row $i$, column $j$.
Addition Just like vectors, matrices can be added elementwise, provided they are of the same size. If $A = [a_{ij}] \in \mathbb{R}^{m \times n}$ and $B = [b_{ij}] \in \mathbb{R}^{m \times n}$, then
$$A + B = [a_{ij} + b_{ij}].$$
For example,
$$\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} + \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} = \begin{bmatrix} 6 & 8 \\ 10 & 12 \end{bmatrix}.$$
Multiplication by a scalar Given a scalar $\alpha \in \mathbb{R}$, we define its product with a matrix $A = [a_{ij}]$ as follows:
$$\alpha A = [\alpha a_{ij}].$$
For example,
$$2 \begin{bmatrix} 1 & -2 \\ 0 & 3 \end{bmatrix} = \begin{bmatrix} 2 & -4 \\ 0 & 6 \end{bmatrix}.$$
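Both operations are easy to check numerically; the following is a minimal sketch in Python, assuming NumPy is available:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

# Elementwise addition: (A + B)[i, j] = a_ij + b_ij
print(A + B)   # [[ 6  8] [10 12]]

# Multiplication by the scalar 2: every entry is scaled
print(2 * A)   # [[2 4] [6 8]]
```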
Matrix multiplication We have already seen that elementwise multiplication with vectors does not lead us to an operation with good and interesting properties, whereas the inner product is a rich concept. Matrix multiplication is even trickier, and it is not defined elementwise, either. The matrix product $AB$ is defined only when the number of columns of $A$ and the number of rows of $B$ are the same, i.e., $A \in \mathbb{R}^{m \times k}$ and $B \in \mathbb{R}^{k \times n}$. The result is a matrix, say, $C \in \mathbb{R}^{m \times n}$, with $m$ rows and $n$ columns, whose element $c_{ij}$ is the inner product of the $i$th row of $A$ and the $j$th column of $B$:
$$c_{ij} = \sum_{h=1}^{k} a_{ih} b_{hj}, \qquad i = 1, \ldots, m, \quad j = 1, \ldots, n.$$
For instance,
$$\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} = \begin{bmatrix} 1 \cdot 5 + 2 \cdot 7 & 1 \cdot 6 + 2 \cdot 8 \\ 3 \cdot 5 + 4 \cdot 7 & 3 \cdot 6 + 4 \cdot 8 \end{bmatrix} = \begin{bmatrix} 19 & 22 \\ 43 & 50 \end{bmatrix}.$$
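The row-by-column rule can be verified directly; the sketch below computes the product both with an explicit inner-product loop and with NumPy's built-in matrix product:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

m, k = A.shape
k2, n = B.shape
assert k == k2, "columns of A must match rows of B (conformable matrices)"

# c_ij is the inner product of row i of A and column j of B
C = np.zeros((m, n))
for i in range(m):
    for j in range(n):
        C[i, j] = A[i, :] @ B[:, j]

print(C)       # [[19. 22.] [43. 50.]]
print(A @ B)   # same result via the built-in matrix product
```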
If two matrices are of compatible dimension, in the sense that they can be multiplied using the rules above, they are said to be conformable. The definition of matrix multiplication does look somewhat odd at first. Still, we may start getting a better feeling for it if we notice how we use it to express a system of linear equations in a very compact form. Consider the system
$$\begin{aligned}
a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n &= b_1 \\
a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n &= b_2 \\
&\;\;\vdots \\
a_{m1} x_1 + a_{m2} x_2 + \cdots + a_{mn} x_n &= b_m
\end{aligned}$$
and group the coefficients $a_{ij}$ into a matrix $A \in \mathbb{R}^{m \times n}$, the unknowns $x_j$ into a column vector $x \in \mathbb{R}^n$, and the right-hand sides $b_i$ into a column vector $b \in \mathbb{R}^m$. Using the definition of matrix multiplication, we may rewrite the system as follows:
$$Ax = b.$$
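As an illustration of the compact form, the small sketch below sets up a hypothetical 2x2 system and solves it numerically (the specific numbers are only illustrative):

```python
import numpy as np

# A hypothetical system:  x1 + 2*x2 = 5,  3*x1 + 4*x2 = 6
A = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([5.0, 6.0])

x = np.linalg.solve(A, b)   # solves A x = b
print(x)                    # [-4.   4.5]
print(A @ x)                # recovers b, confirming A x = b
```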
By repeated application of the matrix product, we also define the power of a matrix. Squaring a matrix $A$ does not mean squaring each element, but rather multiplying the matrix by itself:
$$A^2 = AA.$$
It is easy to see that this definition makes sense only if the number of rows is the same as the number of columns, i.e., if matrix $A$ is a square matrix. By the same token, we may define a generic power:
$$A^k = \underbrace{A \cdot A \cdots A}_{k\ \text{times}} = A A^{k-1}.$$
We may even define the square root of a matrix as a matrix $A^{1/2}$ such that:
$$A^{1/2} A^{1/2} = A.$$
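These definitions lend themselves to a quick numerical experiment; the sketch below uses NumPy for matrix powers and, assuming SciPy is available, its sqrtm routine for a matrix square root:

```python
import numpy as np
from scipy.linalg import sqrtm

A = np.array([[2.0, 1.0], [1.0, 2.0]])   # a square matrix

# Squaring means A @ A, not squaring each element
print(A @ A)                         # [[5. 4.] [4. 5.]]
print(np.linalg.matrix_power(A, 3))  # A @ A @ A

# A matrix square root: R such that R @ R = A
R = sqrtm(A)
print(np.allclose(R @ R, A))         # True
```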
The existence and uniqueness of the square root matrix are not to be taken for granted. We will discuss this further when dealing with multivariate statistics.
A last important observation concerns commutativity. In the scalar case, we know that inverting the order of factors does not change the result of a multiplication: $ab = ba$. This does not apply to matrices in general. To begin with, when multiplying two rectangular matrices, the numbers of rows and columns may match in such a way that $AB$ is defined, but $BA$ is not. But even in the case of two square matrices, commutativity is not ensured, as the following counterexample shows:
$$\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \neq \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}.$$
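The counterexample is easy to verify by hand, or numerically:

```python
import numpy as np

A = np.array([[0, 1], [0, 0]])
B = np.array([[0, 0], [1, 0]])

print(A @ B)   # [[1 0] [0 0]]
print(B @ A)   # [[0 0] [0 1]]  -- AB != BA
```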
Matrix transposition We have already met transposition as a way of transforming a column vector into a row vector (and vice versa). More generally, matrix transposition entails interchanging rows and columns: Let $B = A^T$ be the transpose of $A$; then $b_{ij} = a_{ji}$. For instance,
$$A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}, \qquad A^T = \begin{bmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{bmatrix}.$$
So, if $A \in \mathbb{R}^{m \times n}$, then $A^T \in \mathbb{R}^{n \times m}$.
If $A$ is a square matrix, it may happen that $A = A^T$; in such a case we say that $A$ is a symmetric matrix. The following is an example of a symmetric matrix:
$$S = \begin{bmatrix} 1 & 2 & 3 \\ 2 & 5 & 4 \\ 3 & 4 & 6 \end{bmatrix}.$$
Symmetry should not be regarded as a peculiar accident of rather odd matrices; in applications, we often encounter matrices that are symmetric by nature. Finally, it may be worth noting that transposition offers another way to denote the inner product of two vectors:
$$x \cdot y = x^T y = \sum_{i=1}^{n} x_i y_i.$$
By the same token,
$$x^T y = y^T x,$$
since the order of the two vectors in an inner product is irrelevant.
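Transposition, the symmetry check, and the inner product notation can all be illustrated in a few lines (a NumPy sketch; column vectors are represented as n-by-1 arrays so that transposition is meaningful):

```python
import numpy as np

A = np.array([[1, 2, 3], [4, 5, 6]])
print(A.T)                      # rows and columns interchanged: 3x2

S = np.array([[1, 2, 3], [2, 5, 4], [3, 4, 6]])
print(np.array_equal(S, S.T))   # True: S is symmetric

# Inner product as x^T y, with x, y as 3x1 column vectors
x = np.array([[1.0], [2.0], [3.0]])
y = np.array([[4.0], [5.0], [6.0]])
print(x.T @ y)                               # [[32.]]  (a 1x1 matrix)
print((x.T @ y).item() == (y.T @ x).item())  # True: order is irrelevant
```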
The following is an important property concerning transposition and product of matrices.
PROPERTY 3.1 (Transposition of a matrix product) If $A$ and $B$ are conformable matrices, then
$$(AB)^T = B^T A^T.$$
We encourage the reader to check the result for the matrices of Eq. (3.9).
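Such a check is immediate with any pair of conformable matrices; for instance, with randomly generated ones:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 4))

# (AB)^T = B^T A^T  (note the reversed order of the factors)
print(np.allclose((A @ B).T, B.T @ A.T))   # True
```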
Example 3.8 An immediate consequence of the last property is that, for any matrix $A$, the matrices $A^T A$ and $A A^T$ are symmetric:
$$\left( A^T A \right)^T = A^T \left( A^T \right)^T = A^T A, \qquad \left( A A^T \right)^T = \left( A^T \right)^T A^T = A A^T.$$
Note that, in general, the matrices $A^T A$ and $A A^T$ have different dimensions: if $A \in \mathbb{R}^{m \times n}$, then $A^T A$ is $n \times n$, whereas $A A^T$ is $m \times m$.
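A quick sketch confirming both the symmetry and the differing dimensions:

```python
import numpy as np

A = np.arange(6.0).reshape(2, 3)   # a 2x3 matrix

G = A.T @ A    # 3x3
H = A @ A.T    # 2x2
print(G.shape, H.shape)                            # (3, 3) (2, 2)
print(np.allclose(G, G.T), np.allclose(H, H.T))    # True True
```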