In this section we explore the link between a basis of a linear space and the possibility of finding a unique solution of a system of linear equations Ax = b, where A is an m × n matrix, x ∈ ℝⁿ, and b ∈ ℝᵐ. Here, n is the number of variables and m is the number of equations; in most cases, we have m = n, but we may try to generalize a bit.
Recall that our system of equations can be rephrased as

x₁a₁ + x₂a₂ + ⋯ + xₙaₙ = b,
where aⱼ, j = 1, …, n, are the columns of A. We can find a solution only if b lies in the linear subspace spanned by the columns of A. Note that, in order to find a solution, we do not necessarily require that the columns of A be linearly independent. Indeed, they cannot all be independent if, for instance, we have m = 3 equations and n = 5 variables, because in a space of dimension 3 we cannot find five linearly independent vectors; in such a case, the solution need not be unique. On the other hand, if we have m = 5 equations and n = 3 variables, we will not find a solution in general, unless we are lucky and the vector b lies in the subspace (of dimension ≤ 3) spanned by those three columns.
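To make this condition concrete, here is a minimal numerical sketch, assuming NumPy is available; the 3 × 5 matrix and the right-hand side below are made up for illustration. The vector b lies in the span of the columns of A exactly when appending b as an extra column does not increase the rank.

```python
import numpy as np

# A hypothetical 3 x 5 system: more variables than equations.
A = np.array([[1.0, 0.0, 2.0, 1.0, 3.0],
              [0.0, 1.0, 1.0, 4.0, 0.0],
              [2.0, 1.0, 5.0, 6.0, 6.0]])
b = np.array([1.0, 2.0, 4.0])

# b lies in the span of the columns of A exactly when appending b
# as an extra column does not increase the rank.
rank_A = np.linalg.matrix_rank(A)
rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))
print(rank_A, rank_Ab, rank_A == rank_Ab)
```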
If there is a solution, when is it unique? By now, the reader should not be surprised to learn that this happens when the columns of the matrix are linearly independent, as this implies that there is at most one way to express b as a linear combination of them.
If we consider a square system, i.e., m = n, we are sure to find a unique solution of the system if the columns of the matrix are linearly independent. In such a case, the n columns form a basis for ℝⁿ and we are able to solve the system for any right-hand side b.
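As a small illustration of this case, the following sketch (again assuming NumPy, with a made-up 3 × 3 matrix whose columns are linearly independent) solves the square system for an arbitrary right-hand side.

```python
import numpy as np

# A hypothetical 3 x 3 matrix with linearly independent columns:
# its columns form a basis of R^3, so Ax = b is solvable for any b.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])

x = np.linalg.solve(A, b)     # the unique solution
print(x)
print(np.allclose(A @ x, b))  # True: the residual is (numerically) zero
```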
Given this discussion, it is no surprise that the number of linearly independent columns of a matrix is an important feature.
DEFINITION 3.6 (Column–row rank of a matrix) Given an m × n matrix A, its column rank and row rank are the number of linearly independent columns and the number of linearly independent rows of A, respectively.
A somewhat surprising fact is that the row rank and the column rank of a matrix are the same. Hence, we may simply speak of the rank of a matrix A, which is denoted by rank(A). It is also obvious that rank(A) ≤ min{m, n}. When rank(A) = min{m, n}, we say that the matrix A is full-rank. In the case of a square n × n matrix, the matrix has full rank if there are n linearly independent columns (and rows as well).
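In practice, the rank can be computed numerically; the sketch below, assuming NumPy and using a made-up matrix, also confirms that the row rank and the column rank coincide.

```python
import numpy as np

# Made-up matrix: NumPy computes the rank numerically (via the SVD).
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])

print(np.linalg.matrix_rank(A))                  # 2
print(np.linalg.matrix_rank(A.T))                # 2: row rank equals column rank
print(np.linalg.matrix_rank(A) == min(A.shape))  # False: A is not full-rank
```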
The example matrix is not full-rank, as it is easy to check that the third column is a linear combination of the first two columns: a₃ = 3a₁ + 2a₂. Since there are only two linearly independent columns, its rank is 2. The rank is exactly the same if we look at rows; in fact, the third row can be obtained as twice the first row minus half the second row.
The matrix of the second example has rank 2, since it is easy to see that its two rows are linearly independent. We say that this matrix has full row rank, as its rank is the maximum that it could achieve.
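Since the entries of the two example matrices are not reproduced here, the following sketch (assuming NumPy) builds made-up matrices with the same structure: one whose third column equals 3a₁ + 2a₂, and one wide matrix with linearly independent rows.

```python
import numpy as np

# First example: a made-up 3 x 3 matrix whose third column is 3*a1 + 2*a2
# (the book's entries are not reproduced); only two columns are independent.
a1 = np.array([1.0, 0.0, 2.0])
a2 = np.array([0.0, 1.0, 1.0])
A1 = np.column_stack([a1, a2, 3 * a1 + 2 * a2])
print(np.linalg.matrix_rank(A1))   # 2: not full-rank

# Second example: a made-up 2 x 4 matrix with linearly independent rows;
# its rank is 2 = min{2, 4}, i.e., it has full row rank.
A2 = np.array([[1.0, 0.0, 2.0, 1.0],
               [0.0, 1.0, 1.0, 3.0]])
print(np.linalg.matrix_rank(A2) == min(A2.shape))   # True
```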
We have said that a square system of linear equations Ax = b essentially calls for the expression of the right-hand side vector b as a linear combination of the columns of A. If these n columns are linearly independent, then they are a basis for ℝⁿ and the system can be solved for any vector b. We have also noted that the solution can be expressed in terms of the inverse matrix, x = A⁻¹b, if it exists. Indeed, the following theorem links invertibility and rank of a square matrix.
THEOREM 3.7 A square n × n matrix A is invertible (nonsingular) if and only if rank(A) = n. If the matrix is not full-rank, it is not invertible and is called singular.
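A quick numerical illustration of this theorem, assuming NumPy and using made-up matrices: a full-rank matrix can be inverted, while attempting to invert a rank-deficient one raises an error.

```python
import numpy as np

# Hypothetical full-rank and rank-deficient matrices.
A_full = np.array([[2.0, 1.0],
                   [1.0, 3.0]])
A_sing = np.array([[1.0, 2.0],
                   [2.0, 4.0]])   # second row = 2 * first row

print(np.linalg.matrix_rank(A_full))   # 2: full rank, hence invertible
print(np.linalg.inv(A_full))           # the inverse exists

print(np.linalg.matrix_rank(A_sing))   # 1: rank-deficient, hence singular
try:
    np.linalg.inv(A_sing)
except np.linalg.LinAlgError as err:
    print("singular matrix:", err)
```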
The rank of a matrix is a powerful concept, but so far we have not really found a systematic way of evaluating it. Actually, Gaussian elimination would do the trick, but what we really need is a handy way to check whether a square matrix has full rank. The determinant provides us with the answers we need.
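As a preview of that check, the following sketch (assuming NumPy, with made-up matrices) compares the determinant of a full-rank matrix with that of a singular one.

```python
import numpy as np

# A nonzero determinant signals that a square matrix has full rank;
# a (numerically) zero determinant signals a singular matrix.
A_full = np.array([[2.0, 1.0],
                   [1.0, 3.0]])
A_sing = np.array([[1.0, 2.0],
                   [2.0, 4.0]])

print(np.linalg.det(A_full))   # 5.0, nonzero: full rank
print(np.linalg.det(A_sing))   # 0.0 (up to roundoff): not full rank
```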