As a last approach, we consider Cramer’s rule, which is a handy way to solve systems of two or three equations. The theory behind it requires more advanced concepts, such as matrices and their determinants, which are introduced below. We anticipate a few concepts here, so that readers not interested in advanced multivariate statistics can skip the rest of the chapter without any loss of continuity.
In the previous section, we have seen that Gaussian elimination is best carried out on a tabular representation of the system of equations. Consider a system of two equations in two variables:
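The display with the system itself is not reproduced here; in generic terms, using coefficients a_{ij} and right-hand sides b_i (the same symbols used in the determinant formulas below), such a system reads

\[
\begin{aligned}
a_{11} x_1 + a_{12} x_2 &= b_1, \\
a_{21} x_1 + a_{22} x_2 &= b_2.
\end{aligned}
\]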
We can group the coefficients and the right-hand sides into the two tables below:
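The original tables are missing here; in the generic notation above, the grouping described in the text is

\[
\mathbf{A} =
\begin{pmatrix}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{pmatrix},
\qquad
\mathbf{b} =
\begin{pmatrix}
b_1 \\
b_2
\end{pmatrix}.
\]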
These “tables” are two examples of a matrix; A is a two-row, two-column matrix, whereas b has just one column. Usually, we refer to b as a column vector (see next section).
The determinant is a function mapping a square matrix, i.e., a matrix with the same number of rows and columns, to a number. The determinant of matrix A is denoted by det(A) or |A|. The concept is not quite trivial, but computing the determinant for a 2 × 2 matrix is easy. For the matrix above, we have
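The corresponding display is missing here; for the generic 2 × 2 matrix A above, the rule described in the next sentence gives

\[
\det(\mathbf{A}) =
\begin{vmatrix}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{vmatrix}
= a_{11} a_{22} - a_{12} a_{21}.
\]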
In practice, we multiply the numbers on the main diagonal of the matrix and subtract the product of the numbers on the other diagonal. Let us denote by Bi the matrix obtained by substituting column i of matrix A with the vector b of the right-hand sides. Cramer’s rule says that the solution of a system of linear equations (if it exists!) is obtained by computing:
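The formula itself is not shown above; in the notation just introduced, Cramer’s rule reads

\[
x_i = \frac{\det(\mathbf{B}_i)}{\det(\mathbf{A})}, \qquad i = 1, \dots, n,
\]

which, for the generic 2 × 2 system, spells out as

\[
x_1 = \frac{b_1 a_{22} - a_{12} b_2}{a_{11} a_{22} - a_{12} a_{21}},
\qquad
x_2 = \frac{a_{11} b_2 - b_1 a_{21}}{a_{11} a_{22} - a_{12} a_{21}}.
\]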
Example 3.3 To illustrate, let us consider system (3.4) again. We need the following determinants:
Applying Cramer’s rule yields
which is, of course, what we obtained by substitution of variables.
The case of a 3 × 3 matrix is dealt with in a similar way, by computing the determinant as follows:
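The original display is not reproduced here; a standard form, consistent with the recursive reduction to smaller determinants described next, is the expansion along the first row:

\[
\begin{vmatrix}
a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33}
\end{vmatrix}
= a_{11}
\begin{vmatrix}
a_{22} & a_{23} \\
a_{32} & a_{33}
\end{vmatrix}
- a_{12}
\begin{vmatrix}
a_{21} & a_{23} \\
a_{31} & a_{33}
\end{vmatrix}
+ a_{13}
\begin{vmatrix}
a_{21} & a_{22} \\
a_{31} & a_{32}
\end{vmatrix}.
\]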
In general, the calculation of an n × n determinant is recursively boiled down to the calculation of smaller determinants.
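In compact form (a standard formula, stated here for reference and not necessarily the exact notation used in Section 3.6), this recursion is the Laplace expansion along the first row,

\[
\det(\mathbf{A}) = \sum_{j=1}^{n} (-1)^{1+j}\, a_{1j}\, \det(\mathbf{A}_{1j}),
\]

where A_{1j} denotes the (n − 1) × (n − 1) matrix obtained by deleting row 1 and column j of A.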
Example 3.4 To illustrate the calculation of the determinant in a three-dimensional case, let us apply the formula to the matrix of coefficients in Example 3.2 above:
The reader is encouraged to compute all of the determinants that we require to apply Cramer’s rule and check that we obtain the same solution as in Example 3.2.
We will generalize this rule and say much more about the determinant in Section 3.6. We will also see that, although computing the determinant is feasible for any square matrix, the recursive expansion quickly becomes cumbersome: the number of operations grows roughly like n! with the size of the matrix. Determinants are a fundamental tool from a conceptual perspective, but they are not that handy computationally. Still, it will be clear from Section 3.6 how important they are in linear algebra.
For now, we should just wonder whether anything may go wrong with the idea. Indeed, Cramer’s rule breaks down if det(A) = 0. We will see that in such a case the matrix A suffers from some fundamental issues, which prevent us from solving the system. In general, as we have already pointed out, we should not take for granted that a system of linear equations has a unique solution. There could be none, or there could be many (actually, infinitely many). To investigate these issues, we need to introduce powerful and far-reaching mathematical concepts related to vectors, matrices, and linear algebra.
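As a small illustration (the numbers here are ours, not taken from the examples above), consider the system

\[
\begin{aligned}
x_1 + x_2 &= 1, \\
2 x_1 + 2 x_2 &= 2,
\end{aligned}
\qquad
\det(\mathbf{A}) = 1 \cdot 2 - 1 \cdot 2 = 0.
\]

The second equation is just twice the first, so any pair with x_1 + x_2 = 1 solves the system: there are infinitely many solutions. Replacing the right-hand side 2 with 3 makes the two equations incompatible, and there is no solution at all. In both cases Cramer’s rule fails, since it would require dividing by det(A) = 0.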
Fig. 3.4 Vectors in two- and three-dimensional spaces.