Definite integrals were introduced in Section 2.13 as a way to compute the area below the curve corresponding to the graph of a function of one variable. If we consider a function f(x, y) of two variables, there is no reason why we should not consider its surface plot and the volume below the surface, corresponding to a region D on the (x, y) plane. This double integral is denoted as follows:

\[
\iint_D f(x, y)\, dx\, dy,
\]
Fig. 3.15 Double integral as a volume below a surface graph of a function.
and the idea is illustrated in Fig. 3.15. Here, the domain on which the function is integrated is a rectangle, but more general shapes are allowed. Nevertheless, rectangular tiles are easy to deal with, and indeed they are the basis for a rigorous definition of the double integral. We will encounter double integrals only when characterizing the joint distribution of two random variables, in Section 8.1, and we just need an intuitive understanding.
To compute a double integral, a convenient approach is to regard it as an iterated integral over a rectangular domain [a, b] × [c, d]:

\[
\int_a^b \int_c^d f(x, y)\, dy\, dx.
\]
Please note the order of the differentials dy and dx: it indicates the order in which the integrations are carried out. To be fully precise, we could write

\[
\int_a^b \left[ \int_c^d f(x, y)\, dy \right] dx.
\]
The idea behind iterated integration is straightforward: We should first integrate with respect to y, treating x as a constant, obtaining a function of x that is then integrated with respect to x.
Example 3.20 Consider the function f(x, y) = x^2 y and the rectangular domain [1, 2] × [−3, 4], obtained by taking the Cartesian product of the intervals [1, 2] on the x-axis and [−3, 4] on the y-axis. We want to find the following integral:

\[
\int_1^2 \int_{-3}^4 x^2 y \, dy \, dx.
\]
In the inner integral, x can be regarded as a constant and, in this case, it can just be taken outside:

\[
\int_{-3}^{4} x^2 y \, dy = x^2 \int_{-3}^{4} y \, dy = x^2 \left[ \frac{y^2}{2} \right]_{y=-3}^{y=4} = x^2 \left( \frac{16}{2} - \frac{9}{2} \right) = \frac{7}{2}\, x^2.
\]
Then, we proceed with the outer integral:

\[
\int_1^2 \frac{7}{2}\, x^2 \, dx = \frac{7}{2} \left[ \frac{x^3}{3} \right]_{x=1}^{x=2} = \frac{7}{2} \cdot \frac{7}{3} = \frac{49}{6}.
\]
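The result of Example 3.20 can also be checked numerically. The following Python sketch (Python, the midpoint rule, and the number of subintervals are all illustrative choices, not part of the text) mirrors the iterated scheme: integrate in y first, treating x as a constant, then in x:

```python
# Numerical sanity check of Example 3.20 via iterated integration.
# The midpoint rule and n = 200 subintervals are illustrative choices.

def midpoint_integral(f, a, b, n=200):
    """Approximate the integral of f over [a, b] with the midpoint rule."""
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

def f(x, y):
    return x**2 * y

def inner(x):
    # Inner integral in y, with x treated as a constant.
    return midpoint_integral(lambda y: f(x, y), -3.0, 4.0)

# Outer integral in x of the function of x produced by the inner step.
result = midpoint_integral(inner, 1.0, 2.0)
print(result)  # close to 49/6, i.e. about 8.1667
```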
The conditions under which a double integral can be tackled as an iterated integral are stated by Fubini’s theorem, and the idea can be generalized to multiple dimensions.
Problems
3.1 Solve the system of linear equations:
using both Gaussian elimination and Cramer’s rule.
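As an illustration of what Problem 3.1 asks for, the following Python sketch solves a small system both ways; the coefficients are made up, since the book's system is not reproduced in this text, and `np.linalg.solve` stands in for hand Gaussian elimination (it relies on an LU factorization):

```python
import numpy as np

# Illustrative system (made-up coefficients): 2x + y = 5, x + 3y = 10.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

# Gaussian elimination (via LAPACK's LU factorization).
x_ge = np.linalg.solve(A, b)

# Cramer's rule: replace column j of A by b and take determinant ratios.
detA = np.linalg.det(A)
x_cr = np.array([
    np.linalg.det(np.column_stack([b, A[:, 1]])) / detA,
    np.linalg.det(np.column_stack([A[:, 0], b])) / detA,
])

print(x_ge)  # [1. 3.]
```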
3.2 Express the derivative of polynomials as a linear mapping using a matrix.
3.3 Prove that the representation of a vector using a basis is unique.
3.4 Let A be an m × n matrix, and let D be an n × n diagonal matrix with diagonal elements d1, …, dn. Prove that the product AD is obtained by multiplying each element in a row of A by the corresponding element on the diagonal of D. Check with a numerical example.
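A numerical check of this fact can be sketched as follows (the matrices are made up, since the book's example is not reproduced in this text):

```python
import numpy as np

# Made-up matrices for checking Problem 3.4.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
d = np.array([10.0, 20.0, 30.0])
D = np.diag(d)

# (AD)_{ij} = a_{ij} d_j: element j of every row of A is scaled by d_j,
# so A @ D equals elementwise scaling of the columns of A by d.
assert np.allclose(A @ D, A * d)
print(A @ D)
```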
3.5 Unlike ordinary algebra, in matrix algebra we may have AX = BX even though A ≠ B and X ≠ 0. Check with a numerical example.
3.6 Consider the matrix H = I − 2hh^T, where h is a column vector in ℝ^n and I is the properly sized identity matrix. Prove that H is orthogonal, provided that h^T h = 1. This matrix is known as the Householder matrix.
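Before attempting the proof, it may help to verify the claim numerically; in this sketch the unit vector and the dimension n = 5 are illustrative choices:

```python
import numpy as np

# Numerical check of Problem 3.6 with a random unit vector.
rng = np.random.default_rng(0)
h = rng.standard_normal(5)
h /= np.linalg.norm(h)                  # enforce h^T h = 1

H = np.eye(5) - 2.0 * np.outer(h, h)    # Householder matrix
assert np.allclose(H.T @ H, np.eye(5))  # H^T H = I, i.e. H is orthogonal
```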
3.7 Consider the matrix C = I_n − (1/n) J_n, where I_n is the n × n identity matrix and J_n is the n × n matrix whose entries are all equal to 1. This matrix is called a centering matrix, since the components of Cx are the deviations x_i − x̄ from the sample mean x̄, where x = [x1, x2, …, xn]^T is a vector of observations. Prove this fact. Also prove that C is idempotent, i.e., that C^2 = C.
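Both properties are easy to check numerically; in the sketch below the data vector and the choice n = 6 are made up for illustration:

```python
import numpy as np

# Numerical check of Problem 3.7 with a made-up data vector.
n = 6
C = np.eye(n) - np.ones((n, n)) / n      # centering matrix

x = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0])
assert np.allclose(C @ x, x - x.mean())  # C x subtracts the sample mean
assert np.allclose(C @ C, C)             # C is idempotent
```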
3.8 Check that the determinant of diagonal and triangular matrices is the product of elements on the diagonal.
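A quick numerical check of Problem 3.8 (the upper triangular matrix below is made up; diagonal matrices are a special case of triangular ones):

```python
import numpy as np

# Made-up upper triangular matrix; its determinant should be 2 * 3 * 4.
T = np.array([[2.0, 5.0, 1.0],
              [0.0, 3.0, 7.0],
              [0.0, 0.0, 4.0]])
assert np.isclose(np.linalg.det(T), 2.0 * 3.0 * 4.0)
```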
3.9 Find the inverse of each of the following matrices
3.10 For a square matrix A, suppose that there is a vector x ≠ 0 such that Ax = 0. Prove that A is singular.
3.11 Prove that hh^T − (h^T h) I is singular.
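The key observation, checked numerically below (the random vector and dimension 4 are illustrative choices), is that h itself lies in the null space of this matrix:

```python
import numpy as np

# Numerical check of Problem 3.11: with M = h h^T - (h^T h) I we get
# M h = h (h^T h) - (h^T h) h = 0, so M has a nontrivial null space.
rng = np.random.default_rng(1)
h = rng.standard_normal(4)

M = np.outer(h, h) - (h @ h) * np.eye(4)
assert np.allclose(M @ h, np.zeros(4))
assert abs(np.linalg.det(M)) < 1e-8      # singular up to roundoff
```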
3.12 Prove that two orthogonal vectors are linearly independent.
3.13 Show that if λ is an eigenvalue of A, then 1/(1 + λ) is an eigenvalue of (I+A)−1.
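A numerical check of Problem 3.13 (the matrix below is made up; it is built symmetric positive definite so that the eigenvalues are real and I + A is safely invertible):

```python
import numpy as np

# Made-up symmetric positive definite A: eigenvalues are real and >= 1.
rng = np.random.default_rng(2)
B = rng.standard_normal((4, 4))
A = B @ B.T + np.eye(4)

lam = np.linalg.eigvalsh(A)                            # eigenvalues of A
mu = np.linalg.eigvalsh(np.linalg.inv(np.eye(4) + A))  # eigenvalues of (I+A)^{-1}
assert np.allclose(np.sort(mu), np.sort(1.0 / (1.0 + lam)))
```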
3.14 Show that, if the eigenvalues of A are positive, those of A + A−1 are not less than 2.
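For Problem 3.14, the bound follows from the scalar inequality λ + 1/λ ≥ 2 for λ > 0; the sketch below checks it on a made-up symmetric positive definite matrix:

```python
import numpy as np

# Made-up symmetric positive definite A (eigenvalues positive by construction).
rng = np.random.default_rng(3)
B = rng.standard_normal((5, 5))
A = B @ B.T + np.eye(5)

# Eigenvalues of A + A^{-1} are lambda_k + 1/lambda_k >= 2.
w = np.linalg.eigvalsh(A + np.linalg.inv(A))
assert np.all(w >= 2.0 - 1e-9)
```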
3.15 Prove that, for a symmetric matrix A, we have
where λk, k = 1, …, n, are the eigenvalues of A.