A detailed coverage of multivariate distributions is beyond our scope, but we should at least consider a generalization of the normal distribution. A univariate normal distribution is characterized by its expected value μ and by its variance σ². In the multivariate case, we have a vector of expected values μ and a covariance matrix Σ. We consider a random vector taking values in ℝⁿ:

$$\mathbf{X} = [X_1, X_2, \ldots, X_n]^{\mathsf{T}}.$$
We say that X has a jointly normal or multivariate normal distribution if its joint density is given by

$$f_{\mathbf{X}}(\mathbf{x}) = \frac{1}{(2\pi)^{n/2}\,|\Sigma|^{1/2}} \exp\left\{ -\frac{1}{2} (\mathbf{x} - \boldsymbol{\mu})^{\mathsf{T}} \Sigma^{-1} (\mathbf{x} - \boldsymbol{\mu}) \right\},$$
where exp(·) is the exponential function and |Σ| is the determinant of the covariance matrix. This expression may look a bit intimidating, but it is easy to see that, for n = 1, it boils down to the familiar density of a univariate normal. The notation X ∼ N(μ, Σ) is used to refer to a multivariate normal variable.
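Before moving on, a quick numeric sanity check may help. The sketch below (a minimal illustration, not from the text; it assumes NumPy and SciPy are available, and the parameter values are the ones used in the figures later in this section) evaluates the density directly from the formula above and compares the result with SciPy's multivariate_normal.

```python
import numpy as np
from scipy.stats import multivariate_normal

def mvn_pdf(x, mu, Sigma):
    """Multivariate normal density evaluated directly from the formula."""
    n = len(mu)
    diff = x - mu
    quad = diff @ np.linalg.solve(Sigma, diff)  # (x - mu)' Sigma^{-1} (x - mu)
    norm_const = (2 * np.pi) ** (n / 2) * np.sqrt(np.linalg.det(Sigma))
    return np.exp(-0.5 * quad) / norm_const

mu = np.array([5.0, 5.0])
Sigma = np.array([[36.0, 21.6],
                  [21.6, 36.0]])  # sigma1 = sigma2 = 6, rho = 0.6
x = np.array([4.0, 7.0])

print(mvn_pdf(x, mu, Sigma))                           # direct formula
print(multivariate_normal(mean=mu, cov=Sigma).pdf(x))  # SciPy's implementation
```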
To get a better feeling for the multivariate normal density, it may be instructive to write it down more explicitly for the bivariate case. We have

$$\boldsymbol{\mu} = \begin{bmatrix} \mu_1 \\ \mu_2 \end{bmatrix}, \qquad \Sigma = \begin{bmatrix} \sigma_1^2 & \rho\sigma_1\sigma_2 \\ \rho\sigma_1\sigma_2 & \sigma_2^2 \end{bmatrix}.$$
Let us write the determinant explicitly:

$$|\Sigma| = \sigma_1^2\sigma_2^2 - \rho^2\sigma_1^2\sigma_2^2 = \sigma_1^2\sigma_2^2(1 - \rho^2).$$
The inverse of the covariance matrix is

$$\Sigma^{-1} = \frac{1}{\sigma_1^2\sigma_2^2(1 - \rho^2)} \begin{bmatrix} \sigma_2^2 & -\rho\sigma_1\sigma_2 \\ -\rho\sigma_1\sigma_2 & \sigma_1^2 \end{bmatrix},$$
and a few calculations yield

$$f_{X_1,X_2}(x_1, x_2) = \frac{1}{2\pi\sigma_1\sigma_2\sqrt{1 - \rho^2}} \exp\left\{ -\frac{z_1^2 - 2\rho z_1 z_2 + z_2^2}{2(1 - \rho^2)} \right\},$$
where

$$z_1 = \frac{x_1 - \mu_1}{\sigma_1}, \qquad z_2 = \frac{x_2 - \mu_2}{\sigma_2}.$$
Clearly, z1 and z2 represent standardizations of variables x1 and x2.
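To verify the algebra, the following sketch (illustrative only; the parameter values are arbitrary choices, not from the text) checks that the expanded bivariate expression in terms of z1 and z2 agrees with the general matrix form of the density.

```python
import numpy as np

# Hypothetical parameters, chosen only for this check
mu1, mu2, s1, s2, rho = 1.0, -2.0, 1.5, 0.8, 0.4
x1, x2 = 0.3, -1.1

# Expanded bivariate formula in terms of the standardized variables
z1, z2 = (x1 - mu1) / s1, (x2 - mu2) / s2
f_expanded = np.exp(-(z1**2 - 2*rho*z1*z2 + z2**2) / (2 * (1 - rho**2))) \
             / (2 * np.pi * s1 * s2 * np.sqrt(1 - rho**2))

# General matrix form with the full covariance matrix
Sigma = np.array([[s1**2, rho*s1*s2], [rho*s1*s2, s2**2]])
d = np.array([x1 - mu1, x2 - mu2])
f_matrix = np.exp(-0.5 * d @ np.linalg.solve(Sigma, d)) \
           / (2 * np.pi * np.sqrt(np.linalg.det(Sigma)))

print(np.isclose(f_expanded, f_matrix))  # True: the two forms coincide
```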
What is the shape of this density function? If the two variables were uncorrelated, i.e., if ρ = 0, the level curves of the density function would just be concentric circles in terms of the standardized variables. In terms of the original variables, we would have a set of concentric ellipses whose axes are parallel to the coordinate axes. The effect of correlation is to rotate the ellipses. Figure 8.4 shows a surface plot of the density function of a bivariate normal with μ1 = μ2 = 5, σ1 = σ2 = 6, and ρ = 0.6. We see the familiar bell shape, but it is the contour plot of Fig. 8.5 that illustrates the effect of positive correlation.
Fig. 8.4 Surface plot of a joint normal PDF.
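Plots like Figs. 8.4 and 8.5 are easy to generate; the sketch below (assuming Matplotlib and SciPy are available; the grid range is an arbitrary choice) draws both the surface and the contour plot for the parameters quoted above.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import multivariate_normal

mu = [5.0, 5.0]
Sigma = [[36.0, 21.6], [21.6, 36.0]]  # sigma1 = sigma2 = 6, rho = 0.6
rv = multivariate_normal(mean=mu, cov=Sigma)

# Evaluate the density on a grid around the mean
x = np.linspace(-15, 25, 200)
X1, X2 = np.meshgrid(x, x)
Z = rv.pdf(np.dstack((X1, X2)))

fig = plt.figure(figsize=(10, 4))
ax1 = fig.add_subplot(1, 2, 1, projection="3d")
ax1.plot_surface(X1, X2, Z, cmap="viridis")  # bell-shaped surface (cf. Fig. 8.4)
ax2 = fig.add_subplot(1, 2, 2)
ax2.contour(X1, X2, Z)   # ellipses rotated by positive correlation (cf. Fig. 8.5)
ax2.set_aspect("equal")
plt.show()
```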
Let us check what happens in the density above if we set ρ = 0, destandardizing the zᵢ to get a clearer picture:

$$f_{X_1,X_2}(x_1, x_2) = \frac{1}{\sqrt{2\pi}\,\sigma_1} \exp\left\{ -\frac{(x_1 - \mu_1)^2}{2\sigma_1^2} \right\} \cdot \frac{1}{\sqrt{2\pi}\,\sigma_2} \exp\left\{ -\frac{(x_2 - \mu_2)^2}{2\sigma_2^2} \right\}.$$
We see that the joint density can be factored into the product of two marginal densities, which are themselves normal. Indeed, the following theorem holds in general.
THEOREM 8.9 If X is a vector of jointly normal random variables and if they are pairwise uncorrelated, then they are independent.
The theorem states that for jointly normal variables, lack of correlation implies independence. We noted that independence implies lack of correlation, but the converse is not true in general (see Example 8.4). The multivariate normal is a significant exception.
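A quick numeric illustration of the theorem (a sketch with hypothetical parameter values, assuming SciPy is available) checks that, with zero correlation, the joint density indeed equals the product of the two marginal normal densities.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

# Hypothetical parameters with zero correlation
mu1, mu2, s1, s2 = 5.0, 5.0, 6.0, 6.0
x1, x2 = 2.0, 9.0

joint = multivariate_normal(mean=[mu1, mu2],
                            cov=[[s1**2, 0.0], [0.0, s2**2]]).pdf([x1, x2])
product = norm(mu1, s1).pdf(x1) * norm(mu2, s2).pdf(x2)

print(np.isclose(joint, product))  # True: the joint density factors
```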
Fig. 8.5 Contour plot of a joint normal PDF.