A few useful theorems in calculus allow us to manipulate limits in an intuitive manner and justify the rules for calculating derivatives that we stated back in Section 2.8. For instance, the limit of the product of two sequences is the product of their limits, if they exist; furthermore, if a function g(·) is continuous, then g can be interchanged with the limit operator, i.e.,

\lim_{n\to\infty} g(x_n) = g\left( \lim_{n\to\infty} x_n \right).
These properties are generalized to stochastic limits by a group of results called Slutsky’s theorems.
THEOREM 9.14 For a continuous function g(·), we have

\operatorname{plim}_{n\to\infty} g(X_n) = g\left( \operatorname{plim}_{n\to\infty} X_n \right).
The practical implication of this theorem is that if we are able to find a consistent estimator of a parameter, we also have a consistent estimator of a function of that parameter.
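This implication is easy to check numerically. The following sketch (the normal population, its parameters, and the choice g(x) = e^x are all illustrative assumptions, not taken from the text) uses the sample mean, a consistent estimator of μ, to build a consistent estimator of e^μ via Theorem 9.14:

```python
import numpy as np

# Sketch of Theorem 9.14: a continuous function of a consistent estimator
# is a consistent estimator of the function of the parameter.
# The population, parameter values, and g(x) = exp(x) are illustrative.
rng = np.random.default_rng(0)
mu = 2.0                      # assumed true population mean
target = np.exp(mu)           # parameter we want to estimate: g(mu)

errors = {}
for n in (100, 10_000, 1_000_000):
    sample = rng.normal(loc=mu, scale=1.0, size=n)
    # |g(sample mean) - g(mu)| should shrink as n grows
    errors[n] = abs(np.exp(sample.mean()) - target)

print(errors)
```

As n grows, the estimation error of exp(X̄) shrinks toward zero, as consistency requires.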
THEOREM 9.15 If the sequence of random variables X_n converges in distribution to random variable X, i.e., X_n \xrightarrow{d} X, and the sequence Y_n converges in probability to a constant c, i.e., plim_{n→∞} Y_n = c, then

X_n + Y_n \xrightarrow{d} X + c,
X_n Y_n \xrightarrow{d} cX,
X_n / Y_n \xrightarrow{d} X / c, \quad \text{provided } c \neq 0.
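A quick Monte Carlo sketch of the product case: below, X_n is a standardized sample mean (approximately standard normal by the central limit theorem) and Y_n is a statistic converging in probability to c = 2, so the product X_n Y_n should behave like a normal with standard deviation c. The exponential population, sample sizes, and construction of Y_n are illustrative assumptions:

```python
import numpy as np

# Sketch of Theorem 9.15 (product case): X_n -> N(0,1) in distribution,
# Y_n -> c = 2 in probability, hence X_n * Y_n is approx N(0, c^2).
# Population choice and sizes are illustrative.
rng = np.random.default_rng(1)
c, n, reps = 2.0, 1_000, 10_000

u = rng.exponential(scale=1.0, size=(reps, n))   # exp(1): mean = sd = 1
x_n = (u.mean(axis=1) - 1.0) * np.sqrt(n)        # CLT: approx N(0,1)
y_n = c + (u.std(axis=1, ddof=1) - 1.0)          # plim Y_n = c, since S -> 1

prod = x_n * y_n
print(prod.mean(), prod.std())                   # approx 0 and approx c
```

The empirical standard deviation of the product comes out close to c = 2, consistent with the theorem.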
Example 9.33 Standard procedures for confidence intervals and hypothesis testing are derived assuming a normal sample, but they are often applied to nonnormal populations. We should make sure that this makes sense, at least for large samples. The central limit theorem implies that, if \bar{X} is the sample mean of a sequence of i.i.d. variables, the statistic

Z_n = \frac{\bar{X} - \mu}{\sigma/\sqrt{n}}
has an approximately standard normal distribution. However, this result assumes knowledge of σ; if we replace σ by the sample standard deviation S, things are not obvious at all. We typically resort to the t distribution, but this again assumes a normal sample. This is where Slutsky’s theorems come in handy. Let us consider the statistic

T_n = \frac{\bar{X} - \mu}{S/\sqrt{n}}
and rewrite it as follows:

T_n = \frac{(\bar{X} - \mu)/(\sigma/\sqrt{n})}{S/\sigma}.
The numerator converges in distribution to a standard normal, courtesy of the central limit theorem:

\frac{\bar{X} - \mu}{\sigma/\sqrt{n}} \xrightarrow{d} N(0, 1).
As to the denominator, it can be shown that the sample variance S² is a consistent estimator of the variance σ², i.e., \operatorname{plim}_{n\to\infty} S^2 = \sigma^2. Then, using Slutsky’s theorems and the continuity of the square root, we see that

\operatorname{plim}_{n\to\infty} \frac{S}{\sigma} = 1,
and therefore

T_n \xrightarrow{d} N(0, 1).
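The conclusion of the example can be checked by simulation. In this sketch (the exponential population and the sample sizes are illustrative assumptions), T_n is computed on many independent nonnormal samples, and its distribution is compared against the standard normal benchmark: mean near 0, standard deviation near 1, and about 95% of values inside ±1.96:

```python
import numpy as np

# Numerical check of Example 9.33: for large i.i.d. samples from a
# nonnormal population, T_n = (mean - mu) / (S / sqrt(n)) is
# approximately standard normal.  Population and sizes are illustrative.
rng = np.random.default_rng(2)
n, reps, mu = 1_000, 10_000, 1.0                 # exp(1) has mean mu = 1

x = rng.exponential(scale=1.0, size=(reps, n))
t_n = (x.mean(axis=1) - mu) / (x.std(axis=1, ddof=1) / np.sqrt(n))

coverage = np.mean(np.abs(t_n) < 1.96)           # should be close to 0.95
print(t_n.mean(), t_n.std(), coverage)
```

Even though each sample is drawn from a markedly skewed population, the empirical coverage of the nominal 95% interval is close to 0.95, which is exactly what Slutsky's theorems promise for large n.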