Category: Inferential Statistics

  • Likelihood ratio tests

    We have introduced likelihood functions as a useful tool for parameter estimation. They also play a role in hypothesis testing and lead to the so-called likelihood ratio test (LRT). The test is based on the likelihood ratio statistic λ(x) = sup{L(θ; x) : θ ∈ Θ0} / sup{L(θ; x) : θ ∈ Θ}. An LRT is a test whose rejection region is of the form {x : λ(x) ≤ c} for a suitable constant c < 1. The rationale…
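
    As a concrete illustration (my sketch, not an example from the text): for a normal sample with known σ, the likelihood ratio for H0 : μ = μ0 has the closed form λ(x) = exp(−n(x̄ − μ0)²/(2σ²)), and −2 ln λ has a chi-square distribution with one degree of freedom under H0.

    ```python
    import numpy as np
    from scipy import stats

    # Illustrative LRT for H0: mu = mu0 in a normal sample with known sigma.
    rng = np.random.default_rng(42)
    n, mu0, sigma = 50, 0.0, 1.0
    x = rng.normal(loc=0.3, scale=sigma, size=n)   # data drawn under the alternative

    # Likelihood ratio: restricted maximum (at mu0) over unrestricted maximum
    # (at the MLE xbar); with known sigma it reduces to a closed form.
    xbar = x.mean()
    lam = np.exp(-n * (xbar - mu0) ** 2 / (2 * sigma**2))

    # Reject H0 when lam <= c; equivalently, when -2*log(lam) is large.
    # Here -2*log(lam) = n*(xbar - mu0)^2 / sigma^2 ~ chi-square(1) under H0.
    w = -2 * np.log(lam)
    p_value = stats.chi2.sf(w, df=1)
    print(f"lambda = {lam:.4g}, -2 log lambda = {w:.4g}, p-value = {p_value:.4g}")
    ```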

  • Size and power of a test

    In the elementary theory of hypothesis testing we consider a null hypothesis such as H0 : μ = μ0 against the alternative Ha : μ ≠ μ0. Given a sample X = (X1,…, Xn), we consider a rejection region C related to the two tails of a standard normal or t distribution. In this case, it is quite easy to evaluate the probability of a type I error, α = P(X ∈ C | H0 is true). This is just the probability…
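
    To make this computation concrete, here is a small sketch (an illustration of mine, not drawn from the text) that evaluates the size and the power of a two-sided z-test with known σ.

    ```python
    import numpy as np
    from scipy import stats

    # Two-sided z-test of H0: mu = mu0 with known sigma; reject when |Z| > z_{alpha/2}.
    mu0, sigma, n, alpha = 0.0, 1.0, 25, 0.05
    z_crit = stats.norm.ppf(1 - alpha / 2)

    # Size: probability of landing in the rejection region when H0 is true.
    size = 2 * stats.norm.sf(z_crit)          # equals alpha by construction

    # Power at a true mean mu1: under mu1, Z ~ N(delta, 1) with
    # noncentrality delta = sqrt(n) * (mu1 - mu0) / sigma.
    mu1 = 0.5
    delta = np.sqrt(n) * (mu1 - mu0) / sigma
    power = stats.norm.sf(z_crit - delta) + stats.norm.cdf(-z_crit - delta)
    print(f"size = {size:.4f}, power at mu1 = {mu1}: {power:.4f}")
    ```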

  • SOME MORE HYPOTHESIS TESTING THEORY

    Arguably, one of the most important contributions of the theory of maximum-likelihood estimation is that it provides us with a systematic approach to finding estimators. Similar considerations apply to interval estimation and hypothesis testing. We have very simple and intuitive ways of computing confidence intervals and testing hypotheses about the mean of a normal population,…
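
    As a reminder of the elementary case referred to here, the following sketch (my own illustration) computes the usual t-based confidence interval for the mean of a normal population with unknown variance.

    ```python
    import numpy as np
    from scipy import stats

    # 95% confidence interval for the mean of a normal population, sigma unknown.
    rng = np.random.default_rng(7)
    x = rng.normal(loc=10.0, scale=2.0, size=30)
    n, xbar, s = len(x), x.mean(), x.std(ddof=1)

    t_crit = stats.t.ppf(0.975, df=n - 1)     # two-sided critical value
    half_width = t_crit * s / np.sqrt(n)
    print(f"95% CI for mu: [{xbar - half_width:.3f}, {xbar + half_width:.3f}]")
    ```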

  • The method of maximum likelihood

    The method of maximum likelihood is an alternative approach to finding estimators in a systematic way. Imagine that a random variable X has a PDF characterized by a single parameter θ; we indicate this as fX(x; θ). If we draw a sample of n i.i.d. variables from this distribution, the joint density is just the product of the individual PDFs: fX(x1; θ) fX(x2; θ) ⋯ fX(xn; θ). This is a…
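
    A minimal numerical sketch (mine, with an exponential distribution chosen for convenience): maximizing the likelihood, or equivalently minimizing the negative log-likelihood, recovers the closed-form MLE 1/x̄ for the rate parameter.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    # MLE for the rate of an exponential distribution, f(x; theta) = theta*exp(-theta*x).
    rng = np.random.default_rng(0)
    x = rng.exponential(scale=1 / 2.0, size=200)   # true theta = 2

    def neg_log_likelihood(theta):
        # Joint density of the i.i.d. sample = product of individual PDFs;
        # we minimize the negative log-likelihood instead of maximizing.
        return -(len(x) * np.log(theta) - theta * x.sum())

    res = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 50), method="bounded")
    print(f"numerical MLE: {res.x:.4f}, closed form 1/xbar: {1 / x.mean():.4f}")
    ```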

  • The method of moments

    We have already sketched an application of this method in Example 9.34. To state the approach in more generality, let us introduce the sample moment of order k: Mk = (1/n) ∑i Xi^k. The sample moment is the sample counterpart of the moment mk = E[X^k]. Let us assume that we need an estimate of k parameters θ1, θ2,…, θk. The method of moments relies on the solution of the…
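
    A brief sketch (my illustration, assuming a gamma population): matching the first two sample moments M1 and M2 to their theoretical counterparts yields explicit estimators of the shape and scale parameters.

    ```python
    import numpy as np

    # Method of moments for a gamma distribution with shape a and scale b:
    # E[X] = a*b and Var(X) = E[X^2] - E[X]^2 = a*b^2, so matching sample
    # moments gives b_hat = (M2 - M1^2)/M1 and a_hat = M1/b_hat.
    rng = np.random.default_rng(1)
    x = rng.gamma(shape=3.0, scale=2.0, size=5000)

    M1 = x.mean()               # sample moment of order 1
    M2 = np.mean(x ** 2)        # sample moment of order 2

    b_hat = (M2 - M1 ** 2) / M1
    a_hat = M1 / b_hat
    print(f"a_hat = {a_hat:.3f} (true 3), b_hat = {b_hat:.3f} (true 2)")
    ```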

  • Features of point estimators

    We list here a few desirable properties of a point estimator θ̂ for a parameter θ. When comparing alternative estimators, we may have to trade off one property for another. We are already familiar with the concept of an unbiased estimator. An estimator θ̂ is unbiased if E[θ̂] = θ. We have shown that the sample mean is an unbiased estimator of the expected value; in Chapter 10 we will show…
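
    The following Monte Carlo sketch (my illustration) checks unbiasedness empirically: the sample mean is unbiased for μ, while the variance estimator that divides by n, rather than by n − 1, is biased.

    ```python
    import numpy as np

    # Monte Carlo check of (un)biasedness: average many estimates and
    # compare with the true parameter value.
    rng = np.random.default_rng(3)
    mu, sigma2, n, reps = 5.0, 4.0, 10, 100_000

    samples = rng.normal(mu, np.sqrt(sigma2), size=(reps, n))
    means = samples.mean(axis=1)
    var_biased = samples.var(axis=1, ddof=0)    # divides by n
    var_unbiased = samples.var(axis=1, ddof=1)  # divides by n - 1

    print(f"E[mean] ~ {means.mean():.4f}  (true mu = {mu})")
    print(f"E[var, /n]   ~ {var_biased.mean():.4f}  (biased, true sigma^2 = {sigma2})")
    print(f"E[var, /n-1] ~ {var_unbiased.mean():.4f}  (unbiased)")
    ```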

  • PARAMETER ESTIMATION

    Introductory treatments of inferential statistics focus on normal populations. In that case, the two parameters characterizing the distribution, μ and σ², coincide with the expected value (the first-order moment) and the variance (the second-order central moment). Hence, students might believe that parameter estimation is just about calculating sample means and variances. It is easy to see that this is not…
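
    A tiny example of this point (mine, not the book's): for an exponential population the parameter λ is neither the mean nor the variance, since E[X] = 1/λ; it must be recovered from a sample moment.

    ```python
    import numpy as np

    # For an exponential population: E[X] = 1/lambda, Var(X) = 1/lambda^2.
    # The parameter cannot be read off directly as a mean or a variance;
    # it is derived from the sample mean instead.
    rng = np.random.default_rng(5)
    x = rng.exponential(scale=1 / 1.5, size=10_000)   # true lambda = 1.5
    lambda_hat = 1 / x.mean()
    print(f"lambda_hat = {lambda_hat:.3f} (true 1.5)")
    ```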

  • Slutsky’s theorems

    A few useful theorems in calculus allow us to manipulate limits in an intuitive manner and justify the rules for calculating derivatives that we stated back in Section 2.8. For instance, the limit of the product of two sequences is the product of their limits, if they exist; furthermore, if a function g(·) is continuous, then g can be…
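
    The probabilistic counterparts of these results, Slutsky's theorems, can be illustrated numerically; in the sketch below (my illustration), replacing σ with the consistent estimator Sn leaves the limiting standard normal distribution unchanged.

    ```python
    import numpy as np
    from scipy import stats

    # Z_n = sqrt(n)*(xbar - mu)/sigma -> N(0,1) by the CLT, while S_n -> sigma
    # in probability; by Slutsky, T_n = Z_n * (sigma/S_n) has the same limit.
    rng = np.random.default_rng(11)
    n, reps = 400, 20_000
    mu, sigma = 0.5, np.sqrt(1 / 12)          # mean and std of Uniform(0, 1)

    samples = rng.uniform(0, 1, size=(reps, n))
    xbar = samples.mean(axis=1)
    s = samples.std(axis=1, ddof=1)           # S_n -> sigma in probability

    z_n = np.sqrt(n) * (xbar - mu) / sigma    # -> N(0,1) by the CLT
    t_n = z_n * (sigma / s)                   # Slutsky: same N(0,1) limit

    # Both Kolmogorov-Smirnov distances from N(0,1) should be small.
    print(stats.kstest(z_n, "norm").statistic, stats.kstest(t_n, "norm").statistic)
    ```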

  • Almost-sure convergence

    The last type of convergence that we consider is a strong one, in the sense that it implies convergence in probability, which in turn implies convergence in distribution. DEFINITION 9.12 (Almost-sure convergence) The sequence of random variables X1, X2,… converges almost surely to random variable X if for every ε > 0, the following condition holds: P(|Xn − X| > ε for infinitely many n) = 0. Almost-sure convergence is denoted by Xn →a.s. X. Sometimes,…
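
    Almost-sure convergence is what the strong law of large numbers asserts for the sample mean; the sketch below (my illustration) tracks a few sample paths of the running mean, each of which settles down to E[X].

    ```python
    import numpy as np

    # Strong law of large numbers as an instance of almost-sure convergence:
    # along (almost) every sample path, the running mean converges to E[X].
    rng = np.random.default_rng(21)
    n_steps, n_paths = 10_000, 5
    x = rng.exponential(scale=1.0, size=(n_paths, n_steps))   # E[X] = 1

    running_means = np.cumsum(x, axis=1) / np.arange(1, n_steps + 1)
    for i, path in enumerate(running_means):
        print(f"path {i}: running mean after {n_steps} draws = {path[-1]:.4f}")
    ```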

  • Convergence in distribution

    We have already met convergence in distribution when dealing with the central limit theorem. DEFINITION 9.11 (Convergence in distribution) The sequence of random variables X1, X2,… with CDFs Fn(x), converges in distribution to random variable X with CDF F(x) if lim n→∞ Fn(x) = F(x) for all points at which F(x) is continuous. Convergence in distribution can be denoted as Xn →d X. When convergence in distribution occurs, we…
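
    The CLT offers a direct illustration; in this sketch (mine), the empirical CDF of the standardized sample mean of exponentials approaches the standard normal CDF pointwise as n grows.

    ```python
    import numpy as np
    from scipy import stats

    # Convergence in distribution via the CLT: the CDF of the standardized
    # sample mean of exponentials approaches the standard normal CDF F(x).
    rng = np.random.default_rng(33)
    reps = 50_000
    for n in (2, 10, 100):
        samples = rng.exponential(scale=1.0, size=(reps, n))   # mean 1, std 1
        z = np.sqrt(n) * (samples.mean(axis=1) - 1.0)          # standardized mean
        for x0 in (-1.0, 0.0, 1.0):
            Fn = np.mean(z <= x0)                              # empirical CDF F_n(x0)
            print(f"n={n:4d}  x={x0:+.1f}  F_n(x)={Fn:.4f}  F(x)={stats.norm.cdf(x0):.4f}")
    ```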