
  • What is linear, exactly?

    If we label a model like y = a + bx as linear, no eyebrow should be raised. Now, consider a regression model involving a squared explanatory variable: Is this linear? Actually it is, in terms of the factor that matters most: fitting model coefficients. True, the model is nonlinear in terms of the explanatory variable x, but the actual unknowns when…
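
    A minimal numerical sketch of the point above (the data and library calls are my own illustration, not the book's): fitting y = a + bx + cx^2 is still a linear problem, because the unknowns a, b, c enter linearly and the design matrix simply gains a column of squared x values.

        import numpy as np

        # Made-up data roughly following y = 1 + 2x + 0.5x^2 plus noise (illustrative only).
        rng = np.random.default_rng(0)
        x = np.linspace(0, 5, 30)
        y = 1 + 2 * x + 0.5 * x**2 + rng.normal(0, 0.3, size=x.size)

        # Nonlinear in x, but linear in the coefficients (a, b, c):
        # each row of the design matrix is [1, x_i, x_i^2].
        X = np.column_stack([np.ones_like(x), x, x**2])
        coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
        print(coeffs)  # estimates of a, b, c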

  • Alternative approaches for model fitting

    In the least-squares method, we square residuals and solve the corresponding optimization problem analytically. We should wonder what is so special about squared residuals. We might just as well take the absolute values of the residuals and solve the resulting optimization problem. Another noteworthy point is that in so doing we are essentially considering average values of squared or…
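
    As a sketch of the contrast drawn above (data and optimizer choice are illustrative assumptions, not from the text): the sum of squared residuals can be minimized analytically via the normal equations, while the sum of absolute residuals generally requires a numerical optimizer.

        import numpy as np
        from scipy.optimize import minimize

        # Made-up data scattered around a straight line.
        rng = np.random.default_rng(1)
        x = np.linspace(0, 10, 40)
        y = 3 + 1.5 * x + rng.normal(0, 1, size=x.size)
        X = np.column_stack([np.ones_like(x), x])

        # Least squares: closed-form solution of the normal equations X'X b = X'y.
        beta_ls = np.linalg.solve(X.T @ X, X.T @ y)

        # Least absolute deviations: no closed form, so minimize numerically.
        def sum_abs_residuals(beta):
            return np.sum(np.abs(y - X @ beta))

        beta_lad = minimize(sum_abs_residuals, x0=beta_ls, method="Nelder-Mead").x
        print(beta_ls, beta_lad)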

  • LEAST-SQUARES METHOD

    Consider the data tabulated and depicted in Fig. 10.1. These joint observations are displayed as circles, and a look at the plot suggests the possibility of finding a linear relationship between x and y. A linear law relating the two variables, such as y = a + bx, can be exploited to understand what drives a social or physical phenomenon, and is one way of exploiting…
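
    A minimal sketch of fitting such a linear law y = a + bx by least squares; the numbers below are placeholders, not the data of Fig. 10.1.

        import numpy as np

        # Placeholder (x, y) observations, not the data of Fig. 10.1.
        x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
        y = np.array([2.1, 3.9, 6.2, 8.1, 9.8, 12.3])

        # Textbook least-squares formulas for y = a + b*x:
        #   b = sum((x - xbar)*(y - ybar)) / sum((x - xbar)^2),  a = ybar - b*xbar
        xbar, ybar = x.mean(), y.mean()
        b = np.sum((x - xbar) * (y - ybar)) / np.sum((x - xbar) ** 2)
        a = ybar - b * xbar
        print(a, b)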

  • Introduction

    We take advantage of all the probabilistic and statistical knowledge we have built so far to get into the realm of empirical model building. Models come in many forms, but what we want to do here is to find a relationship between two variables, say, x and y, based on a set of n joint observations (xi, yi), i = 1,…,n.…

  • Likelihood ratio tests

    We have introduced likelihood functions as a useful tool for parameter estimation. They also play a role in hypothesis testing and lead to the so-called likelihood ratio test (LRT). The test is based on the likelihood ratio statistic λ(x), the ratio between the likelihood maximized under the null hypothesis and the likelihood maximized over the whole parameter space. An LRT is a test whose rejection region is of the form λ(x) ≤ c, for a suitable constant c < 1. The rationale…
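
    A sketch of the idea under an assumption of my own choosing (a normal sample with known standard deviation, testing H0 : μ = μ0): the ratio compares the likelihood under H0 with the likelihood at the unrestricted maximum, and small values of the ratio speak against H0.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(2)
        sigma, mu0 = 1.0, 5.0
        x = rng.normal(5.4, sigma, size=25)   # simulated sample; true mean chosen arbitrarily

        # Log-likelihood of the sample as a function of mu (sigma known).
        def loglik(mu):
            return np.sum(norm.logpdf(x, loc=mu, scale=sigma))

        # Likelihood ratio: likelihood under H0 divided by likelihood at the MLE (the sample mean).
        lam = np.exp(loglik(mu0) - loglik(x.mean()))   # 0 < lam <= 1
        # Reject H0 when lam <= c; equivalently, when -2*log(lam) is large.
        print(lam, -2 * np.log(lam))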

  • Size and power of a test

    In the elementary theory of hypothesis testing we consider a null hypothesis such as H0 : μ = μ0 against the alternative Ha : μ ≠ μ0. Given a sample X = (X1,…, Xn), we consider a rejection region C related to the two tails of a standard normal or t distribution. In this case, it is quite easy to evaluate the probability of a type I error, α = P(X ∈ C | H0). This is just the probability…
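
    A sketch with made-up numbers (mu0, sigma, n, and the alternative mu1 are my assumptions) of how size and power can be computed for the two-sided z-test on a normal mean with known sigma: size is the rejection probability when μ = μ0, power is the rejection probability at μ1.

        import numpy as np
        from scipy.stats import norm

        mu0, sigma, n, alpha = 10.0, 2.0, 25, 0.05   # illustrative values
        z = norm.ppf(1 - alpha / 2)                  # two-tail critical value
        se = sigma / np.sqrt(n)

        # Size: probability of rejecting when H0 is true (alpha by construction).
        size = 2 * norm.sf(z)

        # Power at an alternative mu1: probability that the standardized statistic
        # falls in the rejection region when the true mean is mu1.
        mu1 = 10.8
        shift = (mu1 - mu0) / se
        power = norm.sf(z - shift) + norm.cdf(-z - shift)
        print(size, power)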

  • SOME MORE HYPOTHESIS TESTING THEORY

    Arguably, one of the most important contributions of the theory of maximum-likelihood estimation is that it provides us with a systematic approach to find estimators. Similar considerations apply to interval estimation and hypothesis testing. We have very simple and intuitive ways of computing confidence intervals and testing hypotheses about the mean of a normal population,…

  • The method of maximum likelihood

    The method of maximum likelihood is an alternative approach to find estimators in a systematic way. Imagine that a random variable X has a PDF characterized by a single parameter θ; we indicate this as fX(x; θ). If we draw a sample of n i.i.d. variables from this distribution, the joint density is just the product of the individual PDFs: fX(x1; θ) · fX(x2; θ) ⋯ fX(xn; θ). This is a…
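
    A small sketch of the method, for an example distribution of my own choosing (exponential with rate θ): the log-likelihood can be maximized analytically, giving 1/x̄, and a numerical search recovers the same value.

        import numpy as np
        from scipy.optimize import minimize_scalar

        rng = np.random.default_rng(3)
        x = rng.exponential(scale=2.0, size=200)   # simulated sample, true rate = 0.5

        # Negative log-likelihood of an exponential with rate theta:
        # log L(theta) = n*log(theta) - theta*sum(x).
        def neg_loglik(theta):
            return -(x.size * np.log(theta) - theta * x.sum())

        theta_mle = 1 / x.mean()   # analytical maximizer
        theta_num = minimize_scalar(neg_loglik, bounds=(1e-6, 10), method="bounded").x
        print(theta_mle, theta_num)   # the two should agree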

  • The method of moments

    We have already sketched an application of this method in Example 9.34. To state the approach in more generality, let us introduce the sample moment of order k: Mk = (X1^k + X2^k + ⋯ + Xn^k)/n. The sample moment is the sample counterpart of the moment mk = E[X^k]. Let us assume that we need an estimate of k parameters θ1, θ2,…, θk. The method of moments relies on the solution of the…
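
    A sketch of the idea for an example of my own choosing (a gamma distribution with shape k and scale θ): match the first two sample moments to E[X] = kθ and Var(X) = kθ^2, then solve for the parameters.

        import numpy as np

        rng = np.random.default_rng(4)
        x = rng.gamma(shape=3.0, scale=2.0, size=1000)   # simulated sample, true k = 3, theta = 2

        # Sample moments of order 1 and 2.
        m1 = np.mean(x)
        m2 = np.mean(x**2)

        # Moment equations for gamma(k, theta): E[X] = k*theta, E[X^2] - E[X]^2 = k*theta^2.
        var = m2 - m1**2
        theta_hat = var / m1
        k_hat = m1**2 / var
        print(k_hat, theta_hat)   # should be close to 3 and 2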

  • Features of point estimators

    We list here a few desirable properties of a point estimator θ̂ for a parameter θ. When comparing alternative estimators, we may have to trade off one property for another. We are already familiar with the concept of an unbiased estimator. An estimator θ̂ is unbiased if E[θ̂] = θ. We have shown that the sample mean is an unbiased estimator of the expected value; in Chapter 10 we will show…
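
    A simulation sketch (the distribution and sample size are my own illustration): the sample mean is unbiased for the expected value, whereas the variance estimator that divides by n is not; averaging many replications makes the difference visible.

        import numpy as np

        rng = np.random.default_rng(5)
        mu, sigma, n, reps = 0.0, 1.0, 10, 20000

        means, var_n, var_n1 = [], [], []
        for _ in range(reps):
            x = rng.normal(mu, sigma, size=n)
            means.append(x.mean())
            var_n.append(x.var(ddof=0))    # divides by n (biased)
            var_n1.append(x.var(ddof=1))   # divides by n-1 (unbiased)

        # Expected values: mu, sigma^2*(n-1)/n, and sigma^2, respectively.
        print(np.mean(means), np.mean(var_n), np.mean(var_n1))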