PARAMETER ESTIMATION
Introductory treatments of inferential statistics focus on normal populations. In that case, the two parameters characterizing the distribution, μ and σ², coincide with the expected value (the first-order moment) and the variance (the second-order central moment). Hence, students might believe that parameter estimation is just about calculating sample means and variances. It is easy to see that this is not…
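To make the moment-based estimates concrete, here is a minimal sketch in Python (the language, the numpy routines, and the normal sample with μ = 10, σ = 2 are illustrative assumptions, not from the text) computing the sample mean and the unbiased sample variance:

    import numpy as np

    rng = np.random.default_rng(seed=42)                 # reproducible pseudorandom stream
    sample = rng.normal(loc=10.0, scale=2.0, size=1000)  # hypothetical normal sample

    mean_hat = sample.mean()      # estimates mu, the first-order moment
    var_hat = sample.var(ddof=1)  # unbiased estimate of sigma^2 (divides by n - 1)
    print(mean_hat, var_hat)

For a normal population these two statistics do estimate the distribution's parameters; the point of the section is that this happy coincidence does not carry over to arbitrary distributions.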
-
Slutsky’s theorems
A few useful theorems in calculus allow us to manipulate limits in an intuitive manner and justify the rules for calculating derivatives that we stated back in Section 2.8. For instance, the limit of the product of two sequences is the product of their limits, if they exist; furthermore, if a function g(·) is continuous, then g can be…
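For reference, the probabilistic counterparts of these limit rules are usually collected under Slutsky's name; a standard statement (added here for convenience, not verbatim from the truncated text) is: if Xn →d X and Yn →p c for a constant c, then

    Xn + Yn →d X + c,    Xn·Yn →d c·X,    Xn/Yn →d X/c  (provided c ≠ 0).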
-
Almost-sure convergence
The last type of convergence that we consider is a strong one in the sense that it implies convergence in probability, which in turn implies convergence in distribution. DEFINITION 9.12 (Almost-sure convergence) The sequence of random variables X1, X2,…, converges almost surely to random variable X if, for every ε > 0, the following condition holds:

    P( lim_{n→∞} |Xn − X| < ε ) = 1.

Almost-sure convergence is denoted by Xn →a.s. X. Sometimes,…
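An equivalent formulation, standard in the literature and perhaps more transparent (added here for clarity, not verbatim from the truncated text), says that the set of outcomes on which the sequence converges pointwise carries all the probability mass:

    P( { ω : lim_{n→∞} Xn(ω) = X(ω) } ) = 1.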
-
Convergence in distribution
We have already met convergence in distribution when dealing with the central limit theorem. DEFINITION 9.11 (Convergence in distribution) The sequence of random variables X1, X2,…, with CDFs Fn(x), converges in distribution to random variable X with CDF F(x) if

    lim_{n→∞} Fn(x) = F(x)

for all points x at which F(x) is continuous. Convergence in distribution can be denoted as Xn →d X. When convergence in distribution occurs, we…
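As a concrete illustration (a minimal sketch; the exponential population, sample size, and replication count are assumptions, and scipy is used for the normal CDF), the following Python fragment compares the empirical CDF of the standardized sample mean against the standard normal CDF, the limit promised by the central limit theorem:

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    n, reps = 50, 100_000      # sample size and number of replications (assumed)
    mu, sigma = 1.0, 1.0       # mean and standard deviation of an Exp(1) population

    # Standardized sample means: (Xbar_n - mu) / (sigma / sqrt(n))
    xbar = rng.exponential(scale=1.0, size=(reps, n)).mean(axis=1)
    z = (xbar - mu) / (sigma / np.sqrt(n))

    # Empirical CDF of z versus the N(0, 1) CDF at a few points.
    for x in (-2.0, -1.0, 0.0, 1.0, 2.0):
        print(x, (z <= x).mean(), norm.cdf(x))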
-
Convergence in quadratic mean
Consider the ordinary limits of expected value and variance of the sample mean X̄n:

    lim_{n→∞} E[X̄n] = μ,    lim_{n→∞} Var(X̄n) = lim_{n→∞} σ²/n = 0.

When variance goes to zero like this, intuition suggests that the probability mass gets concentrated and some kind of convergence occurs. DEFINITION 9.9 (Convergence in quadratic mean to a number) If E[Xn] = μn and Var(Xn) = σn², and the ordinary limits of the sequence of expected values and variances…
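The reason the two ordinary limits are all that matters is the standard decomposition of the mean square deviation (a textbook identity, added here as the missing step):

    E[(Xn − μ)²] = Var(Xn) + (E[Xn] − μ)² = σn² + (μn − μ)²,

so if μn → μ and σn² → 0, then E[(Xn − μ)²] → 0, which is exactly convergence in quadratic mean to the number μ.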
-
Convergence in probability
The first stochastic convergence concept that we illustrate is not the strongest one, but it can be easier to grasp. DEFINITION 9.7 (Convergence in probability) A sequence of random variables, X1, X2,…, converges in probability to a random variable X if, for every ε > 0,

    lim_{n→∞} P( |Xn − X| ≥ ε ) = 0.

The definition may look intimidating, but it is actually intuitive: Xn tends to X if, for an arbitrarily small ε,…
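A quick numerical check (a minimal sketch; the Uniform(0, 1) population, the tolerance ε = 0.05, and the replication count are assumptions) shows the probability of a deviation of at least ε shrinking as n grows, which is the weak law of large numbers at work:

    import numpy as np

    rng = np.random.default_rng(1)
    eps, reps = 0.05, 1_000   # tolerance and number of replications (assumed)
    mu = 0.5                  # mean of the Uniform(0, 1) population

    # Estimate P(|Xbar_n - mu| >= eps) by simulation for growing n.
    for n in (10, 100, 1_000, 10_000):
        xbar = rng.uniform(size=(reps, n)).mean(axis=1)
        print(n, (np.abs(xbar - mu) >= eps).mean())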
-
STOCHASTIC CONVERGENCE AND THE LAW OF LARGE NUMBERS
In this section we begin to consider in more depth the issues involved in inferential statistics. The aim is to bridge the gap between the elementary treatment commonly found in business-oriented textbooks and the higher-level books geared toward mathematical statistics. As we said, most readers can safely skip these sections. Others can just have…
-
Random-number generation
Any Monte Carlo approach relies on the ability to generate variables that look reasonably random. Clearly, no computer algorithm can be truly random, but all we need is a way of generating pseudorandom variables that would trick statistical tests into believing that they are truly random. The starting point of any such strategy is the…
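The classical building block for such a strategy is a linear congruential generator producing uniform pseudorandom numbers; the sketch below uses the well-known Park–Miller "minimal standard" parameters (an illustrative assumption, not necessarily the text's choice):

    def lcg(seed: int, n: int):
        """Yield n pseudorandom numbers in (0, 1) from a linear congruential generator."""
        m, a, c = 2**31 - 1, 16807, 0   # Park-Miller "minimal standard" parameters (assumed)
        x = seed
        for _ in range(n):
            x = (a * x + c) % m         # the integer state evolves deterministically
            yield x / m                 # map the state to the unit interval

    print(list(lcg(seed=12345, n=5)))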
-
Discrete-event vs. discrete-time simulation
The time we experience in everyday life is continuous. Engineers simulating, e.g., the flight behavior of an aircraft have to build a continuous-time model accounting for quite complex dynamics. To make the model amenable to numerical simulation, suitable discretization schemes have to be devised; indeed, nothing is continuous in the digital world of computers. The way…
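To make the discretization point concrete, here is a minimal sketch of one such scheme, the forward Euler method, applied to a toy continuous-time model (the dynamics dx/dt = −kx, the step size, and the horizon are assumptions for illustration):

    import math

    # Forward-Euler discretization of the continuous-time model dx/dt = -k*x.
    k, dt, x0, t_end = 0.5, 0.01, 1.0, 5.0
    x = x0
    for _ in range(int(t_end / dt)):
        x += dt * (-k * x)              # advance the state by one discrete time step

    print(x, math.exp(-k * t_end))      # Euler approximation vs. exact solution e^(-k*t)

Shrinking the step dt makes the discrete-time trajectory track the continuous one more closely, at the cost of more steps.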
-
MONTE CARLO SIMULATION
Monte Carlo simulation is a widely used tool in countless branches of physics, engineering, economics, finance, and business in general. Roughly speaking, the aim is to simulate a system on a computer, in order to evaluate its performance under random scenarios. The name was actually invented by physicists and aptly reflects the role of randomness.…
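As a first taste of the idea (a minimal sketch; the integrand and the sample size are illustrative assumptions), Monte Carlo estimates an expectation by averaging over random scenarios, here the integral of exp(−x²) over [0, 1] together with a rough confidence interval:

    import numpy as np

    rng = np.random.default_rng(2024)
    n = 100_000                         # number of random scenarios (assumed)

    # Estimate the integral of exp(-x^2) over [0, 1] as E[exp(-U^2)], U ~ Uniform(0, 1).
    u = rng.uniform(size=n)
    g = np.exp(-u**2)
    estimate = g.mean()
    half_width = 1.96 * g.std(ddof=1) / np.sqrt(n)   # approximate 95% confidence half-width

    print(f"{estimate:.5f} +/- {half_width:.5f}")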