Author: haroonkhan
-
Geometric distribution
The geometric distribution is a generalization of the Bernoulli random variable. The underlying conceptual mechanism is the same, but the idea now is repeating identical and independent Bernoulli trials until we get the first success. The number of experiments needed to stop the sequence is a random variable X, with unbounded support 1, 2, 3,…. Finding its PMF…
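Since each trial succeeds with probability p, stopping at trial k requires k − 1 failures followed by one success, which gives the PMF P(X = k) = (1 − p)^(k−1) p. A minimal numeric sketch (the function name and the value p = 0.3 are illustrative choices, not from the text):

```python
def geometric_pmf(k, p):
    """P(X = k): the first success occurs on trial k, after k - 1 failures."""
    return (1 - p) ** (k - 1) * p

# The probabilities decay geometrically over the unbounded support 1, 2, 3, ...
p = 0.3  # illustrative success probability
partial_sum = sum(geometric_pmf(k, p) for k in range(1, 200))
print(round(partial_sum, 6))  # the PMF sums to 1 over the support
```

Truncating the sum at a large k is enough here because the tail probabilities vanish geometrically.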
-
Bernoulli distribution
The Bernoulli distribution is based on the idea of carrying out a random experiment, which may result in a success or a failure. Let p be the probability of success; then, 1 − p is the probability of failure. If we assign the value 1 to variable X in case of success, and 0 otherwise, we get the following PMF: It…
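The PMF described above, P(X = 1) = p and P(X = 0) = 1 − p, can be sketched directly (the function name and the value p = 0.7 are illustrative):

```python
def bernoulli_pmf(x, p):
    """PMF of a Bernoulli(p) variable: P(X = 1) = p, P(X = 0) = 1 - p."""
    if x == 1:
        return p
    if x == 0:
        return 1 - p
    return 0.0  # zero probability outside the support {0, 1}

p = 0.7
print(bernoulli_pmf(1, p), bernoulli_pmf(0, p))  # the two probabilities sum to 1
```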
-
Discrete uniform distribution
The uniform distribution is arguably the simplest model of uncertainty, as it assigns the same probability to each outcome: This makes sense only if there is a finite number n of possible values that the random variable can assume. If they are consecutive integer numbers, we have an integer uniform distribution, which is characterized by the lower…
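With n possible values, each outcome gets probability 1/n. A sketch for the integer case, characterized by its lower and upper bounds (the function name and the fair-die example are illustrative):

```python
def int_uniform_pmf(x, a, b):
    """P(X = x) for an integer uniform variable on {a, a+1, ..., b}."""
    n = b - a + 1  # number of equally likely values
    return 1.0 / n if a <= x <= b else 0.0

# Illustrative example: a fair die, lower bound 1 and upper bound 6
print(int_uniform_pmf(3, 1, 6))  # each face has probability 1/6
```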
-
Empirical distributions
Empirical distributions feature the closest link with descriptive statistics, since their PMF is typically estimated by collecting empirical relative frequencies. For instance, if we consider a sample of 10 observations of a random variable X, and X = 1 occurs in three cases, X = 2 in five cases, and X = 3 occurs twice, we may estimate P(X = 1) = 0.3, P(X = 2) = 0.5, and P(X = 3) = 0.2. Empirical distributions feature the largest…
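The estimation step in the example above is just counting relative frequencies; a minimal sketch (the function name is illustrative):

```python
from collections import Counter

def empirical_pmf(sample):
    """Estimate a PMF from relative frequencies observed in a sample."""
    counts = Counter(sample)
    n = len(sample)
    return {x: c / n for x, c in sorted(counts.items())}

# The 10-observation sample from the text: three 1s, five 2s, two 3s
sample = [1] * 3 + [2] * 5 + [3] * 2
print(empirical_pmf(sample))  # {1: 0.3, 2: 0.5, 3: 0.2}
```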
-
A FEW USEFUL DISCRETE DISTRIBUTIONS
There is a wide family of discrete probability distributions, which we cannot cover exhaustively. Nevertheless, we may get acquainted with the essential ones, which will be illustrated by a few examples. First, we should draw the line between empirical and theoretical distributions. Since these terms may be a tad misleading, it is important to clarify…
-
Properties of variance
The first thing we should observe is that variance cannot be negative, as it is the expected value of a squared deviation. It is zero for a random variable that is not random at all, i.e., a constant. In doing calculations, the following identity is quite useful: Var(X) = E[X²] − (E[X])². This is the analog of Eqs. (4.5) and (4.6) in…
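As a numeric check, the shortcut identity Var(X) = E[X²] − (E[X])² can be compared against the definitional form E[(X − μ)²] on a small PMF (the example distribution is illustrative):

```python
def expect(pmf, g=lambda x: x):
    """E[g(X)] for a discrete variable given as a {value: probability} dict."""
    return sum(g(x) * p for x, p in pmf.items())

pmf = {1: 0.3, 2: 0.5, 3: 0.2}  # an illustrative PMF
mu = expect(pmf)
var_def = expect(pmf, lambda x: (x - mu) ** 2)        # E[(X - mu)^2]
var_identity = expect(pmf, lambda x: x ** 2) - mu**2  # E[X^2] - (E[X])^2
print(abs(var_def - var_identity) < 1e-9)  # True: the two forms agree
```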
-
VARIANCE AND STANDARD DEVIATION
The expected value of a random variable tells us something about the location of its distribution, but we need a characterization of dispersion and risk as well. In descriptive statistics, we consider squared deviations with respect to the mean. Here we do basically the same thing, with respect to the expected value. DEFINITION 6.9 (Variance…
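Just as in descriptive statistics, the variance is the expected squared deviation from the center, here the expected value μ = E[X], and the standard deviation is its square root. A minimal sketch (the function name and the fair-coin PMF are illustrative):

```python
import math

def variance(pmf):
    """Var[X] = E[(X - mu)^2] for a discrete {value: probability} PMF."""
    mu = sum(x * p for x, p in pmf.items())
    return sum((x - mu) ** 2 * p for x, p in pmf.items())

pmf = {0: 0.5, 1: 0.5}     # an illustrative fair Bernoulli trial
var = variance(pmf)
sigma = math.sqrt(var)     # standard deviation, in the same units as X
print(var, sigma)          # 0.25 0.5
```

Taking the square root restores the original unit of measurement, which is why the standard deviation is often easier to interpret than the variance.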
-
Expected value of a function of a random variable
Typically, a random variable is just a risk factor that will affect some managerially more relevant outcome linked to cost or profit. This link may be represented by a function; hence, we are interested in functions of random variables. Given a random variable X and a function, like g(x) = x² or g(x) = max{x, 0}, we define a new…
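For a discrete variable, the expected value of g(X) is obtained by weighting g(x) with the probabilities of X, E[g(X)] = Σ g(x) P(X = x). A sketch using the two functions mentioned above (the function name and the example PMF are illustrative):

```python
def expected_value(pmf, g):
    """E[g(X)] = sum of g(x) * P(X = x) over the support of X."""
    return sum(g(x) * p for x, p in pmf.items())

# An illustrative PMF for a risk factor X
pmf = {0: 0.2, 1: 0.5, 2: 0.2, 3: 0.1}
print(expected_value(pmf, lambda x: x ** 2))     # E[X^2]
print(expected_value(pmf, lambda x: max(x, 0)))  # E[max{X, 0}]
```

Note that only the function values change; the probabilities are still those of X itself, so there is no need to derive the distribution of g(X) first.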
-
Properties of expectation
We may think of the expected value as an operator mapping a random variable X into its expected value μ = E[X]. The expectation operator enjoys two very useful properties. PROPERTY 6.6 (Linearity of expectation 1) Given a random variable X with expected value E[X], we have E[αX + β] = αE[X] + β for any numbers α and β. This property is fairly easy to prove: Informally, the…
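The linearity property E[αX + β] = αE[X] + β can be checked numerically on a small PMF: transforming the variable first and then taking the expectation gives the same result as applying the expectation first (the example PMF and the values of α and β are illustrative):

```python
def expect(pmf):
    """E[X] for a discrete variable given as a {value: probability} dict."""
    return sum(x * p for x, p in pmf.items())

pmf = {1: 0.3, 2: 0.5, 3: 0.2}  # an illustrative PMF
alpha, beta = 4.0, -1.0

# Transform the variable, then take the expectation ...
lhs = sum((alpha * x + beta) * p for x, p in pmf.items())
# ... versus applying linearity: alpha * E[X] + beta
rhs = alpha * expect(pmf) + beta
print(abs(lhs - rhs) < 1e-9)  # True
```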
-
Expected value vs. mean
Looking at Definition 6.3, the similarity with how the sample mean is calculated in descriptive statistics, based on relative frequencies, is obvious. However, there are a few differences that we must always keep in mind. This is why it is definitely advisable to avoid the term “mean” altogether, when referring to random variables. Using the term…