In Section 7.9 we introduced stochastic processes as sequences of random variables; in the discrete-time case, we have a sequence of the form X_t, t = 1, 2, 3, …. We have also pointed out that we cannot characterize a stochastic process in terms of the marginal distributions of each variable X_t alone. In principle, we should assign the joint distribution of all the involved variables: a daunting task, indeed. For this reason, whenever it is practically acceptable, we work with processes in which the mutual dependence among random variables is limited to a simple structure, if not absent altogether. The Poisson process, thanks to the memoryless property of the exponential distribution, is one simple case. Another class of relatively simple, yet practically relevant processes features a "limited" amount of memory, which is reflected in a manageable dependence structure.
DEFINITION 8.11 (Discrete-time Markov processes) Consider a discrete-time stochastic process X_t, t = 1, 2, 3, …. If the condition

E[X_{t+1} | X_t = x_t, X_{t-1} = x_{t-1}, …, X_1 = x_1] = E[X_{t+1} | X_t = x_t]

holds for any value of the time index t, the process is called a Markov process.
We see that a Markov process features a limited amount of memory, as the only relevant past observation in the conditional expectation above is the last one. A simple example of a Markov process occurs when the set of possible values of X_t is a finite set. We speak in such a case of a discrete-time Markov chain.
Example 8.10 Financial markets are characterized by volatility, which is linked to the standard deviation of returns. One interesting feature of volatility is that we observe periods of relative calm, in which volatility is moderate, followed by periods of nervousness, in which volatility is quite large. Imagine that we want to build a model in which markets can be in one of two states, low and high; the time bucket we consider is a single trading day. High volatility tends to persist; hence, we cannot just assign a probability that, on one day, markets will be in one of the two states. We should build a regime-switching model, accounting for the fact that each state tends to persist: After a day of high volatility, we are more likely to observe another day of high volatility; the same holds for a day of low volatility. A naive regime-switching model is illustrated in Fig. 8.7. The idea is that if the last day was in the low state, the next day will feature the same level of volatility with probability 0.8. However, there is a probability 0.2 that we will observe a day of high volatility. If we get to the high state one day, the next day will feature high volatility again with probability 0.7, whereas we have a 0.3 probability of moving back to the low-volatility state. Formally, we have the following conditional probabilities, which represent transition probabilities:

P(X_{t+1} = low | X_t = low) = 0.8,   P(X_{t+1} = high | X_t = low) = 0.2,
P(X_{t+1} = high | X_t = high) = 0.7, P(X_{t+1} = low | X_t = high) = 0.3.
Fig. 8.7 A simple regime switching model.
Since the states are discrete and qualitative in nature, we deal with conditional probabilities rather than conditional expectations of numerical variables; still, this is a two-state, discrete-time Markov chain.
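The two-state chain above is easy to simulate: sample the next state from the row of transition probabilities associated with the current state. The following is a minimal sketch; the dictionary P and the function simulate_chain are naming choices of ours, not taken from the text.

```python
import random

# Transition probabilities of the regime-switching model of Fig. 8.7:
# each row gives P(next state | current state).
P = {
    "low":  {"low": 0.8, "high": 0.2},
    "high": {"low": 0.3, "high": 0.7},
}

def simulate_chain(start, n_days, rng=random):
    """Simulate n_days of volatility states, starting from `start`."""
    path = [start]
    for _ in range(n_days - 1):
        current = path[-1]
        # Draw a uniform number and compare it with the "low" entry
        # of the current row to pick the next state.
        u = rng.random()
        path.append("low" if u < P[current]["low"] else "high")
    return path

random.seed(42)
path = simulate_chain("low", 10)
print(path)
```

Note that only the last state is used to generate the next one: this is exactly the Markov property at work.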
Now, a good question is: If we get to the high state, what is the expected number of days that we will spend in that state? The answer can be found by referring back to the geometric distribution. Since each day we leave the high state with probability 0.3, it is easy to see that the expected "sojourn" time in the high-volatility state is

E[T_high] = 1/0.3 ≈ 3.33 days.
Arguably, the values above are not quite realistic. Furthermore, according to this model, the number of days we have already spent in each state has no influence on the future evolution; the only relevant piece of information is the last visited state. In fact, the geometric random variable plays, in the context of discrete distributions, the same role that the exponential plays in the context of continuous ones: Both are memoryless distributions. Indeed, in continuous-time Markov chains, the sojourn time in each state is exponentially distributed. The applicability of a memoryless distribution to a real-life case must be carefully and critically evaluated. Nevertheless, memoryless distributions are so easy to deal with that it is often better to build a complex model with multiple states, approximating a more realistic distribution, than to come up with a more realistic, but intractable, model.
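The geometric sojourn time can be checked by Monte Carlo simulation: starting in the high state, count days until the chain leaves, and average over many replications. This is a quick numerical sketch; the names EXIT_PROB and sojourn_time are ours.

```python
import random

random.seed(0)
EXIT_PROB = 0.3   # probability of leaving the high-volatility state each day

def sojourn_time(rng=random):
    """Days spent in the high state before switching (geometric on 1, 2, ...)."""
    days = 1
    while rng.random() >= EXIT_PROB:  # with probability 0.7 we stay one more day
        days += 1
    return days

n = 100_000
avg = sum(sojourn_time() for _ in range(n)) / n
print(avg)   # close to the theoretical value 1/0.3 = 3.33
```

With 100,000 replications the sample mean falls very close to 1/0.3, consistent with the expected value of a geometric random variable with parameter 0.3.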
Problems
8.1 You have to decide how much ice cream to buy in order to meet demand at two retail stores. Demand at the two stores is modeled as

D_1 = X + ε_1,   D_2 = X + ε_2,

where X, ε_1, and ε_2 are independent normal variables with expected value and variance given by (28, 16), (200, 100), and (300, 150), respectively. Random variable X can be regarded as a common risk factor linked to temperature (the two retail stores are close enough that their sales levels are influenced by the same temperature value), whereas the other variables are specific factors, possibly related to competition in the zone of each store, as well as to pure random variability. Ice cream is stored at a central warehouse, which is close to the two retail stores, so that whatever is needed can be immediately transported by a van.
- Find how much ice cream you should order, in such a way that service level at the warehouse level will be 95%.
- Would this quantity increase or decrease in case of positive correlation between ε_1 and ε_2?
8.2 You are in charge of component inventory control. Your firm produces end items P1 and P2, which share a common component C. You need two components C for each piece of type P1 and three components for each piece of type P2. Over the next time period, demand is uncertain and modeled by a normal distribution. Demand for item P1 has expected value 1000 and standard deviation 250; the corresponding values for P2 are 700 and 180. Assuming that the two demands are independent, determine the desired inventory level for component C in such a way that its service level is 92%.
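A numerical answer to Problem 8.2 can be checked along the following lines: demand for component C is 2D_1 + 3D_2, which is normal by independence, and the desired inventory level is its 92% quantile. This sketch uses Python's statistics.NormalDist for the quantile; it is a way to verify an answer, not the text's official solution.

```python
from math import sqrt
from statistics import NormalDist

# Component C requirement: 2 per unit of P1, 3 per unit of P2.
mu = 2 * 1000 + 3 * 700                        # mean demand for C
sigma = sqrt(2**2 * 250**2 + 3**2 * 180**2)    # independent item demands
level = mu + NormalDist().inv_cdf(0.92) * sigma
print(round(level))   # about 5134 components
```

Note that the multipliers 2 and 3 scale the standard deviations, so they enter the variance squared.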
8.3 You have invested $10,000 in IFM stock shares and $20,000 in Peculiar Motors stock shares. Compute the one-day value at risk, at the 95% level, assuming normally distributed daily returns. Daily volatility is 2% for IFM and 4% for Peculiar Motors, and their correlation is 0.68.
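For Problem 8.3, the standard deviation of the one-day portfolio loss follows from the two-asset variance formula, and the 95% VaR is that standard deviation times the standard normal quantile. Again a numerical check, with variable names of our own choosing:

```python
from math import sqrt
from statistics import NormalDist

w1, w2 = 10_000, 20_000   # dollar positions in IFM and Peculiar Motors
s1, s2 = 0.02, 0.04       # daily volatilities
rho = 0.68                # correlation between daily returns

# Standard deviation of the one-day portfolio P&L, in dollars
sigma_p = sqrt((w1 * s1)**2 + (w2 * s2)**2 + 2 * rho * (w1 * s1) * (w2 * s2))
var95 = NormalDist().inv_cdf(0.95) * sigma_p
print(round(var95, 2))   # about $1558
```

The cross term 2ρ(w1 s1)(w2 s2) shows why positive correlation increases VaR: the two positions fail to diversify each other.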
8.4 Consider two random variables X and Y with the same variance, not necessarily independent. Prove that Cov(X − Y, X + Y) = 0.