Autoregressive Processes

In finite-order moving-average processes, only a finite number of past realizations of white noise influence the value of $Y_t$. This may be a limitation for processes in which all of the previous realizations have an effect, even if that effect fades over time. A similar consideration is what led us from forecasting with simple moving averages to exponential smoothing. In principle, we could consider an infinite-order moving-average process, but dealing with an infinite sequence of coefficients $\theta_k$ does not sound quite practical. Luckily, under some technical conditions, such a process may be rewritten in a compact form involving time-lagged realizations of $Y_t$ itself. This leads us to the definition of an autoregressive process of a given order. The simplest such process is the autoregressive process of order 1, AR(1):

Fig. 11.11 Sample path and corresponding SACF for a moving-average process.

$$Y_t = \delta + \phi\, Y_{t-1} + \epsilon_t \qquad (11.33)$$
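
To make the definition concrete, here is a minimal simulation sketch in Python (not from the original text); the helper name simulate_ar1 and the parameter values $\delta = 2$, $\phi = 0.8$, $\sigma = 1$ are arbitrary choices for illustration, assuming NumPy is available.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_ar1(delta, phi, sigma, n, y0=0.0):
    """Simulate Y_t = delta + phi * Y_{t-1} + eps_t, with eps_t ~ N(0, sigma^2)."""
    y = np.empty(n)
    prev = y0
    for t in range(n):
        prev = delta + phi * prev + rng.normal(scale=sigma)
        y[t] = prev
    return y

# Illustrative parameters: with |phi| < 1 the path settles around delta / (1 - phi) = 10
path = simulate_ar1(delta=2.0, phi=0.8, sigma=1.0, n=500)
print(path[:5])
```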

One could wonder under which conditions this process is stationary, since we cannot use the same arguments as in the moving-average case. A heuristic argument to find the expected value $\mu = \mathrm{E}[Y_t]$ is based on taking expectations in Eq. (11.33) and dropping the time subscripts:

$$\mu = \delta + \phi\,\mu \quad\Rightarrow\quad \mu = \frac{\delta}{1 - \phi}.$$

The argument is not quite rigorous: it yields a sensible result only if the process is indeed stationary, which is the case if $|\phi| < 1$. Otherwise, intuition suggests that the process will grow without bounds. The reasoning can be made precise by using the infinite-term moving-average representation of $Y_t$, which is beyond the scope of this book. Using the correct line of reasoning, we may also prove that

$$\mathrm{Cov}(Y_t, Y_{t-k}) = \frac{\phi^k\,\sigma_\epsilon^2}{1 - \phi^2}, \qquad k = 0, 1, 2, \ldots$$

In particular

$$\mathrm{Var}(Y_t) = \frac{\sigma_\epsilon^2}{1 - \phi^2}.$$

Fig. 11.12 Sample path and corresponding SACF for the first autoregressive process of Example 11.10.

and we may also observe that, for a stationary AR(1) process,

$$\rho_k = \frac{\mathrm{Cov}(Y_t, Y_{t-k})}{\mathrm{Var}(Y_t)} = \phi^k, \qquad k = 0, 1, 2, \ldots$$

We notice that the autocorrelation decreases geometrically in absolute value as the lag grows, but it fades away with no sharp cutoff.
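
As a rough numerical check of these formulas (an illustration, not part of the original text), we may compare the sample mean, variance, and autocorrelations of a long simulated AR(1) path with $\delta/(1-\phi)$, $\sigma_\epsilon^2/(1-\phi^2)$, and $\phi^k$; the parameter values below are again made up, and statsmodels is assumed to be available for the sample ACF.

```python
import numpy as np
from statsmodels.tsa.stattools import acf

rng = np.random.default_rng(0)
delta, phi, sigma, n = 2.0, 0.8, 1.0, 200_000   # made-up illustrative values

# Simulate a long AR(1) path, starting at the theoretical mean to limit the transient
y = np.empty(n)
y[0] = delta / (1 - phi)
for t in range(1, n):
    y[t] = delta + phi * y[t - 1] + rng.normal(scale=sigma)

print("sample mean    :", y.mean(), " theory:", delta / (1 - phi))
print("sample variance:", y.var(),  " theory:", sigma**2 / (1 - phi**2))

rho_hat = acf(y, nlags=5, fft=True)
print("sample rho_k:", np.round(rho_hat[1:], 3))
print("phi**k      :", np.round(phi ** np.arange(1, 6), 3))
```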

Example 11.10 In Figs. 11.12 and 11.13, we show a sample path and the corresponding sample autocorrelogram for two AR(1) processes that differ in the sign of the coefficient $\phi$, which is positive in the first case and negative in the second. Notice that the change in sign of the $\phi$ coefficient has a significant effect on the sample path, as well as on the autocorrelations. In the first case, the autocorrelation goes to zero along a relatively smooth path.14 The sample path of the second process features evident up- and downswings; we also notice an oscillatory pattern in the autocorrelation.
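
A quick way to reproduce the qualitative behavior described in this example (an illustrative sketch, not the data behind Figs. 11.12 and 11.13) is to simulate two AR(1) paths with coefficients of opposite sign and plot the corresponding SACFs; the values $\phi = \pm 0.8$ and $\delta = 2$ are made-up choices, and matplotlib/statsmodels are assumed to be available.

```python
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf

rng = np.random.default_rng(1)

def ar1_path(delta, phi, n=200):
    """Simulate an AR(1) path starting at its theoretical mean."""
    y = np.empty(n)
    y[0] = delta / (1 - phi)
    for t in range(1, n):
        y[t] = delta + phi * y[t - 1] + rng.normal()
    return y

fig, axes = plt.subplots(2, 2, figsize=(10, 6))
for row, phi in enumerate([0.8, -0.8]):          # same magnitude, opposite signs
    y = ar1_path(delta=2.0, phi=phi)
    axes[row, 0].plot(y)
    axes[row, 0].set_title(f"AR(1) sample path, phi = {phi}")
    plot_acf(y, lags=20, ax=axes[row, 1], title=f"SACF, phi = {phi}")
plt.tight_layout()
plt.show()
```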

The autocorrelation behavior of AR processes does not present the cutoff property that helps us determine the order of an MA process. The tool that has been developed for AR process identification is the partial autocorrelation function (PACF). The rationale behind the PACF is to measure the degree of association between $Y_t$ and $Y_{t-k}$, removing the effects of the intermediate lags $Y_{t-1}, \ldots, Y_{t-k+1}$. We cannot dwell too much on the PACF, but we may at least get a better intuitive feeling for it as follows.

Fig. 11.13 Sample path and corresponding SACF for the second autoregressive process of Example 11.10.

Example 11.11 (Partial correlation) Consider three random variables $X$, $Y$, and $Z$, and imagine regressing both $X$ and $Y$ on $Z$:

$$\hat{X} = \alpha_X + \beta_X Z, \qquad \hat{Y} = \alpha_Y + \beta_Y Z.$$

Note that we are considering a probabilistic regression, not a sample-based regression. From Section 10.6, we know that

$$\beta_X = \frac{\mathrm{Cov}(X, Z)}{\mathrm{Var}(Z)}, \qquad \beta_Y = \frac{\mathrm{Cov}(Y, Z)}{\mathrm{Var}(Z)},$$

with $\alpha_X = \mathrm{E}[X] - \beta_X\,\mathrm{E}[Z]$ and $\alpha_Y = \mathrm{E}[Y] - \beta_Y\,\mathrm{E}[Z]$.

Furthermore, we have regression errors

$$X^* = X - \hat{X} = X - \alpha_X - \beta_X Z, \qquad Y^* = Y - \hat{Y} = Y - \alpha_Y - \beta_Y Z,$$

which may be regarded as the random variables $X$ and $Y$ after the effect of $Z$ has been removed. The correlation $\rho(X, Y)$ may be large because of the common factor $Z$ (the "lurking" variable). If we want to get rid of its effect, we may consider the partial correlation $\rho(X^*, Y^*)$.
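
Here is a small numerical illustration of this idea (not from the original text): the data-generating coefficients are arbitrary, and the sample-based regression below is used as a proxy for the probabilistic regression of the example.

```python
import numpy as np

rng = np.random.default_rng(2)

# Z is the common ("lurking") factor; the coefficients are made up for illustration
n = 10_000
Z = rng.normal(size=n)
X = 2.0 * Z + rng.normal(size=n)
Y = -1.5 * Z + rng.normal(size=n)

def remove_effect_of(v, z):
    """Residual of the sample regression of v on z, i.e., v with the effect of z removed."""
    slope, intercept = np.polyfit(z, v, 1)
    return v - (intercept + slope * z)

X_star = remove_effect_of(X, Z)
Y_star = remove_effect_of(Y, Z)

print("correlation rho(X, Y)    :", np.corrcoef(X, Y)[0, 1])             # large, driven by Z
print("partial corr rho(X*, Y*) :", np.corrcoef(X_star, Y_star)[0, 1])   # close to zero
```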

Following the intuition provided by the example, we might consider estimating the partial autocorrelation between $Y_t$ and $Y_{t-k}$ by the following linear regression:

$$Y_t = b_0 + b_1 Y_{t-1} + b_2 Y_{t-2} + \cdots + b_{k-1} Y_{t-k+1} + b_k Y_{t-k} + e_t.$$

Fig. 11.14 Sample partial autocorrelation function for the autoregressive processes of Example 11.10.

By including the intermediate lagged variables $Y_{t-1}, \ldots, Y_{t-k+1}$, we capture their effect through the regression coefficients $b_1, \ldots, b_{k-1}$. Then, we could use $b_k$ as an estimate of the partial autocorrelation at lag $k$. Actually, this need not be the soundest approach, but software packages provide us with ready-to-use functions to estimate the PACF by its sample counterpart (SPACF). In Fig. 11.14 we show the SPACF for the two AR(1) processes of Example 11.10. We see that the SPACF cuts off after lag 1, even though statistical sampling errors suggest a significant value at larger lags in the first case. Hence, the SPACF can be used to assess the order of an AR model.
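
The following sketch (again illustrative, with made-up parameters and a hypothetical helper pacf_by_regression) implements the regression idea above for an AR(1) path and compares the resulting coefficients $b_k$ with the PACF computed by statsmodels; as expected, the estimates essentially cut off after lag 1.

```python
import numpy as np
from statsmodels.tsa.stattools import pacf

rng = np.random.default_rng(3)

# Simulate an AR(1) path with made-up parameters
n, delta, phi = 5_000, 2.0, 0.8
y = np.empty(n)
y[0] = delta / (1 - phi)
for t in range(1, n):
    y[t] = delta + phi * y[t - 1] + rng.normal()

def pacf_by_regression(y, k):
    """Regress Y_t on a constant and Y_{t-1}, ..., Y_{t-k}; return the coefficient of Y_{t-k}."""
    X = np.column_stack(
        [np.ones(len(y) - k)] + [y[k - j : len(y) - j] for j in range(1, k + 1)]
    )
    coeffs, *_ = np.linalg.lstsq(X, y[k:], rcond=None)
    return coeffs[-1]      # b_k, the coefficient of the most distant lag

print("regression-based:", [round(pacf_by_regression(y, k), 3) for k in (1, 2, 3)])
print("statsmodels pacf:", np.round(pacf(y, nlags=3, method="ols")[1:], 3))
```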

