In- and out-of-sample checks

As will become clear in the following, applying certain forecasting algorithms may require fitting one or more parameters used to compute the forecast. This is typically done by a proper initialization. When the algorithm depends on parameter estimates and we start from scratch, initial performance will be poor, because the algorithm first has to learn about the demand pattern. However, we would like to assess how the algorithm performs in steady state, not in this initial transient phase.
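To make the transient issue concrete, here is a minimal sketch, assuming simple exponential smoothing as the forecasting algorithm and a made-up stationary demand series (both are illustrative assumptions, not part of the text): when the smoothed level is started from scratch, the first forecasts are poor until the level has learned the demand pattern.

```python
# A minimal sketch of the "transient" issue: simple exponential smoothing
# started from scratch. Demand data and the smoothing coefficient are
# made-up assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(seed=42)
demand = rng.normal(loc=100.0, scale=10.0, size=60)  # hypothetical stationary demand

alpha = 0.2      # assumed smoothing coefficient
level = 0.0      # crude "from scratch" initialization: no knowledge of the pattern
errors = []
for y in demand:
    errors.append(abs(y - level))             # one-step-ahead forecast = current level
    level = alpha * y + (1 - alpha) * level   # update after observing demand

# Errors in the first periods (transient) vs. later periods (near steady state)
print("mean abs error, first 10 periods:", np.mean(errors[:10]))
print("mean abs error, last 30 periods :", np.mean(errors[-30:]))
```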

If we are carrying out a historical simulation, based on a sample of past data, we may use a portion of the available information to fit parameters. The portion of data that we use for fitting and initial learning is the fit sample. Arguably, the larger the fit sample, the better the initialization; however, using all of the available data for fitting would leave us with no data to test performance. In fact, it would not be quite correct to use knowledge of data to fit parameters, and then predict the very same data that we have used to initialize the algorithm. Performance evaluation should be carried out out-of-sample, i.e., by predicting data that have not been used in any way for initialization purposes.

This will be much clearer in the following, but it is important to state this principle right from the beginning. The available sample of data should be split into

  1. a fit sample, used for initialization and parameter fitting
  2. a test sample, used to evaluate performance in a realistic and sensible way

This approach is also known as data splitting, and it involves an obvious tradeoff: a short fit sample leaves plenty of data available for testing, but initial performance could be poor; on the other hand, a short test sample makes performance evaluation rather unreliable.
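As an illustration of the principle, here is a minimal sketch of data splitting, again assuming simple exponential smoothing, a made-up demand series, and mean absolute error (MAE) as the performance measure; these choices are assumptions for illustration. The smoothing coefficient and the initial level are chosen using the fit sample only, and performance is then measured on the test sample, i.e., out-of-sample.

```python
# A minimal sketch of data splitting: fit on the first part of the history,
# evaluate out-of-sample on the rest. Data, split point, the grid of candidate
# smoothing coefficients, and the error measure are all illustrative assumptions.
import numpy as np

rng = np.random.default_rng(seed=1)
demand = rng.normal(loc=100.0, scale=10.0, size=100)   # hypothetical demand history

n_fit = 60                                             # data-splitting choice (tradeoff!)
fit_sample, test_sample = demand[:n_fit], demand[n_fit:]

def ses_one_step_errors(series, alpha, level0):
    """One-step-ahead absolute errors of simple exponential smoothing."""
    level, errs = level0, []
    for y in series:
        errs.append(abs(y - level))                    # forecast = current level
        level = alpha * y + (1 - alpha) * level
    return np.array(errs), level

# 1) Fit: initialize the level and pick alpha using the fit sample only
level0 = fit_sample[:5].mean()                         # simple initialization from early data
alphas = np.linspace(0.05, 0.5, 10)
in_sample_mae = [ses_one_step_errors(fit_sample, a, level0)[0].mean() for a in alphas]
best_alpha = alphas[int(np.argmin(in_sample_mae))]

# 2) Test: re-run on the fit sample to warm up the level, then evaluate only
#    the forecasts for the test sample, which was never used for fitting
_, warm_level = ses_one_step_errors(fit_sample, best_alpha, level0)
test_errors, _ = ses_one_step_errors(test_sample, best_alpha, warm_level)

print(f"chosen alpha (fit sample)     : {best_alpha:.2f}")
print(f"out-of-sample MAE (test data) : {test_errors.mean():.2f}")
```

The out-of-sample MAE reported at the end is the honest measure of performance, since the test data played no role in choosing the smoothing coefficient or the initial level.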

