Chapter 9 Dynamic regression models
The time series models in the previous two chapters allow for the inclusion of information from past observations of a series, but not for the inclusion of other information that may also be relevant. For example, the effects of holidays, competitor activity, changes in the law, the wider economy, or other external variables may explain some of the historical variation and allow more accurate forecasts. On the other hand, the regression models in Chapter 5 allow for the inclusion of a lot of relevant information from predictor variables, but do not allow for the subtle time series dynamics that can be handled with ARIMA models.
In this chapter, we consider how to extend ARIMA models in order to allow other information to be included in the models. We begin by simply combining regression models and ARIMA models to give a regression with ARIMA errors. These are then extended to the general class of dynamic regression models.

In Chapter 5 we considered regression models of the form
$$y_t = \beta_0 + \beta_1 x_{1,t} + \cdots + \beta_k x_{k,t} + \varepsilon_t,$$
where $y_t$ is a linear function of the $k$ predictor variables $(x_{1,t},\dots,x_{k,t})$, and $\varepsilon_t$ is usually assumed to be an uncorrelated error term (i.e., it is white noise). We considered tests such as the Breusch-Godfrey test for assessing whether $\varepsilon_t$ was significantly correlated.
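To make the Breusch-Godfrey idea concrete, here is a minimal NumPy sketch of its LM form: regress the OLS residuals on the original predictors plus their own lags, and use $nR^2$ from that auxiliary regression as the test statistic. This is a simplified illustration, not the full implementation found in statistical packages (which handle degrees of freedom and small-sample corrections more carefully); the function name is our own.

```python
import numpy as np

def breusch_godfrey(y, X, p=1):
    """LM form of the Breusch-Godfrey test for autocorrelation up to lag p.

    y: response vector; X: design matrix including an intercept column.
    Returns n * R^2 from the auxiliary regression of the OLS residuals on
    X and the first p lags of the residuals. Under the null of no
    autocorrelation this is approximately chi-squared with p df.
    """
    n = len(y)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta
    # Auxiliary regression: residuals on X plus lagged residuals
    # (missing initial lags are padded with zeros).
    lags = np.column_stack([np.concatenate([np.zeros(i + 1), e[:-(i + 1)]])
                            for i in range(p)])
    Z = np.column_stack([X, lags])
    gamma, *_ = np.linalg.lstsq(Z, e, rcond=None)
    fitted = Z @ gamma
    r2 = 1 - np.sum((e - fitted) ** 2) / np.sum((e - e.mean()) ** 2)
    return n * r2
```

With uncorrelated errors the statistic stays near its chi-squared expectation; with strongly autocorrelated errors it becomes very large, signalling that the white-noise assumption fails.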
In this chapter, we will allow the errors from a regression to contain autocorrelation. To emphasise this change in perspective, we will replace $\varepsilon_t$ with $\eta_t$ in the equation. The error series $\eta_t$ is assumed to follow an ARIMA model. For example, if $\eta_t$ follows an ARIMA(1,1,1) model, we can write
$$y_t = \beta_0 + \beta_1 x_{1,t} + \cdots + \beta_k x_{k,t} + \eta_t,$$
$$(1 - \phi_1 B)(1 - B)\eta_t = (1 + \theta_1 B)\varepsilon_t,$$
where $B$ is the backshift operator and $\varepsilon_t$ is a white noise series.
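As a rough illustration of estimating a regression whose errors are autocorrelated, the NumPy sketch below assumes the simpler case of AR(1) errors (rather than the ARIMA(1,1,1) example above) and fits the model by an iterated Cochrane-Orcutt procedure: alternate between estimating the regression coefficients on quasi-differenced data and estimating the AR coefficient from the residuals. The function name and simulated data are illustrative assumptions, not from the text.

```python
import numpy as np

def regress_ar1_errors(y, X, n_iter=20):
    """Fit y_t = X_t @ beta + eta_t with AR(1) errors
    eta_t = phi * eta_{t-1} + eps_t, by iterated Cochrane-Orcutt."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # start from OLS
    phi = 0.0
    for _ in range(n_iter):
        eta = y - X @ beta
        # Estimate the AR(1) coefficient from the current residuals.
        phi = eta[1:] @ eta[:-1] / (eta[:-1] @ eta[:-1])
        # Quasi-difference: y*_t = y_t - phi * y_{t-1}, likewise each
        # column of X, so the transformed errors are (close to) white noise.
        ys = y[1:] - phi * y[:-1]
        Xs = X[1:] - phi * X[:-1]
        beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    return beta, phi
```

Note that ordinary OLS would still give unbiased coefficient estimates here, but ignoring the autocorrelation would make the usual standard errors and prediction intervals unreliable; modelling $\eta_t$ explicitly fixes both.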
Notice that the model has two error terms here: the error from the regression model, which we denote by $\eta_t$, and the error from the ARIMA model, which we denote by $\varepsilon_t$. Only the ARIMA model errors are assumed to be white noise.
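The distinction between the two error series can be seen in a small simulation. Below, $\varepsilon_t$ is drawn as white noise and $\eta_t$ is built from it with an AR(1) structure (a simplification of the ARIMA errors above, chosen for brevity): the regression errors $\eta_t$ are strongly autocorrelated, while the underlying $\varepsilon_t$ are not. All names and the AR coefficient are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000
eps = rng.normal(size=n)        # epsilon_t: white noise (the ARIMA errors)
eta = np.zeros(n)               # eta_t: autocorrelated regression errors
for t in range(1, n):
    eta[t] = 0.8 * eta[t - 1] + eps[t]   # AR(1) structure for illustration

def acf1(z):
    """Lag-1 sample autocorrelation."""
    z = z - z.mean()
    return (z[1:] @ z[:-1]) / (z @ z)

# eta is strongly autocorrelated; eps is approximately uncorrelated.
print(acf1(eta), acf1(eps))
```

Running this shows a lag-1 autocorrelation near the AR coefficient for $\eta_t$ and near zero for $\varepsilon_t$, which is exactly why only the latter can be treated as white noise.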