


2. ARIMA fitting


ARIMA(p,d,q) model

Your observation \(Y_t\) is modeled as ARIMA(p,d,q) if:

Take the difference \(d\) times, and call the result \(X_t\): \[ \bigtriangledown^d Y_t \hspace{3mm} = \hspace{3mm} X_t \]

Then model \(X_t\) with an ARMA(p,q) model: \[ \Phi(B) \, X_t \hspace{3mm} = \hspace{3mm} \Theta(B) \, \epsilon_t \] where \(\epsilon_t\sim WN(0,\sigma^2)\).

An ARIMA(p,d,q) model means you difference the data \(d\) times, and the result follows an ARMA(\(p,q\)) model.
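
As a quick illustration (a minimal simulated sketch; the AR coefficient .5, the seed, and the series length are arbitrary choices):

  # Simulate ARIMA(1,1,0) by integrating an AR(1) series once
  set.seed(314)
  X = arima.sim(model=list(ar=.5), n=200)   # the ARMA(1,0) part
  Y = cumsum(X)                             # d=1: partial sums of X

  # Differencing Y once recovers the ARMA series (up to the first value)
  all.equal(as.numeric(diff(Y)), as.numeric(X)[-1])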



3. ARIMA(p,d,q) Modeling Workflow

Let \(Y\) be the original time series.

auto.arima() implements the Hyndman-Khandakar algorithm for automatic ARIMA modelling. In it, the number of differences \(d\) is determined using repeated KPSS tests.

For the value of \(d\) selected by auto.arima(), take the difference by hand and use Stationarity.tests() to see what the other tests say.
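
The differencing step is also available on its own as forecast's ndiffs(), which runs the repeated KPSS tests directly (a small sketch; Y is your series):

  library(forecast)
  ndiffs(Y, test="kpss")   # number of differences suggested by repeated KPSS tests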

Preliminary Analysis

Is \(Y\) stationary? Use Stationarity.tests(). If all three tests point to stationarity, then set \(d=0\) and perform the ARMA(p,q) analysis.

If Stationarity.tests(Y) indicates non-stationarity, but Stationarity.tests( diff(Y) ) indicates stationarity, then it is a clear \(d=1\) case.

If the test results are mixed, then try all plausible values of \(d\). For example, if Stationarity.tests( Y ) gives a mixed answer, and Stationarity.tests( diff(Y) ) indicates stationarity, then perform both the \(d=0\) analysis and the \(d=1\) analysis.

You can take the difference again with diff(diff(Y)), as in the sketch below.
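
A minimal sketch of these preliminary checks (Stationarity.tests() comes from the course file TS-00.txt, sourced here):

  library(forecast)
  source('https://nmimoto.github.io/R/TS-00.txt')   # defines Stationarity.tests()

  Stationarity.tests(Y)               # tests on the original series
  Stationarity.tests(diff(Y))         # after one difference
  Stationarity.tests(diff(diff(Y)))   # after two differences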

(1) d=0 Direct ARMA(p,q)

Take no difference, and directly fit ARMA(p,q) with d=0. This is the same as the ARMA analysis.

This model assumes that \(Y\) is stationary.
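
For example (mirroring the \(d=1\) call below):

  auto.arima(Y, d=0, stepwise=FALSE, approximation=FALSE)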

(2) d=1 ARIMA with random trend

Take the difference once, then fit ARMA(p,q). Use

  auto.arima(Y, d=1, stepwise=FALSE, approximation=FALSE)

This model assumes \(Y\) is not stationary, and has either a random trend without drift, a random trend with drift, or a linear trend.

If no drift term is suggested by auto.arima(), include it using Arima(), and check it for significance.

  Arima(Y, order=c(p,1,q), include.drift=TRUE)
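
One rough way to check the drift for significance (a sketch, not a formal test; it compares the estimate to its standard error):

  Fit = Arima(Y, order=c(p,1,q), include.drift=TRUE)
  Fit$coef["drift"] / sqrt(Fit$var.coef["drift","drift"])   # approximate z-statistic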


(3) d=0 Linear Trend + ARMA(p,q)

This model assumes that \(Y\) is not stationary, with a deterministic linear trend.

Do this if \(Y\) appears to have a linear trend.

For this model to be valid, (2) must have a significant drift term.

If this model is correct, then the residuals from the regression must form a stationary series.

  Reg1 = lm(Y ~ time(Y))               # fit a line to Y
  Stationarity.tests(Reg1$residuals)   # test stationarity of regression residuals

  # fit ARMA to the regression residuals
  Fit_Reg1 = auto.arima(Reg1$residuals, d=0, stepwise=FALSE, approximation=FALSE)
  Fit_Reg2 = Arima(Reg1$residuals, order=c(p,0,q))
  
  # Fit a line to Y, then fit ARMA to the regression residuals at once. 
  Fit1 = auto.arima(Y, d=0, xreg=time(Y), stepwise=FALSE, approximation=FALSE)
  Fit2 = Arima(Y, order=c(p,0,q), xreg=time(Y))
  Randomness.tests(Fit1$residuals)     # check the final residuals for whiteness
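
As with the drift in (2), you can compare the fitted slope to its standard error (a sketch; with a single unnamed regressor the coefficient is usually named "xreg", so check names(Fit2$coef) if it differs):

  Fit2$coef["xreg"] / sqrt(Fit2$var.coef["xreg","xreg"])   # approximate z-statistic for the slope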



4. Signs of over- and under-differencing

You CAN’T use AICc to compare models with different \(d\): the likelihood is computed on the differenced data, so the models are not fit to the same series.

Underdifferencing (\(d\) is too small)

Force an AR(1) fit to your series, and see whether \(\hat\phi_1\) is significantly different from 1.

If it is close to 1, it indicates that the series is underdifferenced (the current series is non-stationary).

\(\phi_1=1\) means that we have a unit-root AR.
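
A minimal sketch of this check (Y is the series at its current level of differencing):

  Fit_AR1 = Arima(Y, order=c(1,0,0))
  Fit_AR1$coef["ar1"]                    # phi_1 close to 1 suggests underdifferencing
  sqrt(Fit_AR1$var.coef["ar1","ar1"])    # its standard error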

Overdifferencing (\(d\) is too large)

Force an MA(1) fit to your series, and see whether \(\hat\theta_1\) is significantly different from \(-1\).

Suppose \(\bigtriangledown X_t = Y_t\), and the differenced series is already white noise: \[ Y_t \hspace{3mm} = \hspace{3mm} \epsilon_t \]

Suppose also that each step has a little more chance of being positive than negative, so \(X_t\) drifts and can look like it has a trend: \[ P(\epsilon_t >0) = .54 \hspace{10mm} P(\epsilon_t \leq 0) = .46. \]

If you take \(\bigtriangledown\) again anyway, you get \[ \bigtriangledown Y_t = \epsilon_t - \epsilon_{t-1} \] Now we have MA(1) with \(\theta_1 = 1\). That’s not invertible.

Suppose \[ \bigtriangledown X_t \hspace{3mm} = \hspace{3mm} Y_t\\ \\ Y_t \hspace{3mm} = \hspace{3mm} \epsilon_t - \theta_1 \epsilon_{t-1} \] If you take \(\bigtriangledown\) again, \[ \bigtriangledown Y_t \hspace{3mm} = \hspace{3mm} (\epsilon_t - \theta_1 \epsilon_{t-1}) - (\epsilon_{t-1} - \theta_1 \epsilon_{t-2}) \\ \\ \hspace{3mm} = \hspace{3mm} (\epsilon_t - (1+\theta_1) \epsilon_{t-1} + \theta_1 \epsilon_{t-2}) . \]

So \(\bigtriangledown Y_t\) is MA(2) with \[ \Theta(z) \hspace{3mm} = \hspace{3mm} 1 - (1+\theta_1) z + \theta_1 z^2 \] The roots are

\[\begin{align} \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} &= \hspace{3mm} \frac{(1+\theta_1) \pm \sqrt{(1+\theta_1)^2-4\theta_1}}{ 2\theta_1} \\ \\ &= \hspace{3mm} \frac{(1+\theta_1) \pm \sqrt{1- 2 \theta_1 + \theta_1^2}}{ 2 \theta_1} \\ \\ &= \hspace{3mm} \frac{(1+\theta_1) \pm \sqrt{(1- \theta_1)^2}}{ 2 \theta_1} \\ \\ &= \hspace{3mm} \frac{2}{2 \theta_1} \hspace{3mm} \mbox{or} \hspace{3mm} \frac{2\theta_1}{2 \theta_1} \hspace{3mm} = \hspace{3mm} \frac{1}{\theta_1} \hspace{3mm} \mbox{or} \hspace{3mm} 1 \end{align}\] That’s a unit root (the other root, \(1/\theta_1\), is the root of the original \(1-\theta_1 z\))!
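
A quick numerical check of this algebra (with \(\theta_1 = .5\) as an arbitrary example):

  theta1 = .5
  polyroot(c(1, -(1+theta1), theta1))   # roots of Theta(z): 1 and 1/theta1 = 2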

Testing for a unit root in MA(q) polynomials \(\to\) not fully resolved.

Testing for a unit root in MA(1) \(\to\) see if \(\hat\theta_1\) is significantly different from \(-1\). (R’s arima() writes the MA polynomial with a plus sign, so the non-invertible \(\theta_1 = 1\) above shows up as \(\hat\theta_1 \approx -1\) in R output.)
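
A minimal sketch of this check (Y is the series at its current level of differencing):

  Fit_MA1 = Arima(Y, order=c(0,0,1))
  Fit_MA1$coef["ma1"]                    # close to -1 suggests over-differencing
  sqrt(Fit_MA1$var.coef["ma1","ma1"])    # its standard error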

ARMA with the same polynomials

Suppose you have an ARMA(1,1) model with \[ Y_t - .5 Y_{t-1} = \epsilon_t - .5 \epsilon_{t-1} \] Then this is

\[\begin{align} (1-.5 B) Y_t &= \hspace{3mm} (1-.5 B) \epsilon_t \\\\ \Longrightarrow \hspace{5mm} Y_t &= \hspace{3mm} \epsilon_t \end{align}\]

So \(Y_t\) is just white noise.

The larain data in package TSA is a good example of this.

Ex: larain data

Check for the same characteristic polynomials.

  library(forecast)
  source('https://nmimoto.github.io/R/TS-00.txt')

  acf.orig = acf;  library(TSA);  acf = acf.orig   # load TSA without letting it mask acf()

  data(larain)

  plot(larain, type="o")

  Arima(larain, order=c(1,0,1))
## Series: larain 
## ARIMA(1,0,1) with non-zero mean 
## 
## Coefficients:
##           ar1     ma1     mean
##       -0.3402  0.3006  14.8848
## s.e.   0.9121  0.9185   0.6196
## 
## sigma^2 = 48.1:  log likelihood = -384.38
## AIC=776.76   AICc=777.12   BIC=787.74

When the AR and MA orders match, watch out for canceling parameter values (in the R output above, ar1 \(\approx\) \(-\)ma1, and both standard errors are huge). It may indicate WN.

If a forced MA(1) fit gives you \(\hat\theta_1 \approx -1\), it indicates over-differencing.