Nao Mimoto - Dept. of Statistics : The University of Akron
TS Class Web Page – R resource page
\[\begin{align*} Y_t &= \mbox{ Stock Price (observation) } \\ \\ X_t &= \ln(Y_t) - \ln(Y_{t-1}) \hspace{5mm} : \mbox{ log-return } \end{align*}\]
library(quantmod)
source('https://nmimoto.github.io/R/TS-00.txt')
getSymbols("AAPL") #- download from Yahoo!
## [1] "AAPL"
Not normal: heavy-tailed unconditional and conditional distributions
Uncorrelated
Squares are correlated
Volatility clustering
Asymmetry
## B-L test H0: the series is uncorrelated
## M-L test H0: the square of the series is uncorrelated
## J-B test H0: the series came from Normal distribution
## SD : Standard Deviation of the series
## BL15 BL20 BL25 ML15 ML20 JB SD
## [1,] 0.189 0.15 0.226 0 0 0 0.016
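The table above is produced by a helper in TS-00.txt; roughly the same p-values can be obtained with standard R tests. A sketch, assuming the tseries package is installed and using the log-returns X computed above:
library(tseries)                              # for jarque.bera.test()
x <- as.numeric(X)                            # plain numeric vector of the log-returns
Box.test(x,   lag = 15, type = "Ljung-Box")   # B-L test (BL15 column)
Box.test(x^2, lag = 15, type = "Ljung-Box")   # M-L test: Ljung-Box on the squares (ML15 column)
jarque.bera.test(x)                           # J-B test for normality (JB column)
sd(x)                                         # SD column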
## [1] "^GSPC"
## B-L test H0: the series is uncorrelated
## M-L test H0: the square of the series is uncorrelated
## J-B test H0: the series came from Normal distribution
## SD : Standard Deviation of the series
## BL15 BL20 BL25 ML15 ML20 JB SD
## [1,] 0 0 0 0 0 0 0.013
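The code that produced the ^GSPC results above is not shown; a sketch of the analogous steps (getSymbols() creates an object named GSPC, with the caret dropped):
getSymbols("^GSPC")               # S&P 500 index from Yahoo!
X2 <- diff(log(Ad(GSPC)))[-1]     # daily log-returns of the index
# ...then apply the same B-L / M-L / J-B diagnostics to X2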
Engle (1982): AutoRegressive Conditionally Heteroscedastic (ARCH) model
Won the 2003 Nobel Prize in Economics for this work
\[\begin{align*} Y_t &= \sigma_t e_t \hspace{10mm} e_t \sim_{iid} N(0,1) \\\\ \sigma_t^2 &= \omega+ \alpha Y_{t-1}^2 \end{align*}\]
(unconditional) Mean \[ E(Y_t) = E(\sigma_t e_t) = E(\sigma_t)\, E(e_t) = 0 \]
(unconditional) Variance \[ V(Y_t) = \frac{\omega}{1-\alpha} \hspace{5mm} 0<\alpha<1 \]
Conditional mean \[ E[Y_t \Big| Y_{t-1}, e_{t-1}, \ldots] \hspace{3mm} = \hspace{3mm} \sigma_t E[e_t] = 0 \]
Conditional variance \[ V[Y_t \Big| Y_{t-1}, e_{t-1}, \ldots] \hspace{3mm} = \hspace{3mm} \sigma_t^2 V[e_t] = \sigma_t^2 \]
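To see both variances in action, one can simulate an ARCH(1) path directly from the definition; a minimal sketch (the values of omega and alpha are arbitrary):
set.seed(283)
n     <- 1000
omega <- 0.2;  alpha <- 0.5
e <- rnorm(n)
Y <- numeric(n)
Y[1] <- sqrt(omega / (1 - alpha)) * e[1]     # start at the unconditional sd
for (t in 2:n) {
  sig2.t <- omega + alpha * Y[t-1]^2         # conditional variance sigma_t^2
  Y[t]   <- sqrt(sig2.t) * e[t]
}
var(Y)              # roughly the unconditional variance omega/(1-alpha) = 0.4
acf(Y)              # the series itself looks uncorrelated
acf(Y^2)            # ...but its squares are correlated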
Suppose \(Y_t\) is an observed AR(1) series
(unconditional) Mean \[ E( Y_t ) = 0 \]
(unconditional) Variance \[ V( Y_t ) = \gamma(0) = \frac{\sigma^2}{1-\phi_1^2} \]
Conditional Mean:
\[
E\Big(Y_t \hspace{2mm} \Big| \hspace{2mm} \mbox{all variables realized by yesterday}\Big)
\]
Conditional mean of AR(1): \(Y_t = \phi Y_{t-1} + e_t\) \[\begin{align*} E\Big(Y_t \hspace{2mm} \Big| \hspace{2mm} Y_{t-1}, Y_{t-2}, \ldots, e_{t-1}, e_{t-2}, \ldots\Big) &= E \Big(\phi Y_{t-1} + e_t \hspace{2mm} \Big| \hspace{2mm} Y_{t-1}, Y_{t-2}, \ldots, e_{t-1}, e_{t-2}, \ldots\Big)\\ \\ &= \phi Y_{t-1} + E(e_t) \hspace{2mm} = \hspace{2mm} \phi Y_{t-1}. \end{align*}\]
Conditional variance \[\begin{align*} V\Big(Y_t \hspace{2mm} \Big| \hspace{2mm} Y_{t-1}, Y_{t-2}, \ldots, e_{t-1}, e_{t-2}, \ldots\Big) &= V\Big(\phi Y_{t-1} + e_t \hspace{2mm} \Big| \hspace{2mm} Y_{t-1}, Y_{t-2}, \ldots, e_{t-1}, e_{t-2}, \ldots\Big)\\ \\ &= V(e_t) \hspace{2mm} = \hspace{2mm} \sigma^2 \end{align*}\]
n <- 200
Y <- arima.sim(n = n, list(ar = c(0.8)))       # simulate AR(1) with phi = 0.8, sigma = 1
v.AR1 <- 1 / (1 - 0.8^2)                       # unconditional variance sigma^2 / (1 - phi_1^2)
plot(Y, type = "o", main = "AR(1)", xlim = c(0, n * 1.1))
abline(h = 0)
abline(h = c(1, -1) * 1.96 * sqrt(v.AR1), col = "red")                        # unconditional 95% band
lines(n + 1, 0.8 * Y[n], type = "p", col = "blue")                            # conditional mean of Y_{n+1}
lines(c(n + 1, n + 1), 0.8 * Y[n] + c(1.96, -1.96), type = "p", col = "red")  # conditional 95% interval (sd = 1)
Suppose \(Y_t\) is an observed MA(1) series
(unconditional) Mean \[ E( Y_t ) = 0 \]
(unconditional) Variance \[ V( Y_t ) = \gamma(0) = (1+\theta_1^2) \sigma^2 \]
Conditional mean of MA(1): \(Y_t = e_t + \theta_1 e_{t-1}\) \[\begin{align*} E\Big(Y_t \hspace{2mm} \Big| \hspace{2mm} Y_{t-1}, Y_{t-2}, \ldots, e_{t-1}, e_{t-2}, \ldots\Big) &= E \Big( e_t + \theta_1 e_{t-1} \hspace{2mm} \Big| \hspace{2mm} Y_{t-1}, Y_{t-2}, \ldots, e_{t-1}, e_{t-2}, \ldots\Big) \\\\ &= E ( e_t )+ \theta_1 e_{t-1} \hspace{2mm} = \hspace{2mm} \theta_1 e_{t-1} \end{align*}\]
Note that \(e_{t-1}\) is not observable.
Conditional variance of MA(1): \(Y_t = e_t + \theta_1 e_{t-1}\) \[\begin{align*} Var\Big(Y_t \hspace{2mm} \Big| \hspace{2mm} Y_{t-1}, Y_{t-2}, \ldots, e_{t-1}, e_{t-2}, \ldots\Big) &= Var\Big( e_t + \theta_1 e_{t-1} \hspace{2mm} \Big| \hspace{2mm} Y_{t-1}, Y_{t-2}, \ldots, e_{t-1}, e_{t-2}, \ldots\Big)\\ &= Var( e_t ) \hspace{2mm} = \hspace{2mm} \sigma^2. \end{align*}\]
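For comparison with the AR(1) plot above, the same picture can be drawn for an MA(1) series; this sketch simulates the series by hand so that the innovation \(e_n\) is available (theta = 0.8, sigma = 1 are arbitrary choices):
set.seed(514)
n     <- 200
theta <- 0.8
e <- rnorm(n)                               # innovations, sigma = 1
Y <- e + theta * c(0, e[-n])                # Y_t = e_t + theta * e_{t-1}
v.MA1 <- (1 + theta^2) * 1                  # unconditional variance
plot(Y, type = "o", main = "MA(1)", xlim = c(0, n * 1.1))
abline(h = 0)
abline(h = c(1, -1) * 1.96 * sqrt(v.MA1), col = "red")               # unconditional 95% band
points(n + 1, theta * e[n], col = "blue")                            # conditional mean of Y_{n+1}
points(c(n + 1, n + 1), theta * e[n] + c(1.96, -1.96), col = "red")  # conditional 95% interval (sd = 1)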
AR(1) \[\begin{align*} \mbox{ Uncond'l } \hspace{20mm} \mbox{ cond'l } \\ E(Y_t) &= 0, \hspace{28mm} E(Y_t|\omega_{t-1}) = \phi_1 Y_{t-1}, \\ Var(Y_t) &= \frac{\sigma^2}{1-\phi_1^2} \hspace{10mm} Var(Y_t|\omega_{t-1}) = \sigma^2 \end{align*}\]
MA(1) \[\begin{align*} \mbox{ Uncond'l } \hspace{20mm} \mbox{ cond'l } \\ E(Y_t) &= 0, \hspace{28mm} E(Y_t|\omega_{t-1}) = \theta_1 e_{t-1}, \\ Var(Y_t) &= (1+\theta_1^2)\sigma^2 \hspace{10mm} Var(Y_t|\omega_{t-1}) = \sigma^2 \end{align*}\]
For an ARMA(p,q) model, the conditional mean changes over time, but the conditional variance stays constant at \(\sigma^2\).
Don’t confuse conditional heteroscedasticity with (unconditional) heteroscedasticity:
D <- read.csv("https://nmimoto.github.io/datasets/Gas.csv")
D1 <- ts(D[,2], start=c(1956, 1), freq=12)   # monthly series starting January 1956
plot(D1, type='o')                           # variability changes over the sample: (unconditional) heteroscedasticity
GARCH(1,1) model
\[\begin{align*} Y_t &= \sigma_t e_t \hspace{10mm} e_t \sim_{iid} N(0,1) \\\\ \sigma_t^2 &= \omega + \alpha Y_{t-1}^2 + \beta \sigma^2_{t-1} \end{align*}\]
Conditional Mean: \(0\)
Conditional Variance : \(\sigma_t^2\)
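Before fitting, it may help to see what a GARCH(1,1) path looks like; a minimal sketch simulating directly from the recursion above (the parameter values are arbitrary):
set.seed(1)
n <- 1000
omega <- 1e-6;  alpha <- 0.1;  beta <- 0.85
sig2 <- numeric(n);  Y <- numeric(n)
sig2[1] <- omega / (1 - alpha - beta)          # start at the unconditional variance
Y[1]    <- sqrt(sig2[1]) * rnorm(1)
for (t in 2:n) {
  sig2[t] <- omega + alpha * Y[t-1]^2 + beta * sig2[t-1]
  Y[t]    <- sqrt(sig2[t]) * rnorm(1)
}
plot(Y, type = "h", main = "Simulated GARCH(1,1)")   # volatility clustering is visible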
Daily Price of SP500 ETF (SPY) from Jan 02 2000 to Dec 31 2014
library(quantmod)
source('https://nmimoto.github.io/R/TS-00.txt')
getSymbols("SPY") #- SP500 download from Yahoo!
## [1] "SPY"
## B-L test H0: the series is uncorrelated
## M-L test H0: the square of the series is uncorrelated
## J-B test H0: the series came from Normal distribution
## SD : Standard Deviation of the series
## BL15 BL20 BL25 ML15 ML20 JB SD
## [1,] 0 0 0 0 0 0 0.011
# Fit GARCH model
library(fGarch)
Y <- diff( log(Ad(SPY)) )[-1]   # daily log-returns of the adjusted close; drop the leading NA
Fit01 <- garchFit(~ garch(1,1), data=Y, cond.dist="norm", include.mean = FALSE, trace = FALSE)
Fit01
##
## Title:
## GARCH Modelling
##
## Call:
## garchFit(formula = ~garch(1, 1), data = Y, cond.dist = "norm",
## include.mean = FALSE, trace = FALSE)
##
## Mean and Variance Equation:
## data ~ garch(1, 1)
## <environment: 0x7ff778c68390>
## [data = Y]
##
## Conditional Distribution:
## norm
##
## Coefficient(s):
## omega alpha1 beta1
## 3.7778e-06 1.7597e-01 7.8801e-01
##
## Std. Errors:
## based on Hessian
##
## Error Analysis:
## Estimate Std. Error t value Pr(>|t|)
## omega 3.778e-06 5.531e-07 6.830 8.49e-12 ***
## alpha1 1.760e-01 1.767e-02 9.959 < 2e-16 ***
## beta1 7.880e-01 1.815e-02 43.428 < 2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Log Likelihood:
## 8734.112 normalized: 3.381383
##
## Description:
## Thu Apr 9 15:38:43 2020 by user:
## Fit01@fit$par # estimated parameters
## Fit01@residuals # these are not the GARCH residuals! They are the same as Y.
## Fit01@sigma.t # estimated sig_t
## Fit01@residuals/Fit01@sigma.t # these are the (standardized) GARCH residuals
Fit01@fit$ics # AIC and BIC are here
## AIC BIC SIC HQIC
## -6.760443 -6.753641 -6.760445 -6.757977
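The same slot makes it easy to compare specifications; for example, a GARCH(1,1) with Student-t innovations (cond.dist = "std") could be compared by AIC/BIC. A sketch:
Fit02 <- garchFit(~ garch(1,1), data=Y, cond.dist="std", include.mean = FALSE, trace = FALSE)
rbind(norm = Fit01@fit$ics, std = Fit02@fit$ics)    # lower AIC/BIC is preferred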
sigma.upper = xts( 1.96*Fit01@sigma.t, order.by=index(Y))   # +1.96 * estimated sigma_t
sigma.lower = xts(-1.96*Fit01@sigma.t, order.by=index(Y))   # -1.96 * estimated sigma_t
plot( cbind(Y, sigma.upper, sigma.lower),
      col=c("black", "red", "red"), lwd=c(2,1,1) )          # returns with conditional 95% band
plot( as.numeric( Y["2018::"]), type="h", lwd=2)            # zoom in on 2018 onward
lines( as.numeric(sigma.upper["2018::"]), col="red")
lines( as.numeric(sigma.lower["2018::"]), col="red")
GARCH(1,1) model
\[\begin{align*} Y_t &= \sigma_t e_t \hspace{10mm} e_t \sim_{iid} N(0,1) \\\\ \sigma_t^2 &= \omega + \alpha Y_{t-1}^2 + \beta \sigma^2_{t-1} \end{align*}\]
Standardized residuals
\[\begin{align*} \hat \sigma_t^2 &= \hat \omega + \hat \alpha Y_{t-1}^2 + \hat \beta \hat\sigma^2_{t-1} \\ \\ \hat e_t &= \frac{Y_t}{\hat \sigma_t} \end{align*}\]
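The table below (from the same helper used earlier) applies the B-L / M-L / J-B diagnostics to the standardized residuals. A rough equivalent with standard R tests, assuming the tseries package is available:
ehat <- Fit01@residuals / Fit01@sigma.t          # standardized residuals Y_t / sigma_hat_t
Box.test(ehat,   lag = 15, type = "Ljung-Box")   # B-L: residuals should now be uncorrelated
Box.test(ehat^2, lag = 15, type = "Ljung-Box")   # M-L: squared residuals should be uncorrelated too
tseries::jarque.bera.test(ehat)                  # J-B: test of normality of e_t
sd(ehat)                                         # should be close to 1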
## B-L test H0: the series is uncorrelated
## M-L test H0: the square of the series is uncorrelated
## J-B test H0: the series came from Normal distribution
## SD : Standard Deviation of the series
## BL15 BL20 BL25 ML15 ML20 JB SD
## [1,] 0.403 0.272 0.063 0.463 0.74 0 0.998