All of the tests above are bundled into a custom function called Randomness.tests(). You can load it into R by copying and pasting the source() line shown below.
library(forecast)
set.seed(3563)
Y = arima.sim(list(ar = c(.6, .3) ), 100 )
plot(Y, type="o")

Fit1 = Arima(Y, order=c(2,0,0), include.mean=FALSE)   # fit AR(2)
Fit1
## Series: Y
## ARIMA(2,0,0) with zero mean
##
## Coefficients:
## ar1 ar2
## 0.6279 0.2530
## s.e. 0.0958 0.0959
##
## sigma^2 = 1.088: log likelihood = -145.78
## AIC=297.55 AICc=297.8 BIC=305.37
res = Fit1$resid # extract the residuals
#--- Copy and paste this line to load the Basic Functions from the class web page
source("https://nmimoto.github.io/R/TS-00.txt")
#- Use the function as below
res = Fit1$resid
Randomness.tests(res)
## B-L test H0: the series is uncorrelated
## M-L test H0: the square of the series is uncorrelated
## J-B test H0: the series came from Normal distribution
## SD : Standard Deviation of the series
## BL15 BL20 BL25 ML15 ML20 JB SD
## [1,] 0.762 0.874 0.724 0.598 0.814 0.434 1.037
With sample ACF \(\hat \rho(h)\), \[ Q = n(n+2) \sum_{h=1}^H \frac{ \hat \rho(h)^2}{n-h} \]
Under the null hypothesis that the series is uncorrelated, \(Q \sim \chi^2(H)\) approximately.
Use it as a one-sided, upper-tailed test.
The lag parameter \(H\) must be chosen by hand; try a couple of different values.
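Since the conclusion can depend on \(H\), a quick sensitivity check is to run Box.test() at several lags. A minimal sketch on simulated white noise (the seed and lag values here are illustrative, not from the notes):

```r
# Ljung-Box test at several lags H on simulated white noise
set.seed(42)                                  # illustrative seed
X = rnorm(100)                                # iid Normal, so H0 is true
for (H in c(15, 20, 25)) {
  p = Box.test(X, lag = H, type = "Ljung-Box")$p.value
  cat("H =", H, " p-value =", round(p, 3), "\n")
}
```

If the conclusion flips across reasonable choices of \(H\), treat the result with caution.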
# - global annual temperature data from Shumway
Y = read.csv("https://nmimoto.github.io/datasets/gtemp.csv")
Y1 = ts(Y, start=c(1880), freq=1)
plot(Y1, type='o')

X = diff(Y1)
plot(X)

library(forecast)
Fit1 = Arima(X, order=c(2,0,0), include.mean=FALSE)
source('https://nmimoto.github.io/R/TS-00.txt') # load custom functions
Randomness.tests(Fit1$resid)
## B-L test H0: the series is uncorrelated
## M-L test H0: the square of the series is uncorrelated
## J-B test H0: the series came from Normal distribution
## SD : Standard Deviation of the series
## BL15 BL20 BL25 ML15 ML20 JB SD
## [1,] 0.009 0.022 0.04 0.053 0.063 0.356 0.101
set.seed(3563)
X = arima.sim(list(ar = c(.6, .3) ), 100 )
plot(X, type="o")

Fit1 = Arima(X, order=c(2,0,0), include.mean=FALSE)   # fit AR(2)
Fit1
## Series: X
## ARIMA(2,0,0) with zero mean
##
## Coefficients:
## ar1 ar2
## 0.6279 0.2530
## s.e. 0.0958 0.0959
##
## sigma^2 = 1.088: log likelihood = -145.78
## AIC=297.55 AICc=297.8 BIC=305.37
Randomness.tests(Fit1$resid)
## B-L test H0: the series is uncorrelated
## M-L test H0: the square of the series is uncorrelated
## J-B test H0: the series came from Normal distribution
## SD : Standard Deviation of the series
## BL15 BL20 BL25 ML15 ML20 JB SD
## [1,] 0.762 0.874 0.724 0.598 0.814 0.434 1.037
X = rnorm(100, 2, 4)
Box.test(res, lag=15, type="Ljung-Box")
##
## Box-Ljung test
##
## data: res
## X-squared = 10.86, df = 15, p-value = 0.7624
n=100
sam.acf = acf(X)$acf
Q = n*(n+2)*sum( sam.acf[2:16]^2/(n-(1:15)) )
Q = 0
for (i in 1:1000) {
X = rnorm(100, 2, 4)
sam.acf = acf(X, plot=FALSE)$acf
Q[i] = n*(n+2)*sum( sam.acf[2:16]^2/(n-(1:15)) )
}
hist(Q, freq=FALSE)
t = seq(0, 50, .1)
lines(t, dchisq(t, 15), col="red")
Instead of testing \(Y_t\), test \(Y_t^2\) for uncorrelation.
Use the same test statistic as Ljung-Box, replacing \(Y_t\) with \(Y_t^2\).
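In R this is just the same Box.test() call applied to the squared series. A sketch on iid Normal data, where both tests should fail to reject (the seed is arbitrary):

```r
# McLeod-Li style check: apply the Ljung-Box statistic to the squared series
set.seed(1)                                   # arbitrary seed
X  = rnorm(200)                               # iid, so X^2 is also uncorrelated
p1 = Box.test(X,   lag = 15, type = "Ljung-Box")$p.value   # B-L on the series
p2 = Box.test(X^2, lag = 15, type = "Ljung-Box")$p.value   # M-L on the squares
c(BL15 = p1, ML15 = p2)
```

A series can pass the Ljung-Box test but fail this one, which is typical of financial returns with volatility clustering.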
library(quantmod) # install.packages("quantmod")
getSymbols("AAPL")
## [1] "AAPL"

head(AAPL)
## AAPL.Open AAPL.High AAPL.Low AAPL.Close AAPL.Volume
## 2007-01-03 3.081786 3.092143 2.925000 2.992857 1238319600
## 2007-01-04 3.001786 3.069643 2.993571 3.059286 847260400
## 2007-01-05 3.063214 3.078571 3.014286 3.037500 834741600
## 2007-01-08 3.070000 3.090357 3.045714 3.052500 797106800
## 2007-01-09 3.087500 3.320714 3.041071 3.306071 3349298400
## 2007-01-10 3.383929 3.492857 3.337500 3.464286 2952880000
## AAPL.Adjusted
## 2007-01-03 2.562706
## 2007-01-04 2.619588
## 2007-01-05 2.600933
## 2007-01-08 2.613777
## 2007-01-09 2.830903
## 2007-01-10 2.966379
A = AAPL$AAPL.Adjusted
A1 = A["2015::2022"]
plot(A1)

A1 = A["2017::2019"]
plot(A1)

A2 = diff(log(A1))
plot(A2)
layout(matrix(1:2, 1, 2))
plot(A2)
plot(A2^2)

acf(A2, na.action=na.pass)
acf(A2^2, na.action=na.pass)
layout(1)
Box.test(A2, lag = 15, type = "Ljung-Box")
##
## Box-Ljung test
##
## data: A2
## X-squared = 19.568, df = 15, p-value = 0.1891
Box.test(A2^2, lag = 15, type = "Ljung-Box")
##
## Box-Ljung test
##
## data: A2^2
## X-squared = 73.398, df = 15, p-value = 1.101e-09
Randomness.tests(A2)
## B-L test H0: the series is uncorrelated
## M-L test H0: the square of the series is uncorrelated
## J-B test H0: the series came from Normal distribution
## SD : Standard Deviation of the series
## BL15 BL20 BL25 ML15 ML20 JB SD
## [1,] 0.189 0.15 0.226 0 0 0 0.016
A1 = A["2020::2022"]
plot(A1)

A2 = diff(log(A1))
plot(A2)

Randomness.tests(A2)
## B-L test H0: the series is uncorrelated
## M-L test H0: the square of the series is uncorrelated
## J-B test H0: the series came from Normal distribution
## SD : Standard Deviation of the series
## BL15 BL20 BL25 ML15 ML20 JB SD
## [1,] 0 0 0 0 0 0 0.023
Given a sequence \(Y_t\), we want to test whether it came from a Normal distribution (\(\mu\) and \(\sigma\) unspecified).
q-q normal plot
Jarque-Bera test
X = rnorm(100, 2, 5)
qqnorm(X)
The Normal distribution has skewness 0 and kurtosis 3, for any \(\mu\) and \(\sigma^2\). \[ \mbox{Skewness} = \frac{E[(Y_t - \mu )^3]}{ \mbox{SD}(Y_t)^{3}} \hspace{10mm} \mbox{Kurtosis} = \frac{E[(Y_t - \mu )^4]}{ \mbox{SD}(Y_t)^{4}} \] \[ JB = \frac{n}{6} \Big(S^2 + \frac{1}{4}(K-3)^2\Big) \] where \(S\) is the sample skewness and \(K\) is the sample kurtosis.
Asymptotically, \(JB\) has a \(\chi^2(2)\) distribution under normality.
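The statistic can be computed by hand in base R directly from the definition; the moment convention below (central moments divided by \(n\)) is, I believe, the one tseries::jarque.bera.test() uses. A sketch on simulated Normal data (seed and sample size are arbitrary):

```r
# Jarque-Bera statistic computed from sample skewness and kurtosis
set.seed(7)                                   # arbitrary seed
Y  = rnorm(500)                               # data that is actually Normal
n  = length(Y)
m2 = mean( (Y - mean(Y))^2 )                  # 2nd central moment
S  = mean( (Y - mean(Y))^3 ) / m2^(3/2)       # sample skewness
K  = mean( (Y - mean(Y))^4 ) / m2^2           # sample kurtosis
JB = (n/6) * ( S^2 + (K - 3)^2 / 4 )
p  = 1 - pchisq(JB, df = 2)                   # one-sided upper-tail p-value
c(JB = JB, p.value = p)
```

Since the data here really are Normal, \(S\) should be near 0, \(K\) near 3, and the p-value large.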
library(tseries)   # load tseries package ( install.packages("tseries") if first time)
# acf(res)
# acf(res, na.action=na.pass)

res = Fit1$resid
res = res[ which(is.na(res)==0) ] # remove NA from res
jarque.bera.test(res)
##
## Jarque Bera Test
##
## data: res
## X-squared = 1.6684, df = 2, p-value = 0.4342
source("https://nmimoto.github.io/R/TS-00.txt")
X = rnorm(200) # random sample from N(0,1)
Randomness.tests(X)
## B-L test H0: the series is uncorrelated
## M-L test H0: the square of the series is uncorrelated
## J-B test H0: the series came from Normal distribution
## SD : Standard Deviation of the series
## BL15 BL20 BL25 ML15 ML20 JB SD
## [1,] 0.461 0.51 0.57 0.212 0.315 0.688 1.031