


1. Covariance

\[ \begin{aligned} \mbox{Cov}(X,Y) &= E\Big[\big(X-E(X)\big)\big(Y-E(Y)\big)\Big] \\ &= E[XY] - E(X)E(Y) \end{aligned} \]

Correlation

Correlation is defined as: \[ \rho = \frac{\mbox{Cov}(X,Y) }{ \sigma_X \sigma_Y} \]

This normalization ensures \(-1 \leq \rho \leq 1\).

This follows from the Cauchy–Schwarz inequality: \[ E\big[ \big| (X-\mu_X)(Y-\mu_Y) \big| \big] \leq \sqrt{ E\big[(X-\mu_X)^2\big] \, E\big[(Y-\mu_Y)^2\big] } \]
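
Dividing both sides by \(\sigma_X \sigma_Y\) makes the bound on \(\rho\) explicit:

\[ |\rho| = \frac{\big|\mbox{Cov}(X,Y)\big|}{\sigma_X \sigma_Y} \leq \frac{E\big[ \big| (X-\mu_X)(Y-\mu_Y) \big| \big]}{\sigma_X \sigma_Y} \leq \frac{\sqrt{ E\big[(X-\mu_X)^2\big] \, E\big[(Y-\mu_Y)^2\big] }}{\sigma_X \sigma_Y} = \frac{\sigma_X \sigma_Y}{\sigma_X \sigma_Y} = 1 \]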

Difference

The covariance matrix and the correlation matrix of the same bivariate data. The correlation matrix is the covariance matrix rescaled by the standard deviations, so it has unit diagonal:

##           [,1]      [,2]
## [1,] 1.1585022 0.7897293
## [2,] 0.7897293 1.1826087
##           [,1]      [,2]
## [1,] 1.0000000 0.6746978
## [2,] 0.6746978 1.0000000
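
The rescaling can be sketched as follows. This is a minimal Python/NumPy example, not the R session that produced the matrices above; the true covariance used in the simulation is an arbitrary choice for illustration.

```python
import numpy as np

# Simulate correlated bivariate data (illustrative covariance, chosen here).
rng = np.random.default_rng(0)
cov_true = np.array([[1.0, 0.7],
                     [0.7, 1.2]])
x = rng.multivariate_normal(mean=[0, 0], cov=cov_true, size=1000)

cov = np.cov(x, rowvar=False)   # sample covariance matrix
sd = np.sqrt(np.diag(cov))      # standard deviations from the diagonal
cor = cov / np.outer(sd, sd)    # Corr(X,Y) = Cov(X,Y) / (sigma_X * sigma_Y)

print(cov)
print(cor)  # unit diagonal; off-diagonal entries are correlations
```

Dividing each entry \( \mbox{Cov}(X_i, X_j) \) by \( \sigma_i \sigma_j \) is exactly the definition of \(\rho\) applied entrywise.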


Properties of Covariance

  • \(\mbox{Cov}(X,X) = \mbox{Var}(X)\)
  • \(\mbox{Cov}(X,Y)=\mbox{Cov}(Y,X)\)
  • If \(X,Y,S,T\) are all random variables, \[ \mbox{Cov}(X+Y,S+T) = \mbox{Cov}(X,S) + \mbox{Cov}(X,T) + \mbox{Cov}(Y,S) + \mbox{Cov}(Y,T) \]
  • If \(a\) is a constant, \(\mbox{Cov}(aX,Y) = a\mbox{Cov}(X,Y)\)
  • \(\mbox{Cov}(a,Y) = 0\)

  • Example: \(\mbox{Var}(X+Y) = \mbox{Cov}(X+Y,\,X+Y) = \mbox{Var}(X) + \mbox{Var}(Y) + 2\,\mbox{Cov}(X,Y)\), using the bilinearity and symmetry properties above.

Properties of Correlation

  • \(\rho\) is unitless.
  • \(\rho\) is invariant under linear transformation with \(a > 0\). \[ \mbox{Corr}(aX+b,Y) = \frac{ \mbox{Cov}(aX+b,Y) }{ \sigma_{aX+b} \, \sigma_Y} = \frac{ a \mbox{Cov}(X,Y) }{ |a| \, \sigma_{X} \, \sigma_Y} = \mbox{Corr}(X,Y) \] (Note \(\sigma_{aX+b} = |a|\,\sigma_X\); if \(a < 0\), the sign of the correlation flips.)
  • No correlation does not imply independence.
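
The classic counterexample can be checked by simulation. This Python/NumPy sketch uses \(X \sim N(0,1)\) and \(Y = X^2\): they are uncorrelated, since \(\mbox{Cov}(X, X^2) = E[X^3] = 0\) by symmetry, yet \(Y\) is completely determined by \(X\).

```python
import numpy as np

# X ~ N(0,1) and Y = X^2: uncorrelated but clearly dependent.
rng = np.random.default_rng(42)
x = rng.standard_normal(100_000)
y = x ** 2

rho = np.corrcoef(x, y)[0, 1]
print(rho)  # near 0

# Dependence: knowing X determines Y exactly, so e.g.
# P(Y > 1 | |X| > 1) = 1, while P(Y > 1) < 1.
```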

2. White Noise and iid Normal


IID Normal

\[ \epsilon_i \sim N(0,\sigma^2) \] This means each observation is independent of the others; independence guarantees zero correlation.
Each observation is drawn from the same normal distribution.
This is the same as a random sample from a normal distribution.

White Noise

\[ \epsilon_i \sim WN(0,\sigma^2) \] This means the series is uncorrelated. This is a weaker assumption than independence, because zero correlation does not imply independence.
Also, it says nothing about the distribution of the observations.
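
To illustrate the last point, here is a Python/NumPy sketch comparing two white-noise series with the same variance: one iid normal, one iid uniform. Both have (sample) autocorrelation near zero, but only the first is iid normal.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
sigma = 1.0

# iid normal white noise
eps_normal = rng.normal(0.0, sigma, size=n)

# Uniform on [-a, a] has variance a^2 / 3, so a = sigma * sqrt(3)
# gives the same variance sigma^2 -- still white noise, but not normal.
a = sigma * np.sqrt(3)
eps_unif = rng.uniform(-a, a, size=n)

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation of a series."""
    x = x - x.mean()
    return np.sum(x[:-1] * x[1:]) / np.sum(x * x)

print(lag1_autocorr(eps_normal))  # near 0
print(lag1_autocorr(eps_unif))    # also near 0: white noise says nothing
                                  # about the shape of the distribution
```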