Cross-covariance matrix

In probability theory and statistics, a cross-covariance matrix is a matrix whose element in the i, j position is the covariance between the i-th element of a random vector and the j-th element of another random vector. A random vector is a random variable with multiple dimensions. Each element of the vector is a scalar random variable. Each element has either a finite number of observed empirical values or a finite or infinite number of potential values. The potential values are specified by a theoretical joint probability distribution. Intuitively, the cross-covariance matrix generalizes the notion of covariance to multiple dimensions.

The cross-covariance matrix of two random vectors $\mathbf{X}$ and $\mathbf{Y}$ is typically denoted by $\operatorname{K}_{\mathbf{X}\mathbf{Y}}$ or $\Sigma_{\mathbf{X}\mathbf{Y}}$.

Definition

For random vectors $\mathbf{X}$ and $\mathbf{Y}$, each containing random elements whose expected value and variance exist, the cross-covariance matrix of $\mathbf{X}$ and $\mathbf{Y}$ is defined by[1]:p.336

$$\operatorname{K}_{\mathbf{X}\mathbf{Y}} = \operatorname{cov}(\mathbf{X}, \mathbf{Y}) \stackrel{\text{def}}{=} \operatorname{E}\left[(\mathbf{X} - \mu_{\mathbf{X}})(\mathbf{Y} - \mu_{\mathbf{Y}})^{\mathsf{T}}\right] \qquad \text{(Eq.1)}$$

where $\mu_{\mathbf{X}} = \operatorname{E}[\mathbf{X}]$ and $\mu_{\mathbf{Y}} = \operatorname{E}[\mathbf{Y}]$ are vectors containing the expected values of $\mathbf{X}$ and $\mathbf{Y}$. The vectors $\mathbf{X}$ and $\mathbf{Y}$ need not have the same dimension, and either might be a scalar value.

The cross-covariance matrix is the matrix whose $(i,j)$ entry is the covariance

$$\operatorname{K}_{X_i Y_j} = \operatorname{cov}[X_i, Y_j] = \operatorname{E}[(X_i - \operatorname{E}[X_i])(Y_j - \operatorname{E}[Y_j])]$$

between the i-th element of $\mathbf{X}$ and the j-th element of $\mathbf{Y}$. This gives the following component-wise definition of the cross-covariance matrix.

$$\operatorname{K}_{\mathbf{X}\mathbf{Y}} =
\begin{bmatrix}
\operatorname{E}[(X_1 - \operatorname{E}[X_1])(Y_1 - \operatorname{E}[Y_1])] & \operatorname{E}[(X_1 - \operatorname{E}[X_1])(Y_2 - \operatorname{E}[Y_2])] & \cdots & \operatorname{E}[(X_1 - \operatorname{E}[X_1])(Y_n - \operatorname{E}[Y_n])] \\
\operatorname{E}[(X_2 - \operatorname{E}[X_2])(Y_1 - \operatorname{E}[Y_1])] & \operatorname{E}[(X_2 - \operatorname{E}[X_2])(Y_2 - \operatorname{E}[Y_2])] & \cdots & \operatorname{E}[(X_2 - \operatorname{E}[X_2])(Y_n - \operatorname{E}[Y_n])] \\
\vdots & \vdots & \ddots & \vdots \\
\operatorname{E}[(X_m - \operatorname{E}[X_m])(Y_1 - \operatorname{E}[Y_1])] & \operatorname{E}[(X_m - \operatorname{E}[X_m])(Y_2 - \operatorname{E}[Y_2])] & \cdots & \operatorname{E}[(X_m - \operatorname{E}[X_m])(Y_n - \operatorname{E}[Y_n])]
\end{bmatrix}$$

Example

For example, if $\mathbf{X} = (X_1, X_2, X_3)^{\mathsf{T}}$ and $\mathbf{Y} = (Y_1, Y_2)^{\mathsf{T}}$ are random vectors, then $\operatorname{cov}(\mathbf{X}, \mathbf{Y})$ is a $3 \times 2$ matrix whose $(i,j)$-th entry is $\operatorname{cov}(X_i, Y_j)$.
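
As a sanity check on Eq.1, the following minimal NumPy sketch estimates such a $3 \times 2$ matrix from simulated data; the data-generating setup and variable names are illustrative assumptions, not part of the article.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 100_000
X = rng.normal(size=(n, 3))                   # n samples of a 3-vector X
Y = X[:, :2] + 0.5 * rng.normal(size=(n, 2))  # 2-vector Y correlated with X

# Sample cross-covariance: mean outer product of the centered samples.
Xc = X - X.mean(axis=0)
Yc = Y - Y.mean(axis=0)
K_XY = Xc.T @ Yc / n          # 3 x 2; entry (i, j) estimates cov(X_i, Y_j)

# Cross-check: the top-right 3 x 2 block of the joint 5 x 5 covariance
# matrix of the stacked vector (X, Y) is exactly this cross-covariance.
joint = np.cov(np.hstack([X, Y]).T, bias=True)
assert np.allclose(K_XY, joint[:3, 3:])
```

The `bias=True` flag makes `np.cov` divide by $n$ rather than $n-1$, matching the plain average used for `K_XY`.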

Properties

For the cross-covariance matrix, the following basic properties apply:[2]

  1. $\operatorname{cov}(\mathbf{X}, \mathbf{Y}) = \operatorname{E}[\mathbf{X}\mathbf{Y}^{\mathsf{T}}] - \mu_{\mathbf{X}}\mu_{\mathbf{Y}}^{\mathsf{T}}$
  2. $\operatorname{cov}(\mathbf{X}, \mathbf{Y}) = \operatorname{cov}(\mathbf{Y}, \mathbf{X})^{\mathsf{T}}$
  3. $\operatorname{cov}(\mathbf{X}_1 + \mathbf{X}_2, \mathbf{Y}) = \operatorname{cov}(\mathbf{X}_1, \mathbf{Y}) + \operatorname{cov}(\mathbf{X}_2, \mathbf{Y})$
  4. $\operatorname{cov}(A\mathbf{X} + \mathbf{a}, B^{\mathsf{T}}\mathbf{Y} + \mathbf{b}) = A \operatorname{cov}(\mathbf{X}, \mathbf{Y}) B$
  5. If $\mathbf{X}$ and $\mathbf{Y}$ are independent (or, somewhat less restrictively, if every random variable in $\mathbf{X}$ is uncorrelated with every random variable in $\mathbf{Y}$), then $\operatorname{cov}(\mathbf{X}, \mathbf{Y}) = 0_{p \times q}$

where $\mathbf{X}$, $\mathbf{X}_1$ and $\mathbf{X}_2$ are random $p \times 1$ vectors, $\mathbf{Y}$ is a random $q \times 1$ vector, $\mathbf{a}$ is a $q \times 1$ vector, $\mathbf{b}$ is a $p \times 1$ vector, $A$ and $B$ are $q \times p$ matrices of constants, and $0_{p \times q}$ is a $p \times q$ matrix of zeroes. The sketch below checks three of these identities on sample data.
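
The following NumPy sketch verifies properties 1, 2 and 4 numerically; the `cross_cov` helper and the simulated data are illustrative assumptions. Because these identities hold exactly for sample moments (not just in expectation), the assertions pass up to floating-point error.

```python
import numpy as np

rng = np.random.default_rng(1)

def cross_cov(X, Y):
    """Sample cross-covariance E[(X - mu_X)(Y - mu_Y)^T], divisor n."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    return Xc.T @ Yc / len(X)

n, p, q = 50_000, 3, 2
X = rng.normal(size=(n, p))                                # rows are samples
Y = X @ rng.normal(size=(p, q)) + rng.normal(size=(n, q))  # Y correlated with X

# Property 1: cov(X, Y) = E[X Y^T] - mu_X mu_Y^T.
assert np.allclose(cross_cov(X, Y),
                   X.T @ Y / n - np.outer(X.mean(axis=0), Y.mean(axis=0)))

# Property 2: cov(X, Y) = cov(Y, X)^T.
assert np.allclose(cross_cov(X, Y), cross_cov(Y, X).T)

# Property 4: cov(A X + a, B^T Y + b) = A cov(X, Y) B, with A, B of shape q x p.
A, B = rng.normal(size=(q, p)), rng.normal(size=(q, p))
a, b = rng.normal(size=q), rng.normal(size=p)
assert np.allclose(cross_cov(X @ A.T + a, Y @ B + b),  # samples of AX+a, B^T Y+b
                   A @ cross_cov(X, Y) @ B)
```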

Definition for complex random vectors

If $\mathbf{Z}$ and $\mathbf{W}$ are complex random vectors, the definition of the cross-covariance matrix is slightly changed. Transposition is replaced by Hermitian transposition:

$$\operatorname{K}_{\mathbf{Z}\mathbf{W}} = \operatorname{cov}(\mathbf{Z}, \mathbf{W}) \stackrel{\text{def}}{=} \operatorname{E}\left[(\mathbf{Z} - \mu_{\mathbf{Z}})(\mathbf{W} - \mu_{\mathbf{W}})^{\mathsf{H}}\right]$$

For complex random vectors, another matrix called the pseudo-cross-covariance matrix is defined as follows:

$$\operatorname{J}_{\mathbf{Z}\mathbf{W}} = \operatorname{cov}(\mathbf{Z}, \overline{\mathbf{W}}) \stackrel{\text{def}}{=} \operatorname{E}\left[(\mathbf{Z} - \mu_{\mathbf{Z}})(\mathbf{W} - \mu_{\mathbf{W}})^{\mathsf{T}}\right]$$
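
A minimal sketch of both estimators, assuming NumPy; the construction of $\mathbf{Z}$ and $\mathbf{W}$ is made up for the demonstration. Only the Hermitian form conjugates the second factor, so the two matrices pick up different structure.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 100_000
# Illustrative complex data: W shares components with Z plus noise, and its
# third component is the complex conjugate of Z_1.
Z = rng.normal(size=(n, 2)) + 1j * rng.normal(size=(n, 2))
noise = rng.normal(size=(n, 3)) + 1j * rng.normal(size=(n, 3))
W = np.hstack([Z, Z[:, :1].conj()]) + 0.1 * noise

Zc = Z - Z.mean(axis=0)
Wc = W - W.mean(axis=0)

K_ZW = Zc.T @ Wc.conj() / n   # Hermitian transpose: conjugate the W factor
J_ZW = Zc.T @ Wc / n          # plain transpose: pseudo-cross-covariance

# The conjugated component is visible only in the pseudo matrix: for the
# circularly symmetric Z above, E[Z_1 Z_1] ~ 0 while E[Z_1 conj(Z_1)] ~ 2.
print(np.round(K_ZW, 2))      # entry (0, 2) near 0
print(np.round(J_ZW, 2))      # entry (0, 2) near 2
```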

Uncorrelatedness

Main page: Uncorrelatedness (probability theory)

Two random vectors $\mathbf{X}$ and $\mathbf{Y}$ are called uncorrelated if their cross-covariance matrix $\operatorname{K}_{\mathbf{X}\mathbf{Y}}$ is a zero matrix.[1]:p.337

Complex random vectors $\mathbf{Z}$ and $\mathbf{W}$ are called uncorrelated if their cross-covariance matrix and pseudo-cross-covariance matrix are both zero, i.e. if $\operatorname{K}_{\mathbf{Z}\mathbf{W}} = \operatorname{J}_{\mathbf{Z}\mathbf{W}} = 0$. An empirical version of this test is sketched below.
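
A minimal empirical version of the test, under the illustrative assumption of independently drawn circularly symmetric Gaussian samples:

```python
import numpy as np

rng = np.random.default_rng(3)

n = 200_000
# Independently drawn complex vectors should test as uncorrelated.
Z = rng.normal(size=(n, 2)) + 1j * rng.normal(size=(n, 2))
W = rng.normal(size=(n, 2)) + 1j * rng.normal(size=(n, 2))

Zc = Z - Z.mean(axis=0)
Wc = W - W.mean(axis=0)
K = Zc.T @ Wc.conj() / n      # cross-covariance estimate
J = Zc.T @ Wc / n             # pseudo-cross-covariance estimate

# Both estimates shrink toward the zero matrix as n grows, up to
# O(1/sqrt(n)) sampling noise.
assert abs(K).max() < 0.02 and abs(J).max() < 0.02
```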

References

  1. Gubner, John A. (2006). Probability and Random Processes for Electrical and Computer Engineers. Cambridge University Press. ISBN 978-0-521-86470-1.
  2. Taboga, Marco (2010). "Lectures on probability theory and mathematical statistics". http://www.statlect.com/varian2.htm.