Inverse-Wishart distribution

Inverse-Wishart
Notation: $\mathcal{W}^{-1}(\mathbf{\Psi}, \nu)$
Parameters: $\nu > p - 1$ degrees of freedom (real); $\mathbf{\Psi} > 0$, $p \times p$ scale matrix (pos. def.)
Support: $\mathbf{X}$ is $p \times p$ positive definite
PDF: $\frac{|\mathbf{\Psi}|^{\nu/2}}{2^{\nu p/2}\Gamma_p(\nu/2)}\, |\mathbf{x}|^{-(\nu+p+1)/2}\, e^{-\frac{1}{2}\operatorname{tr}(\mathbf{\Psi}\mathbf{x}^{-1})}$
Mean: $\frac{\mathbf{\Psi}}{\nu - p - 1}$, for $\nu > p + 1$
Mode: $\frac{\mathbf{\Psi}}{\nu + p + 1}$[1]:406
Variance: see below

In statistics, the inverse Wishart distribution, also called the inverted Wishart distribution, is a probability distribution defined on real-valued positive-definite matrices. In Bayesian statistics it is used as the conjugate prior for the covariance matrix of a multivariate normal distribution.

We say 𝐗 follows an inverse Wishart distribution, denoted 𝐗 ∼ 𝒲⁻¹(Ψ, ν), if its inverse 𝐗⁻¹ has a Wishart distribution 𝒲(Ψ⁻¹, ν). Important identities have been derived for the inverse-Wishart distribution.[2]

Density

The probability density function of the inverse Wishart is:[3]

$$f_{\mathbf{X}}(\mathbf{X}; \mathbf{\Psi}, \nu) = \frac{|\mathbf{\Psi}|^{\nu/2}}{2^{\nu p/2}\Gamma_p\!\left(\frac{\nu}{2}\right)}\, |\mathbf{X}|^{-(\nu+p+1)/2}\, e^{-\frac{1}{2}\operatorname{tr}(\mathbf{\Psi}\mathbf{X}^{-1})}$$

where 𝐗 and Ψ are p × p positive definite matrices, |·| is the determinant, and Γ_p(·) is the multivariate gamma function.
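As a concrete sketch, the density can be evaluated numerically and compared against SciPy's implementation, which uses this same parameterization (the helper name and test values below are illustrative):

```python
import numpy as np
from scipy.special import multigammaln
from scipy.stats import invwishart

def inv_wishart_logpdf(X, Psi, nu):
    """Log-density of W^{-1}(Psi, nu) at the positive definite matrix X."""
    p = Psi.shape[0]
    _, logdet_X = np.linalg.slogdet(X)
    _, logdet_Psi = np.linalg.slogdet(Psi)
    return (0.5 * nu * logdet_Psi
            - 0.5 * nu * p * np.log(2)
            - multigammaln(nu / 2, p)
            - 0.5 * (nu + p + 1) * logdet_X
            - 0.5 * np.trace(Psi @ np.linalg.inv(X)))

p, nu = 3, 7.0
rng = np.random.default_rng(0)
B = rng.standard_normal((p, 2 * p))
Psi = B @ B.T                        # a random positive definite scale matrix
X = invwishart(df=nu, scale=Psi).rvs(random_state=rng)

# Agrees with scipy.stats.invwishart.logpdf
assert np.isclose(inv_wishart_logpdf(X, Psi, nu),
                  invwishart(df=nu, scale=Psi).logpdf(X))
```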

Theorems

Distribution of the inverse of a Wishart-distributed matrix

If 𝐗 ∼ 𝒲(Σ, ν) and Σ is of size p × p, then 𝐀 = 𝐗⁻¹ has an inverse Wishart distribution 𝐀 ∼ 𝒲⁻¹(Σ⁻¹, ν).[4]
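This relationship lends itself to a quick Monte Carlo sanity check (a sketch; the sample size, seed, and tolerance are arbitrary choices):

```python
import numpy as np
from scipy.stats import wishart

rng = np.random.default_rng(1)
p, nu = 2, 10
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])

# Draw X ~ W(Sigma, nu); then A = X^{-1} follows W^{-1}(Sigma^{-1}, nu)
X = wishart(df=nu, scale=Sigma).rvs(size=20000, random_state=rng)
A = np.linalg.inv(X)

# Compare the sample mean of A with the inverse-Wishart mean
# Psi / (nu - p - 1), where Psi = Sigma^{-1}
Psi = np.linalg.inv(Sigma)
assert np.allclose(A.mean(axis=0), Psi / (nu - p - 1), rtol=0.05)
```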

Marginal and conditional distributions from an inverse Wishart-distributed matrix

Suppose 𝐀𝒲1(Ψ,ν) has an inverse Wishart distribution. Partition the matrices 𝐀 and Ψ conformably with each other

$$\mathbf{A} = \begin{bmatrix} \mathbf{A}_{11} & \mathbf{A}_{12} \\ \mathbf{A}_{21} & \mathbf{A}_{22} \end{bmatrix}, \qquad \mathbf{\Psi} = \begin{bmatrix} \mathbf{\Psi}_{11} & \mathbf{\Psi}_{12} \\ \mathbf{\Psi}_{21} & \mathbf{\Psi}_{22} \end{bmatrix}$$

where 𝐀_{ij} and Ψ_{ij} are p_i × p_j matrices; then we have:

  1. 𝐀_{11} is independent of 𝐀_{11}⁻¹𝐀_{12} and 𝐀_{22·1}, where 𝐀_{22·1} = 𝐀_{22} − 𝐀_{21}𝐀_{11}⁻¹𝐀_{12} is the Schur complement of 𝐀_{11} in 𝐀;
  2. 𝐀_{11} ∼ 𝒲⁻¹(Ψ_{11}, ν − p₂);
  3. 𝐀_{11}⁻¹𝐀_{12} ∣ 𝐀_{22·1} ∼ MN_{p₁×p₂}(Ψ_{11}⁻¹Ψ_{12}, 𝐀_{22·1} ⊗ Ψ_{11}⁻¹), where MN_{p×q}(·, ·) is a matrix normal distribution;
  4. 𝐀_{22·1} ∼ 𝒲⁻¹(Ψ_{22·1}, ν), where Ψ_{22·1} = Ψ_{22} − Ψ_{21}Ψ_{11}⁻¹Ψ_{12}.

Conjugate distribution

Suppose we wish to make inference about a covariance matrix Σ whose prior p(Σ) has a 𝒲⁻¹(Ψ, ν) distribution. If the observations 𝐗 = [𝐱₁, …, 𝐱ₙ] are independent p-variate Gaussian variables drawn from a N(𝟎, Σ) distribution, then the conditional distribution p(Σ ∣ 𝐗) has a 𝒲⁻¹(𝐀 + Ψ, n + ν) distribution, where 𝐀 = 𝐗𝐗ᵀ.

Because the prior and posterior distributions are the same family, we say the inverse Wishart distribution is conjugate to the multivariate Gaussian.
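A minimal sketch of this conjugate update using scipy.stats.invwishart (the prior parameters and the data-generating covariance below are purely illustrative):

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(2)
p, nu, n = 2, 5.0, 100
Psi = np.eye(p)                        # prior scale matrix
Sigma_true = np.array([[1.0, 0.6],
                       [0.6, 2.0]])

# n i.i.d. observations x_i ~ N(0, Sigma_true), stacked as columns of X
X = rng.multivariate_normal(np.zeros(p), Sigma_true, size=n).T

# Conjugate update: the posterior is W^{-1}(A + Psi, n + nu) with A = X X^T
A = X @ X.T
posterior = invwishart(df=n + nu, scale=A + Psi)

# The posterior mean (A + Psi) / (n + nu - p - 1) concentrates near Sigma_true
print(posterior.mean())
```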

Due to its conjugacy to the multivariate Gaussian, it is possible to marginalize out (integrate out) the Gaussian's parameter Σ, using the formula p(x) = p(x ∣ Σ) p(Σ) / p(Σ ∣ x) and the linear algebra identity vᵀΩv = tr(Ω v vᵀ):

$$f_{\mathbf{X} \mid \Psi, \nu}(\mathbf{x}) = \int f_{\mathbf{X} \mid \Sigma = \sigma}(\mathbf{x})\, f_{\Sigma \mid \Psi, \nu}(\sigma)\, d\sigma = \frac{|\mathbf{\Psi}|^{\nu/2}\, \Gamma_p\!\left(\frac{\nu + n}{2}\right)}{\pi^{np/2}\, |\mathbf{\Psi} + \mathbf{A}|^{(\nu + n)/2}\, \Gamma_p\!\left(\frac{\nu}{2}\right)}$$

(This is useful because the variance matrix Σ is not known in practice, but Ψ is known a priori and 𝐀 can be obtained from the data, so the right-hand side can be evaluated directly.) As a prior, the inverse-Wishart distribution can also be constructed from existing transferred prior knowledge.[5]
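The right-hand side above can be sketched in code; as a consistency check, the identity p(x) = p(x ∣ Σ) p(Σ) / p(Σ ∣ x) must give the same value for any test matrix Σ (names and numbers below are illustrative):

```python
import numpy as np
from scipy.special import multigammaln
from scipy.stats import invwishart, multivariate_normal

def log_marginal(X, Psi, nu):
    """log f(x | Psi, nu): the Gaussian likelihood with Sigma integrated out
    against a W^{-1}(Psi, nu) prior; columns of X are the observations."""
    p, n = X.shape
    A = X @ X.T
    _, logdet_Psi = np.linalg.slogdet(Psi)
    _, logdet_PA = np.linalg.slogdet(Psi + A)
    return (0.5 * nu * logdet_Psi
            + multigammaln((nu + n) / 2, p)
            - 0.5 * n * p * np.log(np.pi)
            - 0.5 * (nu + n) * logdet_PA
            - multigammaln(nu / 2, p))

rng = np.random.default_rng(0)
p, n, nu = 2, 5, 6.0
X = rng.standard_normal((p, n))
Psi = np.eye(p)
S = np.array([[2.0, 0.3],
              [0.3, 1.0]])           # an arbitrary test covariance

# p(x) = p(x | S) p(S) / p(S | x), evaluated at Sigma = S
check = (multivariate_normal(np.zeros(p), S).logpdf(X.T).sum()
         + invwishart(df=nu, scale=Psi).logpdf(S)
         - invwishart(df=nu + n, scale=Psi + X @ X.T).logpdf(S))
assert np.isclose(log_marginal(X, Psi, nu), check)
```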

Moments

The following is based on Press, S. J. (1982), "Applied Multivariate Analysis", 2nd ed. (Dover Publications, New York), after reparameterizing the degrees of freedom to be consistent with the p.d.f. definition above.

Let W ∼ 𝒲(Ψ⁻¹, ν) with ν ≥ p and X = W⁻¹, so that X ∼ 𝒲⁻¹(Ψ, ν).

The mean:[4]:85

$$\operatorname{E}(\mathbf{X}) = \frac{\mathbf{\Psi}}{\nu - p - 1}.$$

The variance of each element of 𝐗:

$$\operatorname{Var}(x_{ij}) = \frac{(\nu - p + 1)\,\psi_{ij}^2 + (\nu - p - 1)\,\psi_{ii}\psi_{jj}}{(\nu - p)(\nu - p - 1)^2(\nu - p - 3)}$$

The variance of a diagonal element uses the same formula with i = j, which simplifies to:

$$\operatorname{Var}(x_{ii}) = \frac{2\psi_{ii}^2}{(\nu - p - 1)^2(\nu - p - 3)}.$$

The covariance of elements of 𝐗 is given by:

$$\operatorname{Cov}(x_{ij}, x_{k\ell}) = \frac{2\psi_{ij}\psi_{k\ell} + (\nu - p - 1)(\psi_{ik}\psi_{j\ell} + \psi_{i\ell}\psi_{kj})}{(\nu - p)(\nu - p - 1)^2(\nu - p - 3)}$$
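These moment formulas can be checked by simulation (a sketch; ν is chosen large enough that the variances exist, and the sample size and tolerance are arbitrary):

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(3)
p, nu = 3, 15                        # need nu - p - 3 > 0 for finite variances
B = rng.standard_normal((p, p))
Psi = B @ B.T + p * np.eye(p)        # an arbitrary positive definite scale

X = invwishart(df=nu, scale=Psi).rvs(size=100000, random_state=rng)

d = nu - p
i, j = 0, 1
var_theory = (((d + 1) * Psi[i, j] ** 2 + (d - 1) * Psi[i, i] * Psi[j, j])
              / (d * (d - 1) ** 2 * (d - 3)))
var_mc = X[:, i, j].var()
print(var_theory, var_mc)            # should agree to within a few percent
```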


The same results are expressed in Kronecker product form by von Rosen[6] as follows:

$$\operatorname{E}(W^{-1} \otimes W^{-1}) = c_1\, \Psi \otimes \Psi + c_2 \operatorname{vec}(\Psi)\operatorname{vec}(\Psi)^T + c_2\, K_{pp}(\Psi \otimes \Psi)$$
$$\operatorname{Cov}(W^{-1}, W^{-1}) = (c_1 - c_3)\, \Psi \otimes \Psi + c_2 \operatorname{vec}(\Psi)\operatorname{vec}(\Psi)^T + c_2\, K_{pp}(\Psi \otimes \Psi)$$

where

$$c_2 = \left[(\nu - p)(\nu - p - 1)(\nu - p - 3)\right]^{-1}, \qquad c_1 = (\nu - p - 2)\, c_2, \qquad c_3 = (\nu - p - 1)^{-2},$$
K_{pp} is a p² × p² commutation matrix, and
$$\operatorname{Cov}(W^{-1}, W^{-1}) = \operatorname{E}(W^{-1} \otimes W^{-1}) - \operatorname{E}(W^{-1}) \otimes \operatorname{E}(W^{-1}).$$

There appears to be a typo in the paper whereby the coefficient of K_{pp}(Ψ ⊗ Ψ) is given as c₁ rather than c₂, and the expression for the mean square inverse Wishart, corollary 3.1, should read

$$\operatorname{E}\left[W^{-1} W^{-1}\right] = (c_1 + c_2)\, \Sigma^{-1}\Sigma^{-1} + c_2\, \Sigma^{-1} \operatorname{tr}(\Sigma^{-1}).$$

To show how the interacting terms become sparse when the covariance is diagonal, let Ψ = 𝐈₃ₓ₃ and introduce arbitrary parameters u, v, w:

$$\operatorname{E}(W^{-1} \otimes W^{-1}) = u\, \Psi \otimes \Psi + v \operatorname{vec}(\Psi)\operatorname{vec}(\Psi)^T + w\, K_{pp}(\Psi \otimes \Psi),$$

where vec denotes the matrix vectorization operator. Then the second moment matrix becomes

$$\operatorname{E}(W^{-1} \otimes W^{-1}) = \begin{bmatrix}
u{+}v{+}w & \cdot & \cdot & \cdot & v & \cdot & \cdot & \cdot & v \\
\cdot & u & \cdot & w & \cdot & \cdot & \cdot & \cdot & \cdot \\
\cdot & \cdot & u & \cdot & \cdot & \cdot & w & \cdot & \cdot \\
\cdot & w & \cdot & u & \cdot & \cdot & \cdot & \cdot & \cdot \\
v & \cdot & \cdot & \cdot & u{+}v{+}w & \cdot & \cdot & \cdot & v \\
\cdot & \cdot & \cdot & \cdot & \cdot & u & \cdot & w & \cdot \\
\cdot & \cdot & w & \cdot & \cdot & \cdot & u & \cdot & \cdot \\
\cdot & \cdot & \cdot & \cdot & \cdot & w & \cdot & u & \cdot \\
v & \cdot & \cdot & \cdot & v & \cdot & \cdot & \cdot & u{+}v{+}w
\end{bmatrix}$$

which is non-zero only for entries involving the correlations of diagonal elements of W⁻¹; all other elements are mutually uncorrelated, though not necessarily statistically independent. The variances of the Wishart product are also obtained by Cook et al.[7] in the singular case and, by extension, in the full-rank case.
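Von Rosen's Kronecker-product formula can likewise be verified numerically; the sketch below builds the commutation matrix directly and compares the formula with a Monte Carlo estimate (sizes and seed are arbitrary):

```python
import numpy as np
from scipy.stats import invwishart

p, nu = 3, 14
Psi = np.eye(p)

# The constants c1 and c2 from the text
c2 = 1.0 / ((nu - p) * (nu - p - 1) * (nu - p - 3))
c1 = (nu - p - 2) * c2

# Commutation matrix K_pp, defined by K @ vec(A) = vec(A^T)
K = np.zeros((p * p, p * p))
for i in range(p):
    for j in range(p):
        K[i * p + j, j * p + i] = 1.0

vec_Psi = Psi.reshape(-1, 1, order="F")      # column-major vectorization
theory = (c1 * np.kron(Psi, Psi)
          + c2 * vec_Psi @ vec_Psi.T
          + c2 * K @ np.kron(Psi, Psi))

# Monte Carlo estimate of E(W^{-1} kron W^{-1})
rng = np.random.default_rng(4)
X = invwishart(df=nu, scale=Psi).rvs(size=50000, random_state=rng)
mc = np.einsum("nij,nkl->ikjl", X, X).reshape(p * p, p * p) / len(X)
print(np.abs(mc - theory).max())             # small Monte Carlo error
```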

Muirhead[8] shows in Theorem 3.2.5 that if 𝐀 is distributed as 𝒲_m(n, Σ) and 𝐕 is a random vector independent of 𝐀, then
$$\frac{\mathbf{V}^T \Sigma^{-1} \mathbf{V}}{\mathbf{V}^T \mathbf{A}^{-1} \mathbf{V}} \sim \chi^2_{n - m + 1},$$
and it follows that (𝐕ᵀ𝐀⁻¹𝐕)/(𝐕ᵀΣ⁻¹𝐕) follows an inverse-chi-squared distribution. Setting 𝐕 = (1, 0, …, 0)ᵀ, the marginal distribution of the leading diagonal element is thus

$$\frac{[\mathbf{A}^{-1}]_{1,1}}{[\Sigma^{-1}]_{1,1}} \sim \text{Inv-}\chi^2(n - m + 1), \qquad \text{with pdf} \quad \frac{2^{-k/2}}{\Gamma(k/2)}\, x^{-k/2 - 1}\, e^{-1/(2x)}, \quad k = n - m + 1,$$

and by rotating 𝐕 end-around, a similar result applies to all diagonal elements [𝐀⁻¹]_{i,i}.
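A short simulation sketch of this marginal result (sample size, seed, and tolerance are arbitrary):

```python
import numpy as np
from scipy.stats import wishart

rng = np.random.default_rng(5)
m, n = 3, 12
Sigma = np.diag([1.0, 2.0, 3.0])

A = wishart(df=n, scale=Sigma).rvs(size=50000, random_state=rng)
lead = np.linalg.inv(A)[:, 0, 0]             # [A^{-1}]_{1,1} for each draw

# [Sigma^{-1}]_{1,1} / [A^{-1}]_{1,1} should be chi^2 with k = n - m + 1 d.o.f.
ratio = np.linalg.inv(Sigma)[0, 0] / lead
print(ratio.mean(), n - m + 1)               # chi^2_k has mean k
```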

A corresponding result in the complex Wishart case was shown by Brennan and Reed,[9] and the uncorrelated inverse complex Wishart 𝒲⁻¹_𝒞(𝐈, ν, p) was shown by Shaman[10] to have a diagonal statistical structure in which the leading diagonal elements are correlated, while all other elements are uncorrelated.

Related distributions

  • A univariate specialization of the inverse-Wishart distribution is the inverse-gamma distribution: for p = 1, with α = ν/2 and β = Ψ/2, the density becomes
    $$p(x \mid \alpha, \beta) = \frac{\beta^{\alpha}\, x^{-\alpha - 1} \exp(-\beta/x)}{\Gamma_1(\alpha)},$$
    i.e., the inverse-gamma distribution, where Γ₁(·) is the ordinary Gamma function.
  • The inverse Wishart distribution is a special case of the inverse matrix gamma distribution when the shape parameter α = ν/2 and the scale parameter β = 2.
  • Another generalization has been termed the generalized inverse Wishart distribution, 𝒢𝒲⁻¹. A p × p positive definite matrix 𝐗 is said to be distributed as 𝒢𝒲⁻¹(Ψ, ν, 𝐒) if 𝐘 = 𝐗^{1/2} 𝐒⁻¹ 𝐗^{1/2} is distributed as 𝒲⁻¹(Ψ, ν). Here 𝐗^{1/2} denotes the symmetric matrix square root of 𝐗, the parameters Ψ, 𝐒 are p × p positive definite matrices, and the parameter ν is a positive scalar larger than 2p. Note that when 𝐒 is the identity matrix, 𝒢𝒲⁻¹(Ψ, ν, 𝐒) = 𝒲⁻¹(Ψ, ν). This generalized inverse Wishart distribution has been applied to estimating the distributions of multivariate autoregressive processes.[11]
  • A different type of generalization is the normal-inverse-Wishart distribution, essentially the product of a multivariate normal distribution with an inverse Wishart distribution.
  • When the scale matrix is an identity matrix, Ψ = 𝐈, and Φ is an arbitrary orthogonal matrix, replacement of 𝐗 by Φ𝐗Φᵀ does not change the pdf of 𝐗, so 𝒲⁻¹(𝐈, ν, p) belongs to the family of spherically invariant random processes (SIRPs) in some sense.
Thus, an arbitrary p-vector 𝐕 with unit ℓ₂ length 𝐕ᵀ𝐕 = 1 can be rotated into the vector Φ𝐕 = [1, 0, …, 0]ᵀ without changing the pdf of 𝐕ᵀ𝐗𝐕; moreover, Φ can be a permutation matrix which exchanges diagonal elements. It follows that the diagonal elements of 𝐗 are identically inverse chi squared distributed, with pdf f_{x₁₁} given in the previous section, though they are not mutually independent. The result is known in optimal portfolio statistics, as in Theorem 2 Corollary 1 of Bodnar et al.,[12] where it is expressed in the inverse form (𝐕ᵀΨ𝐕)/(𝐕ᵀ𝐗𝐕) ∼ χ²_{ν−p+1}.

References

  1. A. O'Hagan, and J. J. Forster (2004). Kendall's Advanced Theory of Statistics: Bayesian Inference. 2B (2 ed.). Arnold. ISBN 978-0-340-80752-1. 
  2. Haff, LR (1979). "An identity for the Wishart distribution with applications". Journal of Multivariate Analysis 9 (4): 531–544. doi:10.1016/0047-259x(79)90056-3. 
  3. Gelman, Andrew; Carlin, John B.; Stern, Hal S.; Dunson, David B.; Vehtari, Aki; Rubin, Donald B. (2013-11-01) (in en). Bayesian Data Analysis, Third Edition (3rd ed.). Boca Raton: Chapman and Hall/CRC. ISBN 9781439840955. 
  4. Kanti V. Mardia, J. T. Kent and J. M. Bibby (1979). Multivariate Analysis. Academic Press. ISBN 978-0-12-471250-8. 
  5. Shahrokh Esfahani, Mohammad; Dougherty, Edward (2014). "Incorporation of Biological Pathway Knowledge in the Construction of Priors for Optimal Bayesian Classification". IEEE Transactions on Bioinformatics and Computational Biology 11 (1): 202–218. doi:10.1109/tcbb.2013.143. PMID 26355519. 
  6. Rosen, Dietrich von (1988). "Moments for the Inverted Wishart Distribution". Scand. J. Stat. 15: 97–109. 
  7. Cook, R D; Forzani, Liliana (August 2019). "On the mean and variance of the generalized inverse of a singular Wishart matrix". Electronic Journal of Statistics 5. doi:10.4324/9780429344633. ISBN 9780429344633. https://www.researchgate.net/publication/254211710. 
  8. Muirhead, Robb (1982) (in English). Aspects of Multivariate Statistical Theory. USA: Wiley. pp. 98. ISBN 0-471-76985-1. 
  9. Brennan, L E; Reed, I S (January 1982). "An Adaptive Array Signal Processing Algorithm for Communications". IEEE Transactions on Aerospace and Electronic Systems 18 (1): 120–130. doi:10.1109/TAES.1982.309212. Bibcode: 1982ITAES..18..124B. 
  10. Shaman, Paul (1980). "The Inverted Complex Wishart Distribution and Its Application to Spectral Estimation". Journal of Multivariate Analysis 10: 51–59. doi:10.1016/0047-259X(80)90081-0. https://core.ac.uk/download/pdf/82734186.pdf. 
  11. Triantafyllopoulos, K. (2011). "Real-time covariance estimation for the local level model". Journal of Time Series Analysis 32 (2): 93–107. doi:10.1111/j.1467-9892.2010.00686.x. 
  12. Bodnar, T.; Mazur, S.; Podgórski, K. (January 2015). "Singular Inverse Wishart Distribution with Application to Portfolio Theory". Department of Statistics, Lund University (Working Papers in Statistics; Nr. 2): 1–17. https://journals.lub.lu.se/stat/article/download/15033/13602/.