MINQUE

In statistics, the theory of minimum norm quadratic unbiased estimation (MINQUE)[1][2][3] was developed by C. R. Rao. MINQUE is a theory alongside other estimation methods in estimation theory, such as the method of moments or maximum likelihood estimation. Similar to the theory of best linear unbiased estimation, MINQUE is specifically concerned with linear regression models.[1] The method was originally conceived to estimate heteroscedastic error variance in multiple linear regression.[1] MINQUE estimators also provide an alternative to maximum likelihood estimators or restricted maximum likelihood estimators for variance components in mixed effects models.[3] MINQUE estimators are quadratic forms of the response variable and are used to estimate a linear function of the variances.

Principles

We are concerned with a mixed effects model for the random vector 𝐘n with the following linear structure.

𝐘 = 𝐗β + 𝐔₁ξ₁ + ⋯ + 𝐔ₖξₖ

Here, 𝐗 ∈ ℝ^(n×m) is a design matrix for the fixed effects, β ∈ ℝ^m represents the unknown fixed-effect parameters, 𝐔ᵢ ∈ ℝ^(n×cᵢ) is a design matrix for the i-th random-effect component, and ξᵢ ∈ ℝ^(cᵢ) is a random vector for the i-th random-effect component. The random effects are assumed to have zero mean (𝔼[ξᵢ] = 𝟎) and to be uncorrelated (𝕍[ξᵢ] = σᵢ²𝐈). Furthermore, any two random-effect vectors are also uncorrelated (𝕍[ξᵢ, ξⱼ] = 𝟎 for all i ≠ j). The unknown variances σ₁², …, σₖ² represent the variance components of the model.
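As a concrete illustration of this structure, the following sketch simulates one draw from a two-component model in numpy. All dimensions, design matrices, and variance values here are hypothetical choices for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 50, 2            # observations, fixed effects
c = [5, 3]              # c_i: length of each random-effect vector (k = 2)
sigma2 = [2.0, 0.5]     # variance components sigma_i^2

X = rng.standard_normal((n, m))                  # fixed-effect design
beta = np.array([1.0, -2.0])                     # fixed-effect parameters
U = [rng.standard_normal((n, ci)) for ci in c]   # random-effect designs U_i

# Each xi_i has mean 0 and covariance sigma_i^2 * I, and the xi_i are independent.
xi = [np.sqrt(s2) * rng.standard_normal(ci) for s2, ci in zip(sigma2, c)]
Y = X @ beta + sum(Ui @ xii for Ui, xii in zip(U, xi))
assert Y.shape == (n,)
```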

This is a general model that captures commonly used linear regression models.

  1. Gauss-Markov Model[3]: If we consider a one-component model where 𝐔₁ = 𝐈ₙ, then the model is equivalent to the Gauss-Markov model 𝐘 = 𝐗β + ϵ with 𝔼[ϵ] = 𝟎 and 𝕍[ϵ] = σ₁²𝐈ₙ.
  2. Heteroscedastic Model[1]: Each set of random variables in 𝐘 that shares a common variance can be modeled as an individual variance component with an appropriate 𝐔ᵢ.

A compact representation of the model is the following, where 𝐔 = [𝐔₁ ⋯ 𝐔ₖ] and ξ = [ξ₁ᵀ ⋯ ξₖᵀ]ᵀ.

𝐘=𝐗β+𝐔ξ

Note that this model makes no distributional assumptions about 𝐘 other than the first and second moments.[3]

𝔼[𝐘]=𝐗β

𝕍[𝐘] = σ₁²𝐔₁𝐔₁ᵀ + ⋯ + σₖ²𝐔ₖ𝐔ₖᵀ ≡ σ₁²𝐕₁ + ⋯ + σₖ²𝐕ₖ

The goal of MINQUE is to estimate θ = ∑ᵢ₌₁ᵏ pᵢσᵢ² using a quadratic form θ̂ = 𝐘ᵀ𝐀𝐘. MINQUE estimators are derived by identifying a matrix 𝐀 such that the estimator has some desirable properties,[2][3] described below.

Optimal Estimator Properties to Constrain MINQUE

Invariance to translation of the fixed effects

Consider a new fixed-effect parameter γ = β − β₀, which represents a translation of the original fixed effect. The new, equivalent model is now the following.

𝐘 − 𝐗β₀ = 𝐗γ + 𝐔ξ

Under this equivalent model, the MINQUE estimator is now (𝐘 − 𝐗β₀)ᵀ𝐀(𝐘 − 𝐗β₀). Rao argued that since the underlying models are equivalent, this estimator should be equal to 𝐘ᵀ𝐀𝐘.[2][3] This can be achieved by constraining 𝐀 such that 𝐀𝐗 = 𝟎, which ensures that all terms other than 𝐘ᵀ𝐀𝐘 in the expansion of the quadratic form are zero.
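This constraint can be checked numerically. The sketch below uses one hypothetical construction of a symmetric 𝐀 satisfying 𝐀𝐗 = 𝟎 (sandwiching an arbitrary symmetric matrix between copies of the annihilator 𝐈 − 𝐏) and verifies that the quadratic form is unchanged by a shift 𝐗β₀.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 20, 3
X = rng.standard_normal((n, m))
Y = rng.standard_normal(n)

# One hypothetical way to satisfy A X = 0: sandwich an arbitrary symmetric
# matrix between copies of the annihilator I - P, where P projects onto col(X).
P = X @ np.linalg.solve(X.T @ X, X.T)
C = rng.standard_normal((n, n))
A = (np.eye(n) - P) @ (C + C.T) @ (np.eye(n) - P)
assert np.allclose(A @ X, 0)

# Translation invariance: shifting Y by X beta0 leaves the quadratic form unchanged.
beta0 = rng.standard_normal(m)
q1 = Y @ A @ Y
q2 = (Y - X @ beta0) @ A @ (Y - X @ beta0)
assert np.isclose(q1, q2)
```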

Unbiased estimation

Suppose that we constrain 𝐀𝐗 = 𝟎, as argued in the section above. Then, the MINQUE estimator has the following form.

θ̂ = 𝐘ᵀ𝐀𝐘 = (𝐗β + 𝐔ξ)ᵀ𝐀(𝐗β + 𝐔ξ) = ξᵀ𝐔ᵀ𝐀𝐔ξ

To ensure that this estimator is unbiased, the expectation of the estimator, 𝔼[θ̂], must equal the parameter of interest, θ. Below, the expectation of the estimator is decomposed for each component, since the components are uncorrelated with each other. Furthermore, the cyclic property of the trace is used to evaluate the expectation with respect to ξᵢ.

𝔼[θ̂] = 𝔼[ξᵀ𝐔ᵀ𝐀𝐔ξ] = ∑ᵢ₌₁ᵏ 𝔼[ξᵢᵀ𝐔ᵢᵀ𝐀𝐔ᵢξᵢ] = ∑ᵢ₌₁ᵏ σᵢ² Tr[𝐔ᵢᵀ𝐀𝐔ᵢ]

To ensure that this estimator is unbiased, Rao suggested setting ∑ᵢ₌₁ᵏ σᵢ² Tr[𝐔ᵢᵀ𝐀𝐔ᵢ] = ∑ᵢ₌₁ᵏ pᵢσᵢ², which can be accomplished by constraining 𝐀 such that Tr[𝐔ᵢᵀ𝐀𝐔ᵢ] = Tr[𝐀𝐕ᵢ] = pᵢ for every component.[3]
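The role of the constraint 𝐀𝐗 = 𝟎 in this expectation can be illustrated numerically: with such an 𝐀, the mean term βᵀ𝐗ᵀ𝐀𝐗β vanishes, so 𝔼[𝐘ᵀ𝐀𝐘] = Tr[𝐀 𝕍[𝐘]], which decomposes into ∑ᵢ σᵢ² Tr[𝐀𝐕ᵢ]. A sketch with hypothetical designs and variances:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 15, 2
X = rng.standard_normal((n, m))
beta = np.array([0.5, -1.0])
U = [rng.standard_normal((n, 4)), rng.standard_normal((n, 3))]
sigma2 = [1.5, 0.25]
V = [Ui @ Ui.T for Ui in U]                      # V_i = U_i U_i'

# A symmetric A with A X = 0 (hypothetical construction via the annihilator I - P).
P = X @ np.linalg.solve(X.T @ X, X.T)
C = rng.standard_normal((n, n))
A = (np.eye(n) - P) @ (C + C.T) @ (np.eye(n) - P)

# The mean term beta' X' A X beta vanishes because A X = 0 ...
mu = X @ beta
assert np.isclose(mu @ A @ mu, 0)

# ... so E[Y' A Y] = Tr[A V[Y]] = sum_i sigma_i^2 Tr[A V_i].
lhs = np.trace(A @ sum(s2 * Vi for s2, Vi in zip(sigma2, V)))
rhs = sum(s2 * np.trace(A @ Vi) for s2, Vi in zip(sigma2, V))
assert np.isclose(lhs, rhs)
```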

Minimum Norm

Rao argued that if ξ were observed, a "natural" estimator for θ would be the following,[2][3] since 𝔼[ξᵢᵀξᵢ] = cᵢσᵢ². Here, Δ is the diagonal matrix defined below.

(p₁/c₁)ξ₁ᵀξ₁ + ⋯ + (pₖ/cₖ)ξₖᵀξₖ = ξᵀ[diag((p₁/c₁)𝐈, …, (pₖ/cₖ)𝐈)]ξ ≡ ξᵀΔξ

The difference between the proposed estimator and the natural estimator is ξᵀ(𝐔ᵀ𝐀𝐔 − Δ)ξ. This difference can be minimized by minimizing the norm of the matrix 𝐔ᵀ𝐀𝐔 − Δ.
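That the natural estimator is indeed unbiased for θ (𝔼[ξᵀΔξ] = ∑ᵢ pᵢσᵢ²) can be checked by simulation; the component sizes, weights, and variances below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
c = [6, 4]              # hypothetical component sizes c_i
sigma2 = [2.0, 0.5]     # hypothetical variance components
p = [1.0, 1.0]          # weights: theta = sigma_1^2 + sigma_2^2

# xi' Delta xi = sum_i (p_i / c_i) * xi_i' xi_i, averaged over many draws.
reps = 200_000
natural = np.zeros(reps)
for ci, s2, pi in zip(c, sigma2, p):
    xi = np.sqrt(s2) * rng.standard_normal((reps, ci))
    natural += (pi / ci) * np.sum(xi ** 2, axis=1)

theta = sum(pi * s2 for pi, s2 in zip(p, sigma2))
assert abs(natural.mean() - theta) < 0.05   # unbiased: E[xi' Delta xi] = theta
```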

Procedure

Given the constraints and optimization strategy derived from the optimal properties above, the MINQUE estimator θ̂ of θ = ∑ᵢ₌₁ᵏ pᵢσᵢ² is derived by choosing a matrix 𝐀 that minimizes ‖𝐔ᵀ𝐀𝐔 − Δ‖, subject to the constraints

  1. 𝐀𝐗 = 𝟎, and
  2. Tr[𝐀𝐕ᵢ] = pᵢ for all i.

Examples of Estimators

Standard Estimator for Homoscedastic Error

In the Gauss-Markov model, the error variance σ2 is estimated using the following.

s² = (1/(n − m)) (𝐘 − 𝐗β̂)ᵀ(𝐘 − 𝐗β̂)

This estimator is unbiased and can be shown to minimize the Euclidean norm of 𝐔ᵀ𝐀𝐔 − Δ.[1] Thus, the standard estimator for the error variance in the Gauss-Markov model is a MINQUE estimator.
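In this one-component model (𝐕₁ = 𝐈, p₁ = 1) the estimator corresponds to the choice 𝐀 = (𝐈 − 𝐏)/(n − m), where 𝐏 is the projection onto the column space of 𝐗. The sketch below checks the two MINQUE constraints for this 𝐀 and that the resulting quadratic form equals s² (design and parameters are hypothetical).

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 30, 3
X = rng.standard_normal((n, m))
Y = X @ np.array([1.0, 2.0, -1.0]) + rng.standard_normal(n)

# Candidate A for the one-component model (V_1 = I_n, p_1 = 1):
# the residual-maker matrix I - P, scaled by 1 / (n - m).
P = X @ np.linalg.solve(X.T @ X, X.T)
A = (np.eye(n) - P) / (n - m)

# MINQUE constraints: A X = 0 and Tr[A V_1] = Tr[A] = p_1 = 1.
assert np.allclose(A @ X, 0)
assert np.isclose(np.trace(A), 1.0)

# The quadratic form Y' A Y equals the standard unbiased estimator s^2.
beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
resid = Y - X @ beta_hat
s2 = resid @ resid / (n - m)
assert np.isclose(Y @ A @ Y, s2)
```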

Random Variables with Common Mean and Heteroscedastic Error

For random variables Y₁, …, Yₙ with a common mean and different variances σ₁², …, σₙ², the MINQUE estimator of σᵢ² is (n/(n − 2))(Yᵢ − Ȳ)² − s²/(n − 2), where Ȳ = (1/n)∑ᵢ₌₁ⁿ Yᵢ and s² = (1/(n − 1))∑ᵢ₌₁ⁿ (Yᵢ − Ȳ)².[1]
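A quick Monte Carlo check of this estimator's unbiasedness, with a hypothetical common mean and variance pattern:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 8
mu = 3.0                                     # hypothetical common mean
sigma2 = np.array([4.0, 1.0, 0.25, 2.0, 0.5, 1.5, 3.0, 0.75])

reps = 200_000
Y = mu + np.sqrt(sigma2) * rng.standard_normal((reps, n))
Ybar = Y.mean(axis=1, keepdims=True)
s2 = np.sum((Y - Ybar) ** 2, axis=1) / (n - 1)

# MINQUE estimate of each sigma_i^2 in every replication.
est = (n / (n - 2)) * (Y - Ybar) ** 2 - s2[:, None] / (n - 2)

# Unbiasedness: the Monte Carlo average should match the true variances.
assert np.allclose(est.mean(axis=0), sigma2, atol=0.1)
```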

Estimator for Variance Components

Rao proposed a MINQUE estimator for the variance-components model based on minimizing the Euclidean norm.[2] The Euclidean norm ‖·‖₂ is the square root of the sum of squares of all elements in the matrix. When evaluating this norm below, 𝐕 = 𝐕₁ + ⋯ + 𝐕ₖ = 𝐔𝐔ᵀ. Furthermore, using the cyclic property of traces and the constraint Tr[𝐀𝐕ᵢ] = pᵢ, we have Tr[𝐔ᵀ𝐀𝐔Δ] = Tr[𝐀𝐔Δ𝐔ᵀ] = ∑ᵢ₌₁ᵏ (pᵢ/cᵢ) Tr[𝐀𝐕ᵢ] = ∑ᵢ₌₁ᵏ pᵢ²/cᵢ = Tr[ΔΔ].

‖𝐔ᵀ𝐀𝐔 − Δ‖₂² = Tr[(𝐔ᵀ𝐀𝐔 − Δ)ᵀ(𝐔ᵀ𝐀𝐔 − Δ)] = Tr[𝐔ᵀ𝐀𝐔𝐔ᵀ𝐀𝐔] − 2 Tr[𝐔ᵀ𝐀𝐔Δ] + Tr[ΔΔ] = Tr[𝐀𝐕𝐀𝐕] − Tr[ΔΔ]

Note that since Tr[ΔΔ] does not depend on 𝐀, the MINQUE with the Euclidean norm is obtained by identifying the matrix 𝐀 that minimizes Tr[𝐀𝐕𝐀𝐕], subject to the MINQUE constraints discussed above.

Rao showed that the matrix 𝐀 that satisfies this optimization problem is

𝐀 = ∑ᵢ₌₁ᵏ λᵢ𝐑𝐕ᵢ𝐑,

where 𝐑 = 𝐕⁻¹(𝐈 − 𝐏), 𝐏 = 𝐗(𝐗ᵀ𝐕⁻¹𝐗)⁻𝐗ᵀ𝐕⁻¹ is the projection matrix onto the column space of 𝐗, and (·)⁻ denotes the generalized inverse of a matrix.

Therefore, the MINQUE estimator is the following, where the vectors λ and 𝐐 are defined based on the sum.

θ̂ = 𝐘ᵀ𝐀𝐘 = ∑ᵢ₌₁ᵏ λᵢ𝐘ᵀ𝐑𝐕ᵢ𝐑𝐘 ≡ ∑ᵢ₌₁ᵏ λᵢQᵢ ≡ λᵀ𝐐

The vector λ is obtained by using the constraint Tr[𝐀𝐕ᵢ] = pᵢ. That is, the vector represents the solution to the following system of equations, for j ∈ {1, …, k}.

Tr[𝐀𝐕ⱼ] = pⱼ ⟹ Tr[∑ᵢ₌₁ᵏ λᵢ𝐑𝐕ᵢ𝐑𝐕ⱼ] = pⱼ ⟹ ∑ᵢ₌₁ᵏ λᵢ Tr[𝐑𝐕ᵢ𝐑𝐕ⱼ] = pⱼ

This can be written as a matrix product 𝐒λ = 𝐩, where 𝐩 = [p₁ ⋯ pₖ]ᵀ and 𝐒 is the following.

𝐒 = ⎡ Tr[𝐑𝐕₁𝐑𝐕₁] ⋯ Tr[𝐑𝐕ₖ𝐑𝐕₁] ⎤
    ⎢       ⋮       ⋱       ⋮      ⎥
    ⎣ Tr[𝐑𝐕₁𝐑𝐕ₖ] ⋯ Tr[𝐑𝐕ₖ𝐑𝐕ₖ] ⎦

Then, λ = 𝐒⁻𝐩. This implies that the MINQUE is θ̂ = λᵀ𝐐 = 𝐩ᵀ(𝐒⁻)ᵀ𝐐 = 𝐩ᵀ𝐒⁻𝐐. Note that θ = ∑ᵢ₌₁ᵏ pᵢσᵢ² = 𝐩ᵀσ, where σ = [σ₁² ⋯ σₖ²]ᵀ. Therefore, the estimator for the variance components is σ̂ = 𝐒⁻𝐐.
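Putting the pieces together, the closed form σ̂ = 𝐒⁻𝐐 can be sketched in numpy. This is a minimal illustration under the Euclidean-norm setup above (𝐕 = 𝐕₁ + ⋯ + 𝐕ₖ), not a production implementation; as a sanity check, in the one-component Gauss-Markov case it reduces to s² = RSS/(n − m).

```python
import numpy as np

def minque(Y, X, U, p):
    """Sketch of the closed-form MINQUE for variance components
    (Euclidean norm, with V = V_1 + ... + V_k as above)."""
    n = X.shape[0]
    V_parts = [Ui @ Ui.T for Ui in U]                      # V_i = U_i U_i'
    V = sum(V_parts)
    V_inv = np.linalg.pinv(V)
    P = X @ np.linalg.pinv(X.T @ V_inv @ X) @ X.T @ V_inv  # projection onto col(X)
    R = V_inv @ (np.eye(n) - P)

    Q = np.array([Y @ R @ Vi @ R @ Y for Vi in V_parts])   # Q_i = Y' R V_i R Y
    S = np.array([[np.trace(R @ Vi @ R @ Vj) for Vi in V_parts]
                  for Vj in V_parts])                      # S_ij = Tr[R V_i R V_j]
    sigma2_hat = np.linalg.pinv(S) @ Q                     # sigma_hat = S^- Q
    return sigma2_hat, p @ sigma2_hat                      # (sigma_hat, theta_hat)

# Sanity check on the Gauss-Markov case (k = 1, U_1 = I_n): here R = I - P,
# Q_1 = RSS and S_11 = n - m, so MINQUE reduces to s^2 = RSS / (n - m).
rng = np.random.default_rng(6)
n, m = 30, 3
X = rng.standard_normal((n, m))
Y = X @ np.array([1.0, -1.0, 0.5]) + 2.0 * rng.standard_normal(n)

sigma2_hat, theta_hat = minque(Y, X, [np.eye(n)], p=np.array([1.0]))
beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
rss = np.sum((Y - X @ beta_hat) ** 2)
assert np.isclose(sigma2_hat[0], rss / (n - m))
```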

Extensions

MINQUE estimators can be obtained without the invariance criterion, in which case the estimator is only unbiased and minimizes the norm.[2] Such estimators have slightly different constraints on the minimization problem.

The model can be extended to estimate covariance components.[3] In such a model, the random effects of a component are assumed to have a common covariance structure 𝕍[ξᵢ] = Σ. A MINQUE estimator for a mixture of variance and covariance components was also proposed.[3] In this model, 𝕍[ξᵢ] = Σ for i ∈ {1, …, s} and 𝕍[ξᵢ] = σᵢ²𝐈 for i ∈ {s + 1, …, k}.


References

  1. Rao, C.R. (1970). "Estimation of heteroscedastic variances in linear models". Journal of the American Statistical Association 65 (329): 161–172. doi:10.1080/01621459.1970.10481070.
  2. Rao, C.R. (1971). "Estimation of variance and covariance components – MINQUE theory". Journal of Multivariate Analysis 1: 257–275. doi:10.1016/0047-259X(71)90001-7.
  3. Rao, C.R. (1972). "Estimation of variance and covariance components in linear models". Journal of the American Statistical Association 67 (337): 112–115. doi:10.1080/01621459.1972.10481212.