Lehmann–Scheffé theorem

In statistics, the Lehmann–Scheffé theorem is a prominent statement tying together the ideas of completeness, sufficiency, uniqueness, and best unbiased estimation.[1] The theorem states that any estimator that is unbiased for a given unknown quantity and that depends on the data only through a complete, sufficient statistic is the unique best unbiased estimator of that quantity. The theorem is named after Erich Leo Lehmann and Henry Scheffé, who established it in two early papers.[2][3]

If $T$ is a complete sufficient statistic for $\theta$ and $\operatorname{E}[g(T)] = \tau(\theta)$, then $g(T)$ is the uniformly minimum-variance unbiased estimator (UMVUE) of $\tau(\theta)$.

Statement

Let $\vec{X} = X_1, X_2, \dots, X_n$ be a random sample from a distribution that has p.d.f. (or p.m.f. in the discrete case) $f(x : \theta)$, where $\theta \in \Omega$ is a parameter in the parameter space. Suppose $Y = u(\vec{X})$ is a sufficient statistic for $\theta$, and let $\{ f_Y(y : \theta) : \theta \in \Omega \}$ be a complete family. If $\varphi$ satisfies $\operatorname{E}[\varphi(Y)] = \theta$, then $\varphi(Y)$ is the unique MVUE of $\theta$.
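
As a concrete illustration (a standard textbook example, not part of the original article): if $X_1, \dots, X_n$ are i.i.d. $\mathrm{Poisson}(\theta)$, then $Y = \sum_{i=1}^n X_i$ is a complete sufficient statistic for $\theta$, and $\varphi(Y) = Y/n = \bar{X}$ is unbiased, since

$$\operatorname{E}[\varphi(Y)] = \frac{1}{n} \sum_{i=1}^n \operatorname{E}[X_i] = \theta,$$

so by the theorem the sample mean $\bar{X}$ is the unique MVUE of $\theta$.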

Proof

By the Rao–Blackwell theorem, if $Z$ is an unbiased estimator of $\theta$, then $\varphi(Y) := \operatorname{E}[Z \mid Y]$ defines an unbiased estimator of $\theta$ with the property that its variance is not greater than that of $Z$.
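
The variance comparison in this step can be made explicit via the law of total variance (a standard identity, added here to fill in the reasoning):

$$\operatorname{Var}(Z) = \operatorname{E}[\operatorname{Var}(Z \mid Y)] + \operatorname{Var}(\operatorname{E}[Z \mid Y]) \geq \operatorname{Var}(\varphi(Y)).$$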

Now we show that this function is unique. Suppose $W$ is another candidate MVUE of $\theta$. Then again $\psi(Y) := \operatorname{E}[W \mid Y]$ defines an unbiased estimator of $\theta$ with variance not greater than that of $W$. Since both $\varphi(Y)$ and $\psi(Y)$ are unbiased for $\theta$, their difference has mean zero:

$$\operatorname{E}[\varphi(Y) - \psi(Y)] = 0, \quad \forall \theta \in \Omega.$$

Since $\{ f_Y(y : \theta) : \theta \in \Omega \}$ is a complete family,

$$\operatorname{E}[\varphi(Y) - \psi(Y)] = 0 \implies \varphi(y) - \psi(y) = 0 \text{ almost everywhere}, \quad \forall \theta \in \Omega,$$

and therefore $\varphi(Y) = \psi(Y)$ almost surely. Thus $\varphi$ is the unique function of $Y$ that is an unbiased estimator of $\theta$, and by the Rao–Blackwell step above its variance is not greater than that of any other unbiased estimator. We conclude that $\varphi(Y)$ is the MVUE.

Example when using a non-complete minimal sufficient statistic

An example of an improvable Rao–Blackwell improvement, when using a minimal sufficient statistic that is not complete, was provided by Galili and Meilijson in 2016.[4] Let $X_1, \ldots, X_n$ be a random sample from a scale-uniform distribution $X \sim U((1-k)\theta, (1+k)\theta)$ with unknown mean $\operatorname{E}[X] = \theta$ and known design parameter $k \in (0,1)$. In the search for "best" possible unbiased estimators for $\theta$, it is natural to consider $X_1$ as an initial (crude) unbiased estimator for $\theta$ and then try to improve it. Since $X_1$ is not a function of $T = (X_{(1)}, X_{(n)})$, the minimal sufficient statistic for $\theta$ (where $X_{(1)} = \min_i X_i$ and $X_{(n)} = \max_i X_i$), it may be improved using the Rao–Blackwell theorem as follows:

$$\hat{\theta}_{RB} = \operatorname{E}_\theta[X_1 \mid X_{(1)}, X_{(n)}] = \frac{X_{(1)} + X_{(n)}}{2}.$$
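
The closed form can be checked directly (a short verification, not spelled out in the original): given $(X_{(1)}, X_{(n)})$, the observation $X_1$ is the minimum or the maximum with probability $1/n$ each, and otherwise is uniformly distributed on $(X_{(1)}, X_{(n)})$ with conditional mean equal to the midpoint, so

$$\operatorname{E}[X_1 \mid X_{(1)}, X_{(n)}] = \frac{1}{n} X_{(1)} + \frac{1}{n} X_{(n)} + \frac{n-2}{n} \cdot \frac{X_{(1)} + X_{(n)}}{2} = \frac{X_{(1)} + X_{(n)}}{2}.$$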

However, the following unbiased estimator can be shown to have lower variance:

$$\hat{\theta}_{LV} = \frac{1}{k^2 \frac{n-1}{n+1} + 1} \cdot \frac{(1-k) X_{(1)} + (1+k) X_{(n)}}{2}.$$

In fact, it can be improved even further by using the following estimator:

$$\hat{\theta}_{\text{BAYES}} = \frac{n+1}{n} \left[ 1 - \frac{\frac{X_{(1)}(1+k)}{X_{(n)}(1-k)} - 1}{\left( \frac{X_{(1)}(1+k)}{X_{(n)}(1-k)} \right)^{n+1} - 1} \right] \frac{X_{(n)}}{1+k}.$$
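
The variance ordering of these estimators can be checked numerically. The following Monte Carlo sketch (illustrative only; the parameter values, sample size, and seed are arbitrary choices, not from the original article) implements the three formulas above alongside the crude estimator $X_1$ and compares empirical means and variances:

    import numpy as np

    rng = np.random.default_rng(0)
    theta, k, n = 10.0, 0.5, 20   # true mean, design parameter, sample size
    reps = 200_000                # number of simulated samples

    # Draw reps samples of size n from U((1-k)*theta, (1+k)*theta)
    X = rng.uniform((1 - k) * theta, (1 + k) * theta, size=(reps, n))
    x1 = X[:, 0]                           # crude unbiased estimator X_1
    lo, hi = X.min(axis=1), X.max(axis=1)  # order statistics X_(1), X_(n)

    rb = (lo + hi) / 2                     # Rao-Blackwell improvement
    lv = ((1 - k) * lo + (1 + k) * hi) / 2 / (k**2 * (n - 1) / (n + 1) + 1)
    r = lo * (1 + k) / (hi * (1 - k))      # ratio appearing in the Bayes formula
    bayes = (n + 1) / n * (1 - (r - 1) / (r ** (n + 1) - 1)) * hi / (1 + k)

    for name, est in [("X1", x1), ("RB", rb), ("LV", lv), ("BAYES", bayes)]:
        print(f"{name:6s} mean = {est.mean():.4f}   var = {est.var():.5f}")

Each empirical mean should be close to $\theta$, with the variance decreasing from $X_1$ to $\hat{\theta}_{RB}$ to $\hat{\theta}_{LV}$ to $\hat{\theta}_{\text{BAYES}}$.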

The model is a scale model. Optimal equivariant estimators can then be derived for loss functions that are invariant.[5]

References

  1. Casella, George; Berger, Roger L. (2001). Statistical Inference (2nd ed.). Duxbury Press. p. 369. ISBN 978-0-534-24312-8. 
  2. Lehmann, E. L.; Scheffé, H. (1950). "Completeness, similar regions, and unbiased estimation. I.". Sankhyā 10 (4): 305–340. doi:10.1007/978-1-4614-1412-4_23. 
  3. Lehmann, E. L.; Scheffé, H. (1955). "Completeness, similar regions, and unbiased estimation. II.". Sankhyā 15 (3): 219–236. doi:10.1007/978-1-4614-1412-4_24. 
  4. Galili, Tal; Meilijson, Isaac (2016). "An Example of an Improvable Rao–Blackwell Improvement, Inefficient Maximum Likelihood Estimator, and Unbiased Generalized Bayes Estimator". The American Statistician 70 (1): 108–113. doi:10.1080/00031305.2015.1100683. PMID 27499547. 
  5. Taraldsen, Gunnar (2020). "Micha Mandel (2020), "The Scaled Uniform Model Revisited," The American Statistician, 74:1, 98–100: Comment". The American Statistician 74 (3): 315. doi:10.1080/00031305.2020.1769727.