F-distribution



Fisher–Snedecor
Probability density function and cumulative distribution function (plots)

Parameters: d_1, d_2 > 0 degrees of freedom
Support: x \in (0, +\infty) if d_1 = 1, otherwise x \in [0, +\infty)
PDF: \frac{\sqrt{\dfrac{(d_1 x)^{d_1}\, d_2^{d_2}}{(d_1 x + d_2)^{d_1 + d_2}}}}{x\, B\!\left(\tfrac{d_1}{2}, \tfrac{d_2}{2}\right)}
CDF: I_{d_1 x/(d_1 x + d_2)}\!\left(\tfrac{d_1}{2}, \tfrac{d_2}{2}\right)
Mean: \frac{d_2}{d_2 - 2}, for d_2 > 2
Mode: \frac{d_1 - 2}{d_1} \cdot \frac{d_2}{d_2 + 2}, for d_1 > 2
Variance: \frac{2 d_2^2 (d_1 + d_2 - 2)}{d_1 (d_2 - 2)^2 (d_2 - 4)}, for d_2 > 4
Skewness: \frac{(2 d_1 + d_2 - 2)\sqrt{8(d_2 - 4)}}{(d_2 - 6)\sqrt{d_1 (d_1 + d_2 - 2)}}, for d_2 > 6
Kurtosis: see text
Entropy: \ln \Gamma\!\left(\tfrac{d_1}{2}\right) + \ln \Gamma\!\left(\tfrac{d_2}{2}\right) - \ln \Gamma\!\left(\tfrac{d_1 + d_2}{2}\right) + \left(1 - \tfrac{d_1}{2}\right)\psi\!\left(1 + \tfrac{d_1}{2}\right) - \left(1 + \tfrac{d_2}{2}\right)\psi\!\left(1 + \tfrac{d_2}{2}\right) + \tfrac{d_1 + d_2}{2}\,\psi\!\left(\tfrac{d_1 + d_2}{2}\right) + \ln \tfrac{d_1}{d_2}[1]
MGF: does not exist; raw moments defined in text and in [2][3]
CF: see text

In probability theory and statistics, the F-distribution or F-ratio, also known as Snedecor's F distribution or the Fisher–Snedecor distribution (after Ronald Fisher and George W. Snedecor), is a continuous probability distribution that arises frequently as the null distribution of a test statistic, most notably in the analysis of variance (ANOVA) and other F-tests.[2][3][4][5]

Definition

The F-distribution with d1 and d2 degrees of freedom is the distribution of

X = \frac{S_1/d_1}{S_2/d_2}

where S1 and S2 are independent random variables with chi-square distributions with respective degrees of freedom d1 and d2.
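As a quick numerical illustration of this definition, the following sketch (assuming NumPy and SciPy are available; the degrees of freedom and seed are arbitrary) draws independent chi-square variates and checks with a Kolmogorov–Smirnov test that their scaled ratio is consistent with scipy.stats.f:

```python
# Minimal simulation sketch (illustrative parameters, not from the article).
import numpy as np
from scipy import stats

d1, d2 = 5, 12                          # illustrative degrees of freedom
rng = np.random.default_rng(42)

s1 = rng.chisquare(d1, size=100_000)    # S1 ~ chi^2(d1)
s2 = rng.chisquare(d2, size=100_000)    # S2 ~ chi^2(d2), independent of S1

x = (s1 / d1) / (s2 / d2)               # should follow F(d1, d2)

ks = stats.kstest(x, "f", args=(d1, d2))
print(ks.pvalue)                        # typically large: no evidence against F(d1, d2)
```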

It can be shown to follow that the probability density function (pdf) for X is given by

f(x; d_1, d_2) = \frac{\sqrt{\dfrac{(d_1 x)^{d_1}\, d_2^{d_2}}{(d_1 x + d_2)^{d_1 + d_2}}}}{x\, B\!\left(\tfrac{d_1}{2}, \tfrac{d_2}{2}\right)} = \frac{1}{B\!\left(\tfrac{d_1}{2}, \tfrac{d_2}{2}\right)} \left(\frac{d_1}{d_2}\right)^{d_1/2} x^{d_1/2 - 1} \left(1 + \frac{d_1}{d_2}\, x\right)^{-(d_1 + d_2)/2}

for real x > 0. Here B is the beta function. In many applications, the parameters d1 and d2 are positive integers, but the distribution is well-defined for positive real values of these parameters.
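For a quick numerical check, the closed form above can be evaluated directly and compared with scipy.stats.f.pdf. This is only a sketch assuming SciPy is available; the function name f_pdf and the parameter values are illustrative.

```python
# Sketch: evaluate the F density formula directly and compare with scipy.
import numpy as np
from scipy import stats
from scipy.special import beta

def f_pdf(x, d1, d2):
    """Density of F(d1, d2) via the closed form with the beta function B."""
    return ((d1 / d2) ** (d1 / 2) * x ** (d1 / 2 - 1)
            * (1 + d1 * x / d2) ** (-(d1 + d2) / 2) / beta(d1 / 2, d2 / 2))

x = np.linspace(0.05, 6, 200)
d1, d2 = 4, 9
print(np.allclose(f_pdf(x, d1, d2), stats.f.pdf(x, d1, d2)))  # True
```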

The cumulative distribution function is

F(x; d_1, d_2) = I_{d_1 x/(d_1 x + d_2)}\!\left(\tfrac{d_1}{2}, \tfrac{d_2}{2}\right),

where I is the regularized incomplete beta function.
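A short sketch of this representation (SciPy assumed; parameter choices are illustrative) uses scipy.special.betainc, which implements the regularized incomplete beta function, and compares the result with scipy.stats.f.cdf:

```python
# Sketch: CDF of F(d1, d2) via the regularized incomplete beta function.
import numpy as np
from scipy import stats
from scipy.special import betainc

def f_cdf(x, d1, d2):
    """CDF of F(d1, d2): I_{d1 x / (d1 x + d2)}(d1/2, d2/2)."""
    z = d1 * x / (d1 * x + d2)
    return betainc(d1 / 2, d2 / 2, z)

x = np.linspace(0.0, 8, 200)
d1, d2 = 3, 14
print(np.allclose(f_cdf(x, d1, d2), stats.f.cdf(x, d1, d2)))  # True
```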

The expectation, variance, and other details about the F(d1, d2) distribution are given in the sidebox; for d2 > 8, the excess kurtosis is

\gamma_2 = 12\,\frac{d_1 (5 d_2 - 22)(d_1 + d_2 - 2) + (d_2 - 4)(d_2 - 2)^2}{d_1 (d_2 - 6)(d_2 - 8)(d_1 + d_2 - 2)}.
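The sidebox expressions and the excess-kurtosis formula above can be checked against scipy.stats.f.stats, which reports Fisher (excess) kurtosis. This is a sketch assuming SciPy; the degrees of freedom are arbitrary, subject to d2 > 8.

```python
# Sketch: compare closed-form mean, variance and excess kurtosis with scipy.
from scipy import stats

d1, d2 = 6, 20                      # need d2 > 8 for the kurtosis to exist

mean = d2 / (d2 - 2)
var = 2 * d2**2 * (d1 + d2 - 2) / (d1 * (d2 - 2)**2 * (d2 - 4))
kurt = 12 * (d1 * (5 * d2 - 22) * (d1 + d2 - 2) + (d2 - 4) * (d2 - 2)**2) \
       / (d1 * (d2 - 6) * (d2 - 8) * (d1 + d2 - 2))

m, v, s, k = stats.f.stats(d1, d2, moments="mvsk")
print(mean, float(m))   # agree up to floating-point error
print(var, float(v))
print(kurt, float(k))
```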

The k-th moment of an F(d1, d2) distribution exists and is finite only when 2k < d2 and it is equal to

\mu_X(k) = \left(\frac{d_2}{d_1}\right)^k \frac{\Gamma\!\left(\tfrac{d_1}{2} + k\right)}{\Gamma\!\left(\tfrac{d_1}{2}\right)}\, \frac{\Gamma\!\left(\tfrac{d_2}{2} - k\right)}{\Gamma\!\left(\tfrac{d_2}{2}\right)}.[6]
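A sketch of this moment formula (SciPy assumed; the values of d1, d2 and k are illustrative and satisfy 2k < d2) can be compared with the moment computed numerically by scipy.stats.f.moment:

```python
# Sketch: k-th raw moment of F(d1, d2) from the gamma-function formula.
from scipy import stats
from scipy.special import gamma

d1, d2, k = 7, 15, 3                    # 2k = 6 < d2, so the moment is finite

moment = ((d2 / d1) ** k
          * gamma(d1 / 2 + k) / gamma(d1 / 2)
          * gamma(d2 / 2 - k) / gamma(d2 / 2))

print(moment, stats.f.moment(k, d1, d2))  # the two values agree
```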

The F-distribution is a particular parametrization of the beta prime distribution, which is also called the beta distribution of the second kind.

The characteristic function is listed incorrectly in many standard references (e.g.,[3]). The correct expression [7] is

\varphi^{F}_{d_1, d_2}(s) = \frac{\Gamma\!\left(\frac{d_1 + d_2}{2}\right)}{\Gamma\!\left(\tfrac{d_2}{2}\right)}\, U\!\left(\frac{d_1}{2},\, 1 - \frac{d_2}{2},\, -\frac{d_2}{d_1}\, i s\right)

where U(a, b, z) is the confluent hypergeometric function of the second kind.
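A numerical sketch of this expression, assuming the mpmath and NumPy packages, evaluates U at a complex argument and compares the result with the empirical characteristic function of simulated F(d1, d2) draws. The parameter values are illustrative.

```python
# Sketch: Phillips (1982) characteristic function vs. an empirical estimate.
import mpmath as mp
import numpy as np

d1, d2, s = 5, 8, 0.7

# phi(s) = Gamma((d1+d2)/2) / Gamma(d2/2) * U(d1/2, 1 - d2/2, -(d2/d1)*i*s)
phi = (mp.gamma((d1 + d2) / 2) / mp.gamma(d2 / 2)
       * mp.hyperu(d1 / 2, 1 - d2 / 2, -1j * (d2 / d1) * s))

# Empirical characteristic function: mean of exp(i*s*X) over Monte Carlo draws
rng = np.random.default_rng(0)
x = rng.f(d1, d2, size=1_000_000)
phi_mc = np.mean(np.exp(1j * s * x))

print(complex(phi), phi_mc)  # should agree to a few decimal places
```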

Characterization

A random variate of the F-distribution with parameters d1 and d2 arises as the ratio of two appropriately scaled chi-squared variates:[8]

X = \frac{U_1/d_1}{U_2/d_2}

where

  • U_1 and U_2 have chi-squared distributions with d_1 and d_2 degrees of freedom respectively, and
  • U_1 and U_2 are independent.

In instances where the F-distribution is used, for example in the analysis of variance, independence of U1 and U2 might be demonstrated by applying Cochran's theorem.

Equivalently, the random variable of the F-distribution may also be written

X = \frac{s_1^2}{\sigma_1^2} \div \frac{s_2^2}{\sigma_2^2},

where s_1^2 = \frac{S_1^2}{d_1} and s_2^2 = \frac{S_2^2}{d_2}, S_1^2 is the sum of squares of d_1 random variables from the normal distribution N(0, \sigma_1^2), and S_2^2 is the sum of squares of d_2 random variables from the normal distribution N(0, \sigma_2^2).

In a frequentist context, a scaled F-distribution therefore gives the probability p(s_1^2/s_2^2 \mid \sigma_1^2, \sigma_2^2), with the F-distribution itself, without any scaling, applying where \sigma_1^2 is taken equal to \sigma_2^2. This is the context in which the F-distribution most generally appears in F-tests: where the null hypothesis is that two independent normal variances are equal, and the observed sums of some appropriately selected squares are then examined to see whether their ratio is significantly incompatible with this null hypothesis.
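As a minimal illustration of this use (a sketch assuming NumPy and SciPy; the sample sizes, standard deviations, and seed are arbitrary), one can compare two sample variances against the F(d1, d2) null distribution:

```python
# Sketch: F-test of H0: sigma_1^2 = sigma_2^2 from two normal samples.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
a = rng.normal(0.0, 1.0, size=25)      # sample 1
b = rng.normal(0.0, 1.2, size=30)      # sample 2, slightly larger spread

s1, s2 = np.var(a, ddof=1), np.var(b, ddof=1)
F = s1 / s2                            # ratio of sample variances
d1, d2 = len(a) - 1, len(b) - 1        # degrees of freedom

# Two-sided p-value under H0, where F ~ F(d1, d2)
p = 2 * min(stats.f.cdf(F, d1, d2), stats.f.sf(F, d1, d2))
print(F, p)
```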

The quantity X has the same distribution in Bayesian statistics, if an uninformative rescaling-invariant Jeffreys prior is taken for the prior probabilities of \sigma_1^2 and \sigma_2^2.[9] In this context, a scaled F-distribution thus gives the posterior probability p(\sigma_2^2/\sigma_1^2 \mid s_1^2, s_2^2), where the observed sums s_1^2 and s_2^2 are now taken as known.

Properties and related distributions

  • If X \sim \chi^2_{d_1} and Y \sim \chi^2_{d_2} (chi-squared distributions) are independent, then \frac{X/d_1}{Y/d_2} \sim F(d_1, d_2).
  • If X_k \sim \Gamma(\alpha_k, \beta_k) (gamma distributions) are independent, then \frac{\alpha_2 \beta_1 X_1}{\alpha_1 \beta_2 X_2} \sim F(2\alpha_1, 2\alpha_2).
  • If X \sim \operatorname{Beta}(d_1/2, d_2/2) (beta distribution) then \frac{d_2 X}{d_1 (1 - X)} \sim F(d_1, d_2).
  • Equivalently, if X \sim F(d_1, d_2), then \frac{d_1 X / d_2}{1 + d_1 X / d_2} \sim \operatorname{Beta}(d_1/2, d_2/2).
  • If X \sim F(d_1, d_2), then \frac{d_1}{d_2} X has a beta prime distribution: \frac{d_1}{d_2} X \sim \beta'\!\left(\tfrac{d_1}{2}, \tfrac{d_2}{2}\right).
  • If X \sim F(d_1, d_2) then Y = \lim_{d_2 \to \infty} d_1 X has the chi-squared distribution \chi^2_{d_1}.
  • F(d_1, d_2) is equivalent to the scaled Hotelling's T-squared distribution \frac{d_2}{d_1 (d_1 + d_2 - 1)} T^2(d_1, d_1 + d_2 - 1).
  • If X \sim F(d_1, d_2) then X^{-1} \sim F(d_2, d_1).
  • If X \sim t(n) (Student's t-distribution) then X^2 \sim F(1, n) and X^{-2} \sim F(n, 1).
  • The F-distribution is a special case of the type 6 Pearson distribution.
  • If X and Y are independent, with X, Y \sim \operatorname{Laplace}(\mu, b), then \frac{|X - \mu|}{|Y - \mu|} \sim F(2, 2).
  • If X \sim F(n, m) then \frac{\log X}{2} \sim \operatorname{FisherZ}(n, m) (Fisher's z-distribution).
  • The noncentral F-distribution simplifies to the F-distribution if \lambda = 0.
  • The doubly noncentral F-distribution simplifies to the F-distribution if \lambda_1 = \lambda_2 = 0.
  • If Q_X(p) is the quantile p for X \sim F(d_1, d_2) and Q_Y(1 - p) is the quantile 1 - p for Y \sim F(d_2, d_1), then Q_X(p) = \frac{1}{Q_Y(1 - p)} (see the sketch after this list).
  • The F-distribution is an instance of ratio distributions.
  • The W-distribution[10] is a unique parametrization of the F-distribution.
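A brief numerical sketch (SciPy assumed; parameter values are illustrative) of three of these relations, namely the quantile reciprocity, the reciprocal property, and the connection with Student's t:

```python
# Sketch: spot-check three relations between F and other distributions.
import numpy as np
from scipy import stats

d1, d2, n, p = 4, 11, 9, 0.9           # illustrative parameters

# Q_X(p) = 1 / Q_Y(1 - p) for X ~ F(d1, d2), Y ~ F(d2, d1)
print(np.isclose(stats.f.ppf(p, d1, d2), 1 / stats.f.ppf(1 - p, d2, d1)))

# If X ~ F(d1, d2) then 1/X ~ F(d2, d1): compare CDFs at a test point x
x = 1.7
print(np.isclose(stats.f.cdf(x, d1, d2), stats.f.sf(1 / x, d2, d1)))

# If X ~ t(n) then X^2 ~ F(1, n): P(X^2 <= t^2) = P(-t <= X <= t)
t = 1.3
print(np.isclose(stats.t.cdf(t, n) - stats.t.cdf(-t, n), stats.f.cdf(t**2, 1, n)))
```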


References

  1. Lazo, A.V.; Rathie, P. (1978). "On the entropy of continuous probability distributions". IEEE Transactions on Information Theory (IEEE) 24 (1): 120–122. doi:10.1109/tit.1978.1055832. 
  2. Johnson, Norman Lloyd; Samuel Kotz; N. Balakrishnan (1995). Continuous Univariate Distributions, Volume 2 (Second Edition, Section 27). Wiley. ISBN 0-471-58494-0.
  3. Abramowitz, Milton; Stegun, Irene Ann, eds (1983). "Chapter 26". Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Applied Mathematics Series. 55 (Ninth reprint with additional corrections of tenth original printing with corrections (December 1972); first ed.). Washington D.C.; New York: United States Department of Commerce, National Bureau of Standards; Dover Publications. pp. 946. LCCN 65-12253. ISBN 978-0-486-61272-0. http://www.math.sfu.ca/~cbm/aands/page_946.htm.
  4. NIST (2006). Engineering Statistics Handbook – F Distribution
  5. Mood, Alexander; Franklin A. Graybill; Duane C. Boes (1974). Introduction to the Theory of Statistics (Third ed.). McGraw-Hill. pp. 246–249. ISBN 0-07-042864-6. 
  6. Taboga, Marco. "The F distribution". http://www.statlect.com/F_distribution.htm. 
  7. Phillips, P. C. B. (1982) "The true characteristic function of the F distribution," Biometrika, 69: 261–264 JSTOR 2335882
  8. M.H. DeGroot (1986), Probability and Statistics (2nd Ed), Addison-Wesley. ISBN:0-201-11366-X, p. 500
  9. G. E. P. Box and G. C. Tiao (1973), Bayesian Inference in Statistical Analysis, Addison-Wesley. p. 110
  10. Mahmoudi, Amin; Javed, Saad Ahmed (October 2022). "Probabilistic Approach to Multi-Stage Supplier Evaluation: Confidence Level Measurement in Ordinal Priority Approach" (in en). Group Decision and Negotiation 31 (5): 1051–1096. doi:10.1007/s10726-022-09790-1. ISSN 0926-2644. PMID 36042813. 
  11. Sun, Jingchao; Kong, Maiying; Pal, Subhadip (22 June 2021). "The Modified-Half-Normal distribution: Properties and an efficient sampling scheme". Communications in Statistics - Theory and Methods 52 (5): 1591–1613. doi:10.1080/03610926.2021.1934700. ISSN 0361-0926. https://figshare.com/articles/journal_contribution/The_Modified-Half-Normal_distribution_Properties_and_an_efficient_sampling_scheme/14825266/1/files/28535884.pdf.