In statistics, completeness is a property of a statistic in relation to a model for a set of observed data. In essence, it ensures that the distributions corresponding to different values of the parameters are distinct. It is closely related to the idea of identifiability, but in statistical theory it is often found as a condition imposed on a sufficient statistic from which certain optimality results are derived.
Definition
Consider a random variable X whose probability distribution belongs to a parametric model Pθ parametrized by θ. Say T is a statistic; that is, the composition of a measurable function with a random sample X1, ..., Xn. The statistic T is said to be complete for the distribution of X if, for every measurable function g,

$$\operatorname{E}_\theta[g(T)] = 0 \text{ for all } \theta \quad\Longrightarrow\quad P_\theta\bigl(g(T) = 0\bigr) = 1 \text{ for all } \theta.$$

The statistic T is said to be boundedly complete for the distribution of X if this implication holds for every measurable function g that is also bounded.
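To see what a failure of completeness looks like, the short numerical sketch below may help; the two-observation normal setup in it is an illustrative choice made here, not part of the article. For X1, X2 i.i.d. N(θ, 1), the statistic T = (X1, X2) is not complete, because g(T) = X1 − X2 has expectation 0 for every θ even though g(T) is not 0 almost surely.

```python
# Numerical sketch (illustrative setup, not from the article): a statistic
# that is NOT complete. For X1, X2 iid N(theta, 1), take T = (X1, X2) and
# g(T) = X1 - X2. Then E_theta[g(T)] = 0 for every theta, yet g(T) is
# almost never equal to 0, so the implication in the definition fails.
import numpy as np

rng = np.random.default_rng(42)
reps = 200_000

for theta in (-3.0, 0.0, 5.0):
    x1 = rng.normal(theta, 1.0, size=reps)
    x2 = rng.normal(theta, 1.0, size=reps)
    g = x1 - x2
    # Mean of g(T) is ~0 for every theta, but P(g(T) = 0) is essentially 0.
    print(theta, g.mean(), np.mean(g == 0.0))
```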
Example 1: Bernoulli model
The Bernoulli model admits a complete statistic. Let X be a random sample of size n such that each Xi has the same Bernoulli distribution with parameter p. Let T be the number of 1s observed in the sample, that is, T = X1 + ··· + Xn. T is a statistic of X which has a binomial distribution with parameters (n, p). If the parameter space for p is (0, 1), then T is a complete statistic. To see this, note that

$$\operatorname{E}_p[g(T)] = \sum_{t=0}^{n} g(t)\binom{n}{t} p^{t}(1-p)^{n-t}.$$

Observe also that neither p nor 1 − p can be 0. Hence E_p[g(T)] = 0 if and only if

$$\sum_{t=0}^{n} g(t)\binom{n}{t}\left(\frac{p}{1-p}\right)^{t} = 0.$$

On denoting p/(1 − p) by r, one gets

$$\sum_{t=0}^{n} g(t)\binom{n}{t} r^{t} = 0.$$

First, observe that the range of r is the positive reals. Also, the left-hand side is a polynomial in r and, therefore, can be identically 0 only if all its coefficients are 0, that is, g(t) = 0 for all t.

It is important to notice that the result that all coefficients must be 0 was obtained because of the range of r. Had the parameter space been finite, with a number of elements less than or equal to n, it might be possible to solve the linear equations in g(t) obtained by substituting the values of r and get solutions different from 0. For example, if n = 1 and the parameter space is the single point {0.5}, a single observation and a single parameter value, T is not complete. Observe that, with the definition

$$g(t) = 2(t - 0.5) = 2t - 1,$$

then E_p[g(T)] = 0 although g(t) is not 0 for t = 0 nor for t = 1.
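The algebraic argument above can be checked symbolically. The SymPy sketch below (the choice n = 3 and the variable names are assumptions made for illustration) confirms that the polynomial in r can vanish identically only if every g(t) is 0, and also reproduces the n = 1, parameter-space-{0.5} counterexample.

```python
# SymPy sketch of the completeness argument for the Bernoulli model
# (n = 3 is an arbitrary illustrative choice).
import sympy as sp

n = 3
g = sp.symbols(f"g0:{n + 1}")               # unknown values g(0), ..., g(n)
r = sp.symbols("r", positive=True)          # r = p/(1-p), ranges over (0, oo)

# The polynomial  sum_t g(t) * C(n, t) * r^t  from the text.
poly = sum(g[t] * sp.binomial(n, t) * r**t for t in range(n + 1))

# Requiring it to vanish for every r > 0 forces each coefficient g(t)*C(n,t),
# and hence each g(t), to be 0.
coeffs = sp.Poly(poly, r).all_coeffs()
print(sp.solve(coeffs, list(g), dict=True))  # [{g0: 0, g1: 0, g2: 0, g3: 0}]

# Counterexample from the text: n = 1 and parameter space {0.5}.
# g(t) = 2t - 1 is nonzero at t = 0 and t = 1, yet E_{0.5}[g(T)] = 0,
# since each term has probability weight 1/2 when p = 0.5.
E_half = sum((2 * t - 1) * sp.binomial(1, t) * sp.Rational(1, 2) for t in range(2))
print(E_half)                                # 0
```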
For some parametric families, a complete sufficient statistic does not exist. A minimal sufficient statistic also need not exist; however, under mild conditions a minimal sufficient statistic does always exist. In particular, these conditions always hold if the random variables are all discrete or are all continuous.
Importance of completeness
The notion of completeness has many applications in statistics, particularly in the following two theorems of mathematical statistics.
Completeness occurs in the Lehmann–Scheffé theorem, which states that if a statistic is unbiased, complete, and sufficient for some parameter θ, then it is the best mean-unbiased estimator for θ. In other words, this statistic has a smaller expected loss for any convex loss function; in many practical applications with the squared loss function, it has a smaller mean squared error than any other estimator with the same expected value. Examples exist showing that, when the minimal sufficient statistic is not complete, several alternative statistics may exist for unbiased estimation of θ, some of them having lower variance than others. See also minimum-variance unbiased estimator.
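As a concrete illustration in the Bernoulli model of Example 1, the simulation sketch below (the values n = 20 and p = 0.3 are assumptions chosen for illustration, not from the article) compares the sample mean T/n, a function of the complete sufficient statistic T, with another unbiased estimator that ignores most of the data.

```python
# Simulation sketch of the Lehmann–Scheffé conclusion for the Bernoulli model
# (n = 20, p = 0.3 are illustrative assumptions): the sample mean T/n is
# unbiased for p and has smaller variance than the unbiased estimator X_1.
import numpy as np

rng = np.random.default_rng(0)
n, p, reps = 20, 0.3, 100_000

samples = rng.binomial(1, p, size=(reps, n))
umvue = samples.mean(axis=1)   # T/n, based on the complete sufficient statistic
naive = samples[:, 0]          # X_1, unbiased but uses a single observation

print("bias:", umvue.mean() - p, naive.mean() - p)   # both approximately 0
print("var :", umvue.var(), naive.var())             # ~p(1-p)/n  vs  ~p(1-p)
```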
Bounded completeness occurs in Basu's theorem, which states that a statistic that is both boundedly complete and sufficient is independent of any ancillary statistic.
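A standard illustration (the normal setup with known variance and the particular values below are assumptions chosen here, not taken from the article): for X1, ..., Xn i.i.d. N(μ, 1), the sample mean is a boundedly complete sufficient statistic for μ, while the sample variance is ancillary because its distribution does not depend on μ; Basu's theorem then says the two are independent. The sketch checks that their sample correlation is near 0, as independence requires.

```python
# Simulation sketch of Basu's theorem for X_1,...,X_n iid N(mu, 1)
# (n = 10, mu = 2.5 are illustrative assumptions).
import numpy as np

rng = np.random.default_rng(1)
n, mu, reps = 10, 2.5, 200_000

x = rng.normal(mu, 1.0, size=(reps, n))
xbar = x.mean(axis=1)            # boundedly complete sufficient statistic for mu
s2 = x.var(axis=1, ddof=1)       # ancillary statistic (distribution free of mu)

# Independence implies zero correlation; the sample correlation is near 0.
print(np.corrcoef(xbar, s2)[0, 1])
```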
Bounded completeness also occurs in Bahadur's theorem. In the case where there exists at least one minimal sufficient statistic, a statistic which is sufficient and boundedly complete is necessarily minimal sufficient.