Chebyshev's inequality


In probability theory, Chebyshev's inequality guarantees that, for a wide class of probability distributions, no more than a certain fraction of values can be more than a certain distance from the mean. Specifically, no more than 1/k² of the distribution's values can be more than k standard deviations away from the mean. In statistics, the rule is often called Chebyshev's theorem, concerning the range of standard deviations around the mean. The inequality has great utility because it can be applied to any probability distribution in which the mean and variance are defined. For example, it can be used to prove the weak law of large numbers.
In practical usage, in contrast to the 68–95–99.7 rule, which applies to normal distributions, Chebyshev's inequality is weaker, stating that a minimum of just 75% of values must lie within two standard deviations of the mean and 89% within three standard deviations.
The term Chebyshev's inequality may also refer to Markov's inequality, especially in the context of analysis. They are closely related, and some authors refer to Markov's inequality as "Chebyshev's First Inequality," and the similar one referred to on this page as "Chebyshev's Second Inequality."

History

The theorem is named after Russian mathematician Pafnuty Chebyshev, although it was first formulated by his friend and colleague Irénée-Jules Bienaymé. The theorem was first stated without proof by Bienaymé in 1853 and later proved by Chebyshev in 1867. His student Andrey Markov provided another proof in his 1884 Ph.D. thesis.

Statement

Chebyshev's inequality is usually stated for random variables, but can be generalized to a statement about measure spaces.

Probabilistic statement

Let X be a random variable with finite expected value μ and finite non-zero variance σ². Then for any real number k > 0,

Pr(|X − μ| ≥ kσ) ≤ 1/k².

Only the case k > 1 is useful. When k ≤ 1 the right-hand side is at least one and the inequality is trivial, as all probabilities are ≤ 1.
As an example, using k = √2 shows that the probability that values lie outside the interval (μ − √2 σ, μ + √2 σ) does not exceed 1/2.
Because it can be applied to completely arbitrary distributions provided they have a known finite mean and variance, the inequality generally gives a poor bound compared to what might be deduced if more aspects are known about the distribution involved.
k      Min. % within k standard deviations of mean    Max. % beyond k standard deviations from mean
1      0%                                              100%
√2     50%                                             50%
1.5    55.56%                                          44.44%
2      75%                                             25%
2√2    87.5%                                           12.5%
3      88.8889%                                        11.1111%
4      93.75%                                          6.25%
5      96%                                             4%
6      97.2222%                                        2.7778%
7      97.9592%                                        2.0408%
8      98.4375%                                        1.5625%
9      98.7654%                                        1.2346%
10     99%                                             1%
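
The table entries follow directly from the inequality: the maximum fraction beyond k standard deviations is 1/k² and the minimum fraction within is 1 − 1/k². A minimal Python sketch (illustrative only) reproduces the values above:

    # Reproduce the Chebyshev table: the minimum fraction within k standard
    # deviations is 1 - 1/k^2; the maximum fraction beyond is 1/k^2 (capped at 1).
    from math import sqrt

    for k in [1, sqrt(2), 1.5, 2, 2 * sqrt(2), 3, 4, 5, 6, 7, 8, 9, 10]:
        beyond = min(1.0, 1.0 / k**2)   # the bound is vacuous for k <= 1
        within = 1.0 - beyond
        print(f"k = {k:6.4f}: at least {within:8.4%} within, at most {beyond:8.4%} beyond")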

Measure-theoretic statement

Let (X, Σ, μ) be a measure space, and let f be an extended real-valued measurable function defined on X. Then for any real number t > 0 and 0 < p < ∞,

μ({x ∈ X : |f(x)| ≥ t}) ≤ (1/t^p) ∫_X |f|^p dμ.

More generally, if g is an extended real-valued measurable function, nonnegative and nondecreasing on the range of f, with g(t) ≠ 0, then

μ({x ∈ X : f(x) ≥ t}) ≤ (1/g(t)) ∫_X (g ∘ f) dμ.

The previous statement then follows by defining g(s) as |s|^p if s ≥ t and as 0 otherwise, and applying this to |f| in place of f.

Example

Suppose we randomly select a journal article from a source with an average of 1000 words per article, with a standard deviation of 200 words. We can then infer that the probability that it has between 600 and 1400 words (i.e., within k = 2 standard deviations of the mean) must be at least 75%, because there is no more than a 1/k² = 1/4 chance of it being outside that range, by Chebyshev's inequality. But if we additionally know that the distribution is normal, we can say there is a 75% chance the word count is between 770 and 1230.
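
A minimal sketch of the arithmetic in this example (the figures of 1000 words and 200 words are those given above):

    # Chebyshev bound for the journal-article example: mean 1000 words,
    # standard deviation 200 words; the interval 600..1400 corresponds to k = 2.
    mu, sigma = 1000, 200
    lo, hi = 600, 1400
    k = (hi - mu) / sigma                 # = 2 standard deviations
    outside_bound = 1 / k**2              # Chebyshev: at most 25% outside
    print(f"P({lo} <= words <= {hi}) >= {1 - outside_bound:.0%}")   # >= 75%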

Sharpness of bounds

As shown in the example above, the theorem typically provides rather loose bounds. However, these bounds cannot in general be improved upon. The bounds are sharp for the following example: for any k ≥ 1, let X take the value −1 with probability 1/(2k²), the value 0 with probability 1 − 1/k², and the value 1 with probability 1/(2k²).
For this distribution, the mean μ = 0 and the standard deviation σ = 1/k, so

Pr(|X − μ| ≥ kσ) = Pr(|X| ≥ 1) = 1/k².
Chebyshev's inequality is an equality for precisely those distributions that are a linear transformation of this example.
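
A quick numerical check of the extremal three-point distribution described above (values −1, 0, 1 with probabilities 1/(2k²), 1 − 1/k², 1/(2k²)) confirms that the bound is met with equality; the choice k = 3 below is arbitrary:

    # Extremal distribution: X = -1 or +1, each with probability 1/(2k^2),
    # and X = 0 otherwise.  Chebyshev's bound 1/k^2 is attained exactly.
    from math import sqrt

    k = 3.0
    values = [-1.0, 0.0, 1.0]
    probs = [1 / (2 * k**2), 1 - 1 / k**2, 1 / (2 * k**2)]

    mean = sum(p * x for p, x in zip(probs, values))               # 0
    var = sum(p * (x - mean)**2 for p, x in zip(probs, values))    # 1/k^2
    sigma = sqrt(var)                                              # 1/k

    tail = sum(p for p, x in zip(probs, values) if abs(x - mean) >= k * sigma)
    print(tail, 1 / k**2)   # both equal 1/k^2: the inequality is an equality here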

Proof (of the two-sided version)

Probabilistic proof

Markov's inequality states that for any real-valued random variable Y and any positive number a, we have Pr(|Y| ≥ a) ≤ E[|Y|]/a. One way to prove Chebyshev's inequality is to apply Markov's inequality to the random variable Y = (X − μ)² with a = (kσ)².
It can also be proved directly using conditional expectation:

σ² = E[(X − μ)²]
   = E[(X − μ)² | |X − μ| ≥ kσ] Pr(|X − μ| ≥ kσ) + E[(X − μ)² | |X − μ| < kσ] Pr(|X − μ| < kσ)
   ≥ (kσ)² Pr(|X − μ| ≥ kσ).

Chebyshev's inequality then follows by dividing by k²σ².
This proof also shows why the bounds are quite loose in typical cases: the conditional expectation on the event where |X − μ| < kσ is thrown away, and the lower bound of (kσ)² on the event |X − μ| ≥ kσ can be quite poor.
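
The Markov-inequality argument can be illustrated numerically by comparing an empirical tail probability with E[(X − μ)²]/(kσ)² on simulated data; the exponential distribution below is an arbitrary test case, not part of the original text:

    # Illustration of the proof idea: Markov's inequality applied to
    # Y = (X - mu)^2 with threshold (k*sigma)^2, on simulated exponential data.
    import random

    random.seed(0)
    sample = [random.expovariate(1.0) for _ in range(100_000)]
    mu = sum(sample) / len(sample)
    var = sum((x - mu)**2 for x in sample) / len(sample)

    k = 3.0
    threshold = (k * var**0.5)**2
    tail = sum((x - mu)**2 >= threshold for x in sample) / len(sample)
    markov_bound = var / threshold          # equals 1/k^2, the Chebyshev bound
    print(tail, markov_bound)               # the empirical tail is far below the bound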

Measure-theoretic proof

Fix t > 0 and let A_t be defined as A_t = {x ∈ X : f(x) ≥ t}, and let 1_{A_t} be the indicator function of the set A_t. Then it is easy to check that, for any x,

g(t) 1_{A_t}(x) ≤ g(f(x)) 1_{A_t}(x) ≤ g(f(x)),

since g is nondecreasing on the range of f, and therefore

g(t) μ(A_t) = ∫_X g(t) 1_{A_t} dμ ≤ ∫_X (g ∘ f) dμ.

The desired inequality follows from dividing the above inequality by g(t).

Proof assuming random variable X is continuous

Using the definitions of the probability density function f and the variance Var(X):

Var(X) = ∫_{-∞}^{+∞} (x − μ)² f(x) dx,

we have:

Pr(|X − μ| ≥ kσ) = ∫_{|x − μ| ≥ kσ} f(x) dx ≤ ∫_{|x − μ| ≥ kσ} ((x − μ)²/(k²σ²)) f(x) dx ≤ (1/(k²σ²)) ∫_{-∞}^{+∞} (x − μ)² f(x) dx = Var(X)/(k²σ²) = 1/k².

Replacing kσ with ε, where k = ε/σ, we have another form of Chebyshev's inequality:

Pr(|X − μ| ≥ ε) ≤ σ²/ε²,

or, the equivalent,

Pr(|X − μ| < ε) ≥ 1 − σ²/ε²,

where ε is defined the same way as k: any positive real number.

Extensions

Several extensions of Chebyshev's inequality have been developed.

Asymmetric two-sided

If X has mean μ and variance σ², then
This reduces to Chebyshev's inequality in the symmetric case.

Bivariate generalization

Let X1 and X2 be two random variables with means μ1, μ2 and finite variances σ1², σ2² respectively. Then a union bound shows that

Pr(|X1 − μ1| ≥ k1σ1 or |X2 − μ2| ≥ k2σ2) ≤ 1/k1² + 1/k2².

This bound does not require X1 and X2 to be independent.

Bivariate, known correlation

Berge derived an inequality for two correlated variables X1 and X2. Let ρ be the correlation coefficient between X1 and X2 and let σi² be the variance of Xi. Then
Lal later obtained an alternative bound
Isii derived a further generalisation. Let
and define:
There are now three cases.
The general case is known as the Birnbaum–Raymond–Zuckerman inequality after the authors who proved it for two dimensions.
where Xi is the i-th random variable, μi is the i-th mean and σi² is the i-th variance.
If the variables are independent this inequality can be sharpened.
Olkin and Pratt derived an inequality for correlated variables.
where the sum is taken over the n variables and
where ρij is the correlation between Xi and Xj.
Olkin and Pratt's inequality was subsequently generalised by Godwin.

Finite-dimensional vector

Ferentinos has shown that for a vector X = (x1, x2, ..., xn) with mean μ = (μ1, μ2, ..., μn), standard deviation σ = (σ1² + σ2² + ... + σn²)^(1/2) and the Euclidean norm ‖·‖,

Pr(‖X − μ‖ ≥ kσ) ≤ 1/k².
A second related inequality has also been derived by Chen. Let n be the dimension of the stochastic vector X and let E[X] be the mean of X. Let S be the covariance matrix and k > 0. Then

Pr((X − E[X])^T S^{−1} (X − E[X]) ≥ k) ≤ n/k,
where Y^T is the transpose of Y. A simple proof was obtained in Navarro as follows:

Pr((X − E[X])^T S^{−1} (X − E[X]) ≥ k) = Pr(Z^T Z ≥ k),

where

Z = S^{−1/2} (X − E[X])

and S^{1/2} is a symmetric invertible matrix such that S = S^{1/2} S^{1/2}. Hence E[Z] = 0 and Cov(Z) = S^{−1/2} S S^{−1/2} = I_n, where I_n represents the identity matrix of dimension n. Then E[Z^T Z] = trace(Cov(Z)) = n and

(X − E[X])^T S^{−1} (X − E[X]) = Z^T Z.

Finally, by applying Markov's inequality to Z^T Z we get

Pr(Z^T Z ≥ k) ≤ E[Z^T Z]/k = n/k,

and so the desired inequality holds.
The inequality can be written in terms of the Mahalanobis distance as

Pr(d_S²(X, E[X]) ≥ k) ≤ n/k,

where the Mahalanobis distance based on S is defined by

d_S(x, y) = √((x − y)^T S^{−1} (x − y)).
Navarro proved that these bounds are sharp, that is, they are the best possible bounds for those regions when we know only the mean and the covariance matrix of X.
Stellato et al. showed that this multivariate version of the Chebyshev inequality can be easily derived analytically as a special case of Vandenberghe et al. where the bound is computed by solving a semidefinite program.
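
The bound Pr((X − E[X])^T S^{−1} (X − E[X]) ≥ k) ≤ n/k can be checked by simulation; the sketch below uses NumPy and a Gaussian test distribution purely as an illustration:

    # Simulation check of the multivariate bound P(d_S^2(X, mu) >= k) <= n/k.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 3                                        # dimension
    mu = np.array([1.0, -2.0, 0.5])
    A = rng.normal(size=(n, n))
    S = A @ A.T + n * np.eye(n)                  # a valid covariance matrix
    X = rng.multivariate_normal(mu, S, size=200_000)

    S_inv = np.linalg.inv(S)
    d2 = np.einsum('ij,jk,ik->i', X - mu, S_inv, X - mu)   # squared Mahalanobis distance

    for k in (3.0, 6.0, 12.0):
        print(k, (d2 >= k).mean(), "<=", n / k)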

Infinite dimensions

There is a straightforward extension of the vector version of Chebyshev's inequality to infinite-dimensional settings. Let X be a random variable which takes values in a Fréchet space V (equipped with a family of seminorms ‖·‖_α). This includes most common settings of vector-valued random variables, e.g., when V is a Banach space (equipped with a single norm), a Hilbert space, or the finite-dimensional setting as described above.
Suppose that X is of "strong order two", meaning that

E[‖X‖_α²] < ∞

for every seminorm ‖·‖_α. This is a generalization of the requirement that X have finite variance, and is necessary for this strong form of Chebyshev's inequality in infinite dimensions. The terminology "strong order two" is due to Vakhania.
Let μ = E[X] be the Pettis integral of X, and let

σ_α := √(E[‖X − μ‖_α²])

be the standard deviation with respect to the seminorm ‖·‖_α. In this setting we can state the following: for every k > 0,

Pr(‖X − μ‖_α ≥ k σ_α) ≤ 1/k².
Proof. The proof is straightforward, and essentially the same as the finitary version. If σ_α = 0, then ‖X − μ‖_α = 0 almost surely, so the inequality is trivial.
If

σ_α > 0,

then we may safely divide by σ_α. The crucial trick in Chebyshev's inequality is to recognize that the events ‖X − μ‖_α ≥ k σ_α and ‖X − μ‖_α²/σ_α² ≥ k² are the same.
The following calculations complete the proof:

Pr(‖X − μ‖_α ≥ k σ_α) = Pr(‖X − μ‖_α²/σ_α² ≥ k²) ≤ E[‖X − μ‖_α²/σ_α²]/k² = 1/k².

Higher moments

An extension to higher moments is also possible: for any k > 0 and any n ≥ 1,

Pr(|X − E[X]| ≥ k) ≤ E[|X − E[X]|^n]/k^n.

Exponential moment

A related inequality sometimes known as the exponential Chebyshev's inequality is the inequality

Pr(X ≥ ε) ≤ e^{−tε} E[e^{tX}],   t > 0.

Let K(t) be the cumulant generating function,

K(t) = log E[e^{tX}].

Taking the Legendre–Fenchel transformation of K(t) and using the exponential Chebyshev's inequality we have

−log Pr(X ≥ ε) ≥ sup_{t > 0} (tε − K(t)).

This inequality may be used to obtain exponential inequalities for unbounded variables.
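
As an illustration of how the exponential form leads to Chernoff-type bounds, the sketch below optimises tε − K(t) over a grid for a sum of Bernoulli variables; the particular distribution and the grid search are illustrative choices, not part of the original text:

    # Chernoff-type bound from the exponential Chebyshev inequality:
    # P(X >= eps) <= exp(K(t) - t*eps) for every t > 0; minimising over t
    # gives the tightest bound of this form.
    from math import exp, log

    n, p = 100, 0.5                    # X = sum of n Bernoulli(p) variables
    def K(t):                          # cumulant generating function of X
        return n * log(1 - p + p * exp(t))

    eps = 60                           # bound P(X >= 60)
    best = min(exp(K(t) - t * eps) for t in (i / 1000 for i in range(1, 3000)))
    print(best)                        # about 0.13, versus Chebyshev's 25/100 = 0.25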

Bounded variables

If P has finite support based on the interval [a, b], let M = max(|a|, |b|), where |x| is the absolute value of x. If the mean of P is zero then for all k > 0

(E[|X|^r] − k^r)/M^r ≤ Pr(|X| ≥ k) ≤ E[|X|^r]/k^r.

The second of these inequalities with r = 2 is the Chebyshev bound. The first provides a lower bound for the value of Pr(|X| ≥ k).
Sharp bounds for a bounded variate have been proposed by Niemitalo, though without a proof.
Let where. Then

Univariate case

Saw et al. extended Chebyshev's inequality to cases where the population mean and variance are not known and may not exist, but the sample mean and sample standard deviation from N samples are to be employed to bound the expected value of a new drawing from the same distribution.
where X is a random variable which we have sampled N times, m is the sample mean, k is a constant and s is the sample standard deviation. g is defined as follows:
Let x ≥ 1, Q = N + 1, and R be the greatest integer less than Q/x. Let
Now
This inequality holds even when the population moments do not exist, and when the sample is only weakly exchangeably distributed; this criterion is met for randomised sampling. A table of values for the Saw–Yang–Mo inequality for finite sample sizes has been determined by Konijn. The table allows the calculation of various confidence intervals for the mean, based on multiples, C, of the standard error of the mean as calculated from the sample. For example, Konijn shows that for N = 59, the 95 percent confidence interval for the mean m is where .
Kabán gives a somewhat less complex version of this inequality.
If the standard deviation is a multiple of the mean then a further inequality can be derived,
For fixed N and large m the Saw–Yang–Mo inequality is approximately
Beasley et al. have suggested a modification of this inequality
In empirical testing this modification is conservative but appears to have low statistical power. Its theoretical basis currently remains unexplored.

Dependence on sample size

The bounds these inequalities give on a finite sample are less tight than those the Chebyshev inequality gives for a distribution. To illustrate this let the sample size N = 100 and let k = 3. Chebyshev's inequality states that at most approximately 11.11% of the distribution will lie at least three standard deviations away from the mean. Kabán's version of the inequality for a finite sample states that at most approximately 12.05% of the sample lies outside these limits. The dependence of the confidence intervals on sample size is further illustrated below.
For N = 10, the 95% confidence interval is approximately ±13.5789 standard deviations.
For N = 100 the 95% confidence interval is approximately ±4.9595 standard deviations; the 99% confidence interval is approximately ±140.0 standard deviations.
For N = 500 the 95% confidence interval is approximately ±4.5574 standard deviations; the 99% confidence interval is approximately ±11.1620 standard deviations.
For N = 1000 the 95% and 99% confidence intervals are approximately ±4.5141 and approximately ±10.5330 standard deviations respectively.
The Chebyshev inequality for the distribution gives 95% and 99% confidence intervals of approximately ±4.472 standard deviations and ±10 standard deviations respectively.

Samuelson's inequality

Although Chebyshev's inequality is the best possible bound for an arbitrary distribution, this is not necessarily true for finite samples. Samuelson's inequality states that all values of a sample will lie within √(N − 1) standard deviations of the mean. Chebyshev's bound improves as the sample size increases.
When N = 10, Samuelson's inequality states that all members of the sample lie within 3 standard deviations of the mean: in contrast Chebyshev's states that 99.5% of the sample lies within 13.5789 standard deviations of the mean.
When N = 100, Samuelson's inequality states that all members of the sample lie within approximately 9.9499 standard deviations of the mean: Chebyshev's states that 99% of the sample lies within 10 standard deviations of the mean.
When N = 500, Samuelson's inequality states that all members of the sample lie within approximately 22.3383 standard deviations of the mean: Chebyshev's states that 99% of the sample lies within 10 standard deviations of the mean.
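
A short simulation (illustrative only) confirms Samuelson's bound; note that the √(N − 1) form applies when the standard deviation is computed with denominator N rather than N − 1:

    # Samuelson's inequality: every sample value lies within sqrt(N-1) standard
    # deviations of the sample mean, with the SD computed using the 1/N form.
    import random
    from math import sqrt

    random.seed(1)
    for N in (10, 100, 500):
        xs = [random.gauss(0, 1) for _ in range(N)]
        m = sum(xs) / N
        s = sqrt(sum((x - m)**2 for x in xs) / N)      # denominator N, not N-1
        worst = max(abs(x - m) / s for x in xs)
        print(N, round(worst, 3), "<=", round(sqrt(N - 1), 3))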

Multivariate case

Stellato et al. simplified the notation and extended the empirical Chebyshev inequality from Saw et al. to the multivariate case. Let ξ be a random variable taking values in R^{n_ξ}. We draw N iid samples of ξ, denoted ξ^(1), ..., ξ^(N). Based on the first N samples, we define the empirical mean as μ_N and the unbiased empirical covariance as Σ_N. If Σ_N is nonsingular, then for all λ ≥ 0,

Remarks

In the univariate case, i.e. n_ξ = 1, this inequality corresponds to the one from Saw et al. Moreover, the right-hand side can be simplified by upper bounding the floor function by its argument.
As N → ∞, the right-hand side tends to a limit that corresponds to the multivariate Chebyshev inequality over ellipsoids shaped according to the true covariance matrix and centered at the true mean.

Sharpened bounds

Chebyshev's inequality is important because of its applicability to any distribution. As a result of its generality it may not provide as sharp a bound as alternative methods that can be used if the distribution of the random variable is known. To improve the sharpness of the bounds provided by Chebyshev's inequality a number of methods have been developed; several of them are described below.

Standardised variables

Sharpened bounds can be derived by first standardising the random variable.
Let X be a random variable with finite variance Var(X). Let Z be the standardised form defined as

Z = (X − E[X]) / √Var(X).

Cantelli's lemma is then

Pr(Z ≥ k) ≤ 1/(1 + k²).

This inequality is sharp and is attained by Z taking the values k and −1/k with probability 1/(1 + k²) and k²/(1 + k²) respectively.
If k > 1 and the distribution of X is symmetric then we have

Pr(Z ≥ k) ≤ 1/(2k²).

Equality holds if and only if Z = −k, 0 or k with probabilities 1/(2k²), 1 − 1/k² and 1/(2k²) respectively.
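
The sharpness claim for Cantelli's lemma can be verified directly on the two-point distribution mentioned above (Z = k with probability 1/(1 + k²) and Z = −1/k with probability k²/(1 + k²)); the value k = 2 below is arbitrary:

    # Two-point distribution attaining Cantelli's bound P(Z >= k) = 1/(1 + k^2).
    k = 2.0
    values = [k, -1.0 / k]
    probs = [1.0 / (1 + k**2), k**2 / (1 + k**2)]

    mean = sum(p * z for p, z in zip(probs, values))              # 0
    var = sum(p * (z - mean)**2 for p, z in zip(probs, values))   # 1
    tail = sum(p for p, z in zip(probs, values) if z >= k)
    print(mean, var, tail, 1 / (1 + k**2))   # the tail equals the bound exactly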
An extension to a two-sided inequality is also possible.
Let u, v > 0. Then we have

Semivariances

An alternative method of obtaining sharper bounds is through the use of semivariances. The upper and lower semivariances are defined as

σ+² = Σ_{x > m} (x − m)² / (n − 1),
σ−² = Σ_{x < m} (x − m)² / (n − 1),

where m is the arithmetic mean of the sample and n is the number of elements in the sample.
The variance of the sample is the sum of the two semivariances:

σ² = σ+² + σ−².
In terms of the lower semivariance Chebyshev's inequality can be written

Pr(x ≤ m − a σ−) ≤ 1/a².

Putting

a = kσ/σ−,

Chebyshev's inequality can now be written

Pr(x ≤ m − kσ) ≤ σ−²/(k²σ²).
A similar result can also be derived for the upper semivariance.
If we put
Chebyshev's inequality can be written
Because σu² ≤ σ², use of the semivariance sharpens the original inequality.
If the distribution is known to be symmetric, then

σ+² = σ−² = σ²/2,

and

Pr(x ≤ m − kσ) ≤ 1/(2k²).

This result agrees with that derived using standardised variables.
Note: The inequality with the lower semivariance has been found to be of use in estimating downside risk in finance and agriculture.
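
A minimal sketch of the semivariance computation on a sample, assuming the definitions reconstructed above (in particular the division by n − 1); the exponential sample is an arbitrary right-skewed example:

    # Sample semivariances and the semivariance-based one-sided bound
    # P(x <= m - k*sigma) <= sigma_minus^2 / (k^2 * sigma^2).
    import random

    random.seed(2)
    xs = [random.expovariate(1.0) for _ in range(10_000)]
    n = len(xs)
    m = sum(xs) / n

    var = sum((x - m)**2 for x in xs) / (n - 1)
    lower_semivar = sum((x - m)**2 for x in xs if x < m) / (n - 1)

    k = 2.0
    print("one-sided Chebyshev factor:", 1 / k**2)
    print("lower-semivariance bound:  ", lower_semivar / (k**2 * var))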

Selberg's inequality

Selberg derived an inequality for P(x) when a ≤ x ≤ b. To simplify the notation let
where
and
The result of this linear transformation is to make P(a ≤ X ≤ b) equal to P(|Y| ≤ 1).
The mean and variance of X are related to the mean and variance of Y:
With this notation Selberg's inequality states that
These are known to be the best possible bounds.

Cantelli's inequality

Cantelli's inequality, due to Francesco Paolo Cantelli, states that for a real random variable X with mean μ and variance σ²,

Pr(X − μ ≥ a) ≤ σ²/(σ² + a²),

where a ≥ 0.
This inequality can be used to prove a one-tailed variant of Chebyshev's inequality with k > 0:

Pr(X − μ ≥ kσ) ≤ 1/(1 + k²).

The bound on the one-tailed variant is known to be sharp. To see this consider the random variable X that takes the values

X = kσ with probability 1/(1 + k²) and X = −σ/k with probability k²/(1 + k²).

Then E[X] = 0 and E[X²] = σ², and P(X ≥ kσ) = 1/(1 + k²).

An application - distance between the mean and the median

The one-sided variant can be used to prove the proposition that for probability distributions having an expected value and a median, the mean and the median can never differ from each other by more than one standard deviation. To express this in symbols let μ, ν, and σ be respectively the mean, the median, and the standard deviation. Then

|μ − ν| ≤ σ.
There is no need to assume that the variance is finite because this inequality is trivially true if the variance is infinite.
The proof is as follows. Setting k = 1 in the statement for the one-sided inequality gives:

Pr(X − μ ≥ σ) ≤ 1/2, i.e. Pr(X ≥ μ + σ) ≤ 1/2.

Changing the sign of X and of μ, we get

Pr(X ≤ μ − σ) ≤ 1/2.

As the median is by definition any real number m that satisfies the inequalities

Pr(X ≤ m) ≥ 1/2 and Pr(X ≥ m) ≥ 1/2,

this implies that the median lies within one standard deviation of the mean. A proof using Jensen's inequality also exists.
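
A quick numerical illustration of this proposition, using the exponential distribution with rate 1 (an arbitrary skewed example, not from the original text):

    # |mean - median| <= standard deviation, illustrated on Exp(1):
    # mean = 1, median = ln 2, standard deviation = 1.
    from math import log

    mean, median, sd = 1.0, log(2), 1.0
    print(abs(mean - median), "<=", sd)   # 0.307 <= 1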

Bhattacharyya's inequality

Bhattacharyya extended Cantelli's inequality using the third and fourth moments of the distribution.
Let μ = 0 and let σ² be the variance. Let γ = E[X³]/σ³ and κ = E[X⁴]/σ⁴.
If k² − kγ − 1 > 0 then
The necessity of k² − kγ − 1 > 0 requires that k be reasonably large.

Mitzenmacher and Upfal's inequality

Mitzenmacher and Upfal note that

E[(X − E[X])^{2k}] ≥ 0

for any integer k > 0 and that

E[(X − E[X])^{2k}]

is the 2k-th central moment. They then show that for t > 0

Pr(|X − E[X]| > t (E[(X − E[X])^{2k}])^{1/(2k)}) ≤ 1/t^{2k}.

For k = 1 we obtain Chebyshev's inequality. For t ≥ 1, k > 2 and assuming that the k-th moment exists, this bound is tighter than Chebyshev's inequality.
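
For light-tailed distributions the higher-moment bound can be far smaller than Chebyshev's; the comparison below uses a standard normal variable, whose 2k-th central moment is (2k − 1)!!, purely as an illustration:

    # Higher-moment bound P(|X - EX| > t * m_{2k}^{1/(2k)}) <= 1/t^{2k},
    # evaluated for a standard normal variable and compared with k = 1 (Chebyshev).
    from math import prod

    def central_moment_2k(k):                # (2k - 1)!! = E[Z^{2k}] for Z ~ N(0,1)
        return prod(range(1, 2 * k, 2))

    t = 3.0
    for k in (1, 2, 3):
        m2k = central_moment_2k(k)
        radius = t * m2k ** (1 / (2 * k))    # deviation threshold in the bound
        print(f"k={k}: P(|Z| > {radius:.3f}) <= {1 / t**(2 * k):.5f}")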

Related inequalities

Several other related inequalities are also known.

Zelen's inequality

Zelen has shown that
with
where M_m is the m-th moment and σ is the standard deviation.

He, Zhang and Zhang's inequality

For any collection of non-negative independent random variables with expectation 1

Hoeffding's lemma

Let X be a random variable with a ≤ X ≤ b and E[X] = 0. Then for any λ ∈ R, we have

E[e^{λX}] ≤ exp(λ²(b − a)²/8).

Van Zuijlen's bound

Let X1, X2, ..., Xn be a set of independent Rademacher random variables: Pr(Xi = 1) = Pr(Xi = −1) = 1/2. Then

Pr(|(X1 + X2 + ... + Xn)/√n| ≤ 1) ≥ 1/2.
The bound is sharp and better than that which can be derived from the normal distribution.
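
A simulation sketch of Van Zuijlen's bound (the sample size, number of trials and seed are arbitrary choices):

    # Van Zuijlen's bound: for independent Rademacher variables X_i,
    # P(|sum(X_i)/sqrt(n)| <= 1) >= 1/2.  Simple simulation check.
    import random
    from math import sqrt

    random.seed(3)
    n, trials = 101, 20_000
    hits = 0
    for _ in range(trials):
        s = sum(random.choice((-1, 1)) for _ in range(n))
        hits += abs(s / sqrt(n)) <= 1
    print(hits / trials, ">= 0.5")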

Unimodal distributions

A distribution function F is unimodal at ν if its cumulative distribution function is convex on (−∞, ν) and concave on (ν, ∞). An empirical distribution can be tested for unimodality with the dip test.
In 1823 Gauss showed that for a unimodal distribution with a mode of zero

Pr(|X| > k) ≤ 4 E[X²]/(9k²)   if   k² ≥ (4/3) E[X²],
Pr(|X| > k) ≤ 1 − k/(√3 √E[X²])   if   k² ≤ (4/3) E[X²].
If the mode is not zero and the mean and standard deviation are both finite, then denoting the median as ν and the root mean square deviation from the mode by ω, we have
and
Winkler in 1866 extended Gauss' inequality to rth moments where r > 0 and the distribution is unimodal with a mode of zero:
Gauss' bound has been subsequently sharpened and extended to apply to departures from the mean rather than the mode due to the Vysochanskiï–Petunin inequality. The latter has been extended by Dharmadhikari and Joag-Dev
where s is a constant satisfying both s > r + 1 and s(s − r − 1) = r^r, and r > 0.
It can be shown that these inequalities are the best possible and that further sharpening of the bounds requires that additional restrictions be placed on the distributions.

Unimodal symmetrical distributions

The bounds on this inequality can also be sharpened if the distribution is both unimodal and symmetrical. An empirical distribution can be tested for symmetry with a number of tests including McWilliam's R*. It is known that the variance of a unimodal symmetrical distribution with finite support [a, b] is less than or equal to (b − a)²/12.
Let the distribution be supported on the finite interval [−N, N] and the variance be finite. Let the mode of the distribution be zero and rescale the variance to 1. Let k > 0 and assume k < 2N/3. Then
If 0 < k ≤ 2/√3 the bounds are reached with the density
If 2/√3 < k ≤ 2N/3 the bounds are attained by the distribution
where β_k = 4/(3k²), δ0 is the Dirac delta function and where
The existence of these densities shows that the bounds are optimal. Since N is arbitrary these bounds apply to any value of N.
The Camp–Meidell inequality is a related inequality. For an absolutely continuous unimodal and symmetrical distribution
DasGupta has shown that if the distribution is known to be normal then

Pr(|X − μ| ≥ kσ) ≤ 1/(3k²).
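
The successive sharpenings can be compared numerically; the sketch below tabulates the Chebyshev bound 1/k², the Vysochanskiï–Petunin bound 4/(9k²) and the normal-case bound 1/(3k²) quoted above, alongside the exact two-sided normal tail, purely as an illustration:

    # Bounds on P(|X - mu| >= k*sigma) under successively stronger assumptions,
    # compared with the exact value for a normal distribution.
    from math import erf, sqrt

    def normal_two_sided_tail(k):            # exact P(|Z| >= k) for Z ~ N(0,1)
        return 1 - erf(k / sqrt(2))

    for k in (2.0, 3.0, 4.0):
        print(f"k={k}: Chebyshev {1 / k**2:.4f}  "
              f"Vysochanskii-Petunin {4 / (9 * k**2):.4f}  "
              f"normal-case {1 / (3 * k**2):.4f}  "
              f"exact normal {normal_two_sided_tail(k):.4f}")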

Effects of symmetry and unimodality

Symmetry of the distribution decreases the inequality's bounds by a factor of 2 while unimodality sharpens the bounds by a factor of 4/9.
Because the mean and the mode in a unimodal distribution differ by at most √3 standard deviations, at most 5% of a symmetrical unimodal distribution lies outside (2√10 + 3√3)/3 standard deviations of the mean. This is sharper than the bounds provided by the Chebyshev inequality.
These bounds on the mean are less sharp than those that can be derived from symmetry of the distribution alone, which shows that at most 5% of the distribution lies outside approximately 3.162 standard deviations of the mean. The Vysochanskiï–Petunin inequality further sharpens this bound by showing that for such a distribution at most 5% of the distribution lies outside 4√5/3 (approximately 2.981) standard deviations of the mean.

Symmetrical unimodal distributions

For any symmetrical unimodal distribution
DasGupta's inequality states that for a normal distribution at least 95% lies within approximately 2.582 standard deviations of the mean. This is less sharp than the true figure.

Bounds for specific distributions

When the mean is zero Chebyshev's inequality takes a simple form. Let σ² be the variance. Then for every k > 0,

Pr(|X| ≥ k) ≤ σ²/k².

With the same conditions Cantelli's inequality takes the form

Pr(X ≥ k) ≤ σ²/(σ² + k²).

Unit variance

If in addition E[X²] = 1 and E[X⁴] = ψ then for any 0 ≤ ε ≤ 1,

Pr(|X| > ε) ≥ (1 − ε²)²/ψ.

This first inequality is sharp; it is known as the Paley–Zygmund inequality.
It is also known that for a random variable obeying the above conditions that
where
It is also known that
The value of C0 is optimal and the bounds are sharp if
If
then the sharp bound is

Integral Chebyshev inequality

There is a second inequality also named after Chebyshev.
If f, g : [a, b] → R are two monotonic functions of the same monotonicity, then

(1/(b − a)) ∫_a^b f(x) g(x) dx ≥ [(1/(b − a)) ∫_a^b f(x) dx] · [(1/(b − a)) ∫_a^b g(x) dx].
If f and g are of opposite monotonicity, then the above inequality works in the reverse way.
This inequality is related to Jensen's inequality, Kantorovich's inequality, the Hermite–Hadamard inequality and Walter's conjecture.
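
A numerical sketch of the integral form, using two nondecreasing functions on [0, 1]; the functions and the midpoint-rule discretisation are illustrative choices:

    # Integral Chebyshev inequality: for f, g both nondecreasing on [a, b],
    # the average of f*g is at least (average of f) * (average of g).
    N = 100_000
    xs = [(i + 0.5) / N for i in range(N)]        # midpoint rule on [0, 1]

    f = [x**2 for x in xs]                        # nondecreasing
    g = [x**3 + x for x in xs]                    # nondecreasing

    mean = lambda ys: sum(ys) / N
    print(mean([fi * gi for fi, gi in zip(f, g)]), ">=", mean(f) * mean(g))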

Other inequalities

There are also a number of other inequalities associated with Chebyshev.
One use of Chebyshev's inequality in applications is to create confidence intervals for variates with an unknown distribution. Haldane noted, using an equation derived by Kendall, that if a variate has a zero mean, unit variance and both finite skewness and kurtosis, then the variate can be converted to a normally distributed standard score:
This transformation may be useful as an alternative to Chebyshev's inequality or as an adjunct to it for deriving confidence intervals for variates with unknown distributions.
While this transformation may be useful for moderately skewed and/or kurtotic distributions, it performs poorly when the distribution is markedly skewed and/or kurtotic.