Grubbs's test for outliers


In statistics, Grubbs's test or the Grubbs test, also known as the maximum normalized residual test or extreme studentized deviate test, is a test used to detect outliers in a univariate data set assumed to come from a normally distributed population.

Definition

Grubbs's test is based on the assumption of normality. That is, one should first verify that the data can be reasonably approximated by a normal distribution before applying the Grubbs test.
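As a quick pre-check, a Shapiro–Wilk test can screen for gross departures from normality. The following is a minimal sketch in Python, assuming SciPy is available; the function name looks_normal and the example values are illustrative, not part of the test itself.

from scipy import stats

def looks_normal(y, alpha=0.05):
    """Shapiro-Wilk screen: True if normality is not rejected at level alpha,
    in which case applying Grubbs's test is at least defensible."""
    statistic, p_value = stats.shapiro(y)
    return p_value > alpha

# Illustrative values only; expected to pass the screen.
print(looks_normal([9.8, 10.1, 9.9, 10.2, 10.0, 9.7, 10.3]))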
Grubbs's test detects one outlier at a time. This outlier is expunged from the dataset and the test is iterated until no outliers are detected. However, multiple iterations change the probabilities of detection, and the test should not be used for sample sizes of six or fewer since it frequently tags most of the points as outliers.
Grubbs's test is defined for the hypothesis:

    H0: There are no outliers in the data set
    Ha: There is exactly one outlier in the data set

The Grubbs test statistic is defined as

    G = \frac{\max_{i=1,\ldots,N} \left| Y_i - \bar{Y} \right|}{s}

with \bar{Y} and s denoting the sample mean and standard deviation, respectively. The Grubbs test statistic is the largest absolute deviation from the sample mean in units of the sample standard deviation.
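As a minimal Python sketch of the two-sided statistic (assuming NumPy; the function name grubbs_statistic is illustrative):

import numpy as np

def grubbs_statistic(y):
    """Two-sided Grubbs statistic: largest absolute deviation from the
    sample mean, in units of the sample standard deviation."""
    y = np.asarray(y, dtype=float)
    s = y.std(ddof=1)   # sample standard deviation (N - 1 in the denominator)
    return np.max(np.abs(y - y.mean())) / s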
This is the two-sided version of the test. The Grubbs test can also be defined as a one-sided test. To test whether the minimum value is an outlier, the test statistic is

    G = \frac{\bar{Y} - Y_{\min}}{s}

with Y_{\min} denoting the minimum value. To test whether the maximum value is an outlier, the test statistic is

    G = \frac{Y_{\max} - \bar{Y}}{s}

with Y_{\max} denoting the maximum value.
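The one-sided variants follow the same pattern; this sketch again assumes NumPy, and the function names are illustrative.

import numpy as np

def grubbs_statistic_min(y):
    """One-sided statistic for testing whether the minimum is an outlier."""
    y = np.asarray(y, dtype=float)
    return (y.mean() - y.min()) / y.std(ddof=1)

def grubbs_statistic_max(y):
    """One-sided statistic for testing whether the maximum is an outlier."""
    y = np.asarray(y, dtype=float)
    return (y.max() - y.mean()) / y.std(ddof=1)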
For the two-sided test, the hypothesis of no outliers is rejected at significance level α if

    G > \frac{N-1}{\sqrt{N}} \sqrt{\frac{t_{\alpha/(2N),\,N-2}^{2}}{N - 2 + t_{\alpha/(2N),\,N-2}^{2}}}

with t_{\alpha/(2N),\,N-2} denoting the upper critical value of the t-distribution with N − 2 degrees of freedom and a significance level of α/(2N). For the one-sided tests, replace α/(2N) with α/N.
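Putting the pieces together, the rejection rule can be sketched in Python using SciPy's t-distribution quantile function (scipy.stats.t.ppf) for the upper critical value. The helper names, and the iterative wrapper implementing the one-at-a-time removal described above, are illustrative.

import numpy as np
from scipy import stats

def grubbs_critical_value(n, alpha=0.05, two_sided=True):
    """Threshold from the rejection rule above, built from the upper
    critical value of the t-distribution with N - 2 degrees of freedom."""
    p = alpha / (2 * n) if two_sided else alpha / n
    t = stats.t.ppf(1 - p, n - 2)          # upper critical value t_{p, N-2}
    return (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))

def grubbs_test(y, alpha=0.05):
    """Return (is_outlier, index) for the point farthest from the mean."""
    y = np.asarray(y, dtype=float)
    deviations = np.abs(y - y.mean())
    idx = int(np.argmax(deviations))
    g = deviations[idx] / y.std(ddof=1)
    return g > grubbs_critical_value(len(y), alpha), idx

def iterative_grubbs(y, alpha=0.05):
    """One-at-a-time procedure from the Definition section: remove the
    flagged point and repeat until no outlier is detected."""
    y = list(map(float, y))
    outliers = []
    while len(y) > 6:                      # the test is unreliable for N <= 6
        flagged, idx = grubbs_test(y, alpha)
        if not flagged:
            break
        outliers.append(y.pop(idx))
    return outliers, y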

Related techniques

Several graphical techniques can, and should, be used to detect outliers. A simple run sequence plot, a box plot, or a histogram should show any obviously outlying points. A normal probability plot may also be useful.
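As an illustrative sketch, three of the plots mentioned above can be produced with Matplotlib and SciPy's probplot; the data array is made up for demonstration, with one deliberately extreme point.

import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

y = np.array([9.8, 10.1, 9.9, 10.2, 10.0, 9.7, 10.3, 14.9])  # illustrative data

fig, axes = plt.subplots(1, 3, figsize=(12, 3.5))
axes[0].plot(y, marker="o")                   # run sequence plot
axes[0].set_title("Run sequence plot")
axes[1].boxplot(y)                            # box plot isolates the extreme point
axes[1].set_title("Box plot")
stats.probplot(y, dist="norm", plot=axes[2])  # normal probability plot
axes[2].set_title("Normal probability plot")
plt.tight_layout()
plt.show()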