M-estimator
In statistics, M-estimators are a broad class of extremum estimators for which the objective function is a sample average. Both non-linear least squares and maximum likelihood estimation are special cases of M-estimators. The definition of M-estimators was motivated by robust statistics, which contributed new types of M-estimators. The statistical procedure of evaluating an M-estimator on a data set is called M-estimation.
More generally, an M-estimator may be defined to be a zero of an estimating function. This estimating function is often the derivative of another statistical function. For example, a maximum-likelihood estimate is the point where the derivative of the likelihood function with respect to the parameter is zero; thus, a maximum-likelihood estimator is a zero of the score function. In many applications, such M-estimators can be thought of as estimating characteristics of the population.
Historical motivation
The method of least squares is a prototypical M-estimator, since the estimator is defined as a minimizer of the sum of squares of the residuals.

Another popular M-estimator is maximum-likelihood estimation. For a family of probability density functions f parameterized by θ, a maximum likelihood estimator of θ is computed for each set of data by maximizing the likelihood function over the parameter space Θ. When the observations are independent and identically distributed, a ML-estimate satisfies

$\hat{\theta} = \arg\max_{\theta} \prod_{i=1}^n f(x_i, \theta)$

or, equivalently,

$\hat{\theta} = \arg\min_{\theta} \sum_{i=1}^n -\log f(x_i, \theta).$
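As a numerical illustration (a minimal sketch, assuming a normal location model with known scale and simulated data; scipy's generic scalar minimizer stands in for any optimizer), maximizing the likelihood is the same as minimizing the summed negative log-density:

    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    x = rng.normal(loc=3.0, scale=1.0, size=200)

    def neg_log_lik(theta):
        # sum_i -log f(x_i, theta): the M-estimation objective for MLE
        return -np.sum(norm.logpdf(x, loc=theta, scale=1.0))

    theta_hat = minimize_scalar(neg_log_lik).x
    # For this objective the M-estimate coincides with the sample mean.
    assert np.isclose(theta_hat, x.mean(), atol=1e-4)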
Maximum-likelihood estimators have optimal properties in the limit of infinitely many observations under rather general conditions, but may be biased and not the most efficient estimators for finite samples.
Definition
In 1964, Peter J. Huber proposed generalizing maximum likelihood estimation to the minimization of

$\sum_{i=1}^n \rho(x_i, \theta),$

where ρ is a function with certain properties (see below). The solutions

$\hat{\theta} = \arg\min_{\theta} \sum_{i=1}^n \rho(x_i, \theta)$

are called M-estimators ("M" for "maximum likelihood-type"); other types of robust estimators include L-estimators, R-estimators and S-estimators. Maximum likelihood estimators are thus a special case of M-estimators. With suitable rescaling, M-estimators are special cases of extremum estimators.
The function ρ, or its derivative ψ, can be chosen in such a way as to give the estimator desirable properties (in terms of bias and efficiency) when the data are truly from the assumed distribution, and 'not bad' behaviour when the data are generated from a model that is, in some sense, close to the assumed distribution.
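A widely used example of such a ρ is Huber's loss function, quadratic near zero and linear in the tails. The sketch below is illustrative only; the tuning constant k = 1.345 is a conventional choice, not something prescribed above:

    import numpy as np

    def huber_rho(u, k=1.345):
        # Quadratic near zero, linear in the tails: bounds the influence
        # of large residuals without discarding them entirely.
        return np.where(np.abs(u) <= k, 0.5 * u**2, k * (np.abs(u) - 0.5 * k))

    def huber_psi(u, k=1.345):
        # Derivative of rho: the identity near zero, clipped at +/- k.
        return np.clip(u, -k, k)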
Types
M-estimators are solutions θ which minimize

$\sum_{i=1}^n \rho(x_i, \theta).$

This minimization can always be done directly. Often it is simpler to differentiate with respect to θ and solve for the root of the derivative. When this differentiation is possible, the M-estimator is said to be of ψ-type. Otherwise, the M-estimator is said to be of ρ-type.
In most practical cases, the M-estimators are of ψ-type.
ρ-type
For positive integer r, let $(\mathcal{X}, \Sigma)$ and $(\Theta \subset \mathbb{R}^r, S)$ be measure spaces. $\theta \in \Theta$ is a vector of parameters. An M-estimator of ρ-type T is defined through a measurable function $\rho: \mathcal{X} \times \Theta \to \mathbb{R}$. It maps a probability distribution F on $\mathcal{X}$ to the value $T(F) \in \Theta$ (if it exists) that minimizes $\int_{\mathcal{X}} \rho(x, \theta) \, dF(x)$:

$T(F) := \arg\min_{\theta \in \Theta} \int_{\mathcal{X}} \rho(x, \theta) \, dF(x).$

For example, for the maximum likelihood estimator, $\rho(x, \theta) = -\log f(x, \theta)$, where $f(x, \theta) = \frac{\partial F(x, \theta)}{\partial x}$.
ψ-type
If ρ is differentiable with respect to θ, the computation of $\hat{\theta}$ is usually much easier. An M-estimator of ψ-type T is defined through a measurable function $\psi: \mathcal{X} \times \Theta \to \mathbb{R}^r$. It maps a probability distribution F on $\mathcal{X}$ to the value $T(F) \in \Theta$ (if it exists) that solves the vector equation

$\int_{\mathcal{X}} \psi(x, T(F)) \, dF(x) = 0.$

For example, for the maximum likelihood estimator, $\psi(x, \theta) = \left( \frac{\partial \log f(x, \theta)}{\partial \theta^1}, \dots, \frac{\partial \log f(x, \theta)}{\partial \theta^r} \right)^{\mathrm{T}}$, where $u^{\mathrm{T}}$ denotes the transpose of the vector u and $f(x, \theta) = \frac{\partial F(x, \theta)}{\partial x}$.
Such an estimator is not necessarily an M-estimator of ρ-type, but if ρ has a continuous first derivative with respect to θ, then a necessary condition for an M-estimator of ψ-type to be an M-estimator of ρ-type is $\psi(x, \theta) = \nabla_{\theta} \rho(x, \theta)$. The previous definitions can easily be extended to finite samples.
If the function ψ decreases to zero as $x \to \pm\infty$, the estimator is called redescending. Such estimators have some additional desirable properties, such as complete rejection of gross outliers.
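Tukey's biweight is a standard redescending example. The following sketch is illustrative (the functional form and the conventional constant c = 4.685 are assumptions, not given in the text above):

    import numpy as np

    def biweight_psi(u, c=4.685):
        # Zero for |u| > c, so gross outliers receive zero weight;
        # c = 4.685 is a conventional tuning constant (an assumption here).
        u = np.asarray(u, dtype=float)
        return np.where(np.abs(u) <= c, u * (1.0 - (u / c) ** 2) ** 2, 0.0)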
Computation
For many choices of ρ or ψ, no closed-form solution exists and an iterative approach to computation is required. It is possible to use standard function optimization algorithms, such as Newton–Raphson. However, in most cases an iteratively re-weighted least squares fitting algorithm can be performed; this is typically the preferred method.

For some choices of ψ, specifically, redescending functions, the solution may not be unique. The issue is particularly relevant in multivariate and regression problems. Thus, some care is needed to ensure that good starting points are chosen. Robust starting points, such as the median as an estimate of location and the median absolute deviation as a univariate estimate of scale, are common.
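As a concrete illustration, here is a minimal IRLS sketch for a Huber location estimate, using the robust starting points just mentioned (the weight function ψ(u)/u, the constant k = 1.345, and the MAD normalization by 0.6745 are standard but assumed choices):

    import numpy as np

    def huber_location(x, k=1.345, tol=1e-8, max_iter=100):
        theta = np.median(x)                           # robust start: median
        scale = np.median(np.abs(x - theta)) / 0.6745  # robust scale: MAD
        for _ in range(max_iter):
            u = (x - theta) / scale
            # IRLS weights w(u) = psi(u)/u: 1 inside [-k, k], k/|u| outside,
            # so large residuals are progressively down-weighted.
            w = np.minimum(1.0, k / np.maximum(np.abs(u), 1e-12))
            theta_new = np.sum(w * x) / np.sum(w)
            if abs(theta_new - theta) < tol:
                break
            theta = theta_new
        return theta

Each iteration solves a weighted least-squares problem whose weights come from the previous iterate, which is why the method is called iteratively re-weighted least squares.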
Concentrating parameters
In computation of M-estimators, it is sometimes useful to rewrite the objective function so that the dimension of parameters is reduced. The procedure is called “concentrating” or “profiling”. Examples in which concentrating parameters increases computation speed include seemingly unrelated regressions (SUR) models. Consider the following M-estimation problem:

$(\hat{\beta}_n, \hat{\gamma}_n) := \arg\max_{\beta, \gamma} \sum_{i=1}^N q(w_i, \beta, \gamma)$
Assuming differentiability of the function q, the M-estimator solves the first-order conditions:

$\sum_{i=1}^N \nabla_{\beta} q(w_i, \beta, \gamma) = 0,$

$\sum_{i=1}^N \nabla_{\gamma} q(w_i, \beta, \gamma) = 0.$
Now, if we can solve the second equation for γ in terms of $W := (w_1, w_2, \ldots, w_N)$ and β, the second equation becomes

$\sum_{i=1}^N \nabla_{\gamma} q(w_i, \beta, g(W, \beta)) = 0,$

where g is some function to be found. Now, we can rewrite the original objective function solely in terms of β by inserting the function g into the place of γ. As a result, there is a reduction in the number of parameters.
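A standard textbook illustration (not from the passage above): for a normal sample with mean β and variance γ, take $q(w_i, \beta, \gamma) = -\tfrac{1}{2} \log \gamma - (w_i - \beta)^2 / (2\gamma)$. The first-order condition in γ then has the closed-form solution

$g(W, \beta) = \frac{1}{N} \sum_{i=1}^N (w_i - \beta)^2,$

and substituting it back gives a concentrated objective in β alone:

$\sum_{i=1}^N q(w_i, \beta, g(W, \beta)) = -\frac{N}{2} \log\!\left( \frac{1}{N} \sum_{i=1}^N (w_i - \beta)^2 \right) - \frac{N}{2}.$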
Whether this procedure can be done depends on the particular problem at hand. However, when it is possible, concentrating parameters can facilitate computation to a great degree. For example, in estimating a SUR model of 6 equations with 5 explanatory variables in each equation by maximum likelihood, the number of parameters declines from 51 to 30: the 6 × 5 = 30 regression coefficients plus the 21 distinct elements of the 6 × 6 error covariance matrix make 51 parameters, and concentrating out the covariance matrix leaves only the 30 coefficients.
Despite its computational appeal, concentrating parameters is of limited use in deriving asymptotic properties of the M-estimator. The presence of W in each summand of the objective function makes it difficult to apply the law of large numbers and the central limit theorem.
Properties
Distribution
It can be shown that M-estimators are asymptotically normally distributed. As such, Wald-type approaches to constructing confidence intervals and hypothesis tests can be used. However, since the theory is asymptotic, it will frequently be sensible to check the distribution, perhaps by examining the permutation or bootstrap distribution.
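A rough numerical sketch of such a check (the heavy-tailed sample, the choice of the sample median as the M-estimator, and the 2,000 resamples are all illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.standard_t(df=3, size=100)   # heavy-tailed sample (assumed)

    # Bootstrap distribution of the sample median (itself an M-estimator;
    # see the Examples section).
    boot = np.array([np.median(rng.choice(x, size=x.size, replace=True))
                     for _ in range(2000)])

    # Compare a Wald-type interval against the bootstrap percentiles.
    wald = (boot.mean() - 1.96 * boot.std(), boot.mean() + 1.96 * boot.std())
    perc = tuple(np.quantile(boot, [0.025, 0.975]))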
Influence function

The influence function of an M-estimator of ψ-type is proportional to its defining ψ function. Let T be an M-estimator of ψ-type, and let G be a probability distribution for which T(G) is defined. Its influence function IF is

$\operatorname{IF}(x; T, G) = -\frac{\psi(x, \theta)}{\int \left[ \frac{\partial \psi(y, \theta)}{\partial \theta} \right] f(y) \, \mathrm{d}y},$

evaluated at θ = T(G), assuming the density function f of G exists. A proof of this property of M-estimators can be found in Huber (1981).
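As a quick check of the formula against the Examples section below: for the mean, $\psi(x, \theta) = \theta - x$, so $\partial \psi / \partial \theta = 1$ and

$\operatorname{IF}(x; T, G) = -\frac{\theta - x}{\int f(y) \, \mathrm{d}y} = x - T(G),$

which is unbounded in x: a single gross outlier can move the mean arbitrarily far. For the median, by contrast, $\psi(x, \theta) = \operatorname{sgn}(x - \theta)$ yields a bounded influence function.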
Applications
M-estimators can be constructed for location parameters and scale parameters in univariate and multivariate settings, as well as being used in robust regression.

Examples
Mean
Let $(X_1, \ldots, X_n)$ be a set of independent, identically distributed random variables, with distribution F. If we define

$\rho(x, \theta) = \frac{(x - \theta)^2}{2},$
we note that this is minimized when θ is the mean of the Xs. Thus the mean is an M-estimator of ρ-type, with this ρ function.
As this ρ function is continuously differentiable in θ, the mean is thus also an M-estimator of ψ-type, with $\psi(x, \theta) = \theta - x$.
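Solving the finite-sample estimating equation makes this explicit:

$\sum_{i=1}^n \psi(X_i, \theta) = \sum_{i=1}^n (\theta - X_i) = 0 \quad \Longrightarrow \quad \hat{\theta} = \frac{1}{n} \sum_{i=1}^n X_i.$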
Median
For the median estimation of $(X_1, \ldots, X_n)$, instead we can define the ρ function as

$\rho(x, \theta) = |x - \theta|,$

and similarly, the ρ function is minimized when θ is the median of the Xs.
While this ρ function is not differentiable in θ, the ψ-type M-estimator, which is the subgradient of the ρ function, can be expressed as

$\psi(x, \theta) = \operatorname{sgn}(x - \theta)$

and

$\psi(x, \theta) = \begin{cases} -1, & \text{if } x < \theta \\ [-1, 1], & \text{if } x = \theta \\ 1, & \text{if } x > \theta. \end{cases}$
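A minimal numerical check (the data vector and the bounded search interval are illustrative assumptions; scipy's bounded scalar minimizer stands in for a generic optimizer):

    import numpy as np
    from scipy.optimize import minimize_scalar

    x = np.array([1.0, 2.0, 3.5, 4.0, 100.0])   # one gross outlier

    # Minimize the rho-type objective sum_i |x_i - theta| directly.
    res = minimize_scalar(lambda t: np.sum(np.abs(x - t)),
                          bounds=(x.min(), x.max()), method='bounded')
    # The minimizer matches the sample median and is unmoved by the outlier.
    assert np.isclose(res.x, np.median(x), atol=1e-3)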