Statistical model validation


In statistics, model validation is the task of confirming that the outputs of a statistical model are acceptable with respect to the real data-generating process. In other words, model validation is the task of confirming that the outputs of a statistical model have enough fidelity to the outputs of the data-generating process that the objectives of the investigation can be achieved.

Overview

Model validation can be based on two types of data: data that was used in the construction of the model and data that was not used in the construction. Validation based on the first type usually involves analyzing the goodness of fit of the model or analyzing whether the residuals seem to be random. Validation based on the second type usually involves analyzing whether the model's predictive performance deteriorates non-negligibly when applied to pertinent new data.
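The distinction can be made concrete with a small numerical sketch (Python with NumPy); the straight-line data, the noise level, and the 40/20 split below are illustrative assumptions, not part of any standard procedure. A model is constructed from one part of the data, and its error is then compared on the construction data and on the held-out data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: a straight line plus noise (hypothetical values).
x = np.linspace(-5, 5, 60)
y = 2.0 * x + 1.0 + rng.normal(scale=1.0, size=x.size)

# Hold out data that is not used in the construction of the model.
idx = rng.permutation(x.size)
train, test = idx[:40], idx[40:]

# Construct the model from the training data only (here, a straight line).
coeffs = np.polyfit(x[train], y[train], deg=1)

# First type of validation: goodness of fit on the construction data.
mse_train = np.mean((np.polyval(coeffs, x[train]) - y[train]) ** 2)

# Second type of validation: predictive performance on pertinent new data.
mse_test = np.mean((np.polyval(coeffs, x[test]) - y[test]) ** 2)

print(f"in-sample MSE:     {mse_train:.2f}")
print(f"out-of-sample MSE: {mse_test:.2f}")
# A non-negligible deterioration from in-sample to out-of-sample error
# suggests that the model does not generalize to new data.
```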
Validation based on only the first type is often inadequate. An extreme example is illustrated in Figure 1. The figure displays data that was generated via a straight line + noise. The figure also displays a curvy line, which is a polynomial chosen to fit the data perfectly. The residuals for the curvy line are all zero. Hence validation based on only the first type of data would conclude that the curvy line was a good model. Yet the curvy line is obviously a poor model: interpolation, especially between −5 and −4, would tend to be highly misleading; moreover, any substantial extrapolation would be bad.
Thus, validation is usually not based on only considering data that was used in the construction of the model; rather, validation usually also employs data that was not used in the construction. In other words, validation usually includes testing some of the model's predictions.
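The situation in Figure 1 can be reproduced numerically. The sketch below (again with illustrative choices for the line, the noise, and the number of points) fits an exactly interpolating polynomial and shows that zero residuals on the construction data say nothing about predictive validity.

```python
import numpy as np

rng = np.random.default_rng(1)

# Data generated via a straight line + noise (illustrative values).
x = np.linspace(-5, 5, 7)
y = x + rng.normal(scale=0.5, size=x.size)

# A polynomial of degree n - 1 through n points fits the data perfectly:
# the analogue of the curvy line in Figure 1.
curvy = np.polyfit(x, y, deg=x.size - 1)

residuals = y - np.polyval(curvy, x)
print(np.max(np.abs(residuals)))  # zero, up to floating-point error

# Yet predictions at new inputs can be far from the true straight line.
print(np.polyval(curvy, -4.5))  # interpolation between -5 and -4
print(np.polyval(curvy, 7.0))   # substantial extrapolation
```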
A model can be validated only relative to some application area. A model that is valid for one application might be invalid for other applications. As an example, consider the curvy line in Figure 1: if the application only used inputs from a sufficiently restricted interval, then the curvy line might well be an acceptable model.

Methods for validating

When doing a validation, there are three notable causes of potential difficulty, according to the Encyclopedia of Statistical Sciences. The three causes are these: lack of data; lack of control of the input variables; uncertainty about the underlying probability distributions and correlations. The usual methods for dealing with difficulties in validation include the following: checking the assumptions made in constructing the model; examining the available data and related model outputs; applying expert judgment. Note that expert judgment commonly requires expertise in the application area.
Expert judgment can sometimes be used to assess the validity of a prediction without obtaining real data: e.g. for the curvy line in Figure 1, an expert might well be able to assess that a substantial extrapolation will be invalid. Additionally, expert judgment can be used in Turing-type tests, where experts are presented with both real data and related model outputs and then asked to distinguish between the two.
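A Turing-type test amounts to a simple blinding procedure; the sketch below is only an illustration, with made-up Gaussian series standing in for both the real data and the model outputs.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical real observations and related model outputs (same shape).
real_samples  = [rng.normal(loc=0.0, scale=1.0, size=20) for _ in range(5)]
model_samples = [rng.normal(loc=0.1, scale=1.2, size=20) for _ in range(5)]

# Blind the expert: pool the series, shuffle, and hide the labels.
labels = ["real"] * len(real_samples) + ["model"] * len(model_samples)
pooled = real_samples + model_samples
order = rng.permutation(len(pooled))

for i in order:
    # Only the (unlabeled) series is shown to the expert.
    print(f"series {i}: {np.round(pooled[i][:5], 2)} ...")

# After the expert classifies each series, the answers are compared with the
# hidden labels; accuracy near chance suggests the model outputs are hard to
# distinguish from real data.
```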
For some classes of statistical models, specialized methods of performing validation are available. As an example, if the statistical model was obtained via a regression, then specialized analyses for regression model validation exist and are generally employed.

Residual diagnostics

Residual diagnostics comprise analyses of the residuals to determine whether the residuals seem to be effectively random. Such analyses typically require estimates of the probability distributions for the residuals. Estimates of the residuals' distributions can often be obtained by repeatedly running the model, i.e. by using repeated stochastic simulations.
If the statistical model was obtained via a regression, then regression-residual diagnostics exist and may be used; such diagnostics have been well studied.
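As a sketch of the simulation-based approach (the data, the linear model, and the lag-1 autocorrelation statistic below are illustrative choices, not a prescribed diagnostic), the distribution of a residual statistic under the fitted model can be estimated by repeatedly re-running the model and re-computing the statistic.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative regression data: a straight line plus noise.
x = np.linspace(0, 10, 50)
y = 3.0 * x + 2.0 + rng.normal(scale=1.0, size=x.size)

# Fit the model and compute its residuals.
b1, b0 = np.polyfit(x, y, deg=1)
resid = y - (b0 + b1 * x)
sigma = resid.std(ddof=2)

def lag1_autocorr(r):
    r = r - r.mean()
    return np.sum(r[:-1] * r[1:]) / np.sum(r * r)

observed = lag1_autocorr(resid)

# Estimate the statistic's distribution under the model by repeatedly
# re-running the model (repeated stochastic simulation).
simulated = []
for _ in range(2000):
    y_sim = b0 + b1 * x + rng.normal(scale=sigma, size=x.size)
    r_sim = y_sim - np.polyval(np.polyfit(x, y_sim, deg=1), x)
    simulated.append(lag1_autocorr(r_sim))

p_value = np.mean(np.abs(simulated) >= abs(observed))
print(f"lag-1 autocorrelation: {observed:.3f}, simulation p-value: {p_value:.3f}")
# A small p-value would suggest the residuals are not effectively random.
```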