Consider the problem of binary classification: for inputs $x$, we want to determine whether they belong to one of two classes, arbitrarily labeled $+1$ and $-1$. We assume that the classification problem will be solved by a real-valued function $f$, by predicting a class label $y = \operatorname{sign}(f(x))$. For many problems, it is convenient to get a probability $P(y=1 \mid x)$, i.e. a classification that not only gives an answer, but also a degree of certainty about the answer. Some classification models do not provide such a probability, or give poor probability estimates.

Platt scaling is an algorithm to solve the aforementioned problem. It produces probability estimates

$$P(y=1 \mid x) = \frac{1}{1 + \exp(A f(x) + B)},$$

i.e., a logistic transformation of the classifier scores $f(x)$, where $A$ and $B$ are two scalar parameters that are learned by the algorithm. Note that predictions can now be made according to $y = 1$ iff $P(y=1 \mid x) > \tfrac{1}{2}$; if $B \neq 0$, the probability estimates contain a correction compared to the old decision function $y = \operatorname{sign}(f(x))$.

The parameters $A$ and $B$ are estimated using a maximum likelihood method that optimizes on the same training set as that for the original classifier $f$. To avoid overfitting to this set, a held-out calibration set or cross-validation can be used, but Platt additionally suggests transforming the labels $y$ to target probabilities

$$t_+ = \frac{N_+ + 1}{N_+ + 2} \quad \text{for positive samples } (y = 1)$$

and

$$t_- = \frac{1}{N_- + 2} \quad \text{for negative samples } (y = -1).$$

Here, $N_+$ and $N_-$ are the number of positive and negative samples, respectively. This transformation follows by applying Bayes' rule to a model of out-of-sample data that has a uniform prior over the labels. The constants 1 and 2, in the numerator and denominator respectively, are derived from the application of Laplace smoothing.

Platt himself suggested using the Levenberg–Marquardt algorithm to optimize the parameters, but a Newton algorithm was later proposed that should be more numerically stable.
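To make the fitting procedure concrete, the following Python sketch estimates $A$ and $B$ by minimizing the negative log-likelihood against Platt's smoothed target probabilities. It uses SciPy's general-purpose BFGS minimizer rather than Levenberg–Marquardt or the later Newton method, and the helper names (`fit_platt`, `platt_predict`) are illustrative rather than taken from Platt's paper.

```python
import numpy as np
from scipy.optimize import minimize

def fit_platt(scores, labels):
    """Fit Platt scaling parameters A, B by maximum likelihood.

    scores: real-valued classifier outputs f(x) on a calibration set.
    labels: class labels in {-1, +1}.
    Returns (A, B) such that P(y=1|x) = 1 / (1 + exp(A*f(x) + B)).
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)

    # Platt's target probabilities, with Laplace smoothing applied
    # to the positive and negative classes.
    n_pos = np.sum(labels == 1)
    n_neg = np.sum(labels == -1)
    t = np.where(labels == 1,
                 (n_pos + 1.0) / (n_pos + 2.0),
                 1.0 / (n_neg + 2.0))

    def nll(params):
        A, B = params
        z = A * scores + B
        # Numerically stable logs of P(y=1|x) and P(y=-1|x).
        log_p = -np.logaddexp(0.0, z)
        log_1mp = -np.logaddexp(0.0, -z)
        return -np.sum(t * log_p + (1.0 - t) * log_1mp)

    # Initialize A = 0 and choose B so the initial probabilities
    # match the smoothed class priors, as in Platt's pseudocode.
    x0 = np.array([0.0, np.log((n_neg + 1.0) / (n_pos + 1.0))])
    result = minimize(nll, x0, method="BFGS")
    return result.x[0], result.x[1]

def platt_predict(scores, A, B):
    """Calibrated probability estimates P(y=1|x)."""
    return 1.0 / (1.0 + np.exp(A * np.asarray(scores) + B))
```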
Analysis
Platt scaling has been shown to be effective for SVMs as well as other types of classification models, including boosted models and even naive Bayes classifiers, which produce distorted probability distributions. It is particularly effective for max-margin methods such as SVMs and boosted trees, which show sigmoidal distortions in their predicted probabilities, but has less of an effect with well-calibrated models such as logistic regression, multilayer perceptrons, and random forests.

An alternative approach to probability calibration is to fit an isotonic regression model to an ill-calibrated probability model. This has been shown to work better than Platt scaling, in particular when enough training data is available.
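Both approaches are available in scikit-learn's CalibratedClassifierCV, where `method="sigmoid"` corresponds to Platt scaling and `method="isotonic"` to isotonic regression; the synthetic data set and base classifier below are arbitrary choices for this sketch.

```python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=5000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Calibrate the SVM's decision scores via 5-fold cross-validation,
# once with Platt scaling and once with isotonic regression.
platt = CalibratedClassifierCV(LinearSVC(), method="sigmoid", cv=5)
platt.fit(X_train, y_train)
iso = CalibratedClassifierCV(LinearSVC(), method="isotonic", cv=5)
iso.fit(X_train, y_train)

# Calibrated probability estimates for the positive class.
print(platt.predict_proba(X_test)[:3, 1])
print(iso.predict_proba(X_test)[:3, 1])
```

Consistent with the analysis above, the isotonic variant tends to win only once the calibration set is large; being nonparametric, it can overfit small calibration sets where the two-parameter sigmoid cannot.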