[2] Platt scaling works by fitting a logistic regression model to a classifier's scores.
Consider the problem of binary classification: for inputs x, we want to determine whether they belong to one of two classes, arbitrarily labeled +1 and −1.
We assume that the classification problem will be solved by a real-valued function f, by predicting a class label y = sign(f(x)).
Platt scaling produces probability estimates

P(y = 1 | x) = 1 / (1 + exp(A f(x) + B)),

i.e., a logistic transformation of the classifier scores f(x), where A and B are two scalar parameters that are learned by the algorithm.
Note that predictions can now be made according to y = 1 if and only if P(y = 1 | x) > 1/2; if B ≠ 0, the probability estimates contain a correction compared to the old decision function y = sign(f(x)).
The parameters A and B are estimated using a maximum likelihood method that optimizes on the same training set as the original classifier f. To avoid overfitting to this set, a held-out calibration set or cross-validation can be used; Platt additionally suggests transforming the labels y into target probabilities

t+ = (N+ + 1) / (N+ + 2) for positive samples (y = 1), and
t− = 1 / (N− + 2) for negative samples (y = −1),

where N+ and N− are the numbers of positive and negative samples, respectively. This transformation follows by applying Bayes' rule to a model of out-of-sample data that has a uniform prior over the labels.[1] The constants 1 and 2, in the numerator and denominator respectively, are derived from the application of Laplace smoothing.
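As a concrete illustration, here is a minimal sketch of this fitting procedure in Python, assuming classifier scores and ±1 labels are already available as arrays; the helper names fit_platt and platt_predict_proba, and the use of a generic BFGS optimizer rather than Platt's original routine, are choices made for this example only.

```python
import numpy as np
from scipy.optimize import minimize

def fit_platt(scores, labels):
    """Fit the sigmoid parameters A, B from decision scores f(x) and
    labels in {+1, -1}, using Platt's Laplace-smoothed targets."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    n_pos = np.sum(labels == 1)
    n_neg = np.sum(labels == -1)
    # Smoothed targets t+ and t- in place of hard 0/1 labels
    t = np.where(labels == 1,
                 (n_pos + 1.0) / (n_pos + 2.0),
                 1.0 / (n_neg + 2.0))

    def nll(params):  # cross-entropy between targets and sigmoid outputs
        A, B = params
        p = 1.0 / (1.0 + np.exp(A * scores + B))
        eps = 1e-12  # guard against log(0)
        return -np.sum(t * np.log(p + eps) + (1.0 - t) * np.log(1.0 - p + eps))

    A, B = minimize(nll, x0=[0.0, 0.0], method="BFGS").x
    return A, B

def platt_predict_proba(scores, A, B):
    """Calibrated estimate of P(y = 1 | x) for new scores."""
    return 1.0 / (1.0 + np.exp(A * np.asarray(scores, dtype=float) + B))
```

The returned pair (A, B) is then plugged into the sigmoid above to turn new classifier scores into probabilities.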
[4] Platt scaling has been shown to be effective for SVMs as well as other types of classification models, including boosted models and even naive Bayes classifiers, which produce distorted probability distributions.
It is particularly effective for max-margin methods such as SVMs and boosted trees, which show sigmoidal distortions in their predicted probabilities, but has less of an effect with well-calibrated models such as logistic regression, multilayer perceptrons, and random forests.
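In practice this is usually done with library support; for instance, scikit-learn's CalibratedClassifierCV wraps a classifier and, with method="sigmoid", fits a Platt-style sigmoid on cross-validation folds. A brief usage sketch on synthetic data, assuming a reasonably recent scikit-learn:

```python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

# Toy data; LinearSVC produces uncalibrated decision scores, not probabilities.
X, y = make_classification(n_samples=2000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# method="sigmoid" fits a Platt-style sigmoid on internal cross-validation folds.
calibrated = CalibratedClassifierCV(LinearSVC(), method="sigmoid", cv=5)
calibrated.fit(X_train, y_train)
proba = calibrated.predict_proba(X_test)[:, 1]  # calibrated P(y = 1 | x)
```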
[2] Platt scaling can also be applied to deep neural network classifiers.
A 2017 paper proposed temperature scaling, which simply multiplies the output logits of a network by a constant 1/T before the softmax is applied, where the temperature T > 0 is tuned on a held-out validation set.
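A minimal sketch of temperature scaling, assuming the network's validation logits and integer class labels are already available as NumPy arrays; fit_temperature and the bounded search range for T are illustrative choices rather than part of the published method.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_temperature(val_logits, val_labels):
    """Pick T > 0 that minimizes the negative log-likelihood of the
    validation labels under softmax(logits / T)."""
    logits = np.asarray(val_logits, dtype=float)
    labels = np.asarray(val_labels)

    def nll(T):
        p = softmax(logits / T)
        return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

    # Single-parameter problem, so a bounded scalar search is sufficient.
    return minimize_scalar(nll, bounds=(0.05, 20.0), method="bounded").x

# At test time the same T rescales the logits before the softmax; since T > 0,
# the arg-max class (and hence accuracy) is unchanged:
#   calibrated_probs = softmax(test_logits / T)
```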